At Facebook’s 2018 @Scale conference in San Jose, California today, the company announced broad industry backing for Glow, its machine learning compiler designed to accelerate the performance of deep learning frameworks. Cadence, Esperanto, Intel, Marvell, and Qualcomm committed to supporting Glow in future silicon products.

“We created Glow, an open source framework, to be community driven,” Facebook wrote of the announcement. “This approach allows partners to more rapidly design and optimize new silicon products for AI and ML by leveraging community-driven compiler software.”

As the Menlo Park company explained in a blog post, Glow was architected with ease of use in mind. It accepts computation graphs from a variety of machine learning frameworks, works with a range of accelerators, and packs utilities that can be tuned to support multiple hardware targets.
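
For a sense of what a framework-neutral computation graph looks like from the consumer's side, the sketch below loads and walks an ONNX model with the onnx Python package. Glow ships its own graph loaders, so this is only an illustration of the kind of input such a compiler ingests, and the file name is a placeholder.

```python
# Illustrative only: Glow has its own graph loaders. This just shows the
# shape of a framework-neutral graph as a compiler would see it.
import onnx

model = onnx.load("model.onnx")        # placeholder path to an exported model
onnx.checker.check_model(model)        # validate the graph before consuming it

# The graph is an ordered list of operator nodes that a compiler such as
# Glow can lower to hardware-specific code.
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))
```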

One example: a memory allocator that can generate code for multiple memory configurations. Among Glow’s other tools are a linear algebra optimizer, a CPU-based reference implementation for testing hardware accuracy, and an instruction scheduler.
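
To give a rough idea of what a compile-time memory allocator does, here is a toy Python sketch (not Glow's implementation) that assigns static offsets from tensor lifetimes, so that buffers which are never live at the same time can share memory; the buffer names and sizes are made up for illustration.

```python
def allocate(buffers):
    """buffers: list of (name, size_bytes, first_use, last_use) in program order."""
    placements = []   # (offset, size, first_use, last_use) of buffers already placed
    offsets = {}
    for name, size, first, last in buffers:
        # Place this buffer just past every already-placed buffer whose lifetime
        # overlaps its own; buffers with disjoint lifetimes may share the space.
        offset = max((o + s for o, s, f, l in placements
                      if first <= l and last >= f), default=0)
        placements.append((offset, size, first, last))
        offsets[name] = offset
    return offsets

# act0 and act1 are never live at the same time, so they reuse the same offset.
print(allocate([("act0", 1024, 0, 1), ("act1", 2048, 2, 3), ("act2", 512, 1, 2)]))
# -> {'act0': 0, 'act1': 0, 'act2': 2048}
```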

“The hardware-independent parts of the compiler focus on math-related optimizations that are not tied to a specific hardware model,” Facebook wrote. “Relying on the existing optimizations and capabilities reduces development time, and the extensive test suite improves a hardware provider’s confidence in the accuracy of the compiler and its conformance to the PyTorch specification.”
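
As a rough illustration of what a hardware-independent, math-related optimization can look like, the toy pass below folds additions whose inputs are already known at compile time out of a node list. It is a generic sketch of the technique, not code from Glow, and the operator and tensor names are hypothetical.

```python
def fold_constants(nodes, constants):
    """Remove Add nodes whose inputs are all compile-time constants."""
    remaining = []
    for node in nodes:
        if node["op"] == "Add" and all(name in constants for name in node["inputs"]):
            a, b = (constants[name] for name in node["inputs"])
            constants[node["output"]] = a + b   # computed once, during compilation
        else:
            remaining.append(node)              # still has to run on the target
    return remaining, constants

nodes = [
    {"op": "Add", "inputs": ["scale", "shift"], "output": "bias"},
    {"op": "Mul", "inputs": ["x", "bias"], "output": "y"},
]
print(fold_constants(nodes, {"scale": 2.0, "shift": 3.0}))
# The Add disappears and "bias" becomes a known constant (5.0); only Mul remains.
```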

Facebook open-sourced Glow in March and detailed it at its 2018 F8 developer conference in May, where it also launched version 1.0 of its deep learning framework, PyTorch; a PyTorch library for language translation; an object detection platform called Detectron; ELF, which teaches machines to reason through gameplay; and Tensor Comprehensions, a C++ library that automatically synthesizes machine learning kernels.

In another move toward platform agnosticism, PyTorch 1.0 — which both Amazon Web Services and Microsoft’s Azure platform support — taps ONNX (Open Neural Network Exchange), an open source project spearheaded by Facebook, Amazon, and Microsoft. ONNX serves as the model export format in PyTorch 1.0, allowing accelerated runtimes and hardware-specific libraries to plug in.
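
A minimal sketch of that export path, assuming torchvision's ResNet-18 as the example model and an arbitrary output file name:

```python
import torch
import torchvision

# Any PyTorch model works; ResNet-18 is just a convenient stand-in.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)   # example input fixes the traced shapes

# Traces the model and writes an ONNX graph that accelerated runtimes and
# hardware-specific libraries can consume.
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```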
