
This week, Facebook’s AI team introduced PyTorch 1.1 and Ax for model experiment management. Microsoft also made a splash with the launch of a blockchain service, Unreal Engine support for HoloLens 2, and new Azure Machine Learning and Azure Cognitive Services announcements.

Amid all that news, a few important stories may have gone unnoticed: Microsoft made FPGA chips for machine learning model training and inference generally available, and the Open Neural Network Exchange (ONNX) now supports Nvidia’s TensorRT and Intel’s nGraph for high-speed inference on Nvidia and Intel hardware.

This comes after Microsoft joined the MLflow Project and open-sourced the high-performance inference engine ONNX Runtime.

Facebook and Microsoft created the ONNX open source project in 2017. It now counts virtually every major company in AI among its members, including AWS, AMD, Baidu, Intel, IBM, Nvidia, and Qualcomm.


Ahead of the news Thursday, Microsoft Azure’s cloud and AI group head Scott Guthrie spoke to reporters in San Francisco on a range of topics, including Microsoft’s approach to open source projects and AI strategy. More news is anticipated Monday as Microsoft kicks off its annual Build developer conference in Seattle.

“Ultimately, I think what’s compelling about hardware isn’t the hardware work we’re doing itself, it’s what lights up on top,” he said.

Guthrie said he loves ONNX because it gives machine learning practitioners the flexibility to use the best machine learning framework and chip hardware for certain tasks. FPGA chips have been used for years now to run 100% of data encryption and compression acceleration tasks for Azure.

“Even today with the ONNX workloads for AI, the compelling part is you can now build custom models or use our models, again using TensorFlow, PyTorch, Keras, whatever framework you want, and then know that you can hardware-accelerate it whether it’s on the latest Nvidia GPU, whether it’s on the new AMD GPUs, whether it’s on Intel FPGA, whether it’s on someone else’s FPGA or new silicon we might release in the future. That to me is more compelling than ‘do we have a better instruction set at the hardware level’ and generally what I find resonates best with customers.”

Guthrie spoke at length about open source contributions and said that overall Microsoft gives back more than Amazon or Google, part of an evolution at the company over the past 10 years toward making tools for DevOps, databases, Kubernetes, and AI.

In the 2018 Octoverse Report released last fall, GitHub, which was acquired by Microsoft last year, found that Microsoft, Google, Red Hat, and the University of California, Berkeley employ the largest numbers of contributors to open source projects.

“We’ve gone from not being a fan of open source to being a big supporter,” he said. “I think you’re seeing a Microsoft that’s both deeply embracing openness, both as consumers, but also as contributors, and I think that’s unique. If you look at, say, AWS’ contributions to open source, there’s not a lot. There’s a lot of consumption, but there’s not a lot of contribution back, and I think even if you were to look at Google relative to the amount of contributions we’ve made on Azure, I think people are often pleasantly surprised when they add it up.”

PyTorch and TensorFlow are among the most popular frameworks around today, but “it” frameworks come and go, Guthrie said. The interoperability ONNX brings to the collection of different frameworks, runtimes, compilers, and other tools is what enables a machine learning ecosystem.

Much of the modern machine learning industry is built on advances in compute power as well as open source projects. It’s that architecture that will enable leaps forward in machine intelligence, and if the tech giants compete to give more back, that competition likely works to everyone’s benefit.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

AI Staff Writer
