Microsoft’s Connect(); 2018 developer conference kicked off today in style, with a slew of updates to Azure and IoT Edge services; the open-sourcing of Windows Presentation Foundation, Windows Forms, and the Windows UI XAML Library; and the expansion of its .NET Foundation membership model. But those were just the tip of the iceberg. The Redmond, Washington-based company also revealed the Cloud Native Application Bundle (CNAB), an open source, cloud-agnostic specification for packaging and running distributed applications. And it made ONNX Runtime, an inference engine for artificial intelligence (AI) models in the ONNX format, freely available on GitHub.
Cloud Native Application Bundle
Microsoft is releasing the CNAB specification this week, along with Duffle, an open source reference implementation of a CNAB client that can install, upgrade, and uninstall CNAB bundles, as well as cryptographically sign them and verify their integrity. Additionally, Microsoft is making available an example implementation of a bundle repository server, a Visual Studio Code extension, and an Electron point-and-click bundle installer.
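Duffle itself is a command-line tool. As a minimal sketch, here is what driving it from a Python script might look like; the verbs (install, upgrade, uninstall) come from the capabilities described above, but the exact argument shapes and the installation and bundle names here are assumptions to check against Duffle's own documentation:

```python
import subprocess

def duffle(*args: str) -> None:
    """Invoke the Duffle CLI, raising an error if the command fails."""
    subprocess.run(["duffle", *args], check=True)

# Hypothetical installation name ("myapp") and bundle reference; the
# argument shapes are assumptions based on Duffle's documented verbs.
duffle("install", "myapp", "example/helloworld")
duffle("upgrade", "myapp")
duffle("uninstall", "myapp")
```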
Docker is the first company to implement CNAB for containerized applications. Support will launch as part of Docker App, a new tool for packaging CNAB bundles as Docker images so they can be managed in Docker Hub or Docker Enterprise.
“Distributed applications are no longer a futuristic concept,” Microsoft said. “Today’s cloud isn’t operating on one runtime system: It’s not just serverless, just Kubernetes, [or] just VMs. It may not even be from just one provider. And each runtime has its own provisioning tools, Terraform, Ansible, ARM, containers. To succeed in this environment, developers need a package management solution for distributed applications.”
CNAB, developed in partnership with Docker, was designed from the ground up to work with Docker, Azure, on-premises runtime environments such as OpenStack, Kubernetes, and everything in between. It lets developers define the resources that can be deployed to a combination of platforms, including workstations, public clouds, offline networks, and IoT environments, and manage discrete resources in a distributed app as a single logical unit.
Moreover, CNAB’s extensible architecture enables users to sign, digitally verify, and attach signatures to bundles even when such features aren’t natively supported by the underlying technology, and to control how bundles can be used. It also supports exporting bundles along with their dependencies, and storing bundles in repositories for remote search, fetch, and install.
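To make the idea concrete, here is a minimal sketch of what a CNAB bundle manifest might contain, written out as a Python dict for illustration. The field names (schemaVersion, invocationImages, and so on) are assumptions drawn from the draft CNAB specification and should be verified against it:

```python
import json

# Illustrative sketch of a CNAB-style bundle manifest; the field names
# are assumptions based on the draft spec, not a definitive schema.
bundle = {
    "schemaVersion": "v1.0.0-WD",
    "name": "helloworld",
    "version": "0.1.0",
    "description": "An example distributed application",
    # The invocation image carries the installation logic for the bundle.
    "invocationImages": [
        {"imageType": "docker", "image": "example/helloworld-cnab:0.1.0"}
    ],
    # Parameters an installer could surface to the user at install time.
    "parameters": {
        "region": {"type": "string", "defaultValue": "eastus"}
    },
}

with open("bundle.json", "w") as f:
    json.dump(bundle, f, indent=2)
```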
ONNX Runtime
Continuing with today’s theme of collaboration, Microsoft also made a number of frameworks and engines available in open source.
The first is Open Neural Network Exchange (ONNX) Runtime, a high-performance inferencing engine for machine learning models in ONNX format. It’s available on GitHub starting today and can be customized and integrated directly into existing codebases or compiled from source to run on Windows 10, Linux, and a variety of other operating systems.
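In practice, running a model with ONNX Runtime from Python takes only a few lines. A minimal sketch, assuming the onnxruntime package is installed and an ONNX model is on disk (the file name and input shape below are placeholders):

```python
import numpy as np
import onnxruntime as ort

# Load a serialized ONNX model; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx")

# Build a dummy input matching the model's expected shape (assumed
# here to be a single 224x224 RGB image) and run inference.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```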
ONNX, for the uninitiated, is a platform-agnostic format for deep learning models that enables interoperability between open source AI frameworks, such as Google’s TensorFlow, Microsoft’s Cognitive Toolkit, Facebook’s Caffe2, and Apache’s MXNet. Microsoft and Facebook jointly announced it in September 2017, with AWS joining soon after, and it’s being actively codeveloped by Nvidia, Intel, and AMD, among others.
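That interoperability hinges on exporting a trained model into the shared format. As a hedged illustration, here is how a PyTorch model (PyTorch shares lineage with Caffe2 and also exports ONNX natively; it isn't named above, and the model choice and file name are arbitrary) can be serialized to an .onnx file:

```python
import torch
import torchvision

# Any trained model will do; ResNet-18 is used purely as an example.
model = torchvision.models.resnet18(pretrained=True).eval()

# Export traces the model with a dummy input and writes the ONNX graph.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```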
In a phone interview with VentureBeat, Eric Boyd, corporate vice president at Microsoft, said internal teams at Bing Search, Bing Ads, and Office that have incorporated ONNX Runtime are seeing AI model performance twice that of native implementations, and in some cases orders of magnitude higher. It’s also been incorporated into other Microsoft offerings, he added, including Windows ML and ML.NET.
“With the open-sourcing of ONNX Runtime, we’re encouraging everyone to embrace it,” he said. “It will work in embedded spaces and on Windows and on Linux … It simplifies developers’ lives dramatically.”
Perhaps more importantly, it has the backing of the broader ONNX community. Intel and Microsoft are working together to integrate the nGraph compiler as an execution provider for ONNX Runtime; Nvidia is helping integrate TensorRT; and Qualcomm has expressed its support.
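Execution providers are how hardware-specific backends like nGraph and TensorRT plug into ONNX Runtime. In later releases of the Python API, a session can be asked to prefer particular providers, falling back to the CPU; the exact provider list below is an assumption to check against the installed build:

```python
import onnxruntime as ort

# Ask ONNX Runtime to prefer TensorRT, then CUDA, then plain CPU.
# Only providers compiled into the installed build will actually be used.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["TensorrtExecutionProvider",
               "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which providers are active
```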
“The introduction of ONNX Runtime is a positive next step in further driving framework interoperability, standardization, and performance optimization across multiple device categories, and we expect developers to welcome support for ONNX Runtime on Snapdragon mobile platforms,” said Gary Brotman, senior director of AI product management at Qualcomm.
The launch of ONNX Runtime comes as Microsoft makes Azure Machine Learning service, a cloud platform that enables developers to build, train, and deploy AI models, generally available and releases containerization support for Azure Cognitive Services’ Language Understanding API.