As the demand for generative AI continues to grow, Nvidia is doing everything possible to make sure its tech stack for model development and deployment reaches where enterprises prefer to work.
Case in point: The computing giant’s latest partnership with Microsoft to integrate the Nvidia AI Enterprise software suite into Azure Machine Learning (Azure ML) and bring deep learning frameworks to Windows 11 PCs.
The move, announced today at the ongoing Microsoft Build developer conference, accelerates both enterprise and individual AI efforts, and comes just hours after Nvidia announced Project Helix with Dell to bring generative AI to on-premises deployments.
Integration with Microsoft Azure Machine Learning
In simple terms, Nvidia AI Enterprise is an end-to-end, secure software platform that accelerates the data science pipeline and streamlines the development and deployment of production AI.
With this partnership, Nvidia is bringing this software layer into Azure ML, giving Azure cloud customers an enterprise-ready way to quickly build, deploy and manage customized AI applications. As part of this, Azure ML users get access to the more than 100 AI frameworks, pre-trained models and development tools, such as Nvidia RAPIDS, that come with Nvidia AI Enterprise, as well as Nvidia’s accelerated computing resources to speed up the training and inference of targeted AI models, including LLMs.
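To give a sense of what these bundled frameworks do, RAPIDS exposes pandas-style dataframe operations that run on Nvidia GPUs through its cuDF library. The snippet below is a minimal sketch, assuming a CUDA-capable GPU with the cudf package installed (for example, on an Azure ML GPU compute instance); the file name and column names are placeholders.

```python
import cudf  # RAPIDS GPU dataframe library, one of the frameworks bundled with Nvidia AI Enterprise

# Load a CSV directly into GPU memory (path and columns are illustrative placeholders)
df = cudf.read_csv("transactions.csv")

# Pandas-style operations execute on the GPU
summary = df.groupby("customer_id")["amount"].sum().sort_values(ascending=False)

# Move the result back to the CPU as a pandas object when needed
print(summary.head(10).to_pandas())
```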
“The combination of Nvidia AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production,” Manuvir Das, Nvidia’s VP of enterprise computing, said in a statement.
Coming to the Azure Marketplace
In addition to the integration, which is available in preview by invitation only in the Nvidia community registry, the companies also announced that Nvidia AI Enterprise and Omniverse Cloud will be coming to the Microsoft Azure Marketplace.
“What this means is that customers who have existing relationships with Azure can use the contracts they already have in place to access Nvidia AI Enterprise and use it either within Azure ML or separately on instances of their choice,” Das said in a press briefing.
The same goes for Omniverse Cloud, which provides developers and enterprises with a full-stack cloud environment to design, develop, deploy and manage industrial metaverse applications at scale.
AI on Windows 11
Finally, to help developers build AI models on their laptops, Nvidia announced that all of its GPU-accelerated deep learning frameworks will be enabled on Windows. This, the companies said, will be done through the Windows Subsystem for Linux (WSL), which combines the best of Windows and Linux and allows AI libraries that were built for Linux to run on a Windows laptop.
Nvidia said it has been working closely with Microsoft to deliver GPU acceleration and support for its entire AI software stack inside WSL, allowing developers to use Windows PCs for all their local AI development needs.
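For developers who want to confirm that GPU acceleration is actually reaching their Linux tooling, a quick check from inside a WSL shell will do. The sketch below assumes a Windows 11 machine with an Nvidia GPU, the Nvidia driver with WSL support, and a CUDA build of PyTorch installed in the Linux environment.

```python
# Run inside a WSL (e.g., Ubuntu) shell on a Windows 11 PC with an Nvidia GPU.
# Assumes the Windows Nvidia driver with WSL support and a CUDA build of PyTorch are installed.
import torch

if torch.cuda.is_available():
    print(f"CUDA devices visible in WSL: {torch.cuda.device_count()}")
    print(f"Device 0: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA device found - check the Nvidia WSL driver installation.")
```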

While this will make leading generative AI models available on PCs, Das noted that users will still have the option to do large-scale training on Azure.
“Of course, you can use Nvidia AI enterprise and Azure ML to do the training and then push the models down to Nvidia PCs,” he noted. “It’s the same Nvidia stack so it’ll run there.”
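As a rough illustration of that hand-off, the sketch below uses plain PyTorch with hypothetical model and file names: weights exported from a cloud training run are reloaded and served on a local Nvidia GPU, since the same CUDA-accelerated stack runs in both places.

```python
import torch
import torch.nn as nn

# Hypothetical model definition shared between the cloud training job and the local PC
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

# Weights exported from the cloud training run (file name is illustrative)
state = torch.load("model_weights.pt", map_location="cuda")
model.load_state_dict(state)
model.to("cuda").eval()

# Local inference on the PC's Nvidia GPU
with torch.no_grad():
    prediction = model(torch.randn(1, 128, device="cuda"))
print(prediction)
```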
Microsoft Build runs through Thursday, May 25.