Nvidia CEO Jensen Huang said in a keynote at the company’s Nvidia GTC online event that the company’s Omniverse Cloud, a platform as a service, is now operational and running on Microsoft’s Azure cloud.
Developers and creators can better realize the massive potential of generative AI, simulation and the industrial metaverse with new Omniverse Connectors, which make tools from many vendors interoperable for those designing virtual applications such as digital twins of factories.
Omniverse Cloud, a platform-as-a-service unveiled today, equips users with a range of simulation and generative AI capabilities to easily build and deploy industrial metaverse applications. New Omniverse Connectors and applications developed by third parties enable enterprises across the globe to push the limits of industrial digitalization.
“Omniverse is not a tool, but a platform,” said Richard Kerris, vice president of the Omniverse ecosystem at Nvidia, in an interview with VentureBeat. “It’s the platform that allows you to connect and build and operate metaverse applications. We believe the future of the internet will be 3D, and the industrial side of the metaverse will be companies that digitalize their entire workflow.”
Kerris added, “Omniverse is all about the ecosystems, the network of networks. Every time we connect with an ecosystem out there, it brings all of their connections into the Omniverse world as well. And we have new connectors that we’ll be talking about here at GTC, starting with the availability now of Bentley’s LumenRT. With the others, this really opens up hundreds of ways of connecting to Omniverse. And once connected, you also then connect to all of the other networks that are out there.”
Omniverse capitalizes on Nvidia’s decades of investment in 3D graphics, Kerris said.
Nvidia said the collaboration with Microsoft will provide hundreds of millions of Microsoft enterprise users with access to powerful industrial metaverse and AI supercomputing resources via the cloud.
Microsoft Azure will host two new cloud offerings from Nvidia: Omniverse Cloud, a platform-as-a-service giving instant access to a full-stack environment to design, develop, deploy and manage industrial metaverse applications; and Nvidia DGX Cloud, an AI supercomputing service that gives enterprises immediate access to the infrastructure and software needed to train advanced models for generative AI and other groundbreaking applications.
Additionally, the companies are bringing together their productivity and 3D collaboration platforms by connecting Microsoft 365 applications — such as Teams, OneDrive and SharePoint — with Nvidia Omniverse.
“The next wave of computing is being born. Between next-generation immersive experiences and advanced foundational AI models, we see the emergence of a new computing platform,” said Satya Nadella, chairman and CEO of Microsoft, in a statement. “Together with Nvidia, we’re focused on both building out services that bridge the digital and physical worlds to automate, simulate and predict every business process, and bringing the most powerful AI supercomputer to customers globally.”
A GTC keynote demo developed by Accenture demonstrated the utility of integrating Nvidia Omniverse with Microsoft Teams for real-time 3D collaboration. Running on Omniverse Cloud and leveraging a Teams meeting with Live Share, the Accenture demo showed how this integration can shorten the time between decision-making, action and feedback.
“This is something that’s really exciting, but it’s only the start of what we’re doing with Microsoft,” Kerris said. “We’re not only working to deploy Omniverse Cloud on Azure, but also integrating Omniverse into Microsoft 365.”
Omniverse ecosystem expansion
Omniverse enhances how developers and professionals create, design and deploy massive virtual worlds, AI-powered digital humans and 3D assets.
Its newest additions include:
- New Omniverse Connectors: Third-party connectors now available include the Siemens Xcelerator portfolio (including Siemens Teamcenter, Siemens NX and Siemens Process Simulate); Blender; Cesium; Emulate3D by Rockwell Automation; Unity; and Vectorworks. These link more of the world’s most advanced applications through the Universal Scene Description (USD) framework.
- Azure Digital Twin, Blackshark.ai, FlexSim and NavVis connectors are coming soon.
- SimReady 3D assets: Over 1,000 new SimReady assets enable easier AI and industrial 3D workflows. KUKA, a leading supplier of intelligent automation solutions, is working with Nvidia and evaluating adoption of the new SimReady specifications to make customer simulation easier than ever.
- Synthetic data generation: Lexset and Siemens SynthAI are both using the Omniverse Replicator software development kit to enable computer-vision-aided industrial inspection. Datagen and Synthesis AI are using the SDK to create synthetic digital humans for AI training. And Deloitte is providing synthetic data generation services using Omniverse Replicator for customers across domains ranging from manufacturing to telecom.
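Since these connectors all interoperate through USD, a glimpse of the interchange format may help. Below is a minimal, illustrative `.usda` layer of the kind such tools read and write; all prim names are hypothetical, and real connector output is far richer:

```usda
#usda 1.0
(
    defaultPrim = "Factory"
    metersPerUnit = 1.0
    upAxis = "Z"
)

def Xform "Factory" (
    kind = "assembly"
)
{
    def Mesh "ConveyorBelt"
    {
        point3f[] points = [(0, 0, 0), (4, 0, 0), (4, 1, 0), (0, 1, 0)]
        int[] faceVertexCounts = [4]
        int[] faceVertexIndices = [0, 1, 2, 3]
    }
}
```

Each connector maps its native scene graph onto prims like these, which is what lets edits from one tool be layered non-destructively over another tool’s output.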
Bentley Systems’ LumenRT for Nvidia Omniverse is also available now, enabling automatic synchronized changes to visualization workflows for infrastructure digital twins. Applications developed by SyncTwin are available as well.
Also available now is Aireal’s OmniStream, a web-embeddable and cloud-based extended reality digital twin platform that allows builders to give photorealistic 3D virtual tours to their buyers. Aireal’s Spaces, a visualization tool that enables automatic generation of home interior design, is coming soon.
Run Omniverse everywhere
Nvidia also introduced systems and services making Omniverse more powerful and easier to access. Next-generation Nvidia RTX workstations are powered by Nvidia Ada Lovelace GPUs, Nvidia ConnectX-6 Dx SmartNICs and Intel Xeon processors.
The new workstations bring real-time ray-traced subsurface-scattering shaders, which have been a dream of computer graphics for 35 years, Kerris said.
“The quality is really going to blow some minds,” he said.
The newly announced RTX 5000 Ada generation laptop GPU enables professionals to access Omniverse and industrial metaverse workloads in the office, at home or on the go.
Nvidia also introduced the third generation of OVX, a computing system for large-scale digital twins running within Nvidia Omniverse Enterprise, powered by Nvidia L40 GPUs and BlueField-3 DPUs.
Omniverse Cloud will be available to global automotive companies, enabling them to realize digitalization across their industrial lifecycles from start to finish. Microsoft Azure is the first global cloud service provider to deploy the platform-as-a-service.
In his GTC keynote, Huang showcased how Lucid Motors is tapping Omniverse and USD workflows to enable automotive digitalization projects. He also highlighted BMW Group’s use of Omniverse to build and deploy its upcoming electric vehicle factory in Debrecen, Hungary.
“We’ve really honed in on the automotive space because of the natural fit that we’re having there with them. And each of the auto companies is very similar in its workflows, but unique in what it does,” Kerris said. “We’re really seeing the adoption in the auto industry. I think it won’t be long before every auto manufacturer will have Omniverse somewhere in their workflow.”
Core updates coming to Omniverse
Huang also gave a preview of the next Omniverse release coming this spring, which includes updates to Omniverse apps that enable developers and enterprise customers to build on foundation applications to suit their specific workflows.
These include Nvidia USD Composer (formerly Omniverse Create) — a customizable foundation application for designers and creators to assemble large-scale, USD-based datasets and compose industrial virtual worlds.
Another update is Nvidia USD Presenter (formerly Omniverse View), a customizable reference application for showcasing and reviewing USD projects interactively and collaboratively.
And Nvidia is also launching Nvidia USD-GDN Publisher — a suite of cloud services that enables developers and service providers to easily build, publish and stream advanced, interactive, USD-based 3D experiences to nearly any device in any location.
Nvidia is also promising an improved developer experience. The new public extension registry enables users to receive automated updates to extensions. New configurator templates and workflows, as well as an Nvidia Warp kernel node for OmniGraph, will enable zero-friction developer workflows for GPU-based coding.
Next-level rendering and materials — Omniverse is offering for the first time a real-time, ray-traced subsurface-scattering shader, enabling unprecedented realism in skin for digital humans. The latest update to Universal Material Mapper lets users seamlessly bring in material libraries from third-party applications, preserving the material structure and full editing capability.
Overall, Nvidia is also promising groundbreaking performance. In a major development for large-scene performance, USD’s runtime data transfer technology provides an efficient method to store and move runtime data between modules. The scene optimizer lets users run optimizations at the USD level, converting large scenes into more lightweight representations for improved interactivity.
The next Omniverse release will also add AI training capabilities: automatic domain randomization and population-based training make complex training workflows significantly easier for autonomous robotics development.
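To make the idea concrete, here is a minimal sketch of automatic domain randomization in plain Python. It illustrates the concept only, not the Omniverse API: each round samples scene parameters from ranges whose width grows automatically as the policy succeeds, exposing a robot policy to progressively harder variation. All parameter names and thresholds are hypothetical.

```python
import random

def sample_episode_params(difficulty):
    """Sample randomized scene parameters; ranges widen with difficulty (0..1)."""
    return {
        "light_intensity": random.uniform(1.0 - 0.5 * difficulty, 1.0 + 0.5 * difficulty),
        "object_mass_kg": random.uniform(1.0, 1.0 + 4.0 * difficulty),
        "camera_jitter_m": random.uniform(0.0, 0.05 * difficulty),
    }

def update_difficulty(difficulty, success_rate, target=0.8, step=0.05):
    """Widen the randomization ranges once the policy succeeds reliably."""
    if success_rate >= target:
        return min(1.0, difficulty + step)
    return max(0.0, difficulty - step)

difficulty = 0.0
for _ in range(3):
    params = sample_episode_params(difficulty)
    # ... run simulated episodes with `params` and measure success_rate ...
    success_rate = 0.9  # placeholder result for illustration
    difficulty = update_difficulty(difficulty, success_rate)
print(round(difficulty, 2))  # 0.15
```

The automatic part is the feedback loop: the curriculum widens itself, so no one has to hand-tune how much variation the simulator throws at the policy.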
And it will accommodate generative AI: a new text-to-materials extension lets users automatically generate high-quality materials solely from a text prompt, and updates within Omniverse also add text-to-code generation tools. Additionally, updates to the Audio2Face app include headless mode, a REST application programming interface, improved lip-sync quality and more robust multi-language support, including for Mandarin.
Developers can also use AI-generated inputs from technology such as ChatGPT to provide data to Omniverse extensions like Camera Studio, which generates and customizes cameras in Omniverse using data created in ChatGPT.
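As a sketch of how such ChatGPT-driven inputs could be consumed, the pattern is to validate the model’s structured output before it touches the scene. Camera Studio’s actual interface is not documented here, so the JSON fields and sanity checks below are hypothetical:

```python
import json

# Hypothetical example of an LLM response describing a camera; a real
# workflow would prompt ChatGPT to emit JSON in an agreed-upon shape.
RAW_LLM_OUTPUT = """
{"name": "hero_shot", "focal_length_mm": 35.0,
 "position": [0.0, 1.6, 5.0], "look_at": [0.0, 1.0, 0.0]}
"""

def parse_camera_spec(raw):
    """Parse and sanity-check an LLM-generated camera description."""
    spec = json.loads(raw)
    assert isinstance(spec["name"], str)
    assert 10.0 <= spec["focal_length_mm"] <= 300.0, "implausible focal length"
    assert len(spec["position"]) == 3 and len(spec["look_at"]) == 3
    return spec

camera = parse_camera_spec(RAW_LLM_OUTPUT)
print(camera["name"])  # hero_shot
```

Gating free-form model output behind checks like these is what makes it safe to hand over to a scene-authoring extension.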