Nvidia today unveiled Omniverse Avatar, a platform for generating immersive AI-driven avatars. The platform enables users to leverage speech AI, computer vision, natural language understanding, and simulation to create avatars that recognize speech and communicate with human users within real-world simulation and collaboration platforms like Nvidia Omniverse, and other digital worlds.
Avatars and AI assistants created with the platform will be able to see, converse on a wide range of subjects, and handle customer service interactions, Nvidia said — from making personal appointments and reservations to ordering food from restaurants and completing banking transactions.
The release of Omniverse Avatar also gives marketers a tool for engaging customers in virtual worlds and simulation platforms like Nvidia Omniverse, where they can deploy avatars to deliver personalized customer service and improve customer satisfaction.
“The dawn of intelligent virtual assistants has arrived,” said Jensen Huang, founder and CEO of Nvidia. “Omniverse Avatar combines Nvidia’s foundational graphics, simulation, and AI technologies to make some of the most complex real-time applications ever created. The use cases of collaborative robots and virtual assistants are incredible and far-reaching.”
Nvidia enters the AI avatars race
With the announcement of Omniverse Avatar, Nvidia has entered the AI avatar arms race, competing against established digital assistant and avatar providers — including Deepbrain, Soul Machines, and AI Foundation — that are also trying to create engaging virtual characters.
However, Omniverse Avatar has the edge over many competitors due to its integration with the Nvidia Omniverse, filled with more than 70,000 individual creators. Now 700 companies, from BMW Group to Epigraph, Ericsson, and Sony Pictures Animation, have access to immersive AI avatars to drive digital experiences in the Omniverse.
At the same time, these avatars are well-positioned to provide meaningful interactions in the Omniverse because they draw on Megatron-Turing NLG 530B, a pretrained large language model that gives avatars the ability to recognize, understand, and generate human language.
The language model enables avatars to answer questions on a wide range of subjects, summarize complex stories into short formats, and recognize and translate speech in multiple languages — surfacing information that human users might not otherwise be able to access.
Immersive AI in action
In his keynote address at Nvidia’s GTC event, Huang highlighted the capabilities of Nvidia AI software and the Megatron-Turing NLG 530B language model in two demonstrations.
In the first demonstration, Huang held a real-time conversation with a digitized, toy version of himself, discussing topics from health care diagnosis to climate science.
Then, Huang demoed a customer-service avatar working in a restaurant kiosk that could see and communicate with two customers as they ordered vegetarian burgers, fries, and drinks.
Many other providers like Soul Machines have attempted to create humanlike avatars and struggled to mitigate the uncanny valley effect, where realistic avatars provoke a sense of uneasiness in users. Nvidia avoided this by embracing a lighthearted cartoonish aesthetic that’s unlikely to unsettle users in the Omniverse.
Yet, while the keynote demos look promising, it remains to be seen how engaging Nvidia Omniverse Avatars will be in real customer-facing environments in virtual worlds, interacting with human beings who have high expectations.