The universal scene description (USD) language and format are rapidly being adopted as a Rosetta stone for translating data among 3D tools, game engines, digital twins and ecommerce offerings. It still has a way to go, but it’s the best hope the industry has to unify workflows and user experiences across the metaverse. At the Nvidia GTC conference, experts weighed in on USD’s history, current use cases and drawbacks, and whether it could become the HTML of the 3D web.
When HTML was first introduced in 1993, it wasn’t a great thing. But it was the first serious effort to unify text, graphics and hyperlinks into a coherent interface, programming language and platform. It trumped the default approaches of the day, like Gopher and proprietary bulletin board systems with funky fonts and poor layouts. And it was extensible across servers everywhere.
USD is in the same position today. It isn’t great at everything, but it does the best job among dozens of alternatives at sharing 3D assets across tools, services and platforms in an open and extensible way. USD is already helping companies like Ubisoft and Ikea simplify some 3D workflows, and it is seeing traction with rendering engines like Unity and Unreal. That said, it still has limitations in rendering, in cutting and pasting models between worlds, in materials and physics, and in more sophisticated workflows.
Born at Pixar
It is helpful to go back to the birth of USD to understand why it emerged and how early design decisions shaped its current state. Steve May, vice president and CTO of Pixar, saw the predecessors to USD when the company was starting to make Toy Story. The team faced problems describing the scenes, lighting, cameras and other assets required to simplify workflows across large teams of artists and technicians.
In the late 1990s, they experimented with concepts like referencing, layers and hierarchies, which form the basis for scaling production. In the mid-2000s, Pixar started incorporating these techniques into a new animation tool called Presto. Brave was the first film to use Presto.
They discovered that the description language in Presto was expressive, but the performance was not there for the artist. “Brave was pretty complex, and even loading scenes into Presto was challenging,” May explained.
Pixar then created another form of scene description optimized for performance, and eventually decided to combine the two. That combination was the genesis of USD as it exists today. “At its core, it’s about how you describe complex scenes and worlds and how you let many people collaborate,” May said.
USD describes the essential pieces of an environment, set, world or prop. It helps designers characterize each prop, shape, material, light, and camera view used to describe the scene. When building large, complex worlds, the key is breaking the problem into pieces that can be represented as layers. This allows multiple artists to work on the same scene without overwriting each other’s work. With layers, an artist can pose a character while, simultaneously, a set dresser moves objects around the scene to improve the composition of the background set. At the end, they composite those layers to see the result.
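The layering May describes maps directly onto USD’s human-readable .usda form. Here is a minimal sketch (the file names and prim paths are hypothetical): a shot file sublayers the animator’s and set dresser’s files, with layers listed first being stronger, and adds its own non-destructive override on top.

```usda
#usda 1.0
(
    subLayers = [
        @animation.usda@,
        @set_dressing.usda@
    ]
)

over "Set"
{
    over "Lamp"
    {
        # A local opinion in this layer: nudge the lamp without
        # editing either artist's file. Stronger layers win when
        # the composed stage is resolved.
        double3 xformOp:translate = (0.5, 0, 1.2)
    }
}
```

When the stage is opened, USD composites all three layers into one scene graph; deleting this file restores exactly what the two artists authored.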
Guido Quaroni, senior director of engineering of 3D & immersive at Adobe, who also worked at Pixar in those early days, said they decided to open-source USD for strategic reasons. They wanted to build traction among the various tools Pixar used to reduce the risk of having to reengineer the Pixar pipeline to support new open-source alternatives like Alembic in the future. The fact that it took off also meant they did not need to invest internal resources to write integration plugins for DCC tools like Maya, Katana, and Houdini.
Getting into details at Ubisoft
USD is starting to gain traction for enterprises in other industries. For example, Ubisoft is beginning to use USD to simplify 3D asset exchange and integration across tools. Adeline Aubame, director of technology operations, Ubisoft, said, “We were interested in representing 3D data in a vast ecosystem while remaining performant.”
At this first stage, USD simplifies import/export between DCC tools and rendering engines. This allows Ubisoft to hire talented artists who may not be experts in Ubisoft’s existing tools.
However, her team is already running into places where USD lacks features they need for making video games. For example, level-of-detail (LOD) is a popular rendering technique that selectively adjusts the resolution of different parts of a scene, focusing computing horsepower where it matters for gameplay. It is also essential to find ways for LOD to transition gradually rather than abruptly popping into focus.
Aubame hopes the game development community can find ways to add LOD support to USD workflows.
“The great thing of an open format is that the power of many could help solve these problems,” she said.
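USD does not define an LOD system today, but its variant sets are one way teams approximate discrete LOD switching while the community works on something richer. A hypothetical sketch (asset paths invented for illustration):

```usda
#usda 1.0

def Xform "Bookcase" (
    prepend variantSets = "LOD"
    variants = {
        string LOD = "high"
    }
)
{
    variantSet "LOD" = {
        "high" {
            def Mesh "Geo" (
                references = @bookcase_high.usda@</Geo>
            )
            {
            }
        }
        "low" {
            def Mesh "Geo" (
                references = @bookcase_low.usda@</Geo>
            )
            {
            }
        }
    }
}
```

A runtime can flip the `LOD` variant selection per prim, but the switch is a hard swap, which is exactly the “pop” problem that gradual transitions would solve.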
Ikea’s rendering challenge
Martin Enthed, innovation manager at Ikea, has spearheaded efforts to use offline 3D rendering in place of photography for various marketing collateral. He fell in love with USD when he first saw it in 2016. Features like external references, overrides, and layering are all perfect for Ikea’s production pipeline.
“But the problem we have is that it has not been adopted by the tools we are using,” Enthed said.
Ikea has, of course, been experimenting with it, testing how to store things in a 3D database and then push them back out. Enthed’s interest was piqued last year as USD picked up steam across content tool vendors.
For now, Ikea is still focusing on offline rendering use cases, generating content for a catalog or an image carousel on a web page. Enthed would like to explore more real-time and metaverse use cases, but USD does not yet travel well across rendering engines. “Our main problem right now is the interoperability between renderers,” he said.
One big challenge is variation in the material and surface description tools across vendors. When Ikea generates 3D content for something like a bookshelf or coffee table, it needs to ensure the surface looks realistic and consistent whether someone views it on a PlayStation, PC, mobile device, or Quest headset, each with a different rendering engine.
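One partial answer that already exists in USD is UsdPreviewSurface, a deliberately simple, portable PBR shading model that compliant renderers are expected to interpret consistently. A minimal sketch of a hypothetical veneer material:

```usda
def Material "OakVeneer"
{
    token outputs:surface.connect = </OakVeneer/Shader.outputs:surface>

    def Shader "Shader"
    {
        # UsdPreviewSurface is USD's built-in fallback shading model.
        uniform token info:id = "UsdPreviewSurface"
        color3f inputs:diffuseColor = (0.55, 0.42, 0.28)
        float inputs:roughness = 0.6
        float inputs:metallic = 0.0
        token outputs:surface
    }
}
```

It covers only basic PBR parameters, so richer looks still depend on renderer-specific shaders, which is precisely the gap Enthed is describing.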
Enthed is skittish about investing too many resources in digitizing the entire product range, since most Ikea products have a long lifetime. Some current products date back to 1978. “I need to make a 3D asset today that can be hopefully loaded in 10 years,” Enthed said. “If we talk about HTML for 3D worlds, it needs to be possible to load in any 3D browser engine and look the same.”
Planting seeds for the creatorverse
Unity Editor was one of the first DCC tools to support USD, in 2017. This helped the virtual production of The Lion King exchange content between Maya and Unity to generate 3D animation clips. Lately, Unity has been focusing on improving USD rendering performance, said Natalya Tatarchuk, distinguished technical fellow and chief architect of professional artistry & graphics innovation at Unity.
One big challenge is improving rendering support across devices as varied as PlayStations, PCs, and mobile devices. Her team is also exploring different ways to standardize surface material formats that look good across devices.
Tatarchuk said, “We need to enable people to author once but have that data durably flow across the diverse divergent render backends in a scalable way.” Her team is working with partners like Pixar, Nvidia, and Adobe to address these challenges.
They are also expanding USD support to other tools, like SpeedTree for organic rendering and Ziva for faces. A big frustration is workflow dead ends, in which USD works for some phases of the 3D development lifecycle but must be manually patched for others. These gaps are common across all the main DCCs, such as Maya, Blender, and Houdini, and experience engines such as Unity and Unreal. All the major vendors will have to work together on USD to bridge these gaps. In some cases, this may require a leap of faith. “Without that leap of faith, we will always be waiting,” she said.
Down the road, she hopes this could help position the industry for what she calls the 3D “creatorverse.” This would mirror the kind of 2D sharing, and the explosion of users, built on apps like YouTube, Snapchat, Instagram, and TikTok.
These tools make it easy to grab an image or video, transform it, and then share it with friends. “This is impossible to do in the creatorverse of real-time 3D,” she said.
Another limitation is the lack of standards for describing interactivity. For example, how does a content creator describe how to interact with content and how it flows across time? The industry also needs standards for describing procedurals that characterize elements like rigging, as well as standardized animation curves.
“This is super important, but no one is willing to compromise,” she said. “Everybody needs to come together and find some middle ground for some of these complicated aspects. We can find choices that may be imperfect in some contexts. Some ground is given, and some ground is gained because we need to be able to solve the missing pieces.”
It is not possible to cut and paste a 3D character across applications today. The industry will need to agree on standards for procedurals, engine components, virtual cameras, lights, and controlling gameplay and behavior.
Marc Petit, general manager for Unreal Engine at Epic Games, sees hope in newer standards like glTF, which is a good solution for authoring and transporting 3D content. He also believes it’s essential to be able to drag 3D objects across worlds, such as allowing players to drive cars from Minecraft into Fortnite.
In his role at Adobe, Quaroni wants to make it easier to share 3D content across various Adobe tools and services. “Ideally, they should be able to copy and paste between each other, but this is not easy because architecturally, there are different ways they represent the data,” he said.
As a first step, he is working on improving lossless interoperability. Down the road, Quaroni is exploring how USD might be used to change the way people think about managing documents and files. He is also exploring how this approach could improve interoperability across 3D tools.
Quaroni explained, “We are in the creatorverse space, creating assets for the metaverse. We need to assume it is not just Adobe’s tools. Let’s start thinking that way rather than trying to make everyone use our tools only. Ultimately the circle will come back to support our tools in this model.”
Moving into digital twins
Nvidia catalyzed recent interest in USD as it began standardizing support across its entire toolchain. Frank DeLise, vice president at Nvidia, said USD adoption started with an internal problem: improving collaboration among humans, tools and AI to enable new digital twin use cases like simulating roads and factories. “We realized we needed an open standard to build these worlds on,” he said.
None of the tools for new use cases like autonomous vehicles, robots and large virtual worlds could talk to each other. It was essential to allow anyone to contribute to these worlds and move assets across different views, whether rendered in Unreal, Unity, or web views.
The USD format is picking up steam for exchanging data, but the USD runtime side of the equation is still missing. Nvidia is working with other leaders to figure out how to build a runtime version of USD to improve performance. “In the near term, many of these engines and viewers will have their own representations,” DeLise said.
With the right architecture and industry collaboration, USD could spark the same growth in the metaverse that HTML initiated for the web. For example, Nvidia is gradually exploring ways to add functionality through microservices and connections to other tools. Nvidia has started open-sourcing various USD schemas, such as PhysX for physics simulation and material definition language (MDL) for describing surfaces.
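Schemas like these extend USD by applying extra property namespaces to existing prims. A hedged sketch using names from the open UsdPhysics schema (the prim itself is hypothetical):

```usda
def Mesh "Crate" (
    prepend apiSchemas = ["PhysicsRigidBodyAPI", "PhysicsMassAPI"]
)
{
    # Properties contributed by the applied physics schemas;
    # renderers that don't understand them simply ignore them.
    bool physics:rigidBodyEnabled = true
    float physics:mass = 12.5
}
```

Because unrecognized schemas are ignored rather than rejected, a DCC tool without a physics engine can still load, edit, and round-trip the same file.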
This also helps rethink how connections to other tools are developed. For example, Nvidia can bring in AI tools through USD microservice connections. DeLise believes the 3D graphics industry is still early in the journey of connecting all the tools to support features that have become common on the web.
“I think USD will be a great way to start describing that world, but there will be a lot of work to figure out to do those behaviors, microservices, and connections to get to that level,” DeLise said.