Nvidia is using the underlying architecture of its Drive autonomous vehicle platform to enable product development in other verticals with AI systems that deal with vast amounts of critical data, such as video surveillance, robotics, and health care.
Called Project Maglev, the initiative to transfer the data framework to industries beyond autonomous vehicles started roughly 18 months ago, Nvidia VP of AI infrastructure Clément Farabet told VentureBeat in a phone interview.
“It’s [Maglev] being used to support other applications we have, mostly around medical imaging, and health care,” he said. “Another big effort around video surveillance for smart cities, and these other product teams are also building their AIs on top of the same platform.”
To handle such large amounts of data, Maglev uses semi-autonomous methods to collect and label data in order to scale data-intensive initiatives. The platform is currently used only internally at Nvidia.
Farabet also spelled out the details of Project Maglev today at Facebook's @Scale conference, an event about deploying technology rapidly and at scale being held this week in San Jose.
The Drive platform provides end-to-end services for autonomous driving initiatives. It takes in a range of inputs from radar, lidar, and camera-based vision systems and processes them on Nvidia's Xavier and Pegasus hardware for autonomous vehicles.
Each car involved with the Drive platform can produce up to a petabyte of data every week from sensors inside and outside the vehicle. Nearly 1,500 people are involved in the data-labeling process for the Drive platform, labeling 20 million objects a month. Maglev uses Drive's data architecture to coordinate actions between a number of neural networks and manage the "complete and total explosion of test cases" encountered when trying to build a safe and reliable autonomous driving system.
“A lot of the base infrastructure to manipulate or manage large-scale datasets, get them prioritized for labeling, get them labeled, push the results into training and testing — these things are quite agnostic and common to anyone developing AI applications,” he said. “What self-driving cars really brought to this project is pushing us to solve that problem not just for benefits of, say, 10 terabytes but push that all the way up to hundreds of petabytes. So we believe that it’s really going to be Nvidia’s solution to scale to essentially help solve large problems for AI.”
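The pipeline Farabet describes — manage a large dataset, prioritize samples for labeling, label them, then push the results into training and testing — can be sketched in miniature. This is purely illustrative, not Nvidia's actual Maglev code; all names (`Sample`, `prioritize_for_labeling`, and so on) are hypothetical, and the prioritization step here is a simple uncertainty heuristic standing in for whatever semi-autonomous selection Maglev performs at petabyte scale.

```python
# Toy sketch of a data-curation pipeline: prioritize -> label -> train/test.
# All class and function names are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sample:
    sample_id: int
    model_confidence: float        # current model's confidence on this sample
    label: Optional[str] = None    # filled in by a (human or automated) labeler

def prioritize_for_labeling(samples, budget):
    """Pick the samples the current model is least confident about."""
    return sorted(samples, key=lambda s: s.model_confidence)[:budget]

def label(samples):
    """Stand-in for the human / semi-automated labeling step."""
    for s in samples:
        s.label = "car"  # placeholder label
    return samples

def split_for_training(samples, holdout_ratio=0.2):
    """Push labeled results into training and testing pools."""
    labeled = [s for s in samples if s.label is not None]
    n_test = int(len(labeled) * holdout_ratio)
    return labeled[n_test:], labeled[:n_test]

if __name__ == "__main__":
    pool = [Sample(i, model_confidence=i / 10) for i in range(10)]
    to_label = prioritize_for_labeling(pool, budget=5)  # 5 least-confident samples
    label(to_label)
    train, test = split_for_training(pool)
    print(len(to_label), len(train), len(test))  # 5 4 1
```

The design point mirrors the quote: the selection, labeling, and train/test-split steps are agnostic to the application, so the same skeleton serves self-driving, medical imaging, or video surveillance — only the data and the labelers change.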
These Maglev details emerge less than a day after the start of Nvidia's GTC conference in Japan, where CEO Jensen Huang unveiled the TensorRT Hyperscale inference platform and the Tesla T4 GPU, both built specifically for AI model inference.
Nvidia also debuted its Clara platform for medical hardware and software, as well as robotics partnerships with companies like Canon and Yamaha.
The new inference engine and T4 have also been used to drive inference for Maglev initiatives, Farabet said.