Nvidia today shed light on its expanded collaboration with Mercedes-Benz, first revealed last January, to roll out an in-vehicle computing system and AI infrastructure starting in 2024. The two companies say the platform will launch across the fleet of next-generation Mercedes-Benz vehicles, imbuing those vehicles with upgradable automated driving functions.
The efforts build on a longstanding collaboration between Nvidia and Mercedes. At the 2018 Consumer Electronics Show, the companies showcased a concept cockpit dubbed the Mercedes-Benz User Experience, which infused AI into car infotainment systems. And in July 2018, Nvidia, Mercedes, and Bosch announced a partnership to operate a robo-taxi service in San Jose.
A headlining feature of the forthcoming Nvidia-designed system for Mercedes vehicles, which will be based on the former’s Drive product, is the ability to automate driving of regular routes from address to address. In addition, the platform will allow customers to download in-car safety, convenience, entertainment, and subscription apps and services via an over-the-air in-car system akin to Tesla’s.
Nvidia’s Drive AGX Orin will power the new platform. It slots alongside Nvidia’s existing Drive AGX platforms, Drive AGX Xavier and Drive AGX Pegasus, and it’s architected to run a large number of apps and AI models while meeting safety standards such as ISO 26262 ASIL-D. At the heart of Orin is a 17-billion-transistor system-on-chip that integrates Nvidia’s next-generation GPU architecture and Arm Hercules CPU cores, complemented by AI and machine learning accelerator cores that deliver 200 trillion operations per second (TOPS), compared with Pegasus’ 320 TOPS and Xavier’s 30 TOPS. All told, Orin can handle over 200Gbps of data while consuming only 60 to 70 watts of power (at 200 TOPS).
The Nvidia-Mercedes platform will also benefit from access to the models at the core of Drive. Nvidia plans to make available AI subsystems tailored to tasks like traffic light and sign recognition, detection of vehicles and pedestrians, path perception, and gaze detection and gesture recognition. One model recently spotlighted on the company’s blog automatically generates control outputs for cars’ high beams using signals derived from road conditions.
Each Drive model can be customized and enhanced with Nvidia’s newly released suite of tools, which enable training using a range of machine learning development techniques. There’s active learning, for example, which improves accuracy and reduces data collection costs by automating data selection using AI; federated learning, which enables the use of data sets across countries and with other parties while maintaining data privacy; and transfer learning, which leverages pretraining and fine-tuning to develop models for specific apps and capabilities.
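The active learning idea above, letting a model's own uncertainty decide which raw samples are worth the cost of human labeling, can be sketched in a few lines. This is a generic, entropy-based illustration of the technique, not Nvidia's actual tooling; the function names and the top-k selection strategy are assumptions chosen for clarity:

```python
import math

def entropy(probs):
    """Shannon entropy of one sample's class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(pool_probs, k):
    """Rank an unlabeled pool by the current model's uncertainty and
    return the indices of the k most uncertain samples, i.e. the ones
    whose labels would teach the model the most."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:k]

# Hypothetical model outputs for three unlabeled samples:
pool = [
    [0.98, 0.01, 0.01],  # model is nearly sure -> low labeling value
    [0.34, 0.33, 0.33],  # model is guessing -> high labeling value
    [0.70, 0.20, 0.10],  # somewhere in between
]
print(select_for_labeling(pool, 2))  # → [1, 2]
```

Selecting only the most ambiguous samples is what lets active learning cut data collection and annotation costs: the confident predictions, which dominate real driving footage, never reach a human labeler.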
Nvidia and Mercedes intend to jointly develop the AI and automated vehicle applications capable of level 2 and 3 self-driving, as well as automated parking functions up to level 4. According to the Society of Automotive Engineers, level 2 entails systems that control steering and speed but require drivers to supervise and be prepared to intervene at any time, while level 3 allows drivers to turn their attention away from driving tasks under limited conditions, and level 4 requires no driver attention for safety within the system's operational domain.
The reveal of Nvidia and Mercedes’ self-driving platform comes after Ford unveiled an autonomous driving system to rival Tesla’s Autopilot. First available on the Mach-E and followed by other models in Ford’s 2021 lineup, notably the all-new F-150, Active Drive Assist can control vehicle speed and steering through cameras and radar on pre-mapped roads. Meanwhile, GM recently pledged to expand its semi-autonomous highway assist system, Super Cruise, to 22 vehicles by 2023, including 10 by next year.