Nvidia is bringing its Drive AV software with enhanced Level 2 point-to-point driver assistance capabilities to US roads, expected by the end of this year, starting with the all-new Mercedes-Benz CLA. The CLA is the German OEM’s first vehicle featuring the MB.OS platform, which introduces advanced driver assistance features powered by Nvidia’s full-stack Drive AV software, AI infrastructure and accelerated compute.
This design may enable over-the-air delivery of future upgrades and new features, including planned enhancements to MB.Drive Assist Pro, which may be offered ex-factory and through the Mercedes-Benz store.
The CLA recently achieved a five-star European New Car Assessment Programme (Euro NCAP) safety rating. The performance of MB.Drive active safety features in accident mitigation and avoidance contributed to this top safety score.
“As the automotive industry embraces physical AI, Nvidia is the intelligence backbone that makes every vehicle programmable, updatable and perpetually improving through data and software,” said Ali Kani, vice president of automotive at Nvidia. “Starting with Mercedes-Benz and its incredible new CLA, we’re celebrating a stunning achievement in safety, design, engineering and AI-powered driving that will turn every car into a living, learning machine.”
Dual-stack architecture
Nvidia Drive AV uses an AI end-to-end stack for core driving, alongside a parallel classical safety stack – built on the Nvidia Halos safety system – that adds redundancy and safety guardrails. As a result, vehicles can learn from vast amounts of real and synthetic driving data to assist in safely navigating complex environments and scenarios with humanlike decision-making.
For consumers, this means greater confidence and comfort, knowing that built-in redundancy and fail-safe checks are designed to help support a safe, secure journey. Halos ensures the vehicle operates within defined safety parameters.
This unified architecture enables advanced Level 2 automated driving capabilities with expanded functionality, including point-to-point urban navigation through complex city environments, advanced active safety with proactive collision avoidance and automated parking in tight spaces. In addition, it allows for cooperative steering between the system and the driver.
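To make the division of labour concrete, the sketch below shows one way an arbitration between a learned planner and a rule-based guardrail layer can be wired up. It is a minimal Python illustration: the class names, speed and clearance thresholds and fallback behaviour are assumptions, not Nvidia Drive AV or Halos APIs.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative sketch only; these classes and thresholds are hypothetical and
# are not Nvidia Drive AV or Halos APIs.
Waypoint = Tuple[float, float, float]   # (x, y, target speed in m/s)
Obstacle = Tuple[float, float]          # (x, y)

@dataclass
class Trajectory:
    waypoints: List[Waypoint]

def distance(wp: Waypoint, ob: Obstacle) -> float:
    return ((wp[0] - ob[0]) ** 2 + (wp[1] - ob[1]) ** 2) ** 0.5

class EndToEndPolicy:
    """Stand-in for the learned driving stack: sensor state in, trajectory out."""
    def propose(self, ego_xy: Obstacle) -> Trajectory:
        x, y = ego_xy
        # Placeholder behaviour: continue straight ahead at roughly 50 km/h.
        return Trajectory([(x, y + 2.0 * i, 13.9) for i in range(1, 11)])

class SafetyGuardrails:
    """Stand-in for the parallel, rule-based safety stack."""
    MAX_SPEED = 13.9      # m/s, assumed urban limit for this sketch
    MIN_CLEARANCE = 2.0   # metres to the nearest detected obstacle

    def approves(self, traj: Trajectory, obstacles: List[Obstacle]) -> bool:
        return all(
            wp[2] <= self.MAX_SPEED
            and all(distance(wp, ob) >= self.MIN_CLEARANCE for ob in obstacles)
            for wp in traj.waypoints
        )

    def fallback(self, ego_xy: Obstacle) -> Trajectory:
        # Conservative behaviour: slow to a stop in the current lane.
        x, y = ego_xy
        return Trajectory([(x, y + 0.5 * i, 0.0) for i in range(1, 5)])

def plan_step(policy: EndToEndPolicy, guardrails: SafetyGuardrails,
              ego_xy: Obstacle, obstacles: List[Obstacle]) -> Trajectory:
    proposal = policy.propose(ego_xy)
    if guardrails.approves(proposal, obstacles):
        return proposal                     # learned trajectory passes every check
    return guardrails.fallback(ego_xy)      # redundant safety path takes over

if __name__ == "__main__":
    trajectory = plan_step(EndToEndPolicy(), SafetyGuardrails(),
                           ego_xy=(0.0, 0.0), obstacles=[(0.0, 6.0)])
    print(trajectory.waypoints)
```

Keeping the guardrail layer independent of the learned policy is what provides the redundancy and fail-safe behaviour described above.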
Human-like urban driving with end-to-end AI models
Nvidia deep learning models power a new generation of AI-assisted urban driving systems. These models:
- Interpret traffic holistically, allowing vehicles to navigate intelligently through lane selection, turns and route-following in congested or unfamiliar areas
- Understand vulnerable road users (pedestrians, cyclists, scooter riders) and respond proactively, such as by yielding, nudging or stopping, to prevent collisions
- Assist drivers during trips to navigate safely from any address to any address, for example from home to work
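As a rough illustration of what an end-to-end model in this setting looks like, the toy PyTorch sketch below maps multi-camera images and a high-level route command directly to planned waypoints in a single network. The architecture, camera count, command vocabulary and dimensions are assumptions made for illustration, not Nvidia’s production model.

```python
import torch
import torch.nn as nn

# Conceptual sketch of an end-to-end driving model; everything here is an
# illustrative assumption, not Nvidia's actual network.
class EndToEndDrivingModel(nn.Module):
    def __init__(self, num_cameras: int = 6, horizon: int = 10):
        super().__init__()
        # Per-camera image encoder (a tiny CNN stand-in for a real backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Route command (e.g. straight, left, right, lane change) as an embedding.
        self.route_embed = nn.Embedding(num_embeddings=4, embedding_dim=16)
        # Fuse camera features and route intent, then decode future waypoints.
        self.head = nn.Sequential(
            nn.Linear(num_cameras * 64 + 16, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),   # (x, y) offset per future step
        )
        self.horizon = horizon

    def forward(self, images: torch.Tensor, route_cmd: torch.Tensor) -> torch.Tensor:
        # images: (batch, cameras, 3, H, W); route_cmd: (batch,) integer commands
        b = images.shape[0]
        feats = self.encoder(images.flatten(0, 1)).reshape(b, -1)
        fused = torch.cat([feats, self.route_embed(route_cmd)], dim=-1)
        return self.head(fused).reshape(b, self.horizon, 2)   # planned waypoints

# Example: one sample with six 128x128 camera views and a "go straight" command.
model = EndToEndDrivingModel()
waypoints = model(torch.randn(1, 6, 3, 128, 128), torch.tensor([0]))
print(waypoints.shape)   # torch.Size([1, 10, 2])
```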
Nvidia and Mercedes-Benz are also transforming vehicle manufacturing through a digital-first approach using Nvidia Omniverse libraries.
Using digital twins of factories and assembly lines, engineers can design, plan and optimize operations virtually, reducing downtime and accelerating iteration. Omniverse and the Nvidia Cosmos platform let developers test and validate intelligent driving software in simulated environments before real-world deployment.
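What makes simulation valuable here is that one recorded event can be expanded into many synthetic variations and replayed safely before any road testing. The sketch below illustrates that idea with a deliberately simplified scenario and braking model; it does not use Omniverse or Cosmos APIs, and all fields, numbers and thresholds are assumptions.

```python
import random
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Scenario:
    ego_speed: float          # m/s when the pedestrian steps into the road
    crossing_distance: float  # metres between ego vehicle and crossing point
    road_friction: float      # 0..1, e.g. wet versus dry asphalt

def generate_variations(base: Scenario, n: int, seed: int = 0) -> Iterator[Scenario]:
    """Perturb one recorded real-world event into many synthetic edge cases."""
    rng = random.Random(seed)
    for _ in range(n):
        yield Scenario(
            ego_speed=base.ego_speed * rng.uniform(0.8, 1.2),
            crossing_distance=base.crossing_distance * rng.uniform(0.5, 1.5),
            road_friction=min(1.0, base.road_friction * rng.uniform(0.6, 1.2)),
        )

def minimum_gap(s: Scenario) -> float:
    """Stand-in physics: distance left between the ego vehicle and the
    pedestrian after full braking (simple constant-deceleration model)."""
    deceleration = 7.0 * s.road_friction             # assumed peak braking, m/s^2
    stopping_distance = s.ego_speed ** 2 / (2 * deceleration)
    return s.crossing_distance - stopping_distance

def validate(base: Scenario, n: int = 1000, required_gap: float = 1.0) -> float:
    """Fraction of synthetic variations in which the vehicle stops in time."""
    passed = sum(minimum_gap(s) >= required_gap for s in generate_variations(base, n))
    return passed / n

if __name__ == "__main__":
    recorded = Scenario(ego_speed=13.9, crossing_distance=25.0, road_friction=0.9)
    print(f"Pass rate across synthetic variations: {validate(recorded):.1%}")
```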
Cloud-to-car development
All Nvidia-powered intelligent driving systems share a cloud-to-car development pipeline designed to transform real-world data into billions of simulated miles, helping to accelerate improvement:
- Training infrastructure: Nvidia DGX systems use massive-scale GPU computing to train Drive AV foundation models on diverse, global datasets. These models capture human driving behavior across millions of real-world scenarios.
- Simulation and validation: Nvidia Omniverse and Cosmos simulation environments enable physically accurate testing and scenario generation. Developers can validate new features in thousands of edge cases before deployment, transforming data from real-world miles into billions of virtual ones.
- In-vehicle compute and Hyperion architecture: Nvidia Drive AGX accelerated compute processes perception, sensor fusion and decision-making in real time, handling complex urban and highway scenarios simultaneously. Drive Hyperion provides the compute and sensor architecture that adds sensor redundancy and diversity for a safe, highly advanced automated driving experience.
This closed-loop approach ensures rapid iteration on driving algorithms, delivers exceptional accuracy through massive-scale training, enables safety validation in edge cases that are rare or hazardous in real-world driving, and supports scalable deployment across multiple vehicle platforms.
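One turn of that closed loop could be organised roughly as in the sketch below, where every component is a hypothetical placeholder passed in as a callable rather than an actual Nvidia API:

```python
from typing import Callable, Dict, List

# Conceptual sketch of one iteration of the cloud-to-car closed loop described
# above. All components are hypothetical placeholders, not Nvidia product APIs.
def development_cycle(
    fleet_logs: List[dict],
    current_model: Dict[str, int],
    train: Callable[[Dict[str, int], List[dict]], Dict[str, int]],
    expand_edge_cases: Callable[[List[dict]], List[dict]],
    simulated_pass_rate: Callable[[Dict[str, int], List[dict]], float],
    deploy_ota: Callable[[Dict[str, int]], None],
    release_threshold: float = 0.995,
) -> Dict[str, int]:
    candidate = train(current_model, fleet_logs)      # training infrastructure
    scenarios = expand_edge_cases(fleet_logs)         # simulation and scenario generation
    if simulated_pass_rate(candidate, scenarios) >= release_threshold:
        deploy_ota(candidate)                         # over-the-air update to the vehicle
        return candidate
    return current_model   # keep the proven model; failed cases feed the next iteration

# Toy usage with stand-in components:
model = development_cycle(
    fleet_logs=[{"event": "pedestrian crossing"}],
    current_model={"version": 1},
    train=lambda m, logs: {"version": m["version"] + 1},
    expand_edge_cases=lambda logs: logs * 1000,
    simulated_pass_rate=lambda m, scenarios: 0.997,
    deploy_ota=lambda m: print(f"deploying v{m['version']} over the air"),
)
```

The gate on the simulated pass rate captures the idea above: a candidate model is deployed over the air only once it has been validated against large numbers of synthetic edge cases, and failed cases feed the next round of training.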
