Nvidia Unveils Alpamayo: Open‑Source AI Models for Autonomous Vehicles
Key Points
- Nvidia launched Alpamayo at CES 2026, offering open‑source AI models for autonomous vehicles.
- Alpamayo 1 is a 10 billion‑parameter vision‑language‑action model that reasons through complex driving scenarios.
- The model’s code is available on Hugging Face, allowing developers to fine‑tune and build custom tools.
- Cosmos generative world models create synthetic data to augment over 1,700 hours of real driving footage.
- AlpaSim, an open‑source simulation framework on GitHub, enables safe, large‑scale testing of autonomous systems.
- Developers can use Alpamayo for auto‑labeling, decision evaluation, and training of smaller, faster vehicle models.
At CES 2026, Nvidia announced Alpamayo, a new family of open‑source AI models, simulation tools, and datasets designed to give autonomous vehicles human‑like reasoning capabilities. Central to the suite is Alpamayo 1, a 10 billion‑parameter vision‑language‑action model that breaks down driving problems into steps, evaluates possibilities, and selects the safest actions. The code is released on Hugging Face, and developers can fine‑tune it, create auto‑labeling systems, or combine real and synthetic data generated by Nvidia’s Cosmos world models. An open dataset of more than 1,700 hours of driving footage and the AlpaSim simulation framework are also available to accelerate safe, large‑scale testing.
Launch at CES 2026
Nvidia introduced the Alpamayo family at the CES 2026 trade show, positioning it as a "ChatGPT moment for physical AI" that enables machines to understand, reason, and act in the real world. CEO Jensen Huang emphasized that Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments, and explain their decisions.
Alpamayo 1: A Reasoning Engine for Vehicles
The core of the offering is Alpamayo 1, a 10 billion‑parameter chain‑of‑thought reasoning vision‑language‑action (VLA) model. According to Nvidia’s vice president of automotive, Ali Kani, the model handles complex edge cases, such as navigating a traffic‑light outage at a busy intersection, by breaking the problem into steps, reasoning through each possibility, and selecting the safest path.
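The "break the problem into steps, weigh each option, pick the safest" pattern described above can be sketched in miniature. This is an illustrative stand-in, not Nvidia's implementation: the candidate actions, rationales, and risk scores below are hypothetical values a reasoning model might produce.

```python
"""Toy sketch of chain-of-thought action selection for driving.
All candidates, rationales, and risk numbers are hypothetical."""

from dataclasses import dataclass

@dataclass
class Candidate:
    action: str     # e.g. "proceed", "treat as four-way stop"
    rationale: str  # the model's step-by-step justification
    risk: float     # estimated risk; lower is safer

def select_safest(candidates: list[Candidate]) -> Candidate:
    # A VLA model would generate rationales and risk estimates itself;
    # here we simply choose the candidate with the lowest estimated risk.
    return min(candidates, key=lambda c: c.risk)

# Example scenario: a traffic-light outage at a busy intersection.
options = [
    Candidate("proceed", "signal is dark; cross traffic may not yield", 0.8),
    Candidate("treat as four-way stop", "standard rule for dark signals", 0.1),
    Candidate("stop mid-intersection", "blocks traffic, rear-end risk", 0.5),
]
best = select_safest(options)
print(best.action)  # "treat as four-way stop"
```

The point of the sketch is the decomposition: the model does not map pixels straight to a steering command, it enumerates options, justifies them, and only then commits to the safest one.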
Open‑Source Availability and Developer Flexibility
Alpamayo’s underlying code is publicly available on Hugging Face. Developers can fine‑tune the model into smaller, faster versions for specific vehicle development needs, use it to train simpler driving systems, or build tools on top of it, such as auto‑labeling systems that automatically tag video data or evaluators that assess whether a car made a smart decision.
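One of the developer workflows mentioned above, auto-labeling, can be sketched as a simple pipeline: a large reasoning model tags each video frame, and the tags are collected into a label map a downstream trainer could consume. The `big_model_tag` function below is a hypothetical stub standing in for a fine-tuned Alpamayo-style model; it is not a real API.

```python
"""Hypothetical auto-labeling sketch. `big_model_tag` stands in for
an expensive VLA model call; real code would run inference on pixels."""

def big_model_tag(frame: dict) -> list[str]:
    # Stub: tag from pre-extracted frame attributes instead of pixels.
    tags = []
    if frame.get("pedestrian_count", 0) > 0:
        tags.append("pedestrian")
    if frame.get("signal_state") == "dark":
        tags.append("signal-outage")
    return tags or ["clear"]

def auto_label(frames: list[dict]) -> dict[int, list[str]]:
    # Map frame index -> tags, the shape a training pipeline could consume.
    return {i: big_model_tag(f) for i, f in enumerate(frames)}

clips = [
    {"pedestrian_count": 2, "signal_state": "green"},
    {"pedestrian_count": 0, "signal_state": "dark"},
]
labels = auto_label(clips)
print(labels)  # {0: ['pedestrian'], 1: ['signal-outage']}
```

In practice the large model would be run offline over raw footage, and the resulting labels used to train the smaller, faster in-vehicle models the article describes.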
Cosmos Generative World Models and Synthetic Data
Nvidia also highlighted its Cosmos platform, a suite of generative world models that create realistic representations of physical environments. By generating synthetic data, developers can augment the more than 1,700 hours of real‑world driving data released as an open dataset, enabling training and testing on a broader range of scenarios.
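The augmentation idea, keeping all real footage and topping up rare scenarios with generated samples, can be sketched under assumed data shapes. The `synthesize` function below is a hypothetical stand-in for a Cosmos-style generator, not its actual interface.

```python
"""Sketch of mixing real and synthetic driving data. `synthesize` is a
hypothetical stand-in for a generative world model; a real one would
render sensor data, not a small dict."""

import random

def synthesize(scenario: str, seed: int) -> dict:
    # Stub generator: emit a record tagged as synthetic, with a seeded
    # random value standing in for scene variation.
    rng = random.Random(seed)
    return {"scenario": scenario, "synthetic": True, "variation": rng.random()}

def build_training_set(real: list[dict], rare_scenarios: list[str],
                       per_scenario: int) -> list[dict]:
    # Keep all real footage, then add synthetic samples for rare cases.
    synthetic = [synthesize(s, seed=i)
                 for s in rare_scenarios for i in range(per_scenario)]
    return real + synthetic

real_clips = [{"scenario": "highway", "synthetic": False}]
dataset = build_training_set(real_clips,
                             ["signal-outage", "debris-on-road"],
                             per_scenario=2)
print(len(dataset))  # 5
```

The design choice worth noting is that synthetic data supplements rather than replaces the 1,700+ hours of real footage: generation is targeted at scenarios too rare or dangerous to capture on real roads.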
AlpaSim: Open‑Source Simulation Framework
To support safe, large‑scale testing, Nvidia released AlpaSim, an open‑source simulation framework hosted on GitHub. AlpaSim recreates real‑world driving conditions—including sensor inputs and traffic dynamics—allowing developers to validate autonomous driving systems without risking physical hardware.
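The kind of closed-loop validation a framework like AlpaSim enables can be illustrated with a toy loop: the simulator feeds observations to a driving policy, applies the chosen action, and checks a safety invariant (no collision) at every step. Everything below is a simplified stand-in, not the AlpaSim API.

```python
"""Toy closed-loop simulation: policy reads observations, simulator
applies actions and checks for collisions. Not the AlpaSim API."""

def policy(obs: dict) -> str:
    # Stand-in driving policy: brake when an obstacle is near.
    return "brake" if obs["distance_to_obstacle"] < 10.0 else "cruise"

def simulate(steps: int, start_distance: float, speed: float) -> bool:
    """Run the loop; return True if no collision occurred."""
    distance = start_distance
    for _ in range(steps):
        action = policy({"distance_to_obstacle": distance})
        if action == "cruise":
            distance -= speed                # closing on the obstacle
        else:
            speed = max(0.0, speed - 2.0)    # braking sheds speed
            distance -= speed
        if distance <= 0:
            return False                     # collision: test failed
    return True

# A policy that brakes in time passes; one given no room to stop fails.
print(simulate(steps=20, start_distance=30.0, speed=3.0))  # True
print(simulate(steps=5, start_distance=3.0, speed=5.0))    # False
```

Running thousands of such scenarios, with realistic sensor models and traffic in place of these stubs, is what lets developers find failure cases before any physical hardware is at risk.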
Implications for the Autonomous‑Vehicle Ecosystem
The Alpamayo rollout provides the autonomous‑vehicle community with a comprehensive stack: a reasoning‑focused AI model, extensive real‑world and synthetic datasets, and a simulation environment for validation. By making these components open source, Nvidia aims to accelerate innovation, lower development costs, and improve safety across the industry.