Nvidia’s Vera Rubin microarchitecture is the next-generation standard for its chipsets, replacing the Blackwell platform.
Photo Credit: Nvidia
Nvidia’s Vera Rubin platform and Alpamayo AI models were showcased during Jensen Huang’s keynote at CES 2026
Nvidia kickstarted the Consumer Electronics Show (CES) 2026 on Monday with several artificial intelligence (AI) announcements. The biggest among them was Vera Rubin, the Santa Clara-based tech giant's newest AI platform, which replaces Blackwell. The company also unveiled six new chipsets and a supercomputer built on the new architecture, expanded its catalogue of open-source AI models, and shared its advancements in physical AI. All of these announcements were made during Nvidia CEO Jensen Huang's keynote session.
During his keynote address, Huang introduced the Vera Rubin platform. Just like its predecessor, Blackwell, the new architecture will become the standard for the company's upcoming chipsets aimed at AI workflows, enterprise systems, and supercomputers. Interestingly, the new AI platform is named after American astronomer Vera Florence Cooper Rubin, who is known for providing evidence of dark matter by studying galaxy rotation curves.
“Rubin arrives at exactly the right moment, as AI computing demand for both training and inference is going through the roof. With our annual cadence of delivering a new generation of AI supercomputers — and extreme codesign across six new chips — Rubin takes a giant leap toward the next frontier of AI,” said Huang.
The core idea behind Vera Rubin is extreme co-design, meaning Nvidia engineered the platform's components from the ground up to share data quickly, reduce costs, and improve efficiency for training and running AI models. The company also introduced six key chipset families that will be bundled into rack-scale systems called Vera Rubin NVL servers. These include the Nvidia Vera CPU, Nvidia Rubin GPU, Nvidia NVLink 6 Switch, Nvidia ConnectX-9 SuperNIC, Nvidia BlueField-4 data processing unit (DPU), and the Nvidia Spectrum-6 Ethernet Switch.
As per the company's press release, the new architecture will accelerate agentic AI, advanced reasoning, and large-scale mixture-of-experts (MoE) model inference. Compared to Blackwell, it is said to offer up to 10x lower cost and to require up to 4x fewer GPUs for the same tasks.
Nvidia also mentioned some of the companies that will adopt Vera Rubin-based chipsets in the coming months. These include Amazon Web Services (AWS), Anthropic, Dell Technologies, Google, HPE, Lenovo, Meta, Microsoft, OpenAI, Oracle, Perplexity, Thinking Machines Lab, and xAI.
Alongside its system architecture, Nvidia detailed a suite of open models and data tools intended to accelerate AI across industries. Among the releases is the Nvidia Alpamayo family, a set of open, large-scale reasoning models and simulation frameworks designed to support safe, reasoning-based autonomous vehicle development. The family includes a reasoning-capable vision-language-action (VLA) model, simulation tools such as AlpaSim, and Physical AI Open Datasets that cover rare and complex driving scenarios.
Alpamayo is part of what Huang called a “ChatGPT moment for physical AI,” where machines begin to understand, reason, and act in the real world, including explaining their decisions. The open nature of the models, simulation frameworks, and datasets is intended to encourage transparency and faster progress among industry developers and researchers working on Level 4 advanced driver assistance systems (ADAS).
Apart from this, Nvidia's Nemotron family for agentic AI, Cosmos platform for physical AI, Isaac GR00T for robotics, and Clara for biomedical AI have also been made available to the open community.