Nvidia CEO Jensen Huang said on Monday that demand for computing resources is “skyrocketing” as the artificial intelligence (AI) race intensifies. At CES 2026 in Las Vegas, Huang detailed Nvidia’s next-generation hardware, an expanded partnership with Mercedes-Benz on self-driving cars, and his vision for the company as a full-stack AI powerhouse redefining modern computing.
AI demand hits record highs
“The amount of computation necessary for AI is skyrocketing. The demand for Nvidia GPUs is skyrocketing. It's skyrocketing because models are increasing by a factor of 10, an order of magnitude every single year,” Huang said during his keynote presentation.
He added,
“Everybody's trying to get to the next level and somebody is getting to the next level. And so therefore, all of it is a computing problem. The faster you compute, the sooner you can get to the next level of the next frontier.”
Huang described the AI landscape as “an intense race,” driven by advancements in generative and agentic AI models that have pushed the limits of computing infrastructure. He said Nvidia’s architecture and systems were built to address this global demand curve, with new chips already entering mass production.
New Rubin architecture enters full production
Nvidia confirmed that its new Rubin and Vera chips are now in full production, a milestone in its accelerated computing roadmap. Huang said the combination of the Rubin GPU and Vera CPU represents a major leap forward in performance efficiency.
Introducing the Rubin computing platform, which succeeds the Blackwell generation, Huang said,
“Vera Rubin is designed to solve the problem of soaring demand for AI computation. It has already entered the full production stage.”
The Rubin platform integrates six interconnected processors built for large-scale AI workloads. Nvidia reports a 3.5× jump in model training performance and a 5× increase in inference capabilities compared to previous generations. The company also claims the new chips deliver eight times more inference compute per watt, aligning with global efforts to create greener data infrastructure.
According to Nvidia, cloud providers and AI labs such as Amazon Web Services, OpenAI, and Anthropic plan to incorporate Rubin systems into their AI supercomputing clusters later this year.
Nvidia expands into autonomous vehicles
Huang also confirmed that the first self-driving car jointly developed with Mercedes-Benz will operate on U.S. roads within the first quarter of the year. He said the rollout will extend to Europe in the second quarter and Asia in the second half of 2026.
“The first self-driving car made by Nvidia and Mercedes will start operating in the U.S. in the first quarter and expand to Europe in the second quarter and Asia in the second half of the year,” Huang announced.
The vehicles use Nvidia’s Orin-based dual computing system and the AI model Alpamayo, which processes sensor input and controls steering and braking simultaneously. Huang noted that Alpamayo includes a safety layer that reverts control to advanced driver-assistance systems (ADAS) when AI confidence levels drop. The vehicle has also achieved top-tier safety ratings under Europe’s NCAP program.
Huang explained that this model demonstrates Nvidia’s “AI full-stack strategy” by spanning chips, software, simulation, and model operations. He said autonomous cars represent the most complex AI systems yet, as they rely on learning, reasoning, and real-time simulation within the same platform.
The trillion-dollar race for AI infrastructure
Globally, Nvidia expects the AI infrastructure market to reach as much as $4 trillion over the next five years as demand for GPUs, data centers, and advanced compute hardware escalates. Investment in AI infrastructure by cloud providers and technology firms surged over the past year.
The growing overlap between AI training workloads and blockchain infrastructure is also notable. Bitcoin miners, facing rising difficulty and energy costs, may repurpose their computing resources for AI in pursuit of more profitable opportunities.
Nvidia positions itself at the core of the AI era
“Software is no longer programming but learning, and it runs on GPUs, not CPUs,” Huang said.
He argued that Nvidia has evolved beyond chip manufacturing to become a company that “reinvents the entire AI stack from chips to infrastructure, models, and applications.”
He concluded by emphasizing that the company aims to be a “system builder in the AI era,” responsible for both design and deployment across the computing landscape.

Disclaimer: This publication and its writers do not hold positions in Nvidia or other companies mentioned in this article. All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource, and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice, HODL FM strongly recommends contacting a qualified industry professional.
