Jensen Huang Just Spent 3 Hours Telling the World That AI Is His Whole Thing (And He Has $1 Trillion in Orders to Prove It)


Jensen Huang walked onto the stage at SAP Center in San Jose on Monday with the energy of someone who knows exactly what he’s holding. Three hours later, he walked off having dropped more announcements than most companies make in a year. Welcome to Nvidia GTC 2026 — the conference where the world’s most valuable company (currently worth around $4.5 trillion) explains, at length, why that valuation might still be an underestimate.

Here’s what you actually need to know.

$1 Trillion in Orders. No, That’s Not a Typo.

Huang opened with a number that would make most people’s heads spin: Nvidia expects $1 trillion in purchase orders for its Blackwell and Vera Rubin systems through 2027. Last year, the company was projecting a $500 billion revenue opportunity between those two chip families. That number has now doubled.

“If they could just get more capacity, they could generate more tokens, their revenues would go up,” Huang said, describing the demand pouring in from startups and enterprises alike. The company’s quarterly revenue is already running at around $78 billion, up 77% year over year. At this point, the question isn’t whether Nvidia will keep growing — it’s whether the laws of physics will eventually put a ceiling on it.

Vera Rubin and the Machine That Drinks Power Wisely

Vera Rubin — Nvidia’s next-generation GPU system, named after the astronomer whose galaxy-rotation measurements provided the first compelling evidence for dark matter — is scheduled to ship later this year. Huang spent significant time on its efficiency story: 10x more performance per watt than its predecessor, Grace Blackwell. That matters because energy consumption has become one of the AI industry’s central headaches. Data centers are already straining power grids; a 10x efficiency jump is exactly the kind of thing that makes governments, utilities, and customers breathe a little easier.
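If you want to see what that claim means in practice, here's some napkin math (every number below is an illustrative assumption, not an Nvidia spec): at a fixed power budget, inference throughput scales directly with performance per watt.

```python
# Napkin math: what a 10x performance-per-watt jump buys at a fixed power
# budget. All figures are illustrative assumptions, not Nvidia specs.
power_budget_watts = 1_000_000        # hypothetical 1 MW deployment
baseline_tokens_per_joule = 1.0       # normalized Grace Blackwell baseline
rubin_tokens_per_joule = 10.0         # the claimed 10x improvement

# Tokens per second = (tokens per joule) * (joules per second, i.e. watts)
baseline_tps = baseline_tokens_per_joule * power_budget_watts
rubin_tps = rubin_tokens_per_joule * power_budget_watts

print(f"Throughput gain at fixed power: {rubin_tps / baseline_tps:.0f}x")
print(f"Power needed for the same output: {100 / 10:.0f}% of before")
```

Same math, read the other way: a customer can serve today's token volume on a tenth of the electricity, which is why utilities care as much as hyperscalers do.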

The system is made up of 1.3 million components. Fitting it all together apparently requires the precision of a Swiss watchmaker and the ambition of a Saturn V rocket.

Groq Is Now an Nvidia Thing (And It Has a New Chip)

Remember when Nvidia paid $20 billion for AI chip startup Groq last December — its largest acquisition ever? Well, Monday was the debut of the first chip born from that deal: the Groq 3 Language Processing Unit (LPU).

The Groq 3 is built for a specific job: low-latency inference. It sits alongside Nvidia’s high-throughput GPUs in what Huang described as a complementary architecture — one optimized for speed, one for volume. The Groq LPX rack holds 256 LPUs, and Nvidia claims that pairing the rack with its Rubin GPUs boosts tokens-per-watt performance by 35 times.

“We united two processors of extreme differences, one for high throughput, one for low latency,” Huang explained. For enterprises building real-time AI applications, this pairing could be a genuine game-changer. The Groq 3 LPU is expected to ship in Q3 2026.
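In rough terms, a split architecture like this implies a router in front of both kinds of hardware. Here's a hypothetical sketch of that routing logic (the function, thresholds, and labels are invented for illustration — none of this comes from Nvidia): latency-sensitive requests go to the LPU side, bulk batch work to the GPU side.

```python
# Hypothetical routing sketch for a mixed low-latency / high-throughput
# inference deployment. Thresholds and names are illustrative only.
def route_request(max_latency_ms: float, batch_size: int) -> str:
    """Pick a hardware target for an inference request."""
    if max_latency_ms < 100 and batch_size == 1:
        return "LPU"   # interactive chat, agents: speed matters most
    return "GPU"       # offline batch inference: volume matters most

print(route_request(max_latency_ms=50, batch_size=1))     # interactive
print(route_request(max_latency_ms=5000, batch_size=64))  # batch job
```

The design choice being illustrated: neither processor has to be good at everything, as long as the router can tell the two workloads apart.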

Kyber: The Architecture That Comes After All This

Huang didn’t just talk about what’s shipping now — he teased what comes after Vera Rubin. The next leap is called Kyber, a rack architecture that integrates 144 GPUs in vertically stacked compute trays to boost density and reduce latency. Kyber will debut in the Vera Rubin Ultra system, currently slated for 2027.

The roadmap is getting crowded: Blackwell (shipping), Vera Rubin (later this year), Vera Rubin Ultra with Kyber (2027). Nvidia is essentially building out AI infrastructure like Intel used to build CPUs — except with a lot more robots.

Disney Brought a Robot. Of Course Disney Brought a Robot.

In what was arguably the most photogenic moment of the three-hour keynote, Huang introduced Embo — an Olaf-inspired robot built through a collaboration between Nvidia and Disney. Embo joined Huang on stage, waddling around with the kind of expressive, emotive movement that makes robotics suddenly feel very personal.

The collaboration is deeper than a PR stunt. Disney is using Nvidia’s AI and robotics stack to push the boundaries of what entertainment robots can do — think theme park characters that actually respond naturally to guests, improvise, and don’t just loop through a fixed set of animations. It’s a long way from a guy in a Mickey Mouse suit, and frankly, it’s impressive.

NemoClaw: Building AI Agents Just Got an Official Toolkit

About two-thirds of the way through the keynote, Huang turned to the phenomenon of OpenClaw — the open-source AI agent platform that’s been taking over the internet since January. Its founder, Austrian developer Peter Steinberger, even made a cameo at the GTC pre-show (before his move to OpenAI last month).

Nvidia’s response? NemoClaw — a new reference stack designed specifically for OpenClaw, built to help enterprises deploy autonomous AI agents using Nvidia hardware. Huang’s demo made it sound disarmingly simple: “It finds OpenClaw, it downloads it. It builds you an AI agent.”

Whether it’s actually that smooth in production remains to be seen. But the fact that Nvidia is building official infrastructure around open-source agentic platforms signals something real: the age of AI agents is not a side project anymore.

Autonomous Cars Are Finally Getting Serious

Huang also detailed a partnership with Uber: the ride-hail giant will launch a fleet powered by Nvidia’s Drive AV software across 28 cities on four continents by 2028. Alongside that, automakers including Nissan, BYD, Geely, Isuzu, and Hyundai confirmed they’re building Level 4 autonomous vehicles on Nvidia’s Drive Hyperion platform.

Level 4 means the car can handle almost all driving tasks without human intervention — it just can’t go everywhere. That’s a meaningful milestone. And with five major manufacturers committing to the same hardware stack, Nvidia is quietly becoming the operating system of the autonomous vehicle industry.

Data Centers in Space (Yes, Really)

In the category of “things that sound like science fiction but aren’t anymore,” Huang briefly touched on the possibility of building data centers in outer space. He acknowledged it’s a serious engineering challenge — mainly the energy logistics — but didn’t dismiss it as fantasy. Given that Nvidia is already deploying AI infrastructure faster than most countries can build power plants, reaching for orbit doesn’t seem entirely out of character.

The Big Picture

GTC 2026 was a reminder that Nvidia isn’t just a chip company anymore. It’s the backbone of a massive global build-out — AI models, autonomous vehicles, robotics, agentic software, and apparently outer space. The $1 trillion order pipeline is eye-catching, but the real story is the breadth: hardware, software, developer tools, industry partnerships, all moving in the same direction at once.

Whether this represents the foundation of a genuinely transformative era or a very expensive bubble is still an open question. Nvidia’s stock rose about 2% on Monday — which, for a $4.5 trillion company, means roughly $90 billion in market cap added in a day. Either way, Jensen Huang’s leather jacket isn’t going anywhere.
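For the skeptics, that market-cap figure is easy to verify yourself — a quick sketch using the article's round numbers:

```python
# Quick check of the single-day market-cap math from the article.
market_cap = 4.5e12   # ~$4.5 trillion valuation
daily_gain = 0.02     # ~2% stock rise on Monday

added = market_cap * daily_gain
print(f"~${added / 1e9:.0f} billion added in one day")
```

Two percent of $4.5 trillion is about $90 billion — roughly one entire Groq-sized acquisition, created (on paper) before lunch.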


Want more AI in your life? (The cute kind.)

Pudgy Cat covers the AI world so you don’t have to survive a three-hour keynote alone. And if you want something tangible to show your love of the future: check out the Pudgy Cat shop for kawaii merch that doesn’t require a $20B acquisition to enjoy. 🐱

Sources:
CNBC — Nvidia GTC 2026: Jensen Huang sees $1 trillion in orders
CNET — Nvidia GTC 2026 Live Blog
Nvidia Newsroom — NemoClaw announcement
