Swing beat

Nvidia jumps ahead of itself and reveals next-gen “Rubin” AI chips in keynote tease

"I'm not sure yet whether I'm going to regret this," says CEO Jensen Huang at Computex 2024.

Benj Edwards
Nvidia's CEO Jensen Huang delivers his keynote speech ahead of Computex 2024 in Taipei on June 2, 2024. Credit: SAM YEH/AFP via Getty Images

On Sunday, Nvidia CEO Jensen Huang reached beyond Blackwell and revealed the company's next-generation AI-accelerating GPU platform during his keynote at Computex 2024 in Taiwan. Huang also detailed plans for an annual tick-tock-style upgrade cycle of its AI acceleration platforms, mentioning an upcoming Blackwell Ultra chip slated for 2025 and a subsequent platform called "Rubin" set for 2026.

Nvidia's data center GPUs currently power a large majority of cloud-based AI models, such as ChatGPT, in both development (training) and deployment (inference) phases, and investors are keeping a close watch on the company, expecting it to keep that run going.

During the keynote, Huang seemed somewhat hesitant to make the Rubin announcement, perhaps wary of invoking the so-called Osborne effect, whereby a company's premature announcement of the next iteration of a tech product eats into the current iteration's sales. "This is the very first time that this next click has been made," Huang said, holding up his presentation remote just before the Rubin announcement. "And I'm not sure yet whether I'm going to regret this or not."

Nvidia keynote at Computex 2024.

The Rubin AI platform, expected in 2026, will use HBM4 (a new form of high-bandwidth memory) and NVLink 6 Switch, operating at 3,600 GB/s. Following that launch, Nvidia will release a tick-tock iteration called "Rubin Ultra." While Huang did not provide extensive specifications for the upcoming products, he promised cost and energy savings related to the new chipsets.

During the keynote, Huang also introduced a new ARM-based CPU called "Vera," which will be featured on a new accelerator board called "Vera Rubin," alongside one of the Rubin GPUs.

Much like Nvidia's Grace Hopper architecture, which combines a "Grace" CPU and a "Hopper" GPU to pay tribute to the pioneering computer scientist of the same name, Vera Rubin refers to Vera Florence Cooper Rubin (1928–2016), an American astronomer who made discoveries in the field of deep space astronomy. She is best known for her pioneering work on galaxy rotation rates, which provided strong evidence for the existence of dark matter.

A calculated risk

Nvidia CEO Jensen Huang reveals the "Rubin" AI platform for the first time during his keynote at Computex 2024 on June 2, 2024. Credit: Nvidia

Nvidia's reveal of Rubin is not a surprise in the sense that most big tech companies are continuously working on follow-up products well in advance of release, but it's notable because it comes just three months after the company revealed Blackwell, which is barely out of the gate and not yet widely shipping.

At the moment, the company seems to be comfortable leapfrogging itself with new announcements and catching up later; Nvidia just announced that its GH200 Grace Hopper "Superchip," unveiled one year ago at Computex 2023, is now in full production.

With Nvidia stock rising and the company possessing an estimated 70–95 percent of the data center GPU market share, the Rubin reveal is a calculated risk that seems to come from a place of confidence. That confidence could turn out to be misplaced if a so-called "AI bubble" pops or if Nvidia misjudges the capabilities of its competitors. The announcement may also stem from pressure to continue Nvidia's astronomical growth in market cap with nonstop promises of improving technology.

Accordingly, Huang has been eager to showcase the company's plans to continue pushing silicon fabrication tech to its limits and widely broadcast that Nvidia plans to keep releasing new AI chips at a steady cadence.

"Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and we push everything to technology limits," Huang said during Sunday's Computex keynote.

Despite Nvidia's recent market performance, the company's run may not continue indefinitely. With ample money pouring into the data center AI space, Nvidia isn't alone in developing accelerator chips. Competitors like AMD (with the Instinct series) and Intel (with Gaudi 3) also want to win a slice of the data center GPU market away from Nvidia's current command of the AI-accelerator space. And OpenAI's Sam Altman is trying to encourage diversified production of GPU hardware that will power the company's next generation of AI models in the years ahead.

Benj Edwards Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a widely cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
Staff Picks
evan_s
It feels like announcing Rubin this far ahead has to be targeted at keeping the stock prices high and the AI hype going. I'm not saying AI won't be a thing going forward, but Nvidia's valuation based on the hype surrounding AI is pretty inflated.
tipoo
The crazy thing is that even after Blackwell was unveiled, where usually you'd Osborne yourself, H100 demand kept increasing.

Anyone who wants to stay in the game and be a significant AI player just doesn't seem to have another choice but to buy what Nvidia has currently, given the stickiness of CUDA, with its massive prebuilt libraries and armies of ML developers. Even the AMD option being 10K vs 30-35K doesn't seem to change Nvidia being the one with quasi-infinite demand despite there being a 3.5x cheaper option; it's largely the software ecosystem, though the hardware is also very good. So I can see his confidence in mentioning this.

That said, I've seen the bullwhip effect many times now, long term AI will be huge, but there's always a chance we've gotten frothy short term. I'd never bet against Jen-Hsun long term, probably one of the greatest executions in CEOs right now, and he seems to be able to do it without shitposting on X.