
Nvidia and AMD crank up AI chip war

29 September 2025


Both sides rejigging their offerings

Nvidia and AMD are frantically revising their next-gen AI designs in an effort to out-muscle each other, with power budgets and memory bandwidth figures spiralling upwards.

According to SemiAnalysis, the scrap between AMD’s Instinct MI450 and Nvidia’s Vera Rubin will be the fiercest yet.

AMD’s data centre boss Forrest Norrod hyped up the MI450 as the firm’s “Milan moment”, invoking the launch of EPYC 7003 CPUs, and bragged that the new AI accelerators would be more attractive than Nvidia’s kit.

Both designs have been rejigged repeatedly. SemiAnalysis claims the MI450X's TGP rose by 200W, prompting Nvidia to push Rubin's TGP up by 500W to a ridiculous 2300W. Memory bandwidth has also been ramped: Rubin now hits 20TB/s per GPU, up from 13TB/s, while AMD is touting roughly 19.6TB/s.

Rumoured specs put the MI450 on HBM4 with up to 432GB per GPU and ~40 PFLOPS of dense compute, while Vera Rubin is pegged at 288GB of HBM4 per GPU and ~50 PFLOPS. Both are slated for 2026, using TSMC’s N3P process and chiplet designs.
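Taking those rumoured figures at face value, a quick back-of-envelope comparison of memory capacity and bandwidth per unit of compute looks something like the sketch below. The numbers are the unconfirmed specs quoted above, not anything either vendor has published, so treat the ratios as rough indicators only.

```python
# Back-of-envelope comparison using the rumoured (unconfirmed) per-GPU specs quoted above.
# PFLOPS figures are dense compute; none of this is vendor-published data.

rumoured = {
    "AMD Instinct MI450": {"hbm4_gb": 432, "bandwidth_tbs": 19.6, "dense_pflops": 40},
    "Nvidia Vera Rubin":  {"hbm4_gb": 288, "bandwidth_tbs": 20.0, "dense_pflops": 50},
}

for name, s in rumoured.items():
    gb_per_pflop = s["hbm4_gb"] / s["dense_pflops"]          # memory capacity per PFLOP of compute
    tbs_per_pflop = s["bandwidth_tbs"] / s["dense_pflops"]    # memory bandwidth per PFLOP of compute
    print(f"{name}: {gb_per_pflop:.1f} GB per PFLOP, {tbs_per_pflop:.2f} TB/s per PFLOP")
```

On those rumoured numbers AMD would lead on memory capacity per unit of compute, while Nvidia edges ahead on raw compute throughput, which is exactly why the fight is shaping up as watt for watt and terabyte for terabyte.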

This is a big shift for AMD, which has historically trailed Nvidia’s AI products by at least a cycle. Norrod insists the MI450 will be competitive enough that customers won’t hesitate to ditch CUDA-land. Nvidia, for its part, is already saying that Rubin is being lined up by OpenAI, giving it bragging rights before AMD even ships silicon.

The AI chip fight now looks set to be fought watt for watt and terabyte for terabyte. Which side actually gets adopted at scale will come down to who can ship on time and keep the power bills from turning data centres into ovens.
