Each NVL72 rack crams in 72 of Nvidia’s latest B200 GPUs alongside 36 Grace CPUs, lashed together with NVLink 5. That much silicon doesn’t come cheap: one of these beasts will set you back around $3.1 million, against roughly $190,000 for an H100 server.
Despite that sticker shock, Morgan Stanley says the economics favour the GB200 racks every time. In a 100MW AI factory, their spreadsheets show profit margins of 77.6 per cent, with only Google’s TPU v6e pods coming close at 74.9 per cent.
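Morgan Stanley doesn’t publish the revenue side of that sum, but the margin arithmetic itself is simple enough. A minimal sketch, using purely hypothetical revenue and cost figures rather than anything from the report:

```python
# Profit margin as a fraction of revenue. The figures below are purely
# hypothetical; Morgan Stanley's actual revenue and cost inputs are not public.
def profit_margin(annual_revenue_usd: float, annual_cost_usd: float) -> float:
    return (annual_revenue_usd - annual_cost_usd) / annual_revenue_usd

# Any site earning roughly 4.5x its annual costs lands on the quoted figure.
print(f"{profit_margin(1.0e9, 224e6):.1%}")  # -> 77.6%
```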
Google’s TPU rental prices are wrapped in secrecy, though the bankers reckon they run 40 to 50 per cent cheaper than Nvidia’s racks. Even so, Nvidia’s new flagship still spits out the fattest margins.
AMD is being left in the dust. The same report puts MI300X-based factories at a miserable -28.2 per cent margin, while MI355X racks sink all the way to -64 per cent. Running an AMD rig would be like setting fire to your data centre budget just to keep warm.
Morgan Stanley’s numbers assume a 100MW site comes with $660 million in infrastructure costs depreciated over ten years, while GPU purchases range from $367 million at the low end to a whopping $2.273 billion at the top, depreciated over four. Toss in electricity and cooling at global average rates, and the total cost of ownership (TCO) rankings look grim: Nvidia’s NVL72 carries the heftiest bill at $806.58 million, just ahead of AMD’s MI355X platform at $774.11 million.
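Those figures line up with the depreciation schedules if you read the TCO as an annual number and take the top-of-the-range $2.273 billion GPU bill as the NVL72’s; both are assumptions on our part, as is the back-solved power-and-cooling line in this rough sketch:

```python
# Annualised TCO for a 100MW AI factory, following the depreciation
# schedules described above. Treating the quoted $806.58M as an annual
# figure is our assumption; the power/cooling number is back-solved,
# not something Morgan Stanley publishes.
def annual_tco_usd(infra_capex: float, gpu_capex: float, annual_power: float) -> float:
    infra_depreciation = infra_capex / 10  # site build-out over ten years
    gpu_depreciation = gpu_capex / 4       # accelerators over four years
    return infra_depreciation + gpu_depreciation + annual_power

# Hypothetical NVL72 build-out: $660M of infrastructure, $2.273B of GPUs,
# plus roughly $172M a year for electricity and cooling.
print(f"${annual_tco_usd(660e6, 2.273e9, 172.33e6) / 1e6:,.2f}M")  # -> $806.58M
```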
Nvidia might be gouging like there’s no tomorrow, but if the bankers are right, everyone else is just haemorrhaging cash trying to keep up.