Nvidia and its partners have initiated full-scale production of Blackwell GPUs for AI and HPC, as well as servers based on them, announced Jensen Huang, the company's chief executive, at CES. All of the major cloud service providers now have Blackwell systems up and running, and Nvidia's partners offer systems that can fit into all data centers worldwide.
“Blackwell is in full production,” said Huang. “It’s incredible what it looks like, so first of all […] every single cloud service provider now has systems up and running.”
While Nvidia’s Blackwell GPUs for AI and HPC applications significantly improve compute performance and compute performance per watt compared to Hopper-generation processors, they also consume significantly more power. This makes installing them in data centers harder, as they require more cooling and power delivery. Where a Hopper-based rack consumes around 40 kW, a Blackwell-based rack with 72 GPUs reportedly consumes up to 120 kW.
Dell was the first company to start shipments of Blackwell-based machines, delivering them to select cloud service providers in mid-November, but it is not the only company offering such servers today. Nvidia says that with over 200 different configurations available from more than a dozen server makers, there are now Blackwell-based systems that can fit into a wide range of data centers.
“We have systems here from about 15 computer makers; it’s being made in about 200 different SKUs, 200 different configurations,” Huang said. “There are liquid cooled, air cooled, x86, Nvidia Grace CPU versions, NVL36×2, NVL72×1, a whole bunch of different types of systems so that we can accommodate almost every single data center in the world.”
These machines are in mass production today, according to the head of Nvidia. Interestingly, earlier reports claimed that Nvidia had canceled the dual-rack, 72-way GB200-based NVL36×2 systems because they did not offer compelling value, choosing instead to focus on the single-rack NVL72 and NVL36 options. Apparently, this is not the case, and some companies either produce dual-rack NVL36×2 systems today or plan to do so in the future.
“These systems are being manufactured in some 45 factories, which tells you how pervasive artificial intelligence is and how much the industry is jumping onto artificial intelligence and this new computing model,” Huang said.