Elon Musk’s Texas Tesla Gigafactory is expanding to include an AI supercomputer cluster, and Supermicro’s CEO is a huge fan of the cooling solution. Charles Liang, founder and CEO of Supermicro, took to X (formerly Twitter) to celebrate Musk’s use of Supermicro’s liquid cooling technology for both Tesla’s new cluster and xAI’s comparable supercomputer, which is also on the way.
Pictured together among server racks, Liang and Musk want to “lead the liquid cooling technology to large AI data centers.” Liang estimates that Musk leading the move to liquid-cooled AI data centers “may lead to preserving 20 billion trees for our planet,” clearly referring to the efficiency gains that could be had if liquid cooling were adopted at all data centers worldwide.
AI data centers are notorious for their massive power draws, and Supermicro hopes to reduce this strain by pushing liquid cooling. The company claims direct liquid cooling can offer up to an 89% reduction in the electricity costs of cooling infrastructure compared to air cooling.
Thanks @elonmusk for leading the liquid cooling technology to large AI data centers! This may lead to preserving 20 billion trees for our planet❤️ pic.twitter.com/oJ48Dw3YVF — July 2, 2024
In a previous tweet, Liang clarified that Supermicro’s goal is “to boost DLC [direct liquid cooling] adoption from <1% to 30%+ in a year.” Musk is deploying Supermicro’s cooling at a major scale for his Tesla Gigafactory supercomputer cluster. The new expansion to the existing Gigafactory will house 50,000 Nvidia GPUs plus additional Tesla AI hardware to train Tesla’s Full Self-Driving feature.
The expansion is turning heads thanks to the supermassive fans under construction to chill the liquid cooling, which Musk also recently highlighted in an X post of his own (embedded below). Musk estimates the Gigafactory supercomputer will draw 130 megawatts on deployment, with growth to as much as 500 MW expected once Tesla’s proprietary AI hardware is installed. Musk claims that the facility’s construction is nearly complete, and it is planned to be ready for deployment in the next few months.
Sizing for ~130MW of power & cooling this year, but will increase to >500MW over next 18 months or so. Aiming for about half Tesla AI hardware, half Nvidia/other. Play to win or don’t play at all. — June 20, 2024
Tesla’s Gigafactory supercomputer cluster is not to be confused with Elon’s other multi-billion-dollar supercomputer cluster, the X/xAI supercomputer, which is also currently under construction. That’s right: Elon Musk is building not one but two of the world’s largest GPU-powered AI supercomputer clusters. The xAI supercomputer is a bit more well-known than Tesla’s, with Musk already having ordered 100,000 of Nvidia’s H100 GPUs. xAI will use its supercomputer to train Grok, X’s quirky AI chatbot alternative that is available to X Premium subscribers.
Also expected to be ready “within a few months,” the xAI supercomputer will likewise be liquid-cooled by Supermicro and already has a planned upgrade path to 300,000 Nvidia B200 GPUs next summer. According to recent reports, getting the xAI cluster online is a slightly higher priority for Musk than Tesla’s, as Musk reportedly ordered Nvidia in June to ship thousands of GPUs originally earmarked for Tesla to X instead. The move was reported to have delayed construction of Tesla’s supercomputer cluster by months, but as with much Musk-centric news, exaggeration is quite likely.