In May, The Information reported that Elon Musk and his company xAI planned to gather and connect 100,000 specialized graphics cards into a supercomputer called the "Gigafactory of Computing", set to come online next fall.
According to Tom's Hardware, Musk is pushing hard to keep the system on schedule. He has chosen Supermicro to provide cooling solutions for both Tesla's and xAI's supercomputer data centers.
"Thanks to Musk for pioneering liquid cooling technology for massive AI data centers. This move could help preserve 20 billion trees for the planet," Charles Liang, founder and CEO of Supermicro, wrote on X on July 2.
AI data centers typically consume large amounts of electricity. Supermicro hopes its liquid cooling solution will cut infrastructure power costs by up to 89% compared with traditional air cooling.
In an earlier post, Musk estimated the Gigafactory supercomputer would draw 130 MW (130 million watts) once deployed. Combined with Tesla's AI hardware after installation, the facility's power consumption could reach 500 MW. The billionaire said construction is nearly complete, with equipment installation expected to begin in the coming months.
Musk is building two supercomputer clusters, one for Tesla and one for his startup xAI, each estimated to cost billions of dollars. According to Reuters, if all 100,000 H100 GPUs are successfully connected, the Gigafactory of Computing would be the world's most powerful supercomputer, four times larger than the current largest GPU cluster.
Elon Musk founded xAI last July to compete directly with OpenAI and Microsoft. The startup later launched the AI model Grok to challenge ChatGPT. Earlier this year, Musk said training the next-generation Grok 2 model would require about 20,000 Nvidia H100 GPUs, while Grok 3 might need 100,000 H100 chips.