
Microsoft spent hundreds of millions of dollars on a ChatGPT supercomputer

Microsoft says it connected tens of thousands of Nvidia A100 chips and reworked server racks to build the hardware behind ChatGPT and its own Bing AI bot.

Microsoft logo. Illustration: The Verge
Emma Roth
is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

Microsoft spent hundreds of millions of dollars building a massive supercomputer to help power OpenAI’s ChatGPT chatbot, according to a report from Bloomberg. In a pair of blog posts published on Monday, Microsoft explains how it created Azure’s powerful artificial intelligence infrastructure used by OpenAI and how its systems are getting even more robust.

To build the supercomputer that powers OpenAI’s projects, Microsoft says it linked together thousands of Nvidia graphics processing units (GPUs) on its Azure cloud computing platform. In turn, this allowed OpenAI to train increasingly powerful models and “unlocked the AI capabilities” of tools like ChatGPT and Bing.

Scott Guthrie, Microsoft’s vice president of AI and cloud, said the company spent several hundred million dollars on the project, according to a statement given to Bloomberg. And while that may seem like a drop in the bucket for Microsoft, which recently extended its multiyear, multibillion-dollar investment in OpenAI, it certainly demonstrates that it’s willing to throw even more money at the AI space.

Microsoft’s already working to make Azure’s AI capabilities even more powerful

Microsoft’s already working to make Azure’s AI capabilities even more powerful with the launch of its new virtual machines that use Nvidia’s H100 and A100 Tensor Core GPUs, as well as Quantum-2 InfiniBand networking, a project both companies teased last year. According to Microsoft, this should allow OpenAI and other companies that rely on Azure to train larger and more complex AI models.

“We saw that we would need to build special purpose clusters focusing on enabling large training workloads and OpenAI was one of the early proof points for that,” Eric Boyd, Microsoft’s corporate vice president of Azure AI, says in a statement. “We worked closely with them to learn what are the key things they were looking for as they built out their training environments and what were the key things they need.”
