Microsoft has made a major strategic leap in the AI hardware race, securing access to more than 100,000 of Nvidia's cutting-edge GB300 chips without having to build all of the infrastructure itself. It is doing so by investing roughly $33 billion in "neocloud" providers such as Nebius, CoreWeave, Lambda, and Nscale.
The centerpiece is a $19.4 billion partnership with Nebius, which supplies Microsoft with racks of Nvidia GB300 NVL72 servers; each rack packs 72 B300 GPUs. At an estimated $3 million per rack, the roughly 1,400 racks needed to reach 100,000 GPUs imply that Nebius alone may be holding upwards of $4 billion in hardware for Microsoft. Instead of building and powering every facility itself, Microsoft rents compute capacity from these specialized intermediaries, rapidly scaling up its AI backbone while reserving its in-house data centers for paying customers.
Other highlights:
- Microsoft is simultaneously building a new 315-acre data center campus in Wisconsin, which will house hundreds of thousands of Nvidia GPUs and contain enough fiber-optic cable to circle the globe more than four times. The site comes with a self-sustaining power supply, signaling an ambition to reduce dependence on outside grids.
- The surge in AI data centers is straining local energy grids: wholesale power prices near major AI installations have jumped 267% in five years, raising environmental and regulatory concerns.
- Nvidia’s own $100 billion investment in OpenAI is amplifying market concentration worries, putting companies like Microsoft squarely in the crosshairs of antitrust debate.
These moves show Microsoft's readiness to spread its AI workloads across a diverse network of nimble compute providers while preparing its own super-sized campuses for the future. In the battle for AI scale, such intricate partnerships blur the line between collaboration and market dominance.