
Jack Loughran Tue 18 Nov 2025
Collected at: https://eandt.theiet.org/2025/11/18/microsoft-creates-ai-superfactory-linking-twin-datacentres-dedicated-fibre-network
Microsoft has switched on an “AI superfactory” that it claims will accelerate AI breakthroughs and train new models on a scale that has previously been impossible.
The facility is actually composed of two separate data centres – one in Atlanta, the other in Wisconsin – linked by a new dedicated fibre network that moves data between them at extremely high speed.
Both data centres share the same architecture and have been designed to use minimal water for cooling their operations.
Microsoft said the fibre network will eventually connect multiple sites, allowing AI workloads to run across hundreds of thousands of advanced GPUs simultaneously. The company has deployed 120,000 miles of dedicated fibre for the network, increasing its overall fibre mileage by more than 25% in a single year.
“This is about building a distributed network that can act as a virtual supercomputer for tackling the world’s biggest challenges in ways that you just could not do in a single facility,” said Alistair Speirs, a Microsoft general manager focused on Azure infrastructure.
“A traditional data centre is designed to run millions of separate applications for multiple customers,” he added. “The reason we call this an AI superfactory is it’s running one complex job across millions of pieces of hardware. And it’s not just a single site training an AI model, it’s a network of sites supporting that one job.”
Each of the data centres houses hundreds of thousands of Nvidia Blackwell GPUs – currently the most widely used chip for AI workloads – grouped into specially designed 72-GPU server racks within which they can relay data and share memory.
The distributed network is designed to support training models with hundreds of trillions of parameters. Rather than treating training as one single, massive task, it allows distinct stages – pre-training, fine-tuning, reinforcement learning, evaluation and synthetic data generation – to be spread across the sites.
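Microsoft has not published how workloads are actually scheduled across its sites. Purely as an illustrative sketch (all names, capacities and the greedy placement policy are hypothetical, not Microsoft's method), splitting training stages across linked data centres might look like this:

```python
# Hypothetical sketch: greedily place each training stage on whichever
# site currently has the most free GPU capacity. Site names and slot
# counts are invented for illustration.

STAGES = [
    "pre-training",
    "fine-tuning",
    "reinforcement learning",
    "evaluation",
    "synthetic data generation",
]

def assign_stages(stages, site_capacity):
    """Return a stage -> site plan, always picking the least-loaded site."""
    free = dict(site_capacity)          # site name -> free scheduling slots
    plan = {}
    for stage in stages:
        site = max(free, key=free.get)  # site with the most free slots
        plan[stage] = site
        free[site] -= 1                 # crude model: one slot per stage
    return plan

plan = assign_stages(STAGES, {"Atlanta": 3, "Wisconsin": 2})
for stage, site in plan.items():
    print(f"{stage} -> {site}")
```

Real multi-site training would of course account for inter-site bandwidth, latency and data locality rather than raw slot counts; this only illustrates the idea of one job decomposed across a network of sites.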
Both data centres have been designed to minimise their footprint by being spread over two storeys – an unusual set-up for this kind of facility. This increased density poses new challenges around how to dissipate heat – especially as AI chips typically run hotter than traditional silicon.
Microsoft has engineered a closed-loop cooling system for both sites that carries hot liquid out of the building to be chilled and returned to the GPUs. This required a new configuration of pipes, pumps and chillers to meet the cooling demands of such large sites. The Atlanta hub's annual water use is equivalent to that of 20 homes, and the water is replaced only when its chemistry indicates it is needed.
“To make improvements in the capabilities of the AI, you need to have larger and larger infrastructure to train it,” said Mark Russinovich, CTO at Microsoft Azure. “The amount of infrastructure required now to train these models is not just one data center, not two, but multiples of that.”