How Data Centers Are Being Rebuilt Around Cooling

Cooling as the AI Factory’s Hidden Backbone

Artificial intelligence has changed the way we think about data centers. In the past, compute was the bottleneck. Now, it is heat. GPUs used for AI training and inference keep doubling in power density, with some chips now consuming more than 700 watts each.

Racks that once drew a manageable amount of power are now grouped into AI clusters approaching one megawatt. Cooling already accounts for around 40% of a typical data center’s energy budget, making it the second-largest consumer of power after the IT load itself. This creates a stark reality: without advanced cooling, AI simply cannot scale.

A simple analogy puts it in perspective: cooling an AI rack with traditional fans is like trying to cool an oven with a hand fan. It may work briefly, but it is ultimately futile. Cooling has therefore moved from being a back-office utility to the very foundation of productivity in what industry leaders are now calling “AI factories.”

Defining the AI Factory

The phrase “AI factory” is increasingly used to describe modern data centers optimized specifically for high-performance AI and HPC (high-performance computing) workloads. Unlike traditional web-hosting facilities, AI factories must be built to handle extreme power densities and prolonged thermal loads. Cooling is not just about comfort or efficiency; it is a matter of uptime and risk mitigation. If temperatures rise above safe thresholds, performance drops, hardware fails, and power costs spiral.

Leading players such as NVIDIA, with its SuperPOD clusters, and Google, with its TPU arrays, are building integrated infrastructures where compute, networking, and cooling are designed as a single ecosystem. In Africa, where hyperscale data center investments are accelerating, developers are also beginning to confront these thermal realities. A crucial decision emerges: should operators continue with air cooling, knowing it will soon hit limits, or switch to liquid-based systems that require higher upfront investment but deliver far greater efficiency?

The debate often comes down to cost. Comparisons of cold plate versus air cooling for GPU deployments show that while air cooling is cheaper to install, cold plates and direct liquid systems reduce energy consumption and enable higher-density deployments. Over time, the total cost of ownership favors liquid approaches, even if the CapEx barrier is higher at the start, as the rough sketch below illustrates.
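
To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (rack load, tariff, cooling overheads, and per-rack CapEx) are illustrative assumptions, not vendor figures; the point is only that a higher upfront cost can be overtaken by lower cooling energy within a few years.

```python
# Back-of-the-envelope TCO comparison: air cooling vs. cold plate liquid cooling.
# Every figure below is an illustrative assumption, not vendor pricing.

RACK_IT_LOAD_KW = 50           # assumed IT load per rack
ELECTRICITY_PRICE = 0.12       # assumed $ per kWh
HOURS_PER_YEAR = 8760
YEARS = 5

# Assumed cooling energy as a fraction of IT energy for each approach.
COOLING_OVERHEAD = {"air": 0.40, "cold_plate": 0.15}

# Assumed upfront cooling CapEx per rack.
CAPEX_PER_RACK = {"air": 10_000, "cold_plate": 30_000}

def total_cost(tech: str, years: int = YEARS) -> float:
    """CapEx plus cumulative cooling energy cost over the period."""
    cooling_kwh_per_year = RACK_IT_LOAD_KW * HOURS_PER_YEAR * COOLING_OVERHEAD[tech]
    opex = cooling_kwh_per_year * ELECTRICITY_PRICE * years
    return CAPEX_PER_RACK[tech] + opex

for tech in ("air", "cold_plate"):
    print(f"{tech:>10}: ${total_cost(tech):,.0f} over {YEARS} years")
```

With these assumptions, the cold plate option comes to roughly $69,000 over five years against about $115,000 for air, despite the tripled CapEx.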

Cooling Technologies in Play

For decades, air cooling has been the industry standard. It is inexpensive, easy to deploy, and well understood. However, at rack densities above 20–30 kilowatts, air simply cannot carry away heat fast enough. Operators are forced to consider alternatives.

Cold plates and direct liquid cooling are rapidly becoming mainstream. These bring coolant directly to the chip, transferring heat far more effectively than fans and airflows. The downside is complexity: liquid lines, leak detection, chilled water loops, and distribution infrastructure raise capital costs. But the long-term benefits include reduced energy bills and the ability to scale density beyond what air cooling could ever handle.

A middle-ground approach uses rear-door heat exchangers such as Schneider Electric’s ChilledDoor® and coolant distribution units (CDUs). These allow operators to transition gradually from air to liquid without rebuilding entire facilities, making them attractive for retrofits.

At hyperscale, water cooling becomes the inevitable solution. Water can carry several thousand times more heat per unit volume than air, making it indispensable for racks consuming hundreds of kilowatts. This brings sustainability questions to the forefront: how data centers source, treat, recycle, and discharge water is now a matter of scrutiny.
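
The gap is easy to quantify with the basic heat-transfer relation Q = mass flow × specific heat × temperature rise. The sketch below uses standard room-temperature properties and an assumed 10 °C coolant temperature rise; it is an illustration of the physics, not a design calculation.

```python
# How much heat can 1 m³/s of coolant carry away for the same 10 °C temperature rise?
# Uses approximate room-temperature properties; purely an illustration.

DELTA_T = 10.0            # assumed allowable coolant temperature rise, °C

AIR_DENSITY = 1.2         # kg/m³
AIR_CP = 1_005            # J/(kg·K)
WATER_DENSITY = 997       # kg/m³
WATER_CP = 4_182          # J/(kg·K)

def heat_removed_kw(density: float, cp: float, flow_m3_s: float = 1.0) -> float:
    """Q = mass flow x specific heat x temperature rise, expressed in kW."""
    return density * flow_m3_s * cp * DELTA_T / 1_000

air_kw = heat_removed_kw(AIR_DENSITY, AIR_CP)
water_kw = heat_removed_kw(WATER_DENSITY, WATER_CP)
print(f"Air:   {air_kw:,.0f} kW per m³/s")
print(f"Water: {water_kw:,.0f} kW per m³/s ({water_kw / air_kw:,.0f}x more)")
```

At equal volumetric flow, water carries on the order of 3,500 times more heat than air, which is why a modest liquid loop can replace enormous volumes of moving air.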

The Water Question

Water has become the most sensitive resource in the cooling debate. Data centers can consume millions of gallons per year, raising concerns about local water stress, especially in arid regions. This is where Water Usage Effectiveness (WUE) comes in. Much like Power Usage Effectiveness (PUE) transformed energy efficiency conversations, WUE measures liters of water consumed per kilowatt-hour of IT energy delivered. It has quickly become a key performance indicator for responsible operators.
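
The metric itself is simple division: site water consumed divided by IT energy delivered. The figures in the sketch below are illustrative, not drawn from any specific facility.

```python
# WUE = site water consumed (liters) / IT energy delivered (kWh).
# Both inputs below are illustrative assumptions, not operator data.

ANNUAL_WATER_LITERS = 500_000_000   # assume 500 million liters per year
AVERAGE_IT_LOAD_MW = 30             # assume a 30 MW average IT load
HOURS_PER_YEAR = 8760

it_energy_kwh = AVERAGE_IT_LOAD_MW * 1_000 * HOURS_PER_YEAR
wue = ANNUAL_WATER_LITERS / it_energy_kwh
print(f"WUE = {wue:.2f} L/kWh")     # about 1.90 L/kWh with these assumptions
```

Reported WUE values vary widely with climate and cooling design, so what counts as a “good” number depends heavily on context.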

Policymakers, environmental groups, and communities are asking: Do data centers recycle water? What happens to the water used by data centers? How much water does a data center use per day? These questions reflect broader worries about sustainability and pollution. Untreated discharge, if mismanaged, can introduce harmful chemicals into ecosystems, making data center water pollution another pressing issue.

Google’s data centers have been at the center of this debate. The company has pledged to match every liter consumed in its facilities with replenishment projects in local watersheds. It has also invested heavily in closed-loop cooling systems and advanced treatment technologies to reduce its draw on municipal sources.

In Africa, the issue is particularly acute. Many of the continent’s urban centers already face constrained water supplies. If hyperscale facilities are to succeed, they must integrate water recycling and stewardship into their blueprints. A “good WUE” for a data center will soon become not just a technical metric but a requirement for regulatory approval.

Global Case Studies & Signals

Across the world, examples show how cooling strategy is evolving. From Buffalo to Italy and India, Schneider Electric and its partners are rolling out integrated liquid cooling systems, demonstrating how end-to-end solutions can be scaled globally. Companies like Motivair are enabling operators to retrofit with liquid solutions incrementally, using CDUs and rear-door exchangers.

Hyperscalers like Google and Microsoft are under pressure to prove their water stewardship, publishing annual sustainability reports that detail recycling percentages and treatment methods. These moves are not just PR; they respond to real community pushback about resource use.

The impact is significant. Liquid cooling, when fully deployed, can cut cooling energy overhead by as much as 70%, dramatically reducing operating expenses while improving thermal stability. This makes the financial argument as compelling as the environmental one.
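
A rough sketch shows how a 70% reduction in cooling energy overhead flows through to PUE and to the annual power bill. The IT load, tariff, and starting overheads are assumptions chosen only to illustrate the scale of the effect.

```python
# Effect of a 70% cut in cooling energy overhead on PUE and annual energy cost.
# IT load, tariff, and starting overheads are illustrative assumptions.

IT_LOAD_MW = 20
ELECTRICITY_PRICE = 0.10            # assumed $ per kWh
HOURS_PER_YEAR = 8760

cooling_before = 0.40               # cooling energy as a fraction of IT energy
other_overhead = 0.10               # distribution losses, lighting, etc.
cooling_after = cooling_before * (1 - 0.70)

def annual_cost(cooling_overhead: float) -> float:
    """Total facility energy cost for one year at the assumed tariff."""
    total_mw = IT_LOAD_MW * (1 + cooling_overhead + other_overhead)
    return total_mw * 1_000 * HOURS_PER_YEAR * ELECTRICITY_PRICE

pue_before = 1 + cooling_before + other_overhead
pue_after = 1 + cooling_after + other_overhead
savings = annual_cost(cooling_before) - annual_cost(cooling_after)
print(f"PUE: {pue_before:.2f} -> {pue_after:.2f}")
print(f"Energy cost saved per year: ${savings:,.0f}")
```

Under these assumptions, PUE drops from 1.50 to 1.22 and the facility saves roughly $4.9 million a year in energy.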

Africa’s Data Center Growth Story

Africa represents one of the most exciting but challenging frontiers for hyperscale data centers. Demand is surging, driven by cloud adoption, AI, fintech, and a rapidly digitizing economy, but infrastructure gaps remain wide. Unreliable grids, high electricity tariffs, and limited water access make cooling strategies especially critical.

Already, major players are racing to dominate. Teraco, Africa Data Centres, and Liquid Intelligent Technologies are widely recognized as the “big three” leaders in African colocation and hyperscale capacity. Each is expanding its footprint in key hubs such as Johannesburg, Nairobi, Lagos, and Cairo.

Investors and developers are closely watching which facilities emerge as benchmarks. The question “What is the largest data center in Africa?” has become a marker of market leadership, with Johannesburg currently hosting some of the continent’s biggest sites. Yet scale alone will not be enough. In African metros, cooling efficiency can make or break ROI. Water and power constraints mean that operators who fail to adopt advanced, recycling-heavy cooling systems may find their growth capped.

Cooling as ROI, Not Overhead

The lesson is clear: in the AI era, cooling is no longer a back-office utility. It is the lever that defines whether data center investments succeed or fail. For engineers, this means carefully weighing the CapEx of liquid and cold plate systems against the OpEx savings and density gains they deliver. For policymakers, it means mandating water stewardship, enforcing WUE standards, and encouraging recycling technologies. For investors, cooling design has become a proxy for efficiency, scalability, and sustainability.

Cooling is not an afterthought; it is the foundation of the AI factory. The future of AI data centers will not be defined by chips alone but by how we cool them and how responsibly we use water and energy in the process.
