Air Cooling Hits the Wall

For decades, the data center industry kept its servers cool with a simple, reliable approach: push cold air through the equipment and exhaust the hot air. Fans, raised floors, hot-aisle/cold-aisle containment, and industrial-scale air conditioning units formed the backbone of thermal management for the world’s computing infrastructure. Air cooling was well-understood, inexpensive to deploy, and adequate for the power densities that conventional servers generated.

That era is ending. The explosive growth of artificial intelligence workloads has pushed chip power densities beyond what air can effectively manage. A single Nvidia H100 GPU dissipates 700 watts of heat. The Blackwell-generation B200 draws up to 1,000 watts in air-cooled configurations and 1,200 watts when liquid-cooled. The latest Blackwell Ultra B300, announced at GTC 2025, pushes the envelope to 1,400 watts per GPU. When you pack eight of these chips into a single server, along with the CPUs, memory, networking, and power delivery components, a single rack can generate 100 to 140 kilowatts of heat — five to seven times what a traditional enterprise computing rack produces. Nvidia’s GB300 NVL72 rack system draws 132-140 kW on its own and requires liquid cooling as a baseline, not an option.

Air, as a heat transfer medium, simply cannot keep up. Its thermal conductivity is roughly 25 times lower than water, and its heat capacity per unit volume is approximately 3,500 times lower. At the power densities demanded by AI training and inference hardware, moving enough air through equipment to prevent thermal throttling requires enormous fan speeds, creating noise, vibration, and energy consumption that make air-cooled AI data centers impractical at scale.
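
The gap is easier to see with a back-of-the-envelope calculation. The sketch below, using illustrative values only, compares the volumetric flow needed to carry 140 kW out of a rack with air versus water, assuming a 10-degree coolant temperature rise in both cases.

```python
# Back-of-the-envelope comparison: volumetric flow needed to remove a
# 140 kW rack's heat with air versus water, assuming a 10 K coolant
# temperature rise in both cases (illustrative values, not a design rule).

HEAT_LOAD_W = 140_000          # rack heat load (W)
DELTA_T_K = 10.0               # allowed coolant temperature rise (K)

# Approximate volumetric heat capacities (J per m^3 per K):
AIR_J_PER_M3_K = 1_200         # ~1.2 kJ/(m^3*K) at room temperature
WATER_J_PER_M3_K = 4_180_000   # ~4.18 MJ/(m^3*K)

def required_flow_m3_per_s(heat_w: float, vol_heat_cap: float, dt_k: float) -> float:
    """Solve Q = V_dot * c_vol * dT for the volumetric flow V_dot."""
    return heat_w / (vol_heat_cap * dt_k)

air_flow = required_flow_m3_per_s(HEAT_LOAD_W, AIR_J_PER_M3_K, DELTA_T_K)
water_flow = required_flow_m3_per_s(HEAT_LOAD_W, WATER_J_PER_M3_K, DELTA_T_K)

print(f"Air:   {air_flow:.1f} m^3/s (~{air_flow * 2119:.0f} CFM) through one rack")
print(f"Water: {water_flow * 1000:.2f} L/s")
# Roughly 11.7 m^3/s of air (~24,700 CFM) versus about 3.4 L/s of water,
# a ratio of ~3,500x, which is the heat-capacity gap cited above.
```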

The data center liquid cooling market reflects this reality. Grand View Research values it at approximately $6.6 billion in 2025, projecting it to reach $29.5 billion by 2033 at a compound annual growth rate of around 20%. Other analysts project even higher figures depending on methodology, but every major forecast agrees on the trajectory: double-digit annual growth driven by AI infrastructure buildout. Industry analysts at Dell’Oro Group estimate that liquid cooling adoption in new AI data center builds could reach 40% by 2026, up from single digits just two years ago.

The Technology Landscape: Three Approaches to Liquid Cooling

The liquid cooling market encompasses three distinct technological approaches, each with different characteristics, maturity levels, and use cases.

Direct-to-Chip (Cold Plate) Cooling

The most immediately deployable liquid cooling technology is direct-to-chip cooling, also called cold plate cooling. In this approach, metal cold plates are mounted directly on the hottest components — GPUs, CPUs, and sometimes memory — with liquid circulating through channels in the plates to absorb heat. The heated liquid is then pumped to a heat rejection system (typically a cooling tower or dry cooler) outside the data center.

Direct-to-chip cooling is the most conservative transition from air cooling because it retains many familiar elements. Servers still sit in standard racks. Air still cools the lower-power components such as storage, networking, and any memory not fitted with cold plates. Only the highest-heat components receive liquid cooling. This hybrid approach makes direct-to-chip cooling the preferred first step for organizations transitioning from air-cooled facilities.

Nvidia’s reference designs for its Blackwell GPU platforms include direct-to-chip liquid cooling as a supported — and for the highest-TDP variants, required — configuration. Several server OEMs now offer liquid-cooled GPU servers as standard products. The Open Compute Project (OCP) has standardized on 25% propylene glycol (PG-25) as the working fluid, enabling a modular approach where facility piping stays fixed while CDUs and servers upgrade around it. The technology is mature, field-proven, and represents the majority of liquid cooling deployments today.
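
As a rough sizing intuition for what a cold-plate loop has to move, the sketch below estimates coolant flow per GPU using approximate PG-25 properties and an assumed 10-degree rise across the plate; it is an illustration, not a design calculation.

```python
# Rough sizing intuition for a direct-to-chip loop: coolant flow needed for a
# 1,400 W GPU cold plate, assuming approximate PG-25 properties and a 10 K
# temperature rise across the plate. Illustrative only, not a design tool.

GPU_HEAT_W = 1_400           # Blackwell Ultra-class GPU heat load (W)
GPUS_PER_SERVER = 8
DELTA_T_K = 10.0             # coolant temperature rise across the cold plate (K)

# Approximate PG-25 (25% propylene glycol in water) properties near 30 C:
PG25_DENSITY_KG_M3 = 1_020
PG25_CP_J_PER_KG_K = 3_900

def coolant_flow_lpm(heat_w: float) -> float:
    """Mass flow from Q = m_dot * cp * dT, converted to litres per minute."""
    m_dot_kg_s = heat_w / (PG25_CP_J_PER_KG_K * DELTA_T_K)
    return m_dot_kg_s / PG25_DENSITY_KG_M3 * 1_000 * 60

print(f"Per GPU:    {coolant_flow_lpm(GPU_HEAT_W):.1f} L/min")                    # ~2.1
print(f"Per server: {coolant_flow_lpm(GPU_HEAT_W * GPUS_PER_SERVER):.1f} L/min")  # ~16.9
```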

Single-Phase Immersion Cooling

Immersion cooling takes a more radical approach: submerging entire servers in a non-conductive liquid, typically an engineered dielectric fluid. The liquid absorbs heat from all components simultaneously, eliminating the need for fans, heat sinks, and cold plates. Heated liquid circulates to external heat exchangers by convection or pumping.

Single-phase immersion means the cooling fluid remains liquid throughout the process — it absorbs heat and warms up but does not boil. This simplifies the system design and reduces the risk of fluid loss. The entire server, including its electronic components, sits submerged in a tank of fluid.

The advantages of immersion cooling are substantial. By eliminating fans, it reduces server power consumption by 10-15%. By cooling all components uniformly, it eliminates hot spots that can cause premature component failure. By operating servers in a sealed, particle-free environment, it protects hardware from dust, humidity, and vibration, which can extend component lifespan. And by enabling much higher rack densities — immersion-cooled racks can handle 200 kilowatts or more — it reduces the physical footprint of data centers.
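
A simple illustration of how those savings stack up at facility level, using assumed fan-share and PUE figures rather than measured ones:

```python
# Illustrative facility-level arithmetic combining the 10-15% fan savings with
# lower cooling overhead. The PUE figures below are assumptions for the sake
# of the example, not measured values for any particular facility.

IT_LOAD_AIR_MW = 10.0        # IT load of an air-cooled facility (MW)
FAN_FRACTION = 0.12          # assume fans were ~12% of server power
PUE_AIR = 1.5                # assumed air-cooled facility PUE
PUE_IMMERSION = 1.05         # assumed immersion facility PUE

it_load_immersion_mw = IT_LOAD_AIR_MW * (1 - FAN_FRACTION)   # same compute, fans removed
total_air_mw = IT_LOAD_AIR_MW * PUE_AIR
total_immersion_mw = it_load_immersion_mw * PUE_IMMERSION

saving_mw = total_air_mw - total_immersion_mw
print(f"Air-cooled total draw: {total_air_mw:.1f} MW")
print(f"Immersion total draw:  {total_immersion_mw:.2f} MW")
print(f"Saving: {saving_mw:.2f} MW ({saving_mw / total_air_mw:.0%} of facility power)")
# Roughly 5.8 MW saved on a 15 MW facility (~38%) under these assumptions.
```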

Immersion cooling is the fastest-growing segment of the liquid cooling market. The immersion cooling sub-market alone was valued at roughly $1.7-2.6 billion in 2025, depending on the analyst, with projected growth to $10-16 billion by the mid-2030s. In early 2025, Submer expanded into full data center design and construction, creating dedicated business units to build liquid-cooled facilities for AI workloads. Asperitas partnered with Cisco to integrate immersion cooling into Cisco’s Unified Computing System. LG, SK Enmove, and GRC announced a joint venture to develop next-generation immersion systems. However, immersion cooling requires purpose-built facilities and specialized operational procedures — you cannot retrofit an air-cooled data center for immersion without significant facility modifications.

Two-Phase Immersion Cooling

Two-phase immersion cooling uses a fluid with a low boiling point that evaporates when it contacts hot components. The vapor rises to a condenser at the top of the tank, where it liquefies and drips back down. This phase change absorbs significantly more heat per unit of fluid than single-phase cooling, enabling even higher power densities. In January 2026, Submer and Inspur deepened their collaboration to deploy two-phase immersion systems for hyperscale racks in China, targeting power densities exceeding 100 kW per rack.

Two-phase immersion is the most thermodynamically efficient cooling approach, but it faces a major disruption: the PFAS regulatory crackdown. The fluorinated cooling fluids that two-phase systems depend on — most notably 3M’s Novec series — are classified as PFAS (“forever chemicals”) due to their environmental persistence. In December 2022, 3M announced it would exit all PFAS manufacturing by end of 2025, and the last order deadline for Novec fluids passed in March 2025. The EU has signaled intent to restrict all PFAS chemicals broadly, and the US EPA has tightened PFAS reporting and disposal rules. While alternative suppliers are stepping in with both PFAS-based replacement fluids and newer PFAS-free chemistries, the supply chain uncertainty has made two-phase immersion a riskier bet for new deployments. Operators who committed to Novec-based systems must now navigate a transitional period of fluid sourcing and potential reformulation.

Two-phase immersion cooling remains deployed primarily in research environments, specialized high-performance computing installations, and select hyperscaler trials. Microsoft has conducted two-phase immersion trials for AI training clusters, reporting significant energy savings. But the combination of fluid cost, regulatory headwinds, and operational complexity means single-phase immersion and direct-to-chip cooling dominate commercial-scale deployment for now.

The Coolant Distribution Unit: The Heart of the System

Regardless of which liquid cooling approach is used, the system depends on coolant distribution units (CDUs) to manage the flow of cooling liquid between the IT equipment and the heat rejection infrastructure. CDUs control temperature, pressure, and flow rate, and provide the interface between the facility’s cooling water and the precision cooling circuits that contact electronic components.
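
Conceptually, one of the CDU's core tasks can be reduced to a feedback loop: measure the coolant supply temperature, compare it to a setpoint, and adjust pump speed. The sketch below is a deliberately simplified illustration with hypothetical names; production CDU controllers also manage pressure, redundancy, leak response, and facility interlocks.

```python
# Conceptual sketch of one CDU control task: adjusting secondary-loop pump
# speed to hold the coolant supply temperature at a setpoint. Simplified
# proportional control with hypothetical names, for illustration only.

class SupplyTempController:
    """Proportional controller: a hotter supply drives more flow toward the
    facility heat exchanger to carry heat away faster."""

    def __init__(self, setpoint_c: float, gain: float,
                 min_speed: float = 0.2, max_speed: float = 1.0):
        self.setpoint_c = setpoint_c
        self.gain = gain
        self.min_speed = min_speed
        self.max_speed = max_speed

    def pump_speed(self, supply_temp_c: float) -> float:
        """Return pump speed as a fraction of maximum, clamped to safe limits."""
        error = supply_temp_c - self.setpoint_c
        speed = 0.5 + self.gain * error          # nominal 50% speed at setpoint
        return max(self.min_speed, min(self.max_speed, speed))

controller = SupplyTempController(setpoint_c=32.0, gain=0.1)
for temp in (30.0, 32.0, 35.0, 40.0):
    print(f"supply {temp:.0f} C -> pump speed {controller.pump_speed(temp):.0%}")
```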

The CDU market has become fiercely competitive, with established thermal management companies (Vertiv, Schneider Electric, CoolIT Systems) competing with specialists (GRC, LiquidCool Solutions, Asetek) for market share. CoolIT announced a 2 MW CDU offering in April 2025 — a scale that would have been unthinkable just a few years ago. Vertiv launched the Mega Mod HDX in January 2026, a modular system designed specifically for high-density AI compute environments. CDU design is a critical differentiator — efficiency, reliability, and maintainability vary significantly between manufacturers, and a CDU failure can take an entire rack offline.

The integration of CDUs into existing data center infrastructure is one of the primary challenges in liquid cooling adoption. Air-cooled data centers were designed around airflow — raised floors, plenum spaces, CRAC units. Retrofitting these facilities for liquid cooling requires plumbing infrastructure, leak detection systems, and fluid management capabilities that were not part of the original design.

New data center construction is increasingly being designed “liquid-ready” from the ground up, with plumbing infrastructure, higher floor loading capacity (liquid-filled tanks are heavy), and facility designs optimized for liquid cooling. The OCP Global Summit in 2025 featured extensive working sessions on standardizing liquid cooling interconnects, leak detection protocols, and facility-to-IT interfaces. This represents a fundamental shift in data center architecture that will play out over the next decade.

Waste Heat: From Problem to Opportunity

One of the most compelling aspects of liquid cooling is the opportunity to capture and reuse waste heat. Air-cooled data centers exhaust warm air at temperatures too low for most practical applications — typically 35-45 degrees Celsius. Liquid cooling systems, particularly those using high-temperature cooling loops, can deliver waste heat at 60-80 degrees Celsius, a temperature range useful for district heating, agricultural greenhouses, industrial processes, and other applications.

Northern European countries have led the way in data center waste heat recapture. In Finland, Microsoft and energy utility Fortum are partnering to supply around 40% of district heating demand in the Espoo-Kauniainen-Kirkkonummi region — serving 250,000 people — using waste heat from new data centers, scheduled to go live by 2026. In Ireland, the Tallaght District Heating Scheme saved 1,100 tonnes of CO2 in its first year by redirecting waste heat from a nearby Amazon data center. Across Scandinavia and Central Europe, data center waste heat integration into district heating networks is becoming routine.

Regulation is accelerating adoption. The EU’s revised Energy Efficiency Directive (EED) now requires data centers commissioned after July 1, 2026 to demonstrate a minimum 15% waste heat reuse rate, rising to 20% by mid-2028. Member States must develop waste heat action plans by 2030. Germany’s national Energy Efficiency Act goes further, establishing mandatory PUE thresholds and waste heat utilization requirements for new data centers. The European Commission is preparing a Data Centre Energy Efficiency Package, planned for adoption in April 2026, that will introduce a rating scheme and launch work on minimum performance standards for data centers across Europe.

The economics are increasingly favorable. 2026 benchmarks show delivered heat costs from data centers in the range of 12-30 EUR/MWh for heat network operators, compared to 35-55 EUR/MWh for gas boilers. In regions with established district heating infrastructure, data center waste heat is becoming a genuine commodity.
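
A simple way to see what that spread means for a heat network operator, using an assumed annual purchase volume and the mid-points of the price ranges above:

```python
# What the price spread means for a heat network operator. The annual heat
# volume is an assumed figure; the prices are mid-points of the quoted ranges.

ANNUAL_HEAT_MWH = 50_000      # assumed heat purchased per year (MWh)
DC_HEAT_EUR_PER_MWH = 21      # mid-point of 12-30 EUR/MWh
GAS_HEAT_EUR_PER_MWH = 45     # mid-point of 35-55 EUR/MWh

annual_saving_eur = ANNUAL_HEAT_MWH * (GAS_HEAT_EUR_PER_MWH - DC_HEAT_EUR_PER_MWH)
print(f"Annual saving: EUR {annual_saving_eur:,.0f}")   # EUR 1,200,000 per year
```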

In regions without district heating infrastructure, creative applications are emerging. Data center waste heat is being used for aquaculture, greenhouse agriculture, and industrial drying processes. These applications transform data centers from pure energy consumers into components of circular energy systems.

The potential is significant. The IEA estimates global data center electricity consumption at around 415 TWh in 2024, projected to reach 650-1,050 TWh by the end of the decade. Virtually all of that energy is eventually converted to heat. Capturing even a fraction for productive use could displace millions of tons of heating fuel annually.
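
A rough order-of-magnitude check on that claim, with the capture fraction and fuel constants stated as assumptions:

```python
# Order-of-magnitude check on the "millions of tons" claim: how much natural
# gas would be displaced if 10% of 2024 data center electricity were captured
# as useful heat. The capture fraction and fuel constants are assumptions.

DC_ELECTRICITY_TWH = 415          # IEA estimate for 2024
CAPTURE_FRACTION = 0.10           # assumed share reused as heat
BOILER_EFFICIENCY = 0.90          # efficiency of the gas boilers replaced
GAS_LHV_MWH_PER_TONNE = 13.9      # ~50 MJ/kg lower heating value (approx.)
GAS_CO2_T_PER_MWH = 0.2           # combustion emissions factor (approx.)

heat_mwh = DC_ELECTRICITY_TWH * 1e6 * CAPTURE_FRACTION
gas_mwh = heat_mwh / BOILER_EFFICIENCY
gas_tonnes = gas_mwh / GAS_LHV_MWH_PER_TONNE
co2_tonnes = gas_mwh * GAS_CO2_T_PER_MWH

print(f"Displaced gas: {gas_tonnes / 1e6:.1f} Mt")   # ~3.3 Mt of natural gas
print(f"Avoided CO2:   {co2_tonnes / 1e6:.1f} Mt")   # ~9.2 Mt CO2
```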

Adoption Barriers and Industry Response

Despite compelling economics and clear technical necessity, liquid cooling adoption faces several barriers that are slowing deployment.

Skills gap: The data center workforce has decades of experience with air cooling and limited familiarity with liquid systems. Leak anxiety is pervasive — the idea of running liquid near electronic equipment triggers deep institutional resistance. Training programs and certification standards for liquid cooling operations are still maturing, though Vertiv’s 2025 Management and Operations Innovation Day focused specifically on building liquid cooling service expertise at scale.

Supply chain constraints: The rapid growth of liquid cooling demand has strained the supply chains for specialized components: engineered coolants, precision plumbing, CDUs, and immersion tanks. Lead times for some components extend to six months or longer, limiting how quickly new deployments can be built. The 3M PFAS exit has further complicated coolant sourcing for two-phase systems.

Standardization gaps: Unlike air cooling, where standards for rack dimensions, airflow patterns, and power distribution are well-established, liquid cooling standards are still evolving. Different manufacturers use different connector types, fluid specifications, and system architectures, creating interoperability challenges and lock-in risks. The OCP’s liquid cooling specifications and the standardized PG-25 coolant are helping, but full interoperability remains years away.

Retrofitting costs: While new data centers can be designed for liquid cooling from the start, the existing global inventory of air-cooled data centers represents trillions of dollars of infrastructure that cannot be easily or cheaply converted. Retrofitting is possible but expensive, and many facility owners are choosing to build new liquid-cooled facilities rather than convert existing ones.

The industry is responding to these barriers with increasing urgency. The OCP has published modular technology cooling system specifications aimed at standardization. Major CDU manufacturers are expanding production capacity. Training programs are proliferating. And the economic case for liquid cooling is becoming so overwhelming for AI workloads that adoption barriers are being overcome through sheer market pressure.

The Next Five Years: Liquid Becomes the Default

The trajectory of the liquid cooling market is clear: within five years, liquid cooling will be the default thermal management approach for new data center construction serving AI and high-performance computing workloads. Air cooling will persist for legacy enterprise computing and lower-density applications, but no new AI-focused data center will be designed without liquid cooling capability.

The market implications are substantial. The projected multi-billion-dollar growth represents the creation of an entirely new segment of the data center ecosystem. Companies that position themselves early in this transition — whether as manufacturers, integrators, or operators with liquid cooling expertise — will capture disproportionate value.

The regulatory landscape is reinforcing the shift. The EU’s waste heat mandates and PUE reporting requirements give liquid cooling an additional advantage over air cooling, since liquid systems are far more effective at capturing usable waste heat. As similar regulations spread globally, liquid cooling will transition from a performance necessity to a compliance requirement.

For the data center industry, the liquid cooling transition is as significant as the shift from proprietary server hardware to commodity x86 servers in the early 2000s, or the shift from owned data centers to cloud computing in the 2010s. It represents a fundamental change in the physical infrastructure of computing, driven by the insatiable thermal demands of artificial intelligence.

🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: Medium — Algeria’s nascent data center market (6 facilities nationwide) means liquid cooling is not yet an operational concern, but any new builds or expansions must factor in cooling strategy from day one given the country’s hot climate (summer temperatures exceeding 45°C in southern regions)
Infrastructure Ready? No — Algeria lacks liquid cooling supply chains, trained technicians, and CDU manufacturing or distribution. Current facilities rely entirely on air cooling. No district heating networks exist to absorb waste heat
Skills Available? No — No local training programs or certifications for liquid cooling operations. Mechanical and HVAC engineering talent exists but requires specialized retraining for data center liquid cooling systems
Action Timeline: 12-24 months — Algeria’s Ministry of Post, Telecommunications, and Digital Technology should begin incorporating liquid cooling readiness into specifications for any planned data center projects, including the Huawei partnership facilities
Key Stakeholders: Ministry of Digital Technology, Algerie Telecom, Mobilis, Djezzy, national cloud initiative planners, university engineering programs, Sonatrach (potential waste heat applications for industrial processes)
Decision Type: Strategic — Understanding liquid cooling is essential for anyone planning Algeria’s digital infrastructure future; building air-only facilities in 2026 locks in technology that will be obsolete for AI workloads within 5 years

Quick Take: Algeria’s hot climate actually makes liquid cooling more relevant, not less — air cooling efficiency drops significantly at high ambient temperatures. Any new data center investment in Algeria should be designed liquid-ready from the start, even if initial deployments use air cooling. The Ministry should study EU waste heat regulations as a model, since Algeria’s industrial zones could benefit from circular heat reuse.

Sources & Further Reading