5 · CONSTRAINT

The Liquid Cooling Mandate

Chips are generating so much heat that air physically can't carry it away. Every future AI rack needs plumbing.

Status: Tipping point now — air cooling is already obsolete for frontier AI racks
1. What's Technically Happening

Thermal density is the forcing function. Power consumed by a chip becomes heat that must be removed. An NVIDIA H100 GPU dissipates roughly 700 W. A Blackwell B200 dissipates around 1,200 W. Rubin dies will push higher. In a rack with 72 of these GPUs, the total thermal load exceeds 120 kW. The Rubin Ultra Kyber NVL576 rack is spec'd at 600 kW.
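
A back-of-the-envelope check makes the arithmetic concrete. The sketch below (a minimal Python sketch; the per-GPU wattages are the approximate figures above, and the non-GPU overhead is an assumption) shows why a 72-GPU Blackwell rack lands above 120 kW: the GPUs alone account for roughly 86 kW, and CPUs, NVLink switches, and power-conversion losses make up the rest.

```python
# Rough rack-level thermal load from per-GPU power draw.
# Assumption: essentially every watt consumed is dissipated as heat.
GPU_POWER_W = {"H100": 700, "B200": 1200}  # approximate per-GPU draw
GPUS_PER_RACK = 72

for gpu, watts in GPU_POWER_W.items():
    gpu_heat_kw = watts * GPUS_PER_RACK / 1000
    print(f"{gpu}: ~{gpu_heat_kw:.0f} kW from GPUs alone per 72-GPU rack")

# B200 case: ~86 kW from the GPUs; CPUs, NVLink switches, and power
# conversion losses (assumed overhead) push the full rack past 120 kW.
```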

Air cooling hits a physics wall around 30–50 kW per rack. Above that density, the volume of air required to carry heat out of a rack exceeds what a practical fan array can deliver without excessive noise, power consumption, and airflow turbulence. The Blackwell generation already pushed past this limit, with direct-to-chip liquid cooling as the default. Rubin NVL72 is 100% liquid-cooled — NVIDIA has confirmed there is no air-cooled variant. Every future frontier AI rack will be liquid-cooled.
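
The 30–50 kW ceiling follows from the basic heat balance Q = ṁ·c_p·ΔT. A minimal sketch (standard air properties; the 15 K inlet-to-outlet temperature rise is an assumed, typical design point, not a spec) shows how quickly the required airflow becomes impractical:

```python
# Airflow needed to remove a given heat load at a fixed temperature rise.
# m_dot = Q / (c_p * dT); volume flow = m_dot / rho.
RHO_AIR = 1.2      # kg/m^3, air at roughly room temperature
CP_AIR = 1005.0    # J/(kg*K), specific heat of air
DELTA_T = 15.0     # K, assumed inlet-to-outlet rise across the rack

def cfm_required(heat_kw: float) -> float:
    """Cubic feet per minute of air needed to carry heat_kw away."""
    m_dot = heat_kw * 1000 / (CP_AIR * DELTA_T)   # kg/s of air
    m3_per_s = m_dot / RHO_AIR
    return m3_per_s * 2118.88                     # m^3/s -> CFM

for kw in (30, 50, 120, 600):
    print(f"{kw:>4} kW rack -> {cfm_required(kw):>8,.0f} CFM")

# ~3,500 CFM at 30 kW is already demanding; ~70,000 CFM at 600 kW is
# far beyond what any practical rack fan array can deliver.
```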

The architecture stack has five levels: (1) direct-to-chip cold plates that sit on top of GPU and CPU dies, (2) coolant distribution manifolds routing fluid through the rack, (3) rear-door heat exchangers that capture residual air-cooled heat from secondary components, (4) coolant distribution units (CDUs) that move heat out of the rack, and (5) facility-level water or refrigerant loops that reject heat outside the building. The global liquid cooling market hit $6B in 2026, growing at 28.7% CAGR. Immersion cooling (submerging entire servers in dielectric fluid) is the fastest-growing segment at 34% CAGR.
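
Running the same heat balance with water instead of air shows why the liquid side of this stack is tractable. The sketch below (the 10 K loop temperature rise is an assumed design point, not a vendor spec) estimates the coolant flow a CDU must circulate:

```python
# Coolant flow needed to remove a rack's heat via a water loop.
CP_WATER = 4186.0   # J/(kg*K), specific heat of water
DELTA_T = 10.0      # K, assumed coolant rise across the rack

def water_flow_lpm(heat_kw: float) -> float:
    """Liters per minute of water needed to carry heat_kw away."""
    kg_per_s = heat_kw * 1000 / (CP_WATER * DELTA_T)
    return kg_per_s * 60          # 1 kg of water is ~1 liter

for kw in (120, 600):
    print(f"{kw} kW rack -> {water_flow_lpm(kw):,.0f} L/min of coolant")

# ~172 L/min for a 120 kW rack and ~860 L/min even for a 600 kW Kyber
# rack: ordinary plumbing, versus physically impossible airflow.
```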

A parallel sub-trend is water scarcity. Traditional evaporative cooling consumes millions of gallons per day per large data center. Phoenix — one of the country's largest data center markets — is now classified as water-scarce. Microsoft announced zero-water evaporation designs for its Phoenix and Mount Pleasant (Wisconsin) builds starting in 2026. WUE (water usage effectiveness) is becoming a hard constraint alongside PUE (power usage effectiveness). This creates parallel demand for closed-loop systems, dielectric coolants, dry coolers, and refrigerant-based heat rejection.
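
WUE itself is a simple ratio: liters of water consumed on site per kilowatt-hour of IT energy. A toy calculation (all inputs are illustrative assumptions, not reported figures for any specific facility) shows how the metric separates evaporative from closed-loop designs:

```python
# WUE = annual on-site water consumption (liters) / annual IT energy (kWh).
def wue(annual_water_liters: float, annual_it_kwh: float) -> float:
    return annual_water_liters / annual_it_kwh

it_load_mw = 100                         # assumed IT load for illustration
it_kwh = it_load_mw * 1000 * 8760        # kWh per year at full utilization

evaporative = wue(1.5e9, it_kwh)         # assumed ~1.5B liters/year evaporated
closed_loop = wue(0.0, it_kwh)           # zero-water design evaporates nothing

print(f"Evaporative: {evaporative:.2f} L/kWh")   # ~1.7 L/kWh
print(f"Closed loop: {closed_loop:.2f} L/kWh")   # 0.00 L/kWh
```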

2. In Plain English

Heat is the enemy. Every watt of electricity going into a GPU comes out the other side as heat, and heat is the single factor that limits how densely you can pack computing into a physical room. The harder you run a GPU, the more heat it makes, and if that heat can't be removed fast enough, the chip throttles itself or shuts down to survive.

For decades, we cooled computers by blowing cold air over them. Fans sucked air through the rack, across the chips, out the back. Simple, cheap, reliable. But there's a physics limit: air can only carry so much heat. Above roughly 30 to 50 kilowatts of heat per rack, you'd need hurricane-force winds to move the air fast enough — and even that wouldn't work. The Blackwell generation of NVIDIA GPUs already hit that ceiling. The Rubin generation blows straight through it. The Kyber rack coming in 2027 needs to dissipate 600 kilowatts — roughly the thermal output of 400 home ovens running full blast inside a single cabinet.

So you switch from air to liquid. Water (or a specialized coolant) carries heat roughly 3,000 times more efficiently than air. Instead of blowing air, you pipe liquid through cold plates that sit right on top of the chip, pulling heat away and carrying it out of the rack to a central loop. NVIDIA has said flatly: the Rubin NVL72 rack has no air-cooled option. If you want to run one, you need plumbing. This is no longer an upgrade — it is a requirement.
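
The "roughly 3,000 times" figure falls out of volumetric heat capacity — density times specific heat, i.e. how much heat a given volume of fluid carries per degree of temperature rise. A quick check (standard textbook fluid properties):

```python
# Volumetric heat capacity (J per m^3 per K) = density * specific heat.
fluids = {
    "air":   {"rho": 1.2,    "cp": 1005.0},   # kg/m^3, J/(kg*K)
    "water": {"rho": 1000.0, "cp": 4186.0},
}

cap = {name: f["rho"] * f["cp"] for name, f in fluids.items()}
ratio = cap["water"] / cap["air"]
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume than air")
# -> ~3,471x, the basis for the "roughly 3,000 times" figure.
```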

There's a second twist. Traditional data center cooling uses a lot of water, mostly through evaporative cooling towers outside the building. A large data center can consume millions of gallons of water a day, which is a real problem in places like Phoenix where water is scarce. So the industry is simultaneously racing to develop "zero-water" cooling, where the heat is rejected without any evaporation. Microsoft is piloting this in Phoenix and Wisconsin. Over the next few years every new AI data center will be liquid-cooled at the rack level and increasingly water-neutral at the facility level. The winners are the companies making cold plates, manifolds, coolant distribution units, rear-door heat exchangers, dry coolers, and specialty fluids.

3. Who Benefits Most

Beneficiaries are ranked by the directness of their exposure. Tickers that exist in our explorer link to the company brief.

Primary beneficiaries

Direct, first-order exposure. If the trend plays out, these are the names that capture the majority of the value.

Vertiv Holdings (VRT)

Dominant supplier of rack-level power and thermal management. Its Liebert cooling line and rear-door heat exchangers are designed into most frontier AI builds. A direct beneficiary across the full stack.

Modine Manufacturing (MOD)

Coolant distribution units, dry coolers, and thermal management for data center liquid loops. Near pure-play exposure: data center cooling is the fastest-growing segment in its entire business.

Johnson Controls (JCI)

Chillers, building management systems, and facility-level cooling. Scale matters when you are cooling a gigawatt facility.

Secondary beneficiaries

Real exposure but competing with alternatives or dependent on adjacent calls.

Carrier Global (CARR)

High-efficiency chillers and refrigerant systems. Commercial HVAC heritage now applied to liquid cooling and heat rejection.

Trane Technologies (TT)

Chillers and thermal systems for large facilities. A direct competitor to Carrier and JCI at the hyperscale tier.

Ingersoll Rand (IR)

Compressor and fluid management systems used in liquid cooling loops.

Air Products (APD)

Industrial gases and specialty fluids for immersion cooling and refrigerant recovery.

Picks and shovels

Enabling suppliers whose revenue scales with the trend regardless of which frontline vendor wins.

Comfort Systems USA (FIX)

The mechanical contractor that actually installs the cooling loops and plumbing inside the data center. Skilled labor is a real bottleneck.

EMCOR Group (EME)

National-scale mechanical and electrical contractor with crews available for large hyperscale installs.