L10 · Pillar 2: Put Chips on Server

Cool the Chips

Component-Level Thermal Management

Supply Constraint

7/10

How hard it is to add capacity in this layer. Suppliers, lead times, capital intensity, geographic concentration.

Demand Pull

8/10

How much of this layer's revenue is AI-driven today and how fast that mix is growing.

Liquid cooling mandatory for AI GPUs. Cold plates sit directly on GPU packages. Distinct from building-level HVAC (L19).

Layer Dependencies

Cold plates and liquid cooling loops sit directly on GPU packages inside the server. This is chip-level thermal management — distinct from building-level HVAC and chillers in L19. As GPU power consumption rises past 1000W per chip, component-level cooling becomes critical.

Deep Dive

The Liquid Cooling Mandate runs directly through this layer, but at the component level — not the building level. L10 is about the cold plates, direct-to-chip cooling loops, and thermal interface materials that sit directly on GPU packages inside the server. This is distinct from L19 (building-level HVAC and chillers). The physics is simple: an NVIDIA B200 GPU dissipates 1000W+ from a die roughly the size of a postage stamp. Air cannot remove that much heat from that small an area. A cold plate with liquid coolant flowing through micro-channels can.
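The scale of the problem is easy to sanity-check with first-principles arithmetic. A minimal sketch, assuming water as the coolant and a 10 K temperature rise across the cold plate (values chosen for illustration, not taken from any vendor spec):

```python
# Back-of-the-envelope flow requirement for direct-to-chip cooling:
# energy balance Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
# All parameter values here are illustrative assumptions.

def coolant_flow_lpm(power_w: float, delta_t_k: float,
                     cp_j_per_kg_k: float = 4186.0,   # water, approx.
                     density_kg_per_l: float = 1.0) -> float:
    """Liters per minute of coolant needed to absorb power_w
    with a delta_t_k temperature rise through the cold plate."""
    mass_flow_kg_s = power_w / (cp_j_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_per_l * 60.0

# A 1000 W GPU with a 10 K coolant rise needs roughly 1.4 L/min:
print(round(coolant_flow_lpm(1000.0, 10.0), 2))  # ~1.43
```

The point of the calculation is that the required flow is modest per GPU; the engineering difficulty lies in the micro-channel geometry and the thermal interface, not in moving the liquid.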

Ecolab entered this space decisively in early 2026 by acquiring CoolIT Systems for $4.75B — the leading pure-play maker of direct-to-chip liquid cooling systems for AI data centers. CoolIT's cold plates sit on top of GPU packages and circulate liquid coolant through precision-machined channels, transferring heat away from the die surface at rates air cooling physically cannot match. The acquisition signals that component-level thermal management has graduated from a niche technology to a critical infrastructure layer.

The technical challenge is thermal interface material (TIM) performance. The TIM — a thin paste or pad between the GPU die and the cold plate — is the weakest link in the thermal chain. If TIM thermal resistance is too high, even perfect cold plate design can't save the GPU from throttling. Honeywell's PTM7900 phase-change TIM has become the de facto standard for high-performance AI servers because it maintains low thermal resistance over thousands of thermal cycles without pump-out degradation.
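Why TIM resistance is "the weakest link" falls out of the series thermal-resistance model. A sketch with assumed resistance values (the numbers are illustrative, not measurements of any specific product):

```python
# Series thermal-resistance budget for a direct-to-chip loop:
#   T_die ≈ T_coolant + P * (R_tim + R_plate)
# The resistances below are assumed for illustration.

def die_temp_c(power_w: float, coolant_c: float,
               r_tim_k_per_w: float, r_plate_k_per_w: float) -> float:
    """Steady-state die temperature for a given power and
    thermal-resistance stack (lumped, one-dimensional model)."""
    return coolant_c + power_w * (r_tim_k_per_w + r_plate_k_per_w)

P, COOLANT_C = 1000.0, 40.0
fresh_tim   = die_temp_c(P, COOLANT_C, r_tim_k_per_w=0.015, r_plate_k_per_w=0.020)
pumped_out  = die_temp_c(P, COOLANT_C, r_tim_k_per_w=0.035, r_plate_k_per_w=0.020)
print(fresh_tim, pumped_out)  # 75.0 vs 95.0 — the degraded TIM throttles
```

At 1000 W, every 0.001 K/W of extra interface resistance costs a full degree at the die, which is why TIM degradation over thermal cycles matters more here than almost anywhere else in electronics.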

This layer is thin by design — one company (ECL via CoolIT) — because the cold plate market was fragmented among private companies until the Ecolab acquisition consolidated it. Boyd Corporation (private), Cooler Master, and Aavid (part of Boyd) compete at smaller scale. The thinness is a signal: as GPU power consumption crosses 1000W, expect more M&A activity here as thermal management becomes a recognized infrastructure tier rather than a server component afterthought.

CHAIN INSIGHT

GPU die power exceeds 1000W on a postage-stamp-sized surface. Cold plates with liquid coolant are the only viable heat removal path — air cooling fails above ~50kW per rack.
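The rack-level arithmetic behind the ~50kW figure can be sketched directly; the per-rack GPU counts below are hypothetical densities, not taken from the source:

```python
# Rack-level check against the document's ~50 kW air-cooling ceiling.
# GPU counts are assumed rack densities for illustration only.

def rack_gpu_kw(gpus_per_rack: int, watts_per_gpu: float = 1000.0) -> float:
    """GPU-only power draw of a rack, in kW (ignores CPUs, NICs, fans)."""
    return gpus_per_rack * watts_per_gpu / 1000.0

AIR_LIMIT_KW = 50.0  # the air-cooling ceiling cited in the insight above
for n in (32, 72):
    kw = rack_gpu_kw(n)
    print(f"{n} GPUs -> {kw:.0f} kW, air-coolable: {kw <= AIR_LIMIT_KW}")
```

Even before counting CPUs, networking, and fan overhead, a dense rack overruns the air-cooling ceiling on GPU power alone.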

Companies in This Layer

CoolIT acquisition
Ecolab

Acquired CoolIT Systems for $4.75B in 2026 — the leading pure-play maker of GPU cold plates and direct-to-chip liquid cooling systems for AI data centers.

First-mover in DC liquid load banks, industrial thermal expertise, project-based
Thermon Group

Industrial heating and cooling solutions — liquid load banks for data center cooling validation and thermal management systems for critical infrastructure.

Precision machining capability but not proprietary; competitive market, no pricing power
NN Inc

Manufacturer of precision components for liquid cooling and grid infrastructure

Fluorochemistry expertise for immersion cooling, but broad chemical company with commodity TiO2 and PFAS overhang
Chemours

Immersion cooling fluids for data center thermal management