L07 · Pillar 2: Put Chips on Server

Storage for the Server

Data Storage Systems

Supply Constraint

5/10

How hard it is to add capacity in this layer. Suppliers, lead times, capital intensity, geographic concentration.

Demand Pull

6/10

How much of this layer's revenue is AI-driven today and how fast that mix is growing.

Storage capacity can be added incrementally, and AI training datasets are growing faster than any other data category.

Layer Dependencies

SSDs and hard drives plug into the server motherboard alongside GPUs (L06) and system memory (L08). AI training requires rapid random access to massive datasets. Storage connects to GPU clusters through network switches (L15).

Deep Dive

The Optical Rewiring of AI is this layer's defining trend. As GPU clusters scale from thousands to hundreds of thousands of GPUs, electrical copper interconnects hit fundamental physics limits: signal loss, heat generation, and crosstalk all increase with cable length and data rate. The solution is optical — converting electrical signals to photons at the chip edge and transmitting over fiber.

NVIDIA's GB300 NVL72 rack requires 1.6T optical transceivers at every port. Each rack needs dozens of transceivers. A 100,000-GPU cluster needs hundreds of thousands. Coherent (formerly II-VI) and Lumentum make the indium phosphide (InP) laser chips that sit at the core of every transceiver. POET Technologies and Broadcom are working on silicon photonics alternatives that integrate optical components directly onto the chip — potentially cheaper at scale but not yet proven at the data rates required.
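The transceiver math above can be sketched with a back-of-envelope estimate. The 72-GPU rack count comes from the NVL72 name; the transceivers-per-GPU figure is an assumption for illustration (it varies with fabric topology and oversubscription), not a figure from this report:

```python
# Back-of-envelope 1.6T transceiver demand for a large GPU cluster.
# GPUS_PER_RACK is factual (NVL72 = 72 GPUs per rack);
# TRANSCEIVERS_PER_GPU is an illustrative assumption.
GPUS_PER_RACK = 72
CLUSTER_GPUS = 100_000
TRANSCEIVERS_PER_GPU = 4  # assumed; depends on network topology

racks = -(-CLUSTER_GPUS // GPUS_PER_RACK)  # ceiling division
transceivers = CLUSTER_GPUS * TRANSCEIVERS_PER_GPU

print(racks)         # 1389 racks
print(transceivers)  # 400000 transceivers
```

Even under conservative assumptions, the count lands in the hundreds of thousands, consistent with the scale described above.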

The supply chain stretch reveals the hidden constraint: InP substrates. Indium phosphide wafers are grown by a handful of companies — Sumitomo Electric (Japan), AXT Inc (US), and Wafer Technology (UK). The InP substrate market is tiny compared to silicon — perhaps $500M globally — but without it, no 1.6T transceiver ships. This is a classic chokepoint where a $500M input constrains a $50B network buildout.

Ciena and Semtech provide the DSP (digital signal processing) chips that clean up optical signals at the receiving end. As data rates push to 1.6T and beyond, DSP complexity increases quadratically — more computation per bit to compensate for signal degradation. This creates its own power and cooling burden, which feeds back into the 800V Power Architecture and Liquid Cooling Mandate trends.
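The quadratic-scaling claim can be made concrete with a toy model. This is a sketch of the relationship as stated above, not a measured characterization of any vendor's DSP:

```python
# Toy model: if DSP compute per bit grows roughly quadratically with
# line rate (the scaling asserted in the text), doubling the rate from
# 800G to 1.6T quadruples the per-bit DSP work.
def relative_dsp_work_per_bit(rate_gbps: float, base_gbps: float = 800) -> float:
    """Per-bit DSP work relative to a baseline rate, assuming quadratic scaling."""
    return (rate_gbps / base_gbps) ** 2

print(relative_dsp_work_per_bit(1600))  # 4.0
```

Four times the per-bit work at double the rate is why the power and cooling burden compounds rather than scaling linearly with bandwidth.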

CHAIN INSIGHT

InP (indium phosphide) substrate supply is the hidden chokepoint — a $500M input market constraining a $50B network buildout. Sumitomo Electric and AXT are near-monopoly suppliers.

Companies in This Layer

#2 custom ASIC + #1 optical DSP — dual-layer AI moat with 5-year AWS deal
Marvell Technology

Custom silicon for Amazon (Trainium/Inferentia) and Microsoft. Also PAM4 optical DSPs. Spans L06 + L07.

ONTAP hybrid cloud data management platform with 30-year enterprise installed base
NetApp

Enterprise data storage and cloud data services. All-flash arrays for AI training data. Strong software-defined storage capabilities.

DirectFlash architectural advantage unreplicated for a decade plus Evergreen subscription switching costs
Pure Storage

All-flash storage optimized for AI training and inference. DirectFlash proprietary architecture delivers consistent latency and high IOPS required for GPU cluster storage.

HDD duopoly + HAMR technology lead
Seagate Technology

Mass storage — hard drives and SSDs. AI training datasets require enormous storage volumes. HAMR technology for high-capacity drives.

Mass storage
Western Digital

Hard drives and NAND flash storage. Spinning out flash business (SanDisk). AI data lake storage at scale.