L08 · Pillar 2: Put Chips on Server

Memory for the Server

System DRAM & Memory IP

Supply Constraint

8/10

How hard it is to add capacity in this layer. Suppliers, lead times, capital intensity, geographic concentration.

Demand Pull

8/10

How much of this layer's revenue is AI-driven today and how fast that mix is growing.

HBM is inside the GPU package (L05); system DRAM sits separately on the motherboard. SK Hynix holds 62%+ HBM share.

Layer Dependencies

System DRAM from Micron, Samsung, SK Hynix sits on the server motherboard separately from HBM (which is inside the GPU package). RMBS licenses memory interface IP used by every memory chip. Memory connects to CPU and GPU through the motherboard.

Deep Dive

Arista Networks is the structural winner in this layer and has been for three years. They supply the spine and leaf switches that connect GPU racks inside AI clusters. The Scale-Up Fabric Wars trend defines the competitive dynamic: as clusters grow beyond single-rack scale, the network fabric becomes the performance bottleneck.

NVIDIA's NVLink connects GPUs within a server. InfiniBand (also NVIDIA, via Mellanox) connects servers within a rack. But rack-to-rack and pod-to-pod connectivity runs on Ethernet — Arista's territory. The 800G-to-1.6T transition happening now doubles port bandwidth but also doubles the complexity of the switching silicon, optical transceivers, and cable management. Arista's R-Series switches, built on Broadcom's Jericho3-AI ASICs, are designed specifically for AI fabric.
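The port-speed arithmetic above can be sketched directly. The switch radix below is an illustrative assumption, not an Arista or Broadcom specification:

```python
# Rough arithmetic for the 800G -> 1.6T port transition.
# The port count (radix) is an illustrative assumption, not a vendor spec.

def switch_capacity_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate one-way bandwidth of a single switch, in Tbps."""
    return ports * gbps_per_port / 1000

RADIX = 64  # assumed ports per leaf switch

for speed in (800, 1600):
    print(f"{speed}G x {RADIX} ports = "
          f"{switch_capacity_tbps(RADIX, speed):.1f} Tbps")
```

Doubling the per-port rate doubles aggregate switch bandwidth at the same radix, which is why the transition stresses switching silicon and optics rather than simply adding ports.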

Cisco remains the legacy incumbent but has been losing AI data center share to Arista consistently. Their pivot to Silicon One custom ASICs is intended to close the gap. The competitive outcome matters because the winner of AI networking captures the highest-margin hardware position in the data center after the GPU itself.

Broadcom's networking division deserves attention here too — their custom switching ASICs (Jericho, Ramon) are inside most Arista and Cisco switches. Broadcom wins regardless of which switch vendor leads, similar to how EDA wins regardless of which chip design wins.

CHAIN INSIGHT

The 800G-to-1.6T port transition doubles bandwidth but also doubles switch ASIC complexity and optical transceiver count. Arista is the structural leader; Cisco is fighting to stay relevant.

Companies in This Layer

SK Hynix (Korea)

Moat: Dominant HBM producer with 50-62% share, a 12-18 month yield lead, and NVIDIA as primary partner.

62%+ HBM market share. NVIDIA's primary memory supplier. Dominant in HBM3E, ramping HBM4. THE memory bottleneck.

Samsung Electronics (Korea)

Moat: Unique vertical integration (HBM + foundry + packaging) with the world's largest memory capacity.

Second-largest HBM supplier. Playing catch-up to SK Hynix on HBM3E yields. Massive NAND and DRAM capacity.

Micron Technology

Moat: HBM oligopoly position with NVIDIA validation lock-in, plus a geopolitical moat as the sole Western producer.

Third-largest memory maker. HBM3E production ramping. Only Western HBM manufacturer — strategic importance for US supply-chain security.

Rambus

Moat: 1600+ patents, JEDEC influence, spec-level switching costs.

Memory interface IP licensing. Every HBM and DDR chip uses Rambus-patented interface technology. GB200 HBM memory incorporates Rambus IP.

NVIDIA

Moat: Ecosystem monopoly.

80%+ AI accelerator market share. GB200/B200 Blackwell is the standard. The CUDA ecosystem moat is 15+ years deep. Every dollar of AI capex touches NVIDIA.

Broadcom

Moat: Custom ASIC dominance — 5+ hyperscaler programs, $73B backlog, largest design force globally.

Dominant custom ASIC designer. Google TPU, Meta MTIA, other hyperscaler custom silicon. Also networking ASICs (Tomahawk). Spans L06 + L08.

Cohu

Moat: Eclipse high-power thermal handler plus HBM inspection, but competitive vs TER/Advantest.

Semiconductor test and inspection equipment — the Eclipse handler for AI chip testing, HBM inspection/metrology, and PMIC test for data center power management.

SanDisk

Moat: One of five global NAND manufacturers, Kioxia JV access, BiCS8 technology — but NAND is a cyclical commodity.

NAND flash pure-play following its spin-off from Western Digital (WDC).