Memory for the Server
System DRAM & Memory IP
Supply Constraint
8/10. How hard it is to add capacity in this layer: suppliers, lead times, capital intensity, geographic concentration.
Demand Pull
8/10. How much of this layer's revenue is AI-driven today and how fast that mix is growing.
HBM is inside the GPU package (L05). System DRAM sits on the motherboard separately. SK Hynix holds 62%+ of the HBM market.
Layer Dependencies
System DRAM from Micron, Samsung, SK Hynix sits on the server motherboard separately from HBM (which is inside the GPU package). RMBS licenses memory interface IP used by every memory chip. Memory connects to CPU and GPU through the motherboard.
Deep Dive
Arista Networks is the structural winner in this layer and has been for three years. They supply the spine and leaf switches that connect GPU racks inside AI clusters. The Scale-Up Fabric Wars trend defines the competitive dynamic: as clusters grow beyond single-rack scale, the network fabric becomes the performance bottleneck.
NVIDIA's NVLink connects GPUs within a server. InfiniBand (also NVIDIA, via Mellanox) connects servers within a rack. But rack-to-rack and pod-to-pod connectivity runs on Ethernet — Arista's territory. The 800G-to-1.6T transition happening now doubles port bandwidth but also doubles the complexity of the switching silicon, optical transceivers, and cable management. Arista's R-Series switches, built on Broadcom's Jericho3-AI ASICs, are designed specifically for AI fabric.
Cisco remains the legacy incumbent but has been losing AI data center share to Arista consistently. Their pivot to Silicon One custom ASICs is intended to close the gap. The competitive outcome matters because the winner of AI networking captures the highest-margin hardware position in the data center after the GPU itself.
Broadcom's networking division deserves attention here too — their custom switching ASICs (Jericho, Ramon) are inside most Arista and Cisco switches. Broadcom wins regardless of which switch vendor leads, similar to how EDA wins regardless of which chip design wins.
The 800G-to-1.6T port transition doubles bandwidth but also doubles switch ASIC complexity and optical transceiver count. Arista is the structural leader; Cisco is fighting to stay relevant.
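The scaling pressure described above can be made concrete with back-of-envelope arithmetic: at a fixed switch-ASIC capacity, doubling the per-port speed halves the switch radix, which quarters the number of hosts a nonblocking two-tier leaf-spine fabric can reach. This is a minimal sketch under assumed illustrative figures (a 51.2 Tb/s-class switching ASIC, an even split of leaf ports between hosts and spines); it is not a vendor specification.

```python
# Back-of-envelope: at fixed switch-ASIC capacity, doubling port speed
# halves the radix and quarters a two-tier fabric's host count.
# All figures are illustrative assumptions, not vendor specs.

def max_hosts_two_tier(asic_tbps: float, port_gbps: int) -> int:
    """Hosts reachable in a nonblocking two-tier leaf-spine fabric.

    Each leaf splits its radix evenly: half the ports face hosts,
    half face spines, so max hosts = radix^2 / 2.
    """
    radix = int(asic_tbps * 1000) // port_gbps  # ports per switch ASIC
    return (radix * radix) // 2

ASIC_TBPS = 51.2  # assumed current-generation switching capacity

print(max_hosts_two_tier(ASIC_TBPS, 800))   # 800G ports -> 2048 hosts
print(max_hosts_two_tier(ASIC_TBPS, 1600))  # 1.6T ports -> 512 hosts
```

Holding fabric size constant while moving to 1.6T ports therefore forces either higher-capacity switching silicon or a deeper topology with more hops and more optical transceivers — one way to read the claim that the port transition doubles ASIC complexity rather than simply doubling throughput for free.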
Companies in This Layer
SK Hynix — 62%+ HBM market share. NVIDIA's primary memory supplier. Dominant in HBM3E, ramping HBM4. THE memory bottleneck.
Samsung — Second-largest HBM supplier. Playing catch-up to SK Hynix on HBM3E yields. Massive NAND and DRAM capacity.
Micron — Third-largest memory maker. HBM3E production ramping. Only Western HBM manufacturer — strategic importance for US supply chain security.
Rambus (RMBS) — Memory interface IP licensing. Every HBM and DDR chip uses Rambus-patented interface technology. GB200 HBM memory incorporates Rambus IP.
NVIDIA — 80%+ AI accelerator market share. GB200/B200 Blackwell is the standard. CUDA ecosystem moat is 15+ years deep. Every dollar of AI capex touches NVIDIA.
Broadcom — Dominant custom ASIC designer. Google TPU, Meta MTIA, other hyperscaler custom silicon. Also networking ASICs (Tomahawk). Spans L06 + L08.
Semiconductor test and inspection equipment — Eclipse handler for AI chip testing, HBM inspection/metrology, and PMIC test for data center power management.