L06 · Pillar 1: Make the Chip

The Finished Chip

Compute & AI Silicon

Supply Constraint

6/10

How hard it is to add capacity in this layer: supplier count, lead times, capital intensity, geographic concentration.

Demand Pull

9/10

How much of this layer's revenue is AI-driven today and how fast that mix is growing.

More than $300B of hyperscaler capex flows through this layer. NVIDIA holds 80%+ AI accelerator share.
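The two axes above can be captured as a simple data record. This is a sketch of the scoring framework only; the field names and the composite "pressure" formula are my own illustration, not part of the analysis:

```python
from dataclasses import dataclass

@dataclass
class LayerScore:
    """A supply-chain layer scored on the two axes used in this analysis."""
    layer: str
    supply_constraint: int  # 1-10: how hard it is to add capacity
    demand_pull: int        # 1-10: how AI-driven revenue is, and how fast it grows

    def pressure(self) -> float:
        # Illustrative composite (assumed, not from the source): layers that
        # are both hard to expand and in heavy demand feel the most pressure.
        return self.supply_constraint * self.demand_pull / 10

l06 = LayerScore("L06 The Finished Chip", supply_constraint=6, demand_pull=9)
print(l06.pressure())  # → 5.4
```

By this toy composite, L06's high demand pull is partly offset by a merely moderate supply constraint, which is consistent with the framing that the real bottlenecks sit upstream.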

Layer Dependencies

Every chip here depends on TSMC (L04) for manufacturing, ASML (L02) for lithography, HBM (L08) for memory, and packaging (L05). These finished chips flow to server assembly (L13) where they become part of complete servers.

Deep Dive

This is the demand signal that drives everything beneath it. NVIDIA's GPU roadmap — B200, B300, Rubin, and beyond — dictates how much CoWoS capacity TSMC needs (L05), how much HBM SK Hynix must produce (L08), how much EUV time ASML's machines must deliver (L02), and ultimately how much power the data center must supply (L14-L15).

The Custom Silicon Surge trend splits this layer into two distinct markets. The merchant market (NVIDIA, AMD) sells GPUs to anyone. The captive market (Google TPU, Amazon Trainium, Microsoft Maia, Meta MTIA) designs chips exclusively for internal use. Both markets are growing, but the competitive dynamics differ sharply. Merchant GPUs compete on software ecosystem (CUDA). Captive ASICs compete on total cost of ownership for specific workloads.

Broadcom sits at the center of the custom silicon trend — they design ASICs for Google, Meta, and others through their Custom Silicon division. Marvell does the same for Amazon and Microsoft. Both companies are evolving from networking chip companies into AI ASIC design houses, which is a higher-margin, higher-growth business.

The Inference Architecture Shift trend also flows through this layer. Training requires the largest, most expensive GPUs (B200/B300 class). Inference can potentially run on smaller, more efficient chips — creating an opening for Groq, Cerebras, and custom inference ASICs. If inference demand grows 10x faster than training (which many expect), the chip mix shifts away from NVIDIA's most expensive products toward a more diverse silicon ecosystem. That's the risk embedded in this layer.
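The mix-shift arithmetic above can be made concrete with a toy model. All starting revenues and growth rates below are hypothetical placeholders, chosen only to show how a 10x growth-rate gap compounds; they are not figures from this analysis:

```python
def inference_share(years: int,
                    train_rev: float = 100.0,   # hypothetical training-chip revenue
                    infer_rev: float = 50.0,    # hypothetical inference-chip revenue
                    train_growth: float = 1.2,  # training grows 20%/yr (assumed)
                    rate_multiplier: float = 10) -> float:
    """Inference's share of total chip revenue after `years`, assuming
    inference revenue grows `rate_multiplier` times faster (in rate terms)
    than training revenue."""
    infer_growth = 1 + (train_growth - 1) * rate_multiplier  # 20%/yr -> 200%/yr
    for _ in range(years):
        train_rev *= train_growth
        infer_rev *= infer_growth
    return infer_rev / (train_rev + infer_rev)

# Even starting at a third of the market, inference dominates within a few years.
print(round(inference_share(3), 3))  # → 0.887
```

The point of the sketch is the compounding, not the specific numbers: any sustained growth-rate gap of that size moves the silicon mix away from the largest training GPUs toward a more diverse inference ecosystem.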

CHAIN INSIGHT

NVIDIA's GPU roadmap is the demand signal that cascades through every upstream layer. But the Custom Silicon Surge and Inference Architecture Shift create structural diversification risk for NVIDIA's market share.

Companies in This Layer

Unproven
Intel (Foundry Services)

Attempting to become a leading-edge foundry with IFS 18A node. >60% yield reported. Massive US fab investment. Still unproven at scale.

Ecosystem monopoly
NVIDIA

80%+ AI accelerator market share. GB200/B200 Blackwell is the standard. CUDA ecosystem moat is 15+ years deep. Every dollar of AI capex touches NVIDIA.

Custom ASIC dominance — 5+ hyperscaler programs, $73B backlog, largest design force globally
Broadcom

Dominant custom ASIC designer. Google TPU, Meta MTIA, other hyperscaler custom silicon. Also networking ASICs (Tomahawk). Spans L06 + L08.

#2 custom ASIC + #1 optical DSP — dual-layer AI moat with 5-year AWS deal
Marvell Technology

Custom silicon for Amazon (Trainium/Inferentia) and Microsoft. Also PAM4 optical DSPs. Spans L06 + L07.

Instruction set monopoly — 95% GM, near-zero churn, RISC-V is only long-term threat
ARM Holdings

CPU architecture IP licensed to every smartphone and increasingly every AI chip. Every custom ASIC uses ARM cores. Reportedly launching its first in-house chip in 2026.

Strong #2
Advanced Micro Devices

#2 AI accelerator with MI300X/MI350. ROCm software stack gaining traction. CPU + GPU combined offerings.

First-mover retimer leader + Scorpio pivot, but replicable tech and NVIDIA integration risk
Astera Labs

PCIe/CXL retimers for AI infrastructure. Extends signal reach 3x. Critical for connecting GPUs across racks.

AEC first-mover but thin moat — optical convergence and NVLink competition
Credo Technology

Active Electrical Cable (AEC) solutions and high-speed connectivity ICs. Cost-effective alternative to optical for shorter AI cluster interconnects. Growing design wins at hyperscalers.

Inference play
Qualcomm

AI inference chips (AI200/AI250 launching 2026-27). Mobile AI dominance. Potential challenger to NVIDIA in inference.

Speed to market
Super Micro Computer

Dominant AI server assembler. Fastest time-to-market for new NVIDIA platforms. Modular building block design philosophy. Accounting investigation overhang.

Sole-source GPU board drilling, 25+ years air bearing engineering, diversified cash flow
Novanta

Precision motion, photonics, and vision components — sole-source supplier of air bearing spindles for GPU board drilling, plus robotics/automation subsystems.