The Custom Silicon Surge
Every major hyperscaler is now designing its own AI chip. NVIDIA's monopoly on AI compute is ending.
What's Technically Happening
The hyperscaler AI chip lineup as of Q1 2026:
Google TPU v7 "Ironwood": 4.6 PFLOPS FP8, 192 GB HBM3E, TSMC 3nm.
Amazon Trainium 3: 2.52 PFLOPS FP8, 144 GB HBM3E, TSMC 3nm.
Microsoft Maia 200: 750 W TDP, 216 GB HBM3E, TSMC 3nm.
Meta MTIA v2: training and inference accelerator.
Apple M-series derivatives: extending to server inference.
All custom hyperscaler chips are manufactured at TSMC 3nm (moving to N2 through 2026–2027) and all require CoWoS advanced packaging. Custom ASIC revenue is growing at an estimated 44.6% CAGR, against NVIDIA GPU revenue growing at roughly 30%. Industry estimates project NVIDIA's share of AI inference compute could fall from 90%+ today to 20–30% by 2028 as hyperscalers shift inference workloads onto in-house silicon. Training market share remains more durable for NVIDIA through at least 2027 because of CUDA's software moat.
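A growth-rate gap of 44.6% versus roughly 30% compounds quickly. A minimal sketch of that arithmetic, using the CAGRs cited above but hypothetical base-year revenue figures (the $30B and $100B are placeholders for illustration, not reported numbers):

```python
# Compound two revenue lines at the growth rates cited above.
# Base-year dollar figures are hypothetical placeholders, not reported data.
asic, gpu = 30.0, 100.0    # hypothetical $B revenue in year 0
for year in range(1, 6):
    asic *= 1.446           # custom ASIC revenue, ~44.6% CAGR
    gpu *= 1.30             # NVIDIA GPU revenue, ~30% CAGR
    print(f"year {year}: ASIC ${asic:.0f}B vs GPU ${gpu:.0f}B "
          f"({asic / gpu:.0%} of GPU)")
```

On these assumed bases, the custom-ASIC line grows from 30% of the GPU line to just over half of it within five years; that compounding is the arithmetic behind the share-shift projections.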
Google is already the largest single owner of AI compute capacity globally, with approximately one quarter of worldwide installed AI accelerators, primarily running on its own TPU silicon rather than NVIDIA GPUs. This is the proof of concept that custom silicon at scale is economically viable.
The design-services tier: Broadcom (AVGO) partners directly with Google, Meta, and reportedly OpenAI on ASIC design — taking the architecture spec and turning it into tape-out-ready silicon. Marvell (MRVL) holds similar relationships with Amazon and Microsoft. Alchip, Socionext, and GUC are Taiwan-based competitors. Broadcom's AI revenue (custom ASIC plus networking) now approaches NVIDIA's data center GPU revenue in absolute dollar terms.
The strategic logic for hyperscalers: (1) escape NVIDIA's 60%+ gross margins, (2) optimize silicon for specific model architectures and quantization schemes, (3) reduce supply concentration risk, and (4) control the roadmap independently.
In Plain English
For years, if you wanted to train an AI model, you bought NVIDIA chips. There was no other choice. NVIDIA knew this and priced accordingly: its gross margins on data center GPUs run around 60 to 70%. Imagine being Google, spending tens of billions of dollars per year on NVIDIA chips and knowing that most of that money drops straight to NVIDIA's bottom line. At some point you start asking the obvious question: why aren't we making our own chips?
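The back-of-envelope math behind that question can be sketched as follows. Only the margin range comes from the text above; the annual spend and program-cost figures are deliberately round, made-up assumptions:

```python
# Back-of-envelope on "why aren't we making our own chips?"
# Spend and program-cost figures are hypothetical illustrations, not disclosed numbers.
annual_gpu_spend = 20e9        # hypothetical hyperscaler GPU spend per year
vendor_gross_margin = 0.65     # midpoint of the 60-70% range cited above
margin_paid = annual_gpu_spend * vendor_gross_margin
custom_program_cost = 2e9      # hypothetical annual cost of an in-house chip program

print(f"margin paid to vendor: ${margin_paid / 1e9:.1f}B/yr")
# Payback period if in-house silicon displaces even half of that margin:
payback_years = custom_program_cost / (margin_paid * 0.5)
print(f"payback: {payback_years:.2f} years")
```

Even if an in-house program displaces only part of that vendor margin, it pays for itself in well under a year under these assumptions, which is why every hyperscaler has reached the same conclusion.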
And that's what every hyperscaler is now doing. Google has been running its own chip (the TPU) for about a decade and runs more AI compute on TPUs than on NVIDIA GPUs. Amazon has Trainium. Microsoft has Maia. Meta has MTIA. Every one of these chips is designed in-house, taped out on TSMC's 3-nanometer process, and aimed squarely at workloads where NVIDIA previously had a monopoly — especially inference (running a trained model for end users), which is now roughly two-thirds of all AI compute.
The interesting part is that none of these hyperscalers does the hard chip design work alone. Turning an architecture specification into real silicon ready for TSMC is extraordinarily complex, and only a handful of companies in the world specialize in it. The two biggest are Broadcom (which partners with Google, Meta, and reportedly OpenAI) and Marvell (partnering with Amazon and Microsoft). These companies don't make their own brand of AI chip; they're the invisible engineering partner behind everyone else's. That means that as the custom silicon wave grows, Broadcom and Marvell both benefit without having to pick a winner among their customers.
And underneath everything, there's TSMC. Whether NVIDIA designs the chip or Google designs the chip, it all ends up being manufactured in the same Taiwanese fabs using the same machines from ASML. That's why TSMC's dominance only gets stronger as custom silicon grows — more customers, same factory. Same story for the software companies (Cadence, Synopsys) whose tools every chip designer uses to actually design these things. The only pure loser in this story is NVIDIA's pricing power over the long run, but even that's tempered by the fact that NVIDIA still dominates training, and its CUDA software ecosystem is extremely sticky.
Who Benefits Most
Beneficiaries are ranked by the directness of their exposure. Tickers that exist in our explorer link to the company brief.
Primary beneficiaries
Direct, first-order exposure. If the trend plays out, these are the names that capture the majority of the value.
Broadcom. Dominant ASIC design-services partner for hyperscaler custom silicon. Google and Meta are confirmed customers; OpenAI is reportedly a third. AI ASIC revenue approaching NVIDIA's GPU revenue in absolute scale. Best pure-play public exposure to this trend.
Marvell. Amazon and Microsoft ASIC partner. Second-largest beneficiary after Broadcom, with more upside given its smaller starting base.
Taiwan Semiconductor. Every custom ASIC is fabricated here. TSMC is indifferent to which customer wins — they all pay TSMC. Compounding beneficiary across training and inference.
Synopsys. EDA software used to design every custom chip. Every new hyperscaler SKU drives tool-license revenue. Half of the EDA duopoly.
Cadence. The other half of the EDA duopoly. Every custom silicon program requires Cadence or Synopsys tools (usually both).
Secondary beneficiaries
Real exposure but competing with alternatives or dependent on adjacent calls.
Arm Holdings. Custom ASIC designs frequently embed Arm CPU cores for control-plane processing. Arm's royalty stream compounds with ASIC volume.
Astera Labs. Connectivity chips (retimers, smart switches) bridging custom ASICs into memory and fabric. ASIC proliferation = more connectivity content per system.
Picks and shovels
Enabling suppliers whose revenue scales with the trend regardless of which frontline vendor wins.
Intel. Indirect beneficiary if its 18A foundry gains any share of the custom silicon market. Strategic optionality, not yet commercial scale.
GlobalFoundries. Mature-node foundry supplying companion chips, I/O dies, and power management silicon inside custom ASIC systems.