AMZN
Amazon (AWS)
Summary
What they do:
Operates AWS, the largest cloud infrastructure platform (31%+ global market share), serving as the primary compute backbone for AI training, inference, and deployment — sitting at the top of the AI infrastructure stack where all upstream silicon, networking, and power converge into workloads.
Why they matter:
AWS's $100B AI capex decision cascades backward through the entire supply chain — NVIDIA, Arista, Broadcom, TSMC, and power providers all size their investments off AWS's demand signal, making it the single most reflexive node in AI infrastructure.
Recent performance:
Q4 2025 EPS $1.95 missed consensus by 3%. Next earnings April 23, 2026 (after close); EPS consensus $1.66.
Our Verdict
Consensus hyperscaler with ~35-40% AI exposure through AWS, custom silicon (Trainium/Inferentia), and AI services. Massive infrastructure scale and Bedrock platform positioning, yet the stock trades at an attractive valuation relative to cloud peers; the market hasn't fully priced in the AI margin expansion story.
Structural trends: 76/100
Moat: 10/10 (Cloud leader)
AI Exposure: High (~37% AI)
Play Type: Consensus (AI Growth ~30%)
Rel. Value: 54 (Attractive)
Price: $253.09 (-0.64%)
Market Cap: $2.7T
P/E Ratio: 35.2
P/S Ratio: N/A
52W High: $258.60
52W Low: $169.35
52W Chg: 49.4%
Beta: N/A
AWS operates through a globally distributed network of data centers organized into regions and availability zones. As of 2026, AWS operates 33 regions and 105 availability zones across six continents. Each region contains multiple data center clusters with tens of thousands of servers, network switches, power distribution systems, and cooling infrastructure. A single large AWS region (like us-east-1 in Virginia) contains an estimated 200,000+ servers; across 33 regions, AWS operates roughly 6+ million servers globally. For AI workloads, density is even higher — GPU-heavy clusters are concentrated in specific availability zones optimized for low-latency interconnects and power density.
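The fleet estimate above is simple multiplication. A quick sketch, using only the figures cited in the text and treating every region as us-east-1-sized (an upper-bound simplification, since most regions are smaller):

```python
# Back-of-envelope AWS server fleet estimate using the figures cited above.
# Assumption: all 33 regions are as large as us-east-1, which overstates
# smaller regions, so this is an upper bound.
REGIONS = 33
SERVERS_PER_LARGE_REGION = 200_000  # estimated for a region like us-east-1

total_servers = REGIONS * SERVERS_PER_LARGE_REGION
print(f"Estimated fleet: {total_servers:,} servers")
# → Estimated fleet: 6,600,000 servers — consistent with "roughly 6+ million"
```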
The 2026 Talen Energy nuclear power purchase agreement is emblematic of AWS's scale ambitions. The deal covers 2,000 MW of baseload power — enough to run an estimated 40-80 million chips continuously (assuming 25-50 W per chip). At current build rates, AWS is deploying the equivalent of a new small nuclear power plant every quarter just to keep pace with AI capex demand. Power is no longer a bottleneck; it is a competitive advantage in a power-constrained world.
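Under the stated 25-50 W per-chip assumption, the PPA arithmetic works out to roughly 40-80 million chips:

```python
# How many chips can a 2,000 MW PPA power continuously?
# The 25-50 W per-chip range is the text's assumption, not a measured figure.
PPA_WATTS = 2_000 * 1_000_000  # 2,000 MW expressed in watts
W_PER_CHIP_LOW, W_PER_CHIP_HIGH = 25, 50

chips_max = PPA_WATTS // W_PER_CHIP_LOW   # best case: 25 W per chip
chips_min = PPA_WATTS // W_PER_CHIP_HIGH  # worst case: 50 W per chip
print(f"{chips_min:,} to {chips_max:,} chips")
# → 40,000,000 to 80,000,000 chips
```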
A single large AWS region's annual operating cost is approximately $3-5 billion. The global infrastructure costs approximately $100-120 billion annually in opex (labor, power, land, maintenance, cooling). With $100B in capex, AWS is spending roughly 80-100% of its annual opex footprint in a single year on new capacity — an aggressive buildout that signals either extraordinary confidence in enterprise AI demand or extraordinary tolerance for stranded assets and margin pressure.
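The capex-to-opex comparison is a simple ratio over the estimated range:

```python
# Capex as a share of the estimated annual opex footprint (figures from the text).
CAPEX_B = 100                       # announced AI capex, $B
OPEX_LOW_B, OPEX_HIGH_B = 100, 120  # estimated annual opex range, $B

ratio_low = CAPEX_B / OPEX_HIGH_B   # against the high-end opex estimate
ratio_high = CAPEX_B / OPEX_LOW_B   # against the low-end opex estimate
print(f"{ratio_low:.0%} to {ratio_high:.0%} of annual opex")
# → 83% to 100% of annual opex
```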
Human scale reference
The Talen Energy PPA (2,000 MW) is equivalent to the average draw of roughly 2 million U.S. homes. AWS's total energy footprint across all regions is estimated at roughly 5-7% of U.S. electricity generation.
Supply Chain Dependencies
Upstream Suppliers
gpu_supplier · weight 0.7
custom_silicon · weight 0.6
networking_supplier · weight 0.5
ip_customer · weight 0.6
hyperscaler_customer · weight 0.4
optical_customer · weight 0.4
optical_customer · weight 0.4
networking_customer · weight 0.4
server_customer · weight 0.4
power_customer · weight 0.4
power_customer · weight 0.4
power_customer · weight 0.4
power_customer · weight 0.5
construction_customer · weight 0.4
fiber_customer · weight 0.4
cooling_customer · weight 0.5
colocation_customer · weight 0.5
Downstream Customers
The Catch
AWS's $100B capex bet is binary and reflexive. The company is essentially saying: "We are so confident that enterprise AI adoption will generate sufficient revenue that we are willing to commit nearly our entire annual opex footprint to new capacity in a single year." If that confidence is misplaced — because models plateau in capability, regulatory barriers fragment markets, customers build private cloud infrastructure, or competitors commoditize AI services — AWS will have massively over-provisioned capacity. The result: stranded capital, multi-year margin compression, and a decade-long earnings grind as AWS works off excess capacity at compressed pricing power. A cut to AWS's capex would cascade backward reflexively, triggering immediate earnings cuts at NVIDIA, Arista, Broadcom, and Talen Energy — making AWS a systemic pressure point in a demand downturn. AWS's confidence and AWS's doubt are both contagious, but doubt hits harder.
If They Win
If AWS maintains market leadership and enterprise AI adoption materializes at scale, the company transforms from "the cloud" into the computational infrastructure backbone of the AI era. Enterprise AI workloads compound at 20-30% annually through 2035. Custom silicon (Trainium for training, Inferentia for inference, Graviton for general-purpose) mature into high-margin revenue streams that reduce NVIDIA dependence and improve gross margins toward 75%+. AWS's power supply moat (Talen Energy nuclear PPA plus additional baseload contracts) becomes increasingly valuable as competitors struggle to secure equivalent capacity. The reflexive supply chain effect cascades backward: NVIDIA remains dominant but becomes AWS's supplier rather than rival. Arista becomes the de facto networking standard for enterprise AI clusters. TSMC expands fabs specifically for AWS custom chips. AWS becomes the electricity grid of artificial intelligence: every enterprise AI model, every research lab, every SaaS provider eventually routes compute through AWS's infrastructure. The company's moat widens from dominant to insurmountable.
Not financial advice. All scores generated via AI algorithms using public data.