The Deal: 3.5 Gigawatts of TPU Capacity
Anthropic has signed an expanded partnership with Google and Broadcom for approximately 3.5 gigawatts of next-generation TPU compute capacity, expected to come online starting in 2027. Broadcom has committed to designing and supplying future generations of Google’s TPUs through 2031, giving Anthropic a multi-year hardware roadmap.
This new capacity adds to the 1 GW already coming online in 2026 under a Google Cloud agreement announced in October 2025. Combined, Anthropic is positioning itself with 4.5 GW of dedicated TPU capacity, enough electricity for more than three million homes and far exceeding what any single AI company has publicly committed to before.
The scale is staggering. A single gigawatt can power approximately 750,000 homes. Anthropic’s total committed capacity reflects the explosive growth in compute demand as frontier AI models require ever-larger training clusters and inference fleets.
Financial Context: $30 Billion Revenue Run Rate
The partnership comes amid Anthropic’s rapid commercial acceleration. The company’s annual revenue run rate has surpassed $30 billion — up from roughly $9 billion at the end of 2025. This 3x growth in under a year reflects surging demand for Claude across enterprise, government, and developer platforms.
Analysts at Mizuho estimated that Broadcom would record $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027, making Anthropic one of Broadcom’s largest customers. The deal underscores how AI companies are becoming major drivers of semiconductor revenue, rivaling traditional enterprise and consumer electronics buyers.
Multi-Chip Strategy: TPUs, Trainium, and GPUs
A distinguishing element of Anthropic’s infrastructure approach is its multi-platform strategy. Claude trains and runs on a range of AI hardware — AWS Trainium chips, Google TPUs, and NVIDIA GPUs — allowing workloads to be matched to the chips best suited for them.
This diversification provides several strategic advantages. First, it reduces dependency on any single hardware vendor, particularly NVIDIA, whose GPU supply remains constrained. Second, it enables cost optimization by routing different workload types to the most efficient hardware. Third, it provides leverage in negotiations with all three hardware ecosystems.
For the broader AI industry, Anthropic’s approach may become the template. As training and inference costs dominate AI company budgets, the ability to arbitrage across chip architectures becomes a competitive advantage rather than a technical curiosity.
Infrastructure Arms Race: Who Else Is Building at Scale
Anthropic is not alone in pursuing gigawatt-scale infrastructure. Microsoft and OpenAI have announced plans for the Stargate project, targeting 5 GW of data center capacity. Google is building out massive TPU clusters across multiple regions. Amazon Web Services continues to expand its Trainium chip program.
What distinguishes the Anthropic-Broadcom deal is the direct relationship between an AI model company and a chip designer, bypassing the traditional cloud provider intermediary model. This vertical integration mirrors trends in other industries where dominant buyers secure supply chain positions upstream.
The global data center construction pipeline now exceeds $500 billion in committed investment through 2030, with AI workloads driving the majority of new capacity. Power availability — not chip supply — has become the primary constraint, explaining why deals are now denominated in gigawatts rather than chip counts.
Energy and Sustainability Implications
The sheer energy scale raises sustainability questions. At 4.5 GW of committed capacity, Anthropic’s operations alone would consume more electricity than many small nations. The AI industry’s total power demand is projected to reach 100 GW by 2030, roughly equivalent to Japan’s total electricity consumption.
This is driving investment in nuclear power, long-duration energy storage, and renewable procurement agreements specifically for AI workloads. Google has signed nuclear power purchase agreements, while Amazon has invested in small modular reactor companies. The intersection of AI infrastructure and energy policy is becoming one of the defining technology governance challenges of the decade.
What This Means for the Market
The Anthropic-Google-Broadcom deal reshapes market dynamics in several ways. For Broadcom, it validates the custom silicon strategy as a viable alternative to NVIDIA’s merchant GPU model. For Google, it demonstrates that its TPU ecosystem can attract and retain the most compute-intensive AI workloads. For Anthropic, it provides the hardware foundation to compete with OpenAI and Google DeepMind on frontier model development.
The deal also signals that the AI infrastructure buildout is entering a new phase. The first phase (2023-2025) was characterized by GPU scarcity and cloud provider dominance. The second phase (2026-2028) will be defined by direct partnerships between AI companies and chip designers, gigawatt-scale facilities, and power infrastructure as the primary constraint.
Frequently Asked Questions
Why is Anthropic partnering with Broadcom instead of buying NVIDIA GPUs?
Anthropic uses a multi-chip strategy that includes NVIDIA GPUs, AWS Trainium, and Google TPUs. The Broadcom partnership specifically covers custom-designed Google TPUs, which offer advantages in cost efficiency and availability compared to NVIDIA’s supply-constrained GPUs. By diversifying across chip architectures, Anthropic reduces vendor dependency and optimizes costs for different workload types.
What does 3.5 gigawatts of compute capacity actually mean?
One gigawatt can power approximately 750,000 homes. Anthropic’s 3.5 GW commitment from 2027 — plus 1 GW already coming in 2026 — represents enough power for roughly 3.4 million homes. This energy will power millions of TPU chips running Claude’s training and inference workloads, reflecting the enormous computational demands of frontier AI models.
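The back-of-envelope arithmetic behind these figures can be sketched as follows. The 750,000-homes-per-gigawatt conversion is the article's own approximation; the capacity numbers come from the deals described above.

```python
# Back-of-envelope check of the capacity figures quoted in this article.
HOMES_PER_GW = 750_000          # approximation used above: 1 GW ~ 750,000 homes

new_capacity_gw = 3.5           # Broadcom/Google TPU deal, online from 2027
existing_capacity_gw = 1.0      # Google Cloud agreement, online in 2026
total_gw = new_capacity_gw + existing_capacity_gw

homes = total_gw * HOMES_PER_GW
print(f"{total_gw} GW is roughly {homes:,.0f} homes")
```

Running this yields 4.5 GW and roughly 3,375,000 homes, which is where the "roughly 3.4 million homes" figure above comes from.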
How does this deal affect cloud computing costs for businesses?
The massive scale of this deal should eventually help moderate AI compute costs. As dedicated TPU capacity comes online, Anthropic can offer Claude API services at more competitive prices. However, the broader trend of gigawatt-scale infrastructure investment suggests AI compute demand continues to outpace supply, keeping premium pricing for frontier model access in the near term.
Sources & Further Reading
- Anthropic Expands Partnership with Google and Broadcom — Anthropic
- Broadcom to Supply Anthropic with 3.5 Gigawatts of Google TPU Capacity from 2027 — Tom’s Hardware
- Anthropic Ups Compute Deal with Google and Broadcom Amid Skyrocketing Demand — TechCrunch
- Anthropic’s Gigawatt-Scale TPU Deal with Broadcom Creates a Structural Advantage — Futurum Group
- Anthropic Reveals $30bn Run Rate, Plan to Use New Google TPU — The Register