Two Companies, $44 Billion, and the Future of Enterprise AI
The numbers have become almost absurd. OpenAI crossed $25 billion in annualized revenue at the end of February 2026, up from $21.4 billion at year-end 2025 and roughly $6 billion at the end of 2024. That is a fourfold increase in 14 months, a growth trajectory that outpaces Salesforce, Snowflake, and every other enterprise software company at comparable scale.
Meanwhile, Anthropic, the company behind Claude AI, more than doubled its annualized revenue, from $9 billion to $19 billion, in under four months. By March 2026, the gap between the two companies had compressed to just $6 billion. Some reports indicate Anthropic may have already crossed $30 billion in annualized revenue, potentially overtaking OpenAI for the first time.
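A quick back-of-the-envelope check of those growth claims, using the figures as reported above:

```python
# Sanity-check the growth multiples cited above.
# Dollar amounts are annualized revenue in billions, as reported.
openai_multiple = 25.0 / 6.0      # end of 2024 -> end of Feb 2026, ~14 months
anthropic_multiple = 19.0 / 9.0   # under four months

# Implied compound monthly growth rate for OpenAI over those 14 months
openai_monthly = openai_multiple ** (1 / 14) - 1

print(f"OpenAI: {openai_multiple:.1f}x over 14 months "
      f"(~{openai_monthly:.0%}/month compounded)")
print(f"Anthropic: {anthropic_multiple:.1f}x in under 4 months")
```

The arithmetic bears out the article's characterization: roughly a fourfold increase for OpenAI and a bit more than a doubling for Anthropic, albeit on a much shorter timescale.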
Together, these two companies are generating more annualized revenue than the combined output of the next dozen AI startups. Gartner forecasts worldwide AI spending will total $2.52 trillion in 2026, a 44% increase year-over-year. OpenAI and Anthropic are positioned to capture a disproportionate share of enterprise budgets within that expanding market, and both are preparing to go public in moves that will test whether the market can absorb their ambitions.
The Revenue Composition Story
The headline numbers obscure important structural differences in how each company generates revenue.
OpenAI: Consumer scale, enterprise ambition. ChatGPT serves more than 900 million weekly active users globally, and the company counts over 9 million paying business customers. OpenAI’s revenue is more diversified across consumer subscriptions, API usage, and enterprise contracts. The sheer volume of its consumer base provides a massive data advantage that feeds model improvement, creating a flywheel where more users produce better models that attract more users.
Anthropic: Enterprise-first, developer-driven. Enterprises account for roughly 80% of Anthropic’s revenue, with eight of the Fortune 10 using Claude. Claude Code, Anthropic’s coding assistant, has been the primary growth driver, generating $2.5 billion in annualized revenue on its own. The product was adopted at a pace that surprised even Anthropic’s internal projections.
The enterprise spending data reveals a competitive dynamic that favors Anthropic’s focus. In March 2026, Anthropic overtook OpenAI in average spend per customer for the first time. Companies now spend an average of $1,548 per month on Anthropic compared to $1,014 on OpenAI, a 53% premium. According to industry analysis, Claude wins 70% of new enterprise deals against OpenAI.
This divergence matters. OpenAI has the users. Anthropic has the enterprise wallets. The question is which metric matters more for long-term value.
The IPO Race
Both companies are preparing for public listings that will be among the largest technology IPOs in history.
OpenAI is targeting a listing as early as Q4 2026 with a valuation of up to $1 trillion. In February 2026, it closed a $110 billion funding round, the largest private technology financing ever recorded, at a pre-money valuation of $730 billion. OpenAI plans to double its workforce to 8,000 by year-end as it builds out enterprise sales, infrastructure, and its emerging “superapp” strategy.
Anthropic is in active discussions with Goldman Sachs and JPMorgan Chase about what could be a $60 billion-plus raise targeting October 2026, with bankers privately estimating a valuation between $400 billion and $500 billion.
Combined, these two IPOs could introduce over $1.4 trillion in new AI market capitalization to public markets. The timing is not coincidental. Both companies need public market capital to sustain their extraordinary burn rates, and both want to go public while AI enthusiasm remains high and before any potential market correction.
The Profitability Problem Neither Has Solved
Behind the revenue fireworks sits a financial reality that both companies share: neither is profitable, and both are burning cash at rates that would be alarming in any other industry.
OpenAI’s projected losses for 2026 reach $14 billion, with analysts estimating the company’s annual cash burn could reach $57 billion by 2027. At that 2027 rate, the company would be spending more than $150 million every single day. The company’s infrastructure costs, driven by the GPU clusters needed to train and serve its models, consume the vast majority of revenue.
Anthropic faces a similar dynamic. The company has earmarked $12 billion for model training and $7 billion for inference infrastructure in 2026 alone. For every dollar of revenue generated, more than half disappears into compute costs. Even Anthropic’s impressive revenue growth has not translated into a path to profitability.
This creates an unusual market structure: two companies with extraordinary revenue growth, massive user bases, and no clear timeline to positive unit economics. The bull case argues that AI infrastructure costs will decline as hardware efficiency improves, model distillation reduces serving costs, and scale advantages lower per-query expenses. The bear case notes that both companies are locked in an arms race where every efficiency gain gets reinvested into more expensive models, preventing margins from ever materializing.
The Duopoly Dynamic
The concentration of enterprise AI spending in two providers raises questions that go beyond individual company analysis.
Vendor lock-in at unprecedented speed. Enterprises are building AI-dependent workflows, fine-tuning models, and training employees on specific platforms at a pace that creates deep switching costs within months rather than years. Organizations that standardize on OpenAI’s API or Claude’s enterprise platform face significant migration costs if they want to change providers later.
Pricing power. As the two dominant providers, OpenAI and Anthropic have significant ability to raise prices once customers are locked in. Current pricing reflects a market-share acquisition strategy, not long-term unit economics. Enterprise CIOs should expect price increases once market positions are consolidated.
Innovation concentration. With the two leading companies spending a combined $30 billion or more annually on model training, the barrier to entry for new competitors rises continuously. While open-source models from Meta, Mistral, and others provide alternatives, the performance gap between frontier proprietary models and open-source alternatives persists in the most demanding enterprise use cases.
Regulatory attention. Revenue concentration at this scale inevitably attracts antitrust scrutiny. If two companies capture a majority of enterprise AI spending, regulators in the EU, US, and other jurisdictions will examine whether this concentration harms competition, limits choice, or creates systemic risk.
Where the Challengers Stand
The duopoly narrative oversimplifies a market that still includes significant players.
Google integrates Gemini across its cloud platform, Workspace suite, and Android ecosystem, competing on distribution rather than standalone AI revenue. Samsung’s commitment to deploy Gemini on 800 million devices in 2026 gives Google an on-device distribution advantage neither OpenAI nor Anthropic can match.
Meta continues investing billions in open-source LLM development through its Llama family of models, offering enterprises a self-hosted alternative that avoids both OpenAI and Anthropic lock-in. For organizations with privacy concerns or regulatory constraints, Meta’s open approach provides a credible alternative.
Amazon embeds AI deeply within its AWS infrastructure through Bedrock and its own Nova models, competing at the infrastructure layer where enterprises already have relationships and spending.
However, in terms of pure AI-native revenue, no competitor is within an order of magnitude of OpenAI or Anthropic. The duopoly may face challenges from distribution advantages (Google), open-source pressure (Meta), and infrastructure integration (Amazon), but on direct revenue, the two leaders are pulling away.
What Enterprise Buyers Should Watch
For organizations making AI procurement decisions, the duopoly creates both opportunities and risks.
Negotiate now, not later. Both companies are in a market-share acquisition phase where pricing is aggressive and terms are flexible. Enterprise agreements locked in during 2026 will likely prove more favorable than those negotiated after IPOs when shareholder pressure to improve margins intensifies.
Build for portability. Standardize on abstraction layers and model-agnostic architectures where possible. The cost of switching between OpenAI and Anthropic will only increase as integrations deepen.
Monitor the profitability timeline. Neither company has demonstrated sustainable unit economics. If AI infrastructure costs do not decline as projected, or if the competitive arms race continues to absorb efficiency gains, price increases and service changes could follow.
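The portability advice above can be sketched as a thin internal abstraction layer that application code targets instead of any vendor SDK. The class and method names below are illustrative placeholders, not any real provider's API; in practice each adapter would wrap the vendor's own SDK call.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Provider-agnostic interface that application code depends on."""

    def complete(self, prompt: str) -> str: ...


class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the OpenAI SDK here.
        return f"[openai] {prompt}"


class AnthropicAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the Anthropic SDK here.
        return f"[anthropic] {prompt}"


def summarize(model: ChatModel, text: str) -> str:
    # Application code sees only the ChatModel interface, so swapping
    # providers becomes a configuration change, not a rewrite.
    return model.complete(f"Summarize: {text}")


print(summarize(OpenAIAdapter(), "quarterly revenue report"))
print(summarize(AnthropicAdapter(), "quarterly revenue report"))
```

The design choice is deliberate: by keeping prompts, retries, and logging behind the shared interface, an organization limits the switching cost the article warns about to the adapter layer rather than the entire codebase.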
The $44 billion in combined annualized revenue between OpenAI and Anthropic represents the fastest enterprise technology market formation in history. Whether this scale produces lasting companies or an AI spending bubble that eventually corrects depends on a question that remains stubbornly unresolved: can these companies ever turn breathtaking revenue into actual profit?
Frequently Asked Questions
Why are both OpenAI and Anthropic losing money despite massive revenue growth?
Both companies spend nearly as much on infrastructure as they earn in revenue, and in some projections more. OpenAI projects $14 billion in losses for 2026 with potential $57 billion annual cash burn by 2027. Anthropic has earmarked $12 billion for training and $7 billion for inference infrastructure in 2026 alone. The GPU clusters needed to train and serve frontier AI models consume the vast majority of revenue. Neither company has demonstrated that AI infrastructure costs will decline fast enough to produce profitability.
How does the AI duopoly affect enterprise customers who depend on these platforms?
Enterprise customers face three risks: vendor lock-in (switching costs increase rapidly as AI workflows deepen), pricing power (both companies will likely raise prices after IPOs when shareholder pressure for margins intensifies), and innovation concentration ($30B+ combined annual training spend raises barriers for competitors). Organizations should negotiate long-term contracts now while pricing is aggressive and build model-agnostic abstraction layers to preserve portability.
Could open-source AI models break the OpenAI-Anthropic duopoly?
Open-source models from Meta (Llama), Mistral, and others provide viable alternatives for many use cases, especially for organizations with privacy or sovereignty concerns. However, a performance gap persists between frontier proprietary models and open-source alternatives in the most demanding enterprise applications. The duopoly may face sustained pressure from open-source competition and distribution advantages (Google, Amazon), but in direct AI-native revenue, no competitor is within an order of magnitude of the two leaders.
Sources & Further Reading
- OpenAI Tops $25 Billion in Annualized Revenue as Anthropic Narrows Gap — The Information
- Anthropic ARR Surges to $19 Billion on Claude Code Strength — Yahoo Finance
- Anthropic Turns the Tables on OpenAI in Critical Revenue Category — Axios
- OpenAI vs Anthropic: The Real Spending Data Behind the AI Race — Cledara
- Gartner Says Worldwide AI Spending Will Total $2.5 Trillion in 2026 — Gartner
- OpenAI CEO and CFO Split on IPO Timing Amid $14B Loss Forecast — WinBuzzer