For three years, the AI industry was obsessed with a single metric: how much compute it takes to train the next frontier model. GPT-4 reportedly cost over $100 million to train. Gemini Ultra cost more. Each generation required exponentially more GPUs, more power, more money. The massive infrastructure buildout — with hyperscalers committing between $600 billion and $690 billion in capital expenditure for 2026 — was justified by the assumption that training would keep getting bigger.
In brief: AI inference — the process of running trained models to generate outputs — now accounts for roughly two-thirds of all AI compute, up from one-third in 2023. This shift is driving a new generation of specialized inference hardware from companies like Cerebras and reshaping cost structures, with the price per token dropping approximately 10x per year. The economics of inference, not training, will determine which AI companies survive.
The Great Inversion
Training a large language model is an event. It happens once — or perhaps a few times as a model is refined and updated. It requires enormous, tightly synchronized clusters of GPUs working in concert for weeks or months. It is capital-intensive, technically demanding, and increasingly concentrated among a handful of organizations with the resources to attempt it.
Inference is the opposite of all of those things. It happens continuously, every time a user sends a message to ChatGPT, every time an enterprise application calls an AI API, every time a code assistant generates a suggestion. Inference runs 24 hours a day, 7 days a week, for as long as the model is in production. And as AI adoption accelerates, inference volume is growing exponentially.
The numbers tell the story. In 2023, training accounted for roughly two-thirds of all AI compute and inference for one-third. By 2025, Deloitte estimated the split was approximately even. In 2026, inference is projected to account for roughly two-thirds of all AI compute — a complete inversion in just three years.
Deloitte’s 2026 technology predictions frame this shift starkly: although pre-training growth is slowing, the compute demands from post-training scaling (which uses approximately 30 times the compute needed to train the original foundation model), test-time scaling (reasoning models that require more than 100 times the compute of a simple inference), and increased usage mean the world likely needs more data centers, not fewer.
The Futurum Group predicts that inference revenue will surpass training revenue in 2026. This is not because training is shrinking. Training is still growing. It is because inference demand is exploding — driven by AI products now reaching hundreds of millions of daily users.
The Economics of Tokens
The economics of inference are fundamentally different from training, and understanding this difference is crucial for anyone building or investing in AI.
Training is a fixed cost. You spend $100 million (or $500 million, or $1 billion) to produce a model. Once the model is trained, that cost is sunk. The question then becomes: how cheaply can you run the model to serve customers?
Inference is a variable cost. Every token generated costs electricity, chip time, and memory bandwidth. For a production AI system, inference can account for 80% to 90% of the total lifetime compute cost. Training is the capital expenditure; inference is the operating expenditure. And as any business operator knows, it is the operating costs that determine profitability.
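To make the capex-versus-opex point concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (the $100 million training run, the monthly serving cost, the three-year production lifetime) is an illustrative assumption rather than a reported number.

```python
# Illustrative lifetime cost structure for a production model (all figures assumed).
training_cost = 100e6            # one-off training run, USD (capital expenditure)
monthly_inference_cost = 25e6    # serving cost at scale, USD per month (operating expenditure)
months_in_production = 36        # assumed production lifetime

inference_total = monthly_inference_cost * months_in_production
lifetime_total = training_cost + inference_total

print(f"Training share:  {training_cost / lifetime_total:.0%}")
print(f"Inference share: {inference_total / lifetime_total:.0%}")
# With these assumptions inference is ~90% of lifetime compute cost,
# consistent with the 80-90% range cited above.
```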
The cost per token has been dropping at a remarkable rate. According to analysis by Andreessen Horowitz, inference costs for equivalent model performance are declining roughly 10x per year. GPT-4-class performance that initially cost around $20 per million tokens now costs approximately $0.40 per million tokens. The rate of decline varies dramatically depending on the specific performance benchmark — from 9x to as much as 900x per year for certain capability levels — but the overall trend is unmistakable.
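A short sketch of how a 10x annual decline compounds. The starting price and the fixed rate are simplifying assumptions; as noted above, observed rates vary widely by benchmark.

```python
# Compounding a ~10x-per-year price decline for equivalent model capability.
# Starting price and fixed decline rate are illustrative assumptions.
starting_price = 20.0        # USD per million tokens
annual_decline_factor = 10

for years_elapsed in range(4):
    price = starting_price / annual_decline_factor ** years_elapsed
    print(f"After {years_elapsed} year(s): ${price:,.2f} per million tokens")
```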
This deflation is both the opportunity and the threat. For AI consumers, falling inference costs mean that AI capabilities become economically viable for an ever-wider range of applications. For AI providers, the same deflation means that revenue per query shrinks relentlessly, requiring massive volume growth to maintain revenue.
The Inference Chip Challengers
The shift toward inference has opened a competitive front that NVIDIA did not face during the training-dominated era. Training requires massive GPU clusters with ultra-high-bandwidth interconnects — NVIDIA’s core strength. Inference, by contrast, is more atomizable. Individual queries can be handled independently, reducing the need for tightly coupled clusters and opening the door to alternative architectures.
Two companies emerged as the most prominent challengers: Cerebras and Groq.
Cerebras took the audacious approach of building the largest chip ever manufactured. The CS-3, powered by the company’s Wafer-Scale Engine, places an entire AI accelerator on a single silicon wafer rather than cutting it into individual chips. The result is a system with massive on-chip SRAM bandwidth — approximately 21 petabytes per second — that eliminates the data movement bottleneck limiting conventional GPU inference.
Independent benchmarks from Artificial Analysis demonstrate the performance advantage. Cerebras achieved 2,100 output tokens per second on 70B-class models, and approximately 2,500 tokens per second on Llama 4 Maverick, compared to roughly 1,000 tokens per second on NVIDIA Blackwell for the same model. On OpenAI’s gpt-oss-120B, Cerebras delivers around 2,700 tokens per second. The speed advantage is not incremental — it is transformational for applications that require real-time responsiveness.
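To translate those throughput figures into user-perceived latency, here is a rough sketch that ignores time-to-first-token, batching, and network overhead; the per-second rates mirror the numbers quoted above.

```python
# Rough latency comparison using the quoted throughput figures.
# Ignores time-to-first-token, batching, and network overhead.
response_tokens = 1_000
systems = {
    "Cerebras (Llama 4 Maverick)": 2_500,    # output tokens per second
    "NVIDIA Blackwell (same model)": 1_000,
}
for name, tokens_per_second in systems.items():
    seconds = response_tokens / tokens_per_second
    print(f"{name}: ~{seconds:.1f} s for a {response_tokens}-token response")
```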
The market validated Cerebras’s approach in January 2026 when OpenAI signed a $10 billion inference deal with the company. The agreement covers approximately 750 megawatts of computing capacity, with initial systems deployed during Q1 2026 and scaling through 2028. For OpenAI — whose ChatGPT now serves over 900 million weekly active users — inference cost and speed directly impact the viability of its consumer and enterprise business models.
Groq pioneered the Language Processing Unit (LPU), a chip architecture designed specifically for the sequential token generation that characterizes LLM inference. Groq’s public cloud service demonstrated inference speeds that generated widespread attention in early 2025, offering models like Llama and Mixtral at speeds that made existing GPU-based services feel sluggish.
But Groq’s trajectory took a dramatic turn on Christmas Eve 2025, when NVIDIA effectively acquired the company’s key assets through what industry observers described as an acquihire. The deal, valued at approximately $20 billion — nearly three times Groq’s most recent $6.9 billion valuation — brought Groq’s LPU technology under NVIDIA’s umbrella. Founder and CEO Jonathan Ross and President Sunny Madra joined NVIDIA, while Groq continues as a nominally independent company. Jensen Huang stated the plan is to integrate Groq’s low-latency processors into the NVIDIA AI factory architecture. NVIDIA’s message was clear: if you build a better inference chip, we will absorb you.
SambaNova, the third notable challenger, has taken a different approach with its reconfigurable dataflow architecture (RDA). While generating less headline attention than Cerebras or Groq, SambaNova has built a meaningful enterprise business focused on inference-heavy workloads. The company’s fifth-generation SN50 chip, purpose-built for agentic inference and scheduled to ship in H2 2026, delivers 5x more compute per accelerator than its predecessor. SambaNova targets enterprises requiring private AI deployments where data security requirements preclude cloud-based inference.
The Edge Inference Frontier
The inference shift is not limited to data centers. A growing category of AI workloads is moving to the edge — running on devices, in local servers, or in small regional facilities rather than in hyperscale cloud environments.
The drivers are compelling. Latency-sensitive applications (real-time translation, autonomous vehicles, industrial control) cannot tolerate the round-trip delay to a cloud data center. Privacy-sensitive applications (healthcare, financial services, government) may not be permitted to send data to external servers. And cost-sensitive applications may find that local inference hardware, once purchased, is cheaper than ongoing cloud API charges. The Computerworld analysis from CES 2026 identified cost efficiency, edge processing, and data sovereignty as the three primary forces pushing enterprises toward on-premises inference deployment.
The hardware ecosystem for edge inference is expanding rapidly. NVIDIA’s Jetson platform, Qualcomm’s AI Engine, Apple’s Neural Engine, and Intel’s Meteor Lake NPUs all target inference workloads at the device and edge levels. Microsoft’s Copilot+ PCs and similar devices from other manufacturers include dedicated neural processing units designed to run AI models locally without cloud connectivity.
The implications are profound. If inference increasingly moves to the edge, the hyperscalers’ massive data center investments may not capture as much of the AI compute market as their capital expenditure would suggest. The compute market may bifurcate: centralized training in massive AI factories, distributed inference at the edge.
How the Shift Reshapes Business Models
The training-to-inference transition is not merely a hardware story. It is fundamentally reshaping AI business models.
For model providers like OpenAI, Anthropic, and Google, the shift means that training costs become the cost of entry — the R&D expense required to have a competitive product — while inference costs determine profitability. A model that is slightly less capable but dramatically cheaper to run may generate more profit than a frontier model that costs a fortune to serve. This explains the industry’s growing focus on model distillation, quantization, and other techniques that trade modest capability reductions for major inference efficiency gains.
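As a rough illustration of why quantization cuts serving costs, here is a minimal post-training weight-quantization sketch in NumPy. Production inference stacks use calibrated, per-channel or per-group schemes; this only shows the core memory trade-off.

```python
import numpy as np

# Minimal post-training quantization sketch: map float32 weights onto int8.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=(4096, 4096)).astype(np.float32)

scale = np.abs(weights_fp32).max() / 127.0                    # fit fp32 range into int8
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)
weights_dequant = weights_int8.astype(np.float32) * scale     # approximate reconstruction

print("fp32 size (MB):", weights_fp32.nbytes / 1e6)           # ~67 MB
print("int8 size (MB):", weights_int8.nbytes / 1e6)           # ~17 MB (4x smaller)
print("mean abs error:", np.abs(weights_fp32 - weights_dequant).mean())
```

The 4x reduction in weight storage translates directly into lower memory bandwidth per generated token, which is the main constraint on inference throughput discussed earlier, at the cost of a small approximation error.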
For cloud providers, the shift means that AI revenue increasingly comes from inference API calls rather than training cluster rentals. This favors providers with the lowest per-token costs and the broadest distribution. AWS’s model-agnostic Bedrock platform, which lets customers choose among multiple models and pay per inference, is structurally better positioned for an inference-dominated market than a strategy tied to a single model family.
For enterprises, the shift makes AI economics more predictable. Training costs were lumpy and unpredictable — a failed training run could waste millions. Inference costs are continuous and proportional to usage, making them easier to budget, optimize, and tie to business outcomes. This predictability is accelerating enterprise AI adoption.
For startups, the falling cost of inference is the great equalizer. Building a frontier model requires billions of dollars that only a handful of organizations possess. Building an application on top of inference APIs requires comparatively modest capital. The inference cost deflation curve means that applications that were economically unviable last year may work this year, and applications that work this year will be dramatically more profitable next year.
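To show how little code an API-based application needs, here is a hedged sketch of a single chat completion call against an OpenAI-compatible endpoint. The URL, model name, and environment variable are placeholders, not any specific provider's values.

```python
import os
import requests

# Hypothetical example: one chat completion call to an OpenAI-compatible endpoint.
# The URL, model name, and environment variable below are placeholders.
API_URL = "https://api.example-provider.com/v1/chat/completions"
API_KEY = os.environ["INFERENCE_API_KEY"]

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "example-llm-70b",
        "messages": [
            {"role": "user", "content": "Summarize the key risks in this contract in three bullets."}
        ],
        "max_tokens": 300,
    },
    timeout=30,
)
response.raise_for_status()
data = response.json()

print(data["choices"][0]["message"]["content"])

# Billing is per token, so falling inference prices flow straight into unit economics.
usage = data.get("usage", {})
print("Tokens billed:", usage.get("total_tokens", "n/a"))
```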
The Infrastructure Implications
The inference shift has concrete implications for data center design and infrastructure investment.
Inference workloads are fundamentally different from training workloads in their infrastructure requirements. Training requires massive, tightly coupled GPU clusters with ultra-low-latency interconnects. Inference workloads are more distributed, can tolerate higher latency between chips, and benefit from different memory and bandwidth ratios.
Inference-optimized chips are generally cheaper and more power-efficient than the high-end GPUs required for training. Deloitte estimates that the market for inference-optimized chips will exceed $50 billion in 2026. These chips can be deployed in smaller, more distributed facilities rather than the massive centralized AI factories required for training.
This does not mean that the massive data center buildout is misguided. But it does mean that the capital allocation within that buildout is shifting. Approximately 75% of aggregate hyperscaler capex in 2026 will fund AI-related infrastructure, representing roughly $450 billion in AI-specific spending. More dollars will flow toward inference-optimized hardware, distributed deployment architectures, and edge infrastructure. Fewer dollars, proportionally, will go toward ever-larger training clusters — though training will remain enormously capital-intensive in absolute terms.
What Comes Next
The inference era is just beginning. Several trends will amplify the shift in the coming years.
Reasoning models — systems like OpenAI’s o-series that use extended thinking during inference — dramatically increase inference compute per query. Deloitte estimates that test-time scaling can require more than 100 times the compute of a simple inference call. As reasoning capabilities become standard, inference demand will grow even faster.
AI agents — autonomous systems that take multiple actions to complete tasks — multiply inference volume. An agent completing a complex task might make dozens or hundreds of model calls, each requiring inference. As agents move from demos to production deployments, they will drive inference demand in ways that simple chatbot interactions do not.
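A quick arithmetic sketch of that multiplication effect; every figure is an illustrative assumption.

```python
# Illustrative only: how agentic workflows multiply per-task inference cost.
price_per_million_tokens = 0.40   # USD, assumed mid-tier model price
tokens_per_call = 1_500           # assumed prompt + completion per model call

chatbot_calls_per_task = 1
agent_calls_per_task = 60         # an agent may plan, search, and retry many times

def cost_per_task(calls: int) -> float:
    return calls * tokens_per_call * price_per_million_tokens / 1_000_000

print(f"Chatbot task: ${cost_per_task(chatbot_calls_per_task):.4f}")
print(f"Agent task:   ${cost_per_task(agent_calls_per_task):.4f}")
```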
Multimodal models — systems that process and generate images, video, and audio alongside text — require substantially more inference compute per query than text-only models. As multimodal capabilities become the default, the compute intensity of an average inference call will increase.
The companies and investors who understood the training era built fortunes. The companies and investors who understand the inference era will build the next ones. The shift is underway. The economics are clear. And the implications — for hardware design, cloud strategy, business models, and the fundamental cost structure of artificial intelligence — are only beginning to play out.
Decision Radar (Algeria Lens)
| Dimension | Assessment |
|---|---|
| Relevance for Algeria | High — falling inference costs directly lower the barrier for Algerian companies and institutions to deploy AI applications, while edge inference reduces dependence on international cloud connectivity |
| Infrastructure Ready? | Partial — Algeria has limited cloud infrastructure for training, but edge inference devices (smartphones, laptops with NPUs) are already in widespread use; local inference servers could operate without international bandwidth |
| Skills Available? | Partial — Algerian developers can build applications on inference APIs with existing programming skills, but inference optimization (quantization, model distillation, hardware-specific tuning) requires specialized training |
| Action Timeline | Immediate — Algerian startups and enterprises should build on inference APIs now, taking advantage of the annual cost deflation to launch applications that will become more profitable over time |
| Key Stakeholders | Algerian tech startups, university AI labs, telecom operators (for edge deployment), government digital services, healthcare and education technology providers |
| Decision Type | Strategic — the inference cost curve creates a window for early movers to build AI-powered applications and services before the market saturates |
Quick Take: The inference revolution is arguably the most important trend in AI for Algeria. Falling inference costs mean that Algerian companies do not need to train their own models — they can build valuable applications on top of existing models at costs that drop dramatically each year. Edge inference further reduces dependence on international bandwidth, a persistent bottleneck for Algerian tech. The time to build AI applications is now; waiting only allows competitors to establish first-mover advantages.
Sources & Further Reading
- Why AI’s Next Phase Will Likely Demand More Computational Power, Not Less — Deloitte
- Cerebras Inks Transformative $10 Billion Inference Deal With OpenAI — The Next Platform
- CES 2026: AI Compute Sees a Shift from Training to Inference — Computerworld
- LLM Inference Price Trends — Epoch AI
- LLMflation: LLM Inference Cost Is Going Down Fast — Andreessen Horowitz
- AI Inference vs Training Infrastructure: Why the Economics Diverge — Introl
- Cerebras CS-3 vs Groq LPU — Cerebras
- NVIDIA Buying AI Chip Startup Groq’s Assets for About $20 Billion — CNBC