⚡ Key Takeaways

Meta’s 2026 capex hits $115-135B (55-67% of revenue), anchored by the 5 GW Hyperion campus in Louisiana, funded via a $27B Blue Owl joint venture, plus an MTIA 300/400/450/500 silicon roadmap with hundreds of thousands of chips already deployed.

Bottom Line: Plan sovereign AI roadmaps around the Llama open-weights trajectory — that is the real Algerian-accessible output of this spend.



🧭 Decision Radar

Relevance for Algeria
Medium

Meta’s infrastructure bet affects Llama’s open-weights trajectory, which Algerian teams increasingly use to fine-tune Arabic and French models locally. Better Llama = better starting point for sovereign AI work.
Infrastructure Ready?
No

Hyperion’s 5 GW single-campus model is light-years ahead of anything Algeria can currently support. Even a 50 MW AI-optimized campus would be a national-scale project here.
Skills Available?
Limited

Data-center engineering, MEP design for liquid cooling, and power-purchase-agreement negotiation are specialist skill sets with few Algerian practitioners; university curricula are only beginning to address them.
Action Timeline
12-24 months

Llama 4.x and 5.x weights trained on this new infrastructure become available for Algerian fine-tuning in 2027-2028; domestic infrastructure planning should start now to capture spillover.
Key Stakeholders
Sovereign-cloud strategists, MPTIC, Sonelgaz (power generation), Sonatrach (gas-to-power potential), university AI labs, startups fine-tuning open models
Decision Type
Strategic

Treat Llama’s trajectory as infrastructure-level dependency for Algerian AI, and begin the long conversation about domestic AI-grade data-center capacity.

Quick Take: Meta’s $115-135B commitment matters most to Algeria indirectly: it guarantees continued investment in the Llama open-weights family that is the default base model for Algerian sovereign AI efforts. Separately, Hyperion’s 5 GW scale is a benchmark Algeria should study — not to replicate, but to understand the gap and plan modest 50-200 MW domestic capacity by 2030.

The Most Aggressive Non-Cloud Capex in Tech History

Meta’s 2026 guidance of $115-135 billion in capital expenditure is striking for one reason above all: Meta does not sell cloud services. Every dollar funds its own AI ambitions, its own infrastructure, its own product surface. At roughly 55-67% of projected revenue, no profitable consumer-technology company has ever committed a larger share of revenue to capital spending in a single year.

CFO Susan Li framed the increase as “support for our Meta Superintelligence Labs efforts and core business.” In practice, it breaks into four large buckets: flagship data center campuses, custom MTIA silicon, Llama training infrastructure, and Reality Labs plus traditional product infrastructure. The first two are where the new money is going.

Hyperion: The 5-Gigawatt Louisiana Campus

The single most consequential project in Meta’s 2026 build is Hyperion, a 4.1-square-mile data center campus in Richland Parish, Louisiana. At peak design capacity, Hyperion is planned to reach 5 gigawatts of compute power — enough to run several of the largest model-training clusters in existence simultaneously.

The financing structure is as interesting as the scale. In October 2025, Meta formed a joint venture with funds managed by Blue Owl Capital that targets up to $27 billion in total development cost. Blue Owl owns 80% of the JV; Meta retains 20%. This off-balance-sheet financing model preserves Meta’s flexibility while accessing institutional infrastructure capital that would otherwise sit in core-plus real-estate or power-infrastructure funds.

Construction is already underway on 2,250 acres of former farmland. Workforce peaks at roughly 5,000 construction workers by mid-2026. First-phase operations are projected for 2028, with $10 billion already committed and $875 million contracted to Louisiana-based suppliers in the first twelve months of the project. Once operational, the campus will support more than 500 permanent roles.

Prometheus in Ohio and the Supporting Network

Hyperion is the headliner, but it is not alone. Meta’s 2026 build also includes:

  • Prometheus — a 1 GW campus in Ohio, also dedicated to AI training
  • Expansions at existing data center sites in Texas, Nebraska, Iowa, Virginia, and New Mexico
  • Selected cloud leasing from hyperscalers for burst capacity

The scale forces a new operating model. Meta has signed multi-gigawatt power purchase agreements, including a major deal with Entergy covering Louisiana load, and is exploring small modular reactor (SMR) partnerships for 2030-era capacity. The binding constraint on Meta’s AI roadmap is no longer GPU supply — it is megawatts on the grid.


MTIA: Four Chips in Twenty-Four Months

Running in parallel with the campus build is Meta’s custom silicon program, the Meta Training and Inference Accelerator (MTIA). In 2026, Meta unveiled a four-chip roadmap spanning roughly 24 months:

  • MTIA 300 — In production for ranking and recommendations training
  • MTIA 400 — GenAI inference, ramping in 2026
  • MTIA 450 — Doubled HBM bandwidth vs. MTIA 400, optimized for GenAI inference, exceeds leading commercial products on bandwidth
  • MTIA 500 — Designed for future GenAI training workloads, planned for 2027

Meta has already deployed hundreds of thousands of MTIA chips in production and tested the platform with Llama-class models. The strategic bet is that vertically integrated silicon plus in-house model architecture plus in-house data centers creates a cost and performance advantage that rented NVIDIA capacity cannot match at Meta’s volume.

Independent analysts estimate that MTIA can deliver Meta inference at roughly 30-40% lower total cost than comparable NVIDIA H200/B200 configurations once the silicon is fully depreciated — a material lever for a company running trillions of AI inferences per day across Facebook, Instagram, WhatsApp, Threads, and Meta AI.
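The payback arithmetic behind that estimate can be sketched in a few lines. The 30-40% delta is the analysts' figure above; the absolute dollar inputs (program capex, equivalent rented-GPU cost) are hypothetical placeholders, since Meta does not break these out:

```python
# Back-of-envelope MTIA payback sketch. The 30-40% cost delta is the
# analyst estimate cited above; capex_b and annual_rented_cost_b are
# hypothetical illustrative inputs, not Meta disclosures.
def payback_years(capex_b: float, annual_rented_cost_b: float,
                  cost_delta: float) -> float:
    """Years for custom-silicon capex to be recovered by the annual
    savings versus renting equivalent NVIDIA capacity."""
    annual_savings_b = annual_rented_cost_b * cost_delta
    return capex_b / annual_savings_b

# Hypothetical: $30B MTIA program cost, $25B/yr equivalent rented capacity.
for delta in (0.30, 0.40):
    yrs = payback_years(capex_b=30, annual_rented_cost_b=25, cost_delta=delta)
    print(f"{delta:.0%} cost delta -> payback in {yrs:.1f} years")
```

With these placeholder inputs the payback lands at three to four years — the "handful of years" horizon the bull case assumes.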

How the $115-135B Splits: Training vs. Inference

Analysts tracking Meta’s capex splits estimate the 2026 allocation roughly as:

  • ~45% ($55B) — AI training infrastructure (GPUs, HBM, networking, training-optimized data centers like Hyperion and Prometheus)
  • ~25% ($30B) — AI inference infrastructure (MTIA deployment, edge capacity, recommendations, Meta AI product serving)
  • ~15% ($18B) — Data center shell construction, land, and power (not yet filled with silicon)
  • ~10% ($12B) — Reality Labs (Quest, AR glasses, Orion development)
  • ~5% ($6B) — Traditional product infrastructure (Facebook/Instagram/WhatsApp core services)
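Applied to the midpoint of the guidance range, those shares can be reproduced in a short sketch (the percentages are the analyst estimates listed above; the resulting dollar figures differ by a rounding billion or so from the quoted ones, which appear to use a slightly lower base):

```python
# Apply the analyst-estimated 2026 capex shares to the guidance midpoint.
# Shares are the estimates listed above, not Meta's own disclosure.
guidance_low_b, guidance_high_b = 115, 135
midpoint_b = (guidance_low_b + guidance_high_b) / 2  # $125B

shares = {
    "AI training infrastructure": 0.45,
    "AI inference infrastructure": 0.25,
    "Shell construction, land, power": 0.15,
    "Reality Labs": 0.10,
    "Traditional product infrastructure": 0.05,
}

assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares must cover 100%

for bucket, share in shares.items():
    print(f"{bucket}: ~${share * midpoint_b:.0f}B")
```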

The pivot from inference-heavy spending (historically dominant at Meta as a consumer-product company) to training-heavy spending reflects the Superintelligence Labs priority. Meta is no longer content to ship best-in-class recommendation models trained on moderate-scale clusters. It is committing to frontier model training at a scale competitive with OpenAI, Anthropic, and Google DeepMind.

The Investor Concern

The 2026 guidance produced what analysts have taken to calling “capex anxiety.” Meta’s stock has oscillated over the scale of the commitment and the extended payback horizon — Hyperion’s first phase does not come online until 2028, and the training clusters it houses will depreciate across 2028-2033. That is a long arc for a consumer-tech company facing quarterly earnings pressure.

Meta’s counter is threefold:

  1. AI is already generating measurable revenue lift in advertising (Advantage+ campaigns, Reels ranking) and creator tooling
  2. MTIA economics will compound as custom silicon displaces rented NVIDIA capacity
  3. Superintelligence is a winner-take-most race and under-investing is more dangerous than over-investing

Whether that thesis holds will not be clear until 2028-2030. What is already clear is that Meta has made the largest private-sector bet on AI infrastructure in corporate history — and that the capex number, however eye-watering, is now the baseline. Expect 2027 guidance to come in higher again.

Follow AlgeriaTech on LinkedIn for professional tech analysis
Follow @AlgeriaTechNews on X for daily tech insights


Frequently Asked Questions

Why does Meta spend as much as Amazon on infrastructure when it does not sell cloud services?

Meta runs some of the world’s largest internal workloads — trillions of AI inferences per day across Facebook, Instagram, WhatsApp, Threads, and Meta AI — and is betting that vertically integrated silicon (MTIA) plus in-house training clusters produce a structural cost advantage over renting NVIDIA GPUs. At Meta’s volume, even a 30% cost delta pays back the capex in a handful of years.

Will Llama 4 or Llama 5 work well for Algerian Arabic use cases?

The Llama family’s Arabic performance has improved with each generation but still lags Gemini and Claude on dialectal Algerian Arabic. For MSA and formal French, Llama is production-ready. For darja or Tamazight, plan on fine-tuning — open weights are the reason Algerian teams pick Llama despite the gap.

Could Algeria realistically host a hyperscale AI campus like Hyperion?

Not at 5 GW scale in this decade. A 100-300 MW AI-capable regional zone is plausible by 2030 if Algeria pairs Sonelgaz power guarantees with a hyperscaler or neocloud partner and secures liquid-cooling supply chains. That requires explicit national-scale planning starting now.
