⚡ Key Takeaways

Apple is paying Google approximately $1 billion per year for a custom 1.2 trillion parameter Gemini model to rebuild Siri — but iOS 26.4, released March 24, 2026, shipped without any of the promised features after internal testing revealed critical quality issues with multi-step reasoning and response accuracy.

Bottom Line: Apple is paying Google $1 billion per year for a 1.2 trillion parameter Gemini model to rebuild Siri, but the features missed their iOS 26.4 deadline and may not ship until iOS 26.5 or iOS 27.



🧭 Decision Radar (Algeria Lens)

Relevance for Algeria
Medium

Apple's iPhone holds meaningful market share among Algeria's urban professionals and diaspora. Siri improvements — particularly if Arabic support expands — would directly affect hundreds of thousands of Algerian users. The larger lesson about build-vs-buy AI strategy is relevant for Algerian startups evaluating foundation model integration.
Infrastructure Ready?
Partial

Apple Private Cloud Compute requires stable internet for complex queries. Algeria's 4G/5G coverage in major cities (Algiers, Oran, Constantine) supports this, but inconsistent connectivity in rural areas limits the experience. On-device features work regardless of connectivity.
Skills Available?
Partial

No specialized skills needed for end users. For Algerian iOS developers building SiriKit integrations or App Intents, Apple's existing developer documentation applies. The deal's strategic lessons about AI licensing and integration are relevant for Algerian tech leaders evaluating similar decisions.
Action Timeline
Monitor only

The new Siri features are delayed beyond iOS 26.4 and may not arrive until iOS 26.5 (May 2026) or iOS 27 (September 2026). Algerian users should update when available. App developers targeting Apple's ecosystem should monitor SiriKit and App Intents updates for new action-chaining capabilities.
Key Stakeholders
iOS app developers, enterprise mobile teams, consumers with Apple devices, telecom operators (Djezzy, Mobilis, Ooredoo) preparing for increased cloud AI traffic, Algerian startups evaluating AI integration strategies
Decision Type
Educational

Understanding how AI assistants are evolving — and how even Apple chose to license rather than build — helps Algerian tech professionals anticipate broader trends in human-computer interaction and AI product strategy.
Priority Level
Low

This is a consumer product update with no immediate action required. The strategic lessons about AI licensing economics are valuable context for Algerian tech decision-makers but do not require urgent response.

Quick Take: For Algerian iPhone users, the new Siri will bring meaningful improvements once it eventually ships, particularly in multi-step tasks and on-screen awareness. The larger lesson for Algeria’s tech ecosystem is strategic: even Apple, with virtually unlimited resources, chose to license AI capabilities rather than build from scratch — and still missed its launch deadline. Algerian startups should take note: integrating the best available models is often smarter than building your own, but execution and reliability remain the hard part.

A Billion Dollars and a Missed Deadline

On January 12, 2026, Apple and Google announced a multi-year partnership to rebuild Siri around Google’s Gemini AI technology. Under the deal, Apple will pay Google approximately $1 billion annually to license a custom 1.2 trillion parameter Gemini model — an eightfold increase over the roughly 150 billion parameter system currently powering Apple Intelligence.

The agreement is one of the largest AI licensing deals in history. The upgraded Siri was originally expected to debut with iOS 26.4 in spring 2026. But when Apple released iOS 26.4 on March 24, the new Siri was nowhere to be found. Internal testing had revealed critical quality problems — Siri cutting users off mid-sentence, struggling with complex multi-step requests, and exhibiting slow response times — forcing Apple to push the features to iOS 26.5 (expected May 2026) and potentially iOS 27 (September 2026).

The delay underscores a difficult truth: even with a trillion-parameter model and a billion-dollar budget, shipping a reliable AI assistant is harder than building the underlying technology.

Why Apple Went Outside

Siri’s core problem has always been architectural. Since its 2011 launch, Siri has operated as an intent-classification system: user requests are matched against a predefined set of commands. “Set a timer for 10 minutes” works reliably. “Find the email from my dentist last week and add the appointment to my calendar” does not, because it requires contextual reasoning, cross-app search, and multi-step execution that intent classifiers cannot handle.

Apple’s internal attempts to modernize Siri — including a team of several hundred engineers assembled around 2022 and the Apple Intelligence initiative announced at WWDC 2024 — produced incremental improvements (Writing Tools, email summaries, notification prioritization) but never delivered the fundamental architectural leap needed to compete with GPT-4-class and later models.

The build-versus-buy calculation ultimately favored buying. Training a frontier model from scratch would require tens of thousands of high-end GPUs running for months. Google has already made that investment for Gemini. Apple’s licensing fee buys access to a model that cost Google several times that amount to develop, while letting Apple focus its AI resources on on-device inference, privacy engineering, and hardware-software integration — areas where it has genuine competitive advantage.

Apple evaluated multiple vendors before selecting Google. Notably, Anthropic’s Claude was reportedly in contention to power the new Siri, but Anthropic’s pricing demands — reportedly several billion dollars annually, doubling each year — made the deal untenable. Apple also maintains its existing ChatGPT integration in Siri (introduced in iOS 18.2), which continues alongside the Gemini partnership.

What the New Siri Promises

On-Screen Awareness

The most transformative planned capability is on-screen context awareness — Siri understanding what is currently displayed on the user’s screen and incorporating that context into responses and actions. A user looking at a restaurant webpage could say “make a reservation here for Saturday at 7” and Siri would identify the restaurant, locate its booking system, and initiate the reservation.

The implementation relies on Gemini’s multimodal capabilities, processing a structured semantic description of visible UI elements rather than raw screenshots. On-screen awareness is designed to work across all native Apple apps and third-party apps using standard iOS UI frameworks.
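A rough sketch of what such a structured description might look like — the schema below is hypothetical, not Apple's published format:

```python
# Hypothetical sketch of a structured screen description (invented schema,
# not Apple's actual format). Instead of a raw screenshot, the model would
# receive labeled UI elements like these.
import json

screen_context = {
    "app": "Safari",
    "elements": [
        {"role": "heading", "label": "Chez Rachid, Algiers"},
        {"role": "button",  "label": "Reserve a table"},
    ],
}

# Serialized, this is the kind of compact context a cloud model could
# consume in place of pixels.
payload = json.dumps(screen_context)
print(payload)
```

A semantic description like this is far smaller than an image, easier to keep private, and gives the model the element roles it needs to decide what "make a reservation here" should act on.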

Multi-Step Action Chains

Current Siri can execute single actions or, in limited cases, two linked actions. The Gemini-powered Siri is designed to support chains of up to 10 sequential actions, each informed by the results of previous steps. A request like “Check my calendar for next week, find a free afternoon, draft a meeting email to my team, and attach the project brief” would involve at least five distinct actions executed in sequence.

Each step includes an implicit checkpoint — if any action produces an unexpected result, Siri pauses and asks the user how to proceed rather than continuing blindly. However, it is precisely this multi-step reliability that has proven problematic in internal testing, contributing to the iOS 26.4 delay.
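The checkpoint behavior described above can be sketched in a few lines — an assumed design based on the article's description, not Apple's implementation:

```python
# Minimal sketch of a checkpointed action chain (assumed design, not Apple's
# implementation): each step sees the previous step's result, and an
# unexpected result pauses the chain instead of continuing blindly.

def run_chain(steps, start):
    """Run steps in order, carrying context forward.
    Returns (steps_completed, pause_reason_or_None)."""
    context = start
    for i, step in enumerate(steps):
        ok, output = step(context)
        if not ok:
            return i, output  # checkpoint: ask the user how to proceed
        context = output
    return len(steps), None

steps = [
    lambda ctx: (True, "free Thursday afternoon"),    # check calendar
    lambda slot: (True, f"draft email for {slot}"),   # draft the email
    lambda ctx: (False, "project brief not found"),   # attaching the file fails
    lambda ctx: (True, "sent"),                       # never reached
]

completed, paused_on = run_chain(steps, "next week")
print(f"completed {completed} steps; paused on: {paused_on}")
# completed 2 steps; paused on: project brief not found
```

The hard part, as the delay shows, is not the loop itself but making each individual step reliable enough that ten of them in a row succeed more often than they stall.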

Privacy Architecture

The 1.2 trillion parameter model is too large to run on-device. Apple’s solution is Apple Private Cloud Compute (PCC) — secure cloud infrastructure using Apple Silicon servers where complex queries are processed in encrypted, isolated environments. Google never sees raw user queries or personal data; the Gemini model runs inside Apple’s infrastructure.

Simple requests — timers, phone calls, basic questions — still process entirely on-device using Apple’s smaller proprietary models. The Gemini-powered PCC path activates only for complex tasks.
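The routing split might look something like this — the heuristic below is invented for illustration; Apple has not disclosed how requests are classified:

```python
# Illustrative routing sketch (assumed heuristic, not Apple's classifier):
# simple requests stay on-device; complex ones escalate to Private Cloud Compute.

def route(request: str, action_count: int) -> str:
    needs_cloud = "my screen" in request.lower() or action_count > 1
    return "private_cloud_compute" if needs_cloud else "on_device"

print(route("Set a timer for 10 minutes", action_count=1))        # on_device
print(route("Book the restaurant on my screen", action_count=3))  # private_cloud_compute
```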

That said, reports have emerged that Apple is also exploring running some Siri workloads on Google’s own servers, suggesting the PCC infrastructure alone may not be sufficient to handle demand at scale.


What Google Gets

The financial terms — roughly $1 billion per year — are significant but secondary to distribution. Apple’s 2.5 billion active devices represent the largest single distribution channel any AI model has ever achieved. For Google, which has struggled to convert Gemini’s technical capabilities into consumer adoption beyond Search, Apple’s ecosystem solves the go-to-market problem that engineering alone cannot.

The deal also validates Gemini’s technical quality. Apple choosing Google’s model over building its own — or licensing from OpenAI or Anthropic — signals that Gemini is competitive for this specific use case. That Apple, a company that publicly committed to on-device AI, concluded Google’s model was worth a billion dollars annually sends a market signal no marketing campaign could replicate.

The Strategic Risks

Dependency and Delay

Apple is now dependent on a direct competitor for its flagship AI feature’s core intelligence. The multi-year contract provides some protection, but the strategic vulnerability is real. Apple is reportedly continuing to invest in its own foundation model research, viewing the Gemini partnership as a bridge while it develops an in-house alternative.

The iOS 26.4 delay adds another dimension to this risk. Each month without the upgraded Siri erodes the deal’s value proposition and gives competitors like OpenAI (through its own consumer products) and Google Assistant (with native Gemini integration) more time to strengthen their positions.

The Expectation Gap

When the new Siri eventually ships, it will be dramatically more capable than the current version — and dramatically less capable than many users expect. The gap between “can chain 10 actions and understand your screen” and “can do everything a human assistant can” is enormous. Managing expectations may prove Apple’s hardest challenge: a Siri that is ten times better but still fails at tasks users believe it should handle risks feeling frustrating rather than impressive.

The Industry Signal

The Apple-Google deal represents a maturation of the AI industry’s business model. A small number of companies — Google, OpenAI, Anthropic, and Chinese labs like DeepSeek and Baidu — build the foundation models, while a larger ecosystem integrates those models into products, devices, and services.

Apple, the world’s most valuable company, choosing to license rather than build sends a clear signal that this structure is not just viable but optimal. The cost of building a frontier model is so high, and the required expertise so specialized, that even trillion-dollar companies rationally choose to buy rather than build.

For the billions of users waiting on the new Siri, the provenance of the underlying model is irrelevant. What matters is whether the upgrade eventually delivers — and right now, “eventually” is doing a lot of work in that sentence.



Frequently Asked Questions

Will the new Gemini-powered Siri support Arabic?

Apple has not confirmed the full language list for the Gemini-powered Siri at launch. Historically, Siri has supported Arabic (Modern Standard Arabic and several regional variants) for basic commands. The Gemini model has strong multilingual capabilities including Arabic, making full support technically feasible. However, Apple typically rolls out new features in English first and expands to other languages over subsequent updates. Arabic support for advanced features like action chains and on-screen awareness will likely arrive after the initial English-language rollout.

Does Apple send user data to Google through this partnership?

No. Apple processes all Siri queries either on-device (for simple requests) or through Apple Private Cloud Compute (for complex requests). The Gemini model runs inside Apple’s infrastructure, not Google’s. Google licensed the model to Apple but does not receive user queries, personal data, or response content. However, recent reports suggest Apple may also explore running some Siri workloads on Google’s servers for capacity reasons, which could complicate this privacy picture.

Why was the new Siri missing from iOS 26.4?

Apple originally targeted iOS 26.4 (released March 24, 2026) for the Gemini-powered Siri debut. However, Bloomberg reported in February 2026 that internal testing revealed critical quality issues: Siri was cutting users off mid-sentence, struggling with complex multi-step requests, and exhibiting slow response times. Apple decided to spread the features across iOS 26.5 (expected May 2026) and potentially iOS 27 (September 2026) rather than ship an unreliable experience.

Sources & Further Reading