Three Models, Zero Consensus
Global AI governance in 2026 is not a story of approaching harmonisation. It is a story of three competing regulatory philosophies becoming more entrenched, not less.
The European Union’s AI Act, which entered into force on August 1, 2024, represents the most comprehensive binding AI governance framework in the world. Its structure is risk-based: AI systems are classified into prohibited, high-risk, limited-risk, and minimal-risk categories. The prohibited tier bans social scoring systems, real-time biometric surveillance in public spaces (with narrow exceptions), and systems that manipulate behaviour through subliminal techniques. The high-risk tier — covering Annex III applications including biometrics, employment screening, credit scoring, and law enforcement — requires conformity assessments, technical documentation, EU database registration, and ongoing monitoring. High-risk obligations activated on August 2, 2026, with maximum penalties of €35 million or 7% of global annual turnover. General-purpose AI (GPAI) model provider obligations have applied since August 2025.
The United States maintains a deliberately fragmented approach. At the federal level, the Trump administration revoked President Biden’s Executive Order 14110 on AI safety in 2025, signalling a shift toward industry-led governance and away from binding federal AI regulation. The federal vacuum has been filled by state-level action: 38 states enacted approximately 100 AI-related measures in 2025 alone. Colorado’s SB 24-205 — the first comprehensive state AI law — requires companies deploying high-risk AI systems to take “reasonable care” to avoid algorithmic discrimination and mandates consumer disclosure. California, Texas, and Illinois have their own intersecting requirements on AI hiring, deepfakes, and biometric data. The result for companies operating across US states is a compliance patchwork that rivals the complexity of the EU’s framework — without the benefit of harmonised rules.
China’s model is distinct from both. China applies strict content controls on publicly deployed AI systems — generative AI services must comply with regulations on labeling, content restrictions, and user identity verification that took effect in 2025. Simultaneously, China continues aggressive state-backed AI development through national strategy, industrial policy, and designated national standards. The regulatory surface facing international companies deploying AI in China is the content control framework; the regulatory surface facing Chinese AI companies internationally is increasingly the EU AI Act and equivalent market-entry requirements in partner countries.
The Emerging Market Disadvantage
The fragmentation between these three models creates a structural problem for emerging market economies and their technology companies that goes beyond compliance complexity.
Advanced economies designed these frameworks primarily for their own markets and their own company populations. The EU AI Act was shaped by EU stakeholder consultation processes; the US state laws were drafted by US legislators responding to US consumer and employer concerns; China’s framework serves Chinese governance priorities. Emerging market countries — including those across Africa, MENA, and Southeast Asia — had formal representation in some consultation processes (notably through OECD and G20 AI governance forums), but minimal structural influence over the final regulatory outcomes.
The consequence is regulatory dependency: emerging market companies that want to operate in EU, US, or Chinese markets must comply with frameworks they did not design and cannot change. The compliance cost is asymmetric: a Nairobi-based AI startup building a credit scoring tool must navigate EU AI Act Annex III requirements for high-risk AI systems when seeking EU market access, despite having zero input into how “high-risk” was defined and which categories were included.
Singapore provides the most sophisticated response to this asymmetry among smaller economies. Singapore has built an AI governance framework — including the Model AI Governance Framework, the AI Verify testing toolkit, and participation in the Global Partnership on AI — that positions the country as a trusted third party in international AI governance. Rather than attempting to compete with EU/US/China on regulatory design, Singapore focused on building trusted implementation infrastructure that makes its companies credible partners for regulated markets. This is a viable model for other middle-income economies, including those in MENA.
Algeria’s situation illustrates the practical stakes. Algeria’s National AI Strategy (adopted December 2024) and its National AI Council reflect ambition to position AI as a GDP growth driver across six priority pillars. If Algerian AI companies want to serve EU customers — particularly in high-risk sectors like healthcare, finance, or employment — those systems must meet EU AI Act conformity requirements regardless of Algeria’s own regulatory framework.
What the 2026 Governance Gap Means for AI Product Strategy
For companies building AI products in emerging markets, the governance fragmentation creates four specific product strategy imperatives that did not exist before 2024.
1. Design for the Strictest Market First
The EU AI Act’s high-risk provisions — particularly the conformity assessment requirements, technical documentation standards, and EU database registration — are the most demanding AI product requirements currently in force globally. Companies that design AI systems to meet EU high-risk compliance requirements can generally deploy those systems in markets with less demanding regulatory frameworks. The reverse is not true: a system designed to minimal standards for an unregulated market will require substantial redesign to meet EU requirements if market expansion is later sought.
The practical implication for product roadmaps: if EU market access is a medium-term objective (3-5 years), EU compliance architecture should be built into the product from the beginning rather than retrofitted. The conformity assessment process — third-party audit, technical documentation, human oversight implementation — typically takes 12-18 months for complex AI systems and cannot be compressed to a pre-launch sprint.
2. Map Your Data Processing to Multiple Regulatory Regimes
AI systems ingest and process data. The data governance requirements that apply to that processing vary significantly by market. EU data subjects’ data processed by AI systems must comply with GDPR in addition to EU AI Act requirements. US state laws impose separate requirements on biometric data (Illinois BIPA), employment data (California CPRA), and healthcare data (HIPAA and state-level equivalents). China’s Personal Information Protection Law (PIPL) governs Chinese user data with data localization requirements that conflict with GDPR’s cross-border transfer framework.
For an AI company operating across these markets, maintaining a single data model that satisfies all three frameworks simultaneously is genuinely difficult — the requirements conflict rather than stack. The pragmatic approach is jurisdictional data separation: maintain separate data pipelines for EU, US, and Chinese user data with jurisdiction-specific processing rules applied at the pipeline level. This is operationally complex and expensive, but it avoids the compliance failure mode of applying a single data model that satisfies none of the three frameworks fully.
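The jurisdictional data separation described above can be sketched in code. The following is a minimal illustration, not any framework's actual requirements: the region names, retention periods, and rule fields are hypothetical placeholders standing in for whatever a company's counsel determines each regime demands. The point it demonstrates is the architecture — records are routed to jurisdiction-specific pipelines at ingestion, so no single pipeline has to satisfy conflicting EU/US/Chinese rules simultaneously.

```python
from dataclasses import dataclass

# Hypothetical residency map: which storage region each jurisdiction's
# data stays in. Region names and the rules below are illustrative only.
RESIDENCY = {
    "EU": "eu-west",    # GDPR: keep EU data in-region absent a valid transfer mechanism
    "US": "us-east",    # state-by-state rules (BIPA, CPRA, etc.) applied downstream
    "CN": "cn-north",   # PIPL: localisation for Chinese user data
}

# Jurisdiction-specific processing rules, applied at the pipeline level.
# The specific values are placeholders, not legal thresholds.
PROCESSING_RULES = {
    "EU": {"allow_biometric_training": False, "retention_days": 30},
    "US": {"allow_biometric_training": False, "retention_days": 90},
    "CN": {"allow_biometric_training": False, "retention_days": 60},
}

@dataclass
class UserRecord:
    user_id: str
    jurisdiction: str  # "EU" | "US" | "CN"
    payload: dict

def route(record: UserRecord) -> tuple[str, dict]:
    """Return (storage region, processing rules) for a record."""
    try:
        return RESIDENCY[record.jurisdiction], PROCESSING_RULES[record.jurisdiction]
    except KeyError:
        # Fail closed: unknown jurisdictions go to a quarantine queue for review
        # rather than defaulting to any one regime's rules.
        return "quarantine", {}
```

The fail-closed default matters: the compliance failure mode the text warns about — one data model applied everywhere — often enters a system through an "unknown jurisdiction falls through to the default pipeline" branch.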
3. Treat GPAI Model Compliance as a Supplier Risk
The EU AI Act’s general-purpose AI (GPAI) model obligations — activated August 2025 — apply to providers of large foundation models. Companies that build AI applications on top of GPAI models (ChatGPT, Claude, Gemini, Mistral, etc.) face a specific compliance question: does using a non-compliant GPAI model make the application system non-compliant?
The EU AI Act’s answer is nuanced: downstream deployers building on GPAI models face compliance obligations that are less demanding than those imposed on GPAI providers, but they are not zero. Specifically, deployers must ensure that their use of a GPAI model does not cause the final system to violate prohibited AI practices, and they must implement appropriate risk management measures for any high-risk application built on a GPAI foundation.
For companies in emerging markets using international GPAI models, the practical advice is: before deployment in EU markets, confirm that the GPAI model provider has filed its required technical documentation with the AI Office and is listed in the EU AI database. As of August 2025, the major frontier model providers had varying states of EU compliance documentation; the situation will have evolved by August 2026.
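Treating GPAI compliance as supplier risk suggests tracking it the way other vendor due diligence is tracked. The sketch below assumes a simple internal checklist; the field names are hypothetical, and the checks mirror the items the text mentions (documentation filed, database listing) — the actual filing status of any provider must be verified against official EU sources, not hardcoded flags.

```python
from dataclasses import dataclass

@dataclass
class GPAISupplier:
    """Illustrative due-diligence record for a GPAI model dependency.

    Flags are assumptions to be populated from verified sources; this is
    a triage aid, not a statement of any provider's actual status.
    """
    name: str
    technical_docs_filed: bool    # documentation provided per GPAI obligations?
    listed_in_eu_database: bool   # registration visible to deployers?
    copyright_policy_published: bool

    def deployment_blockers(self) -> list[str]:
        """Return outstanding items that should block EU deployment."""
        blockers = []
        if not self.technical_docs_filed:
            blockers.append("missing technical documentation")
        if not self.listed_in_eu_database:
            blockers.append("not listed in EU AI database")
        if not self.copyright_policy_published:
            blockers.append("no published copyright policy")
        return blockers
```

Running `GPAISupplier("example-model", True, False, True).deployment_blockers()` surfaces the database-listing gap before it becomes the application builder's compliance problem.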
4. Monitor the Regulatory Alignment Initiatives — They May Simplify Compliance
Despite the current fragmentation, regulatory alignment efforts are underway that could reduce the long-term compliance burden for companies operating across multiple jurisdictions. The G7 has developed AI governance principles under the Hiroshima Process; the OECD AI Policy Observatory tracks regulatory divergence and publishes convergence proposals; the Global Partnership on AI includes both advanced and developing country members working toward common governance principles.
The most significant near-term convergence risk — or opportunity — is whether the EU and US establish a mutual recognition framework for AI conformity assessments. Under such a framework, an AI system that has undergone EU conformity assessment might be recognized as compliant in US markets without a separate assessment, and vice versa. No such framework exists as of May 2026, but the transatlantic AI governance dialogue is one of the few areas where EU-US tech policy cooperation appears to be advancing despite the DMA/DSA trade conflict.
Companies that actively participate in standards-development processes — ISO/IEC AI standards, NIST AI frameworks, IEEE AI ethics standards — position themselves to shape convergence rather than simply respond to it.
The Structural Lesson
The global AI governance landscape of 2026 reflects a deeper pattern in how major regulatory powers assert influence over technology markets: they design frameworks for their own governance priorities and then effectively mandate compliance by all companies that want market access, regardless of those companies’ geographic origin or their capacity to influence the framework design.
For emerging market technology companies, this is neither a temporary problem to be waited out nor an insurmountable barrier. It is a structural feature of the global technology economy that requires strategic response. Singapore’s approach — building trusted implementation infrastructure rather than attempting to compete on regulatory design — is instructive. Companies that invest in EU AI Act conformity capability, that design for data portability across multiple regulatory regimes, and that participate in international standards processes are building regulatory resilience that translates directly into market access.
The alternative — building AI systems optimized for domestic markets without EU/US compliance architecture, then attempting to retrofit compliance when international expansion is sought — is the more common path and the more expensive one. Regulatory compliance retrofitting on deployed AI systems is consistently more costly than compliance-first design, in both engineering time and reputational risk if enforcement actions occur before the retrofit is complete.
Frequently Asked Questions
Which AI systems are classified as “high-risk” under the EU AI Act and require conformity assessment?
EU AI Act Annex III defines eight categories of high-risk AI systems: biometric identification and categorisation; management of critical infrastructure; educational access and vocational training; employment and worker management (including CV screening and performance monitoring); access to essential services (credit scoring, insurance, benefits determination); law enforcement; migration, asylum, and border control; and administration of justice. For most commercial AI products built by startups and SMEs, the most commonly encountered high-risk categories are employment AI (hiring tools, performance systems) and financial services AI (credit and insurance scoring). Systems in these categories must complete conformity assessment, maintain technical documentation, register in the EU database, and implement human oversight mechanisms before EU deployment.
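The eight-category structure above lends itself to a rough triage screen. The following sketch uses keyword matching against the article's own category summary; it is a first-pass aid for product teams, not legal classification — the keyword lists are illustrative assumptions, and borderline use cases need counsel review against the actual Annex III text.

```python
# Illustrative keyword map from the article's summary of Annex III.
# Keywords are assumptions for triage purposes, not the legal definitions.
ANNEX_III_CATEGORIES = {
    "biometrics": ["biometric identification", "biometric categorisation"],
    "critical_infrastructure": ["critical infrastructure"],
    "education": ["educational access", "vocational training"],
    "employment": ["cv screening", "hiring", "performance monitoring"],
    "essential_services": ["credit scoring", "insurance scoring", "benefits determination"],
    "law_enforcement": ["law enforcement"],
    "migration": ["migration", "asylum", "border control"],
    "justice": ["administration of justice"],
}

def high_risk_categories(use_case: str) -> list[str]:
    """Return Annex III categories whose keywords appear in a use-case description."""
    text = use_case.lower()
    return [
        category
        for category, keywords in ANNEX_III_CATEGORIES.items()
        if any(keyword in text for keyword in keywords)
    ]
```

A hit means the full conformity-assessment pipeline (documentation, registration, human oversight) is in scope; an empty result means only that this crude screen found nothing, not that the system is out of scope.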
How does the US state-level AI law patchwork compare to the EU AI Act in practice?
The US approach currently has no central authority equivalent to the European AI Office, no unified classification system, and no cross-state mutual recognition. Companies operating across all 50 US states must track 100+ individual state measures with varying scope, definitions, and compliance deadlines. The EU AI Act, despite its complexity, provides a single framework with consistent definitions applied uniformly across 27 member states. For compliance planning purposes, many legal advisors describe EU AI Act compliance as “easier to scope but harder to satisfy” compared to the US patchwork, which is “easier to satisfy locally but impossible to satisfy completely for national US operations.”
Is there a multilateral forum where developing countries influence global AI governance?
Yes, with significant limitations. The UN General Assembly passed a resolution in 2024 urging a human rights-based approach to AI governance, with broad developing country participation. The Global Partnership on AI (GPAI) includes both OECD member countries and invited partners from emerging markets. The OECD AI Policy Observatory provides technical resources accessible to non-member governments. However, the foundational regulatory frameworks — EU AI Act, US federal and state laws, China’s AI content regulations — were designed and adopted without meaningful input from African or MENA governments. Developing country participation in international AI governance is primarily advisory, not determinative.
—
Sources & Further Reading
- AI Regulations Around the World 2026 — Mind Foundry
- Global AI Governance and Regulation 2026 — Supertrends
- Comprehensive Guide to AI Laws and Regulations Worldwide — Sumsub
- EU AI Act Implementation Timeline — Artificial Intelligence Act (EU)
- AI Governance Policy Trends: Global Regulation — Sysart Consulting
- Why Algeria Is Positioned to Become North Africa’s AI Leader — New Lines Institute