Why This Treaty Is Different from Everything Before It
Since 2019, the world has accumulated dozens of AI governance frameworks: the OECD AI Principles, UNESCO’s AI Ethics Recommendation, the G7 Hiroshima AI Process, the US Executive Order on AI, and the EU AI Act itself. With the exception of the EU AI Act, what all of these share is that they are non-binding: they establish shared principles, but they cannot create legal obligations for signatory states or their domestic institutions.
The Council of Europe Framework Convention on Artificial Intelligence (formally CETS No. 225) is fundamentally different. Opened for signature on September 5, 2024, it is the first international AI treaty that creates legally binding obligations on the countries that ratify it. When the European Parliament voted on March 11, 2026, to consent to the EU's formal conclusion of the convention as a bloc, it triggered a process that will require all 27 EU member states, along with the 50+ other signatories, to align their domestic AI governance systems with the convention's requirements.
Those requirements are substantive. The convention mandates that “activities within the lifecycle of artificial intelligence systems” align with human rights standards and democratic principles. It establishes obligations for:
- Risk assessments before deploying AI systems that could affect fundamental rights
- Transparency mechanisms so that affected individuals can understand how AI decisions are made
- Accountability structures that designate responsibility for AI system outcomes
- Non-discrimination protections across all AI-governed decisions
- The right to challenge AI-driven decisions — a procedural protection that most existing AI frameworks have not codified
Who Has Signed and What That Means
The convention’s signatory list is geographically broader than any previous AI governance instrument. As of early 2026, signatories include the EU (as a bloc), Canada, the United Kingdom, the United States, Japan, Norway, Iceland, Switzerland, Ukraine, and Uruguay, among others — more than 50 endorsing parties in total. This is not a European-only instrument; it is the first AI governance framework to span major democratic economies across four continents.
The breadth matters for a specific reason: AI regulatory fragmentation has been the primary driver of compliance cost for multinational technology companies. A company deploying AI in healthcare in five jurisdictions currently navigates five different risk assessment frameworks, five different transparency requirements, and five different accountability regimes. The Framework Convention creates a shared baseline, a minimum standard that all signatory jurisdictions must implement. Companies that build their AI governance programs to meet the convention's requirements will find compliance in any signatory jurisdiction significantly lower-friction than those that built to jurisdiction-specific minimums.
The non-signatory picture is equally important. China, India, Russia, and Brazil have not signed. This creates a governance split that scholars have begun calling the “democratic AI alignment” problem: the Framework Convention effectively defines an AI governance zone built on human rights and accountability principles, distinct from AI deployment regimes in non-signatory states. The geopolitical implications are real — AI systems built to convention standards may face specific scrutiny when deployed in non-signatory jurisdictions, and vice versa.
How the Convention Relates to the EU AI Act
The Framework Convention and the EU AI Act are not the same instrument, and understanding their relationship is critical for compliance teams.
The EU AI Act is sectoral and granular: it classifies AI systems by risk level (unacceptable, high, limited, minimal), mandates specific technical conformity requirements for high-risk systems listed in Annex III, requires CE marking, and establishes a registration database. It applies to any AI system deployed in the EU, regardless of where it was built.
The Framework Convention is principles-based and broad: it establishes the human rights and accountability floor that domestic laws must reflect. It does not specify technical conformity requirements or risk classification tiers. Instead, it sets the governing intent — that AI systems must be transparent, accountable, and subject to effective challenge — and leaves implementation to national legislation.
The relationship is complementary: the EU AI Act is how the EU implements the Framework Convention at the technical level. For non-EU countries that ratify the convention — Canada, the UK, Japan, the US — their domestic AI laws (or lack thereof) will need to evolve to fulfill convention obligations. This creates a convergence pressure that will gradually bring international AI regulation closer to the EU standard over the 3–5 year ratification and implementation cycle.
What Compliance Officers and Enterprise Leaders Should Do About It
1. Map Your AI Inventory Against Convention Obligations — Not Just EU AI Act Categories
The Framework Convention’s obligations apply to any AI system deployed in a signatory jurisdiction, regardless of whether it meets the EU AI Act’s Annex III high-risk threshold. This means AI systems that are classified as “limited risk” under the EU AI Act — chatbots, recommendation engines, content filters — still need to demonstrate human rights impact assessment, transparency, and challenge rights if deployed in convention jurisdictions. Compliance teams that have scoped their AI governance programs exclusively to EU AI Act high-risk categories may have significant gaps. The convention requires a broader inventory: all AI systems that could affect fundamental rights, regardless of risk tier.
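The scoping difference above can be made concrete. Below is a minimal sketch, assuming a hypothetical inventory format: the record fields and tier labels are illustrative, not official EU AI Act or convention terminology. It shows how a program keyed only to the high-risk tier misses systems the convention still reaches.

```python
# Hypothetical AI system inventory; fields and tier labels are illustrative.
inventory = [
    {"name": "cv-screener",     "eu_ai_act_tier": "high",    "affects_rights": True},
    {"name": "support-chatbot", "eu_ai_act_tier": "limited", "affects_rights": True},
    {"name": "content-filter",  "eu_ai_act_tier": "limited", "affects_rights": True},
    {"name": "log-compressor",  "eu_ai_act_tier": "minimal", "affects_rights": False},
]

# EU-AI-Act-only scoping: governance work keyed to the high-risk tier.
eu_act_scope = [s["name"] for s in inventory if s["eu_ai_act_tier"] == "high"]

# Convention scoping: any system that could affect fundamental rights,
# regardless of tier, needs assessment, transparency, and challenge rights.
convention_scope = [s["name"] for s in inventory if s["affects_rights"]]

# Systems a tier-only compliance program would miss entirely.
gap = sorted(set(convention_scope) - set(eu_act_scope))
print(gap)  # → ['content-filter', 'support-chatbot']
```

The `gap` list is the compliance exposure the paragraph above describes: limited-risk systems that an Annex III-scoped program never inventoried.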
2. Build Challenge and Redress Mechanisms Into AI Deployment Workflows
The right to challenge AI-driven decisions is the convention's most operationally demanding requirement for most enterprises. Most current AI deployment workflows, particularly in HR, credit, and content moderation, produce decisions without embedded challenge pathways. The convention requires that affected individuals have a meaningful mechanism to contest AI decisions and obtain human review. Retrofitting this into existing systems introduces significant UX and process complexity. The pragmatic sequencing is to start with the highest-volume decision contexts (hiring screening, credit applications, customer service escalation routing) and work backwards.
3. Treat the Convention as the Long-Term Governance Standard, Not the EU AI Act
Enterprise compliance teams have rightly prioritized the EU AI Act because it has binding force and near-term enforcement dates. But the EU AI Act is one implementation of a broader international standard — the Framework Convention — that will shape AI regulation globally for the next decade. Companies that build their AI governance programs to convention principles (human rights alignment, risk assessment, transparency, accountability, challenge rights) will be better positioned for regulatory evolution in non-EU signatory jurisdictions than companies whose compliance programs are purely EU AI Act-calibrated. The convention codifies the principles the EU AI Act implements; building to convention principles future-proofs the compliance investment.
The Bigger Picture: A Democratic AI Governance Zone
The Framework Convention’s real significance is architectural, not operational. For the first time, a coalition of more than 50 democratic nations has created a shared legal standard for AI governance that is binding under international law. This is not a soft-law consensus — it is a treaty with ratification obligations, domestic implementation requirements, and the Council of Europe’s institutional monitoring apparatus behind it.
The AI governance landscape that emerges from this is not a single global standard — China, India, and Russia’s absence ensures that — but it is a stable, legally grounded zone of governance alignment that covers most of the world’s democratic economies and a significant share of global AI deployment. For enterprises, researchers, and policymakers operating within that zone, the convention sets the floor from which all future AI regulation will be measured.
Algeria, as a country with observer status at the Council of Europe and growing ties to European digital regulation, will face increasing pressure to align domestic AI frameworks with convention standards as the ratification wave progresses. The question for Algerian policymakers is not whether this standard arrives but when and through which channel.
Frequently Asked Questions
What is the difference between the Council of Europe AI Convention and the EU AI Act?
The Framework Convention is a principles-based international treaty signed by 50+ countries that establishes the human rights and accountability floor for AI governance globally. The EU AI Act is the EU’s detailed technical implementation — it classifies AI systems by risk tier, mandates specific conformity requirements for Annex III high-risk systems, and creates an EU-wide enforcement mechanism. The convention sets the “why”; the EU AI Act specifies the “how” for EU member states. Non-EU signatories must implement the convention’s principles through their own domestic legislation.
Is the Framework Convention legally binding, and what happens if a country doesn’t comply?
Yes — the Framework Convention is a legally binding international treaty under Council of Europe auspices. Ratifying countries must bring their domestic laws into compliance. The Council of Europe’s monitoring mechanism will assess implementation periodically, similar to its human rights convention oversight structure. However, the convention lacks direct sanction mechanisms comparable to EU enforcement — compliance is primarily enforced through domestic courts, peer review processes, and the reputational consequences of non-implementation.
Which major AI-deploying nations have NOT signed, and does that create a governance gap?
China, India, Russia, and Brazil are the most significant non-signatories. Their absence means the Framework Convention governs a “democratic AI alignment zone” but not global AI deployment comprehensively. AI systems built to convention standards may face scrutiny when deployed in non-signatory jurisdictions that have different transparency and accountability requirements — or none at all. This governance gap is the central challenge that the UN Global Dialogue on AI Governance (scheduled for July 2026 in Geneva) is attempting to address.
—
Sources & Further Reading
- EU Parliament Backs EU Conclusion of the Council of Europe Framework Convention on AI — Council of Europe
- EU Endorses First International Treaty on AI Governance — FEBIS
- Framework Convention on Artificial Intelligence — Wikipedia
- EU Parliament Committee Report on AI Convention Ratification — European Parliament