The Rule That Forces AI Outputs into the Open
Article 50 of the EU AI Act is the clause that drags generative AI into transparency territory. Its core requirement is simple in principle: outputs of generative AI — audio, image, video, text — must be identifiable as AI-generated or manipulated, and users must be informed when they encounter a deepfake or AI-generated text published to inform the public on matters of public interest. The enforcement date, confirmed by the European Commission’s AI Office, is 2 August 2026.
The official text, published at artificialintelligenceact.eu/article/50, splits duties between two roles. Providers — the companies that build or distribute generative AI systems — are responsible for technical marking. Deployers — the companies that use those systems to publish content — are responsible for user-facing disclosure. This split matters: a European newsroom generating an AI voice-over carries deployer obligations, while the upstream model vendor (OpenAI, Anthropic, Mistral, Google, etc.) carries provider obligations.
The Two Obligations That Define Compliance
Provider obligation: machine-readable marking. Outputs must be marked in a machine-readable format and detectable as artificially generated or manipulated. The marking must be effective, interoperable, robust, and reliable — language that, per Jones Day’s analysis, rules out fragile watermarking schemes and pushes vendors toward cryptographic provenance standards like C2PA (Content Provenance and Authenticity).
Deployer obligation: human-readable disclosure. When a deployer publishes a deepfake or AI-generated informational text, the content must be clearly labeled at first exposure. The Code of Practice signatories are converging on a common icon, with an interim two-letter acronym (“AI”, “KI”, “IA”) until the uniform EU icon ships, as detailed by TechPolicy.Press.
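The interim acronym scheme can be sketched as a simple locale lookup. This is an illustrative assumption, not an official API: the function name and mapping table are hypothetical, built from the acronyms the draft Code names ("AI" English, "KI" German, "IA" French/Spanish/Italian).

```python
# Hypothetical sketch: picking the interim two-letter disclosure acronym
# by UI locale, pending the uniform EU icon. Mapping and function names
# are illustrative assumptions, not part of any official specification.

INTERIM_LABELS = {
    "en": "AI",  # English: Artificial Intelligence
    "de": "KI",  # German: Künstliche Intelligenz
    "fr": "IA",  # French: Intelligence Artificielle
    "es": "IA",  # Spanish: Inteligencia Artificial
    "it": "IA",  # Italian: Intelligenza Artificiale
}

def interim_label(locale: str, default: str = "AI") -> str:
    """Return the two-letter disclosure acronym for a UI locale."""
    lang = locale.split("-")[0].lower()
    return INTERIM_LABELS.get(lang, default)

print(interim_label("de-DE"))  # KI
print(interim_label("pt-PT"))  # AI (fallback until the EU icon ships)
```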
Timeline: What’s Already Happened and What’s Coming
The rollout is deliberately paced. Per the Bird & Bird briefing:
- 17 December 2025 — European Commission and AI Office publish the first draft of the Code of Practice on Transparency of AI-Generated Content.
- March 2026 — Second draft expected (incorporating industry and civil-society feedback).
- June 2026 — Final Code of Practice anticipated, giving signatories ~60 days to ship.
- 2 August 2026 — Article 50 obligations enter into force across the EU’s 27 member states.
For global AI vendors, the compressed window between the final Code and enforcement is the critical risk. A provider that has not instrumented C2PA-style provenance by mid-June 2026 faces a choice between enforcement exposure and withdrawal from the EU market.
The Implementation Stack: What “Compliant” Actually Looks Like
Three technical layers are converging, as Ashurst outlines:
- Provenance metadata. A cryptographically signed credential attached to the output (C2PA is the leading standard). It travels with the file and can be inspected by any downstream platform.
- Robust watermarking. An in-content signal (visible or invisible) that survives reasonable transformations — re-encoding, cropping, screenshotting for images; compression, transcoding for audio/video.
- Human-readable disclosure. The label shown to end users at first exposure. Interim two-letter acronym, migrating to a uniform EU icon once published.
Vendors who already ship C2PA (Microsoft, Adobe, OpenAI on selected products) have the shortest path. Vendors relying only on proprietary watermarking will need to add provenance; vendors relying only on provenance metadata may need to layer watermarking to meet the “robust” bar.
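A gap audit against the three layers above can be sketched as follows. This is a minimal illustration under stated assumptions: the field names and helper are hypothetical, and a real pipeline would verify a C2PA manifest signature and run a watermark detector rather than check booleans.

```python
# Hypothetical sketch of auditing an output against the three layers
# described above. Fields and names are illustrative assumptions; real
# checks would validate a signed C2PA manifest and detect a watermark.

from dataclasses import dataclass

@dataclass
class Output:
    has_signed_provenance: bool   # e.g. a valid C2PA credential
    has_robust_watermark: bool    # in-content signal surviving transforms
    shows_disclosure_label: bool  # human-readable label at first exposure

def article50_gaps(out: Output) -> list[str]:
    """Return the missing compliance layers for a published output."""
    gaps = []
    if not out.has_signed_provenance:
        gaps.append("provenance metadata (provider obligation)")
    if not out.has_robust_watermark:
        gaps.append("robust watermarking (provider obligation)")
    if not out.shows_disclosure_label:
        gaps.append("human-readable disclosure (deployer obligation)")
    return gaps

print(article50_gaps(Output(True, False, True)))
# ['robust watermarking (provider obligation)']
```

The split in the parenthetical tags mirrors the provider/deployer division in the Act: the first two layers travel with the vendor's output, the third is owed by whoever publishes it.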
What This Means Beyond Europe
Article 50 is a Brussels Effect classic: the EU rule becomes the de facto global standard because building one compliant pipeline is cheaper than maintaining separate regimes. Expect three ripple effects:
- Non-EU markets adopt EU-aligned rules. The UK, Switzerland, Brazil, and several African jurisdictions are already referencing Article 50 language in their draft AI policies.
- Newsrooms update editorial policy. Any newsroom serving an EU audience needs AI-disclosure standards that match the deployer obligations — even if they operate outside the EU.
- Enterprise buyers add provenance to RFPs. Per HEUKING’s advisory note, corporate procurement teams are beginning to require C2PA-capable generative AI as a baseline vendor criterion.
The Article 50 deadline is less a regulatory checkbox than a forcing function that finally standardizes how AI-generated content enters the information ecosystem. The next four months decide who has the infrastructure ready.
Frequently Asked Questions
Does Article 50 apply to AI-generated text, or only to deepfake images and videos?
It applies to all modalities — audio, image, video, and text. However, the text obligation is narrower: it specifically covers AI-generated or manipulated text published to inform the public on matters of public interest. Internal business documents, private chats, and creative fiction generated with AI are not covered. News articles, public-affairs content, and political communication are squarely within scope.
Is C2PA mandatory, or just a recommended technical path?
The AI Act does not name a specific technical standard — it requires marking to be “machine-readable, effective, interoperable, robust, and reliable.” C2PA is the standard the market is converging on because it meets these criteria, is open, and is already implemented by Adobe, Microsoft, OpenAI, the BBC, and others. Vendors using alternative approaches must demonstrate equivalent robustness, which is a harder argument to win.
What are the penalties for non-compliance?
The AI Act’s penalty framework allows fines up to 15 million EUR or 3% of global annual turnover, whichever is higher, for Article 50 transparency violations. Unlike GDPR’s data-protection authority model, Article 50 enforcement runs through each member state’s designated market surveillance authority, coordinated by the European AI Office.
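The "whichever is higher" rule makes the effective cap turnover-dependent above EUR 500 million. A quick illustration (the function is a sketch of the cap formula only; actual fines are set case by case by the authorities):

```python
# Illustrative calculation of the Article 50 fine ceiling:
# EUR 15 million or 3% of global annual turnover, whichever is higher.

def article50_fine_cap(global_annual_turnover_eur: float) -> float:
    """Maximum possible fine in EUR under the penalty framework."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

print(article50_fine_cap(2_000_000_000))  # 60000000.0 (3% dominates)
print(article50_fine_cap(100_000_000))    # 15000000.0 (flat cap dominates)
```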
—
Sources & Further Reading
- Article 50 — Transparency Obligations for Providers and Deployers — Official AI Act Text
- Code of Practice on Marking and Labelling of AI-Generated Content — European Commission
- European Commission Publishes Draft Code of Practice on AI Labelling — Jones Day
- What the EU’s New AI Code of Practice Means for Labeling Deepfakes — TechPolicy.Press
- Transparency of AI-Generated Content: The EU’s First Draft Code of Practice — Ashurst
- Understanding the Draft Transparency Code of Practice — Bird & Bird