The First Comprehensive Rulebook for AI-Generated Content
The European Commission is weeks away from finalizing the most detailed regulatory framework ever created for labeling AI-generated content. The Code of Practice on Marking and Labelling of AI-Generated Content, developed under Article 50 of the EU AI Act, establishes how providers and deployers of generative AI systems must disclose when content is machine-made or machine-altered.
The first draft was published on 17 December 2025 by the EU AI Office. A revised second draft followed in mid-March 2026, incorporating written feedback from hundreds of participants spanning industry, academia, and civil society. The code is expected to be finalized by early June 2026, ahead of the 2 August 2026 enforcement deadline when Article 50 transparency obligations become legally binding.
While the code itself is voluntary, legal analysts from Bird & Bird and other firms note it will likely become the key reference point for regulators and courts when assessing compliance. Organizations that ignore it do so at their own risk: fines for violating Article 50 can reach EUR 15 million or 3% of global annual turnover, whichever is higher.
A Multi-Layered Approach to AI Marking
The code rejects the idea that any single technical solution can solve the content authenticity problem. Instead, it mandates a multi-layered approach combining three tiers of protection:
Secured metadata forms the first layer. Providers must embed provenance information directly into files. The draft references open standards like the Coalition for Content Provenance and Authenticity (C2PA) framework as an example, though it deliberately avoids endorsing any single standard, instead promoting interoperable open approaches that reduce vendor lock-in.
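To make this concrete, here is a minimal, illustrative sketch of the kind of provenance manifest such a standard describes. The field names loosely follow C2PA assertion conventions but are simplified; a production implementation would use an official C2PA SDK and cryptographically sign the manifest with the provider's credentials.

```python
import json
from datetime import datetime, timezone

def build_provenance_manifest(generator: str, model: str) -> dict:
    """Build an illustrative C2PA-style provenance manifest.

    Structure loosely mirrors C2PA assertion conventions but is
    simplified for illustration; real manifests are embedded in the
    file and signed, not shipped as loose JSON.
    """
    return {
        "claim_generator": generator,
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            "action": "c2pa.created",
                            "digitalSourceType": "trainedAlgorithmicMedia",
                            "softwareAgent": model,
                            "when": datetime.now(timezone.utc).isoformat(),
                        }
                    ]
                },
            }
        ],
    }

print(json.dumps(build_provenance_manifest("ExampleAI/1.0", "example-model-v2"), indent=2))
```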
Imperceptible watermarks constitute the second layer. These machine-readable signals survive common transformations like resizing, compression, and format conversion, making it harder to strip AI provenance from content as it circulates online.
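As a rough intuition for how such watermarks work, the toy sketch below embeds a keyed spread-spectrum pattern into pixel data with NumPy and detects it by correlation. This is a deliberately simplified scheme; production watermarks operate in transform domains and are engineered to survive much harsher edits than the mild noise used in this demo.

```python
import numpy as np

SECRET_KEY = 42  # shared between the embedder and the detector

def _pattern(shape):
    # The secret key seeds a pseudo-random noise pattern; without
    # the key, the watermark is statistically invisible.
    return np.random.default_rng(SECRET_KEY).standard_normal(shape)

def embed(image: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Add a low-amplitude keyed noise pattern to the pixels."""
    return image + strength * _pattern(image.shape)

def detect(image: np.ndarray) -> float:
    """Correlate the image with the key's pattern.

    Scores near the embedding strength indicate the watermark is
    present; scores near zero indicate it is absent.
    """
    centered = image - image.mean()
    return float(np.mean(centered * _pattern(image.shape)))

# Demo: detection survives mild additive noise, a crude stand-in
# for the compression artifacts real schemes must tolerate.
img = np.random.default_rng(0).uniform(0, 255, (256, 256))
marked = embed(img)
noisy = marked + np.random.default_rng(1).normal(0, 1, marked.shape)
print(round(detect(img), 2), round(detect(noisy), 2))  # ~0.0 vs ~2.0
```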
Fingerprinting and logging systems serve as a fallback third layer for scenarios where metadata and watermarks prove insufficient. Providers may maintain internal records that allow retroactive verification of whether content originated from their systems.
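A minimal provider-side log might look like the following sketch, which fingerprints each output with an exact-match SHA-256 hash in SQLite. Real systems would more likely use perceptual hashes so that re-encoded or lightly edited copies still match; the schema and function names here are assumptions for illustration.

```python
import hashlib
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect("generation_log.db")
db.execute("""CREATE TABLE IF NOT EXISTS outputs (
    fingerprint TEXT PRIMARY KEY,
    model TEXT,
    created_utc TEXT)""")

def log_output(content: bytes, model: str) -> str:
    """Record a fingerprint of each generated output at creation time."""
    fp = hashlib.sha256(content).hexdigest()
    db.execute("INSERT OR IGNORE INTO outputs VALUES (?, ?, ?)",
               (fp, model, datetime.now(timezone.utc).isoformat()))
    db.commit()
    return fp

def was_generated_here(content: bytes) -> bool:
    """Retroactively check whether content originated from this provider."""
    fp = hashlib.sha256(content).hexdigest()
    return db.execute("SELECT 1 FROM outputs WHERE fingerprint = ?",
                      (fp,)).fetchone() is not None
```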
The second draft streamlined these requirements significantly, offering more flexibility for signatories while maintaining the core principle that no content should rely on a single point of failure for provenance tracking.
Deepfake-Specific Rules Go Further Than Any Prior Regulation
Where the code breaks genuinely new ground is in its format-specific deepfake disclosure requirements, which go well beyond generic labeling mandates.
Video deepfakes face the strictest treatment. For real-time video (such as live streams or video calls), deployers must display a persistent, non-intrusive icon throughout the entire exposure, combined with a disclaimer at the beginning. For pre-recorded deepfake video, the rules allow a combination of an opening disclaimer, a persistent icon, and end credits, ensuring disclosure at every stage of consumption.
Audio deepfakes carry a distinct obligation: an audible disclaimer in plain, natural language. For clips shorter than 30 seconds, the disclaimer must appear at the beginning. For longer formats such as podcasts, it must be repeated at the beginning, at intermediate stages, and at the end. Where a screen is available alongside the audio, visual cues must supplement the audible disclaimer.
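The timing rule lends itself to a simple scheduling function. The sketch below computes where audible disclaimers would be inserted, assuming a five-minute repeat cadence for the "intermediate stages" (the draft, as described, does not fix a specific interval, so that parameter is an assumption).

```python
def disclaimer_timestamps(duration_s: float, interval_s: float = 300.0) -> list[float]:
    """Return the points (in seconds) at which an audible disclaimer plays.

    Follows the draft's audio deepfake rules as described above:
    clips under 30 s get a single disclaimer at the start; longer
    formats repeat it at the start, at intermediate stages, and at
    the end. The 5-minute cadence is illustrative only.
    """
    if duration_s < 30:
        return [0.0]
    intermediates = [i * interval_s
                     for i in range(1, int(duration_s // interval_s) + 1)
                     if i * interval_s < duration_s]
    return [0.0, *intermediates, duration_s]

print(disclaimer_timestamps(20))    # [0.0]
print(disclaimer_timestamps(3600))  # [0.0, 300.0, ..., 3300.0, 3600.0]
```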
Multimodal content — combining text, images, audio, or video — must display a visible icon without requiring any user interaction to discover it. The code pushes for an EU-wide interactive icon that can provide additional information about which specific elements of a piece of content were AI-generated or altered.
A common taxonomy distinguishes between fully AI-generated content and AI-assisted content, ensuring that a lightly edited image is not treated the same as a wholesale synthetic fabrication.
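One way a deployer might model the payload behind such an interactive icon, together with the generated/assisted taxonomy, is sketched below; all names and labels are illustrative rather than drawn from the draft.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    # Taxonomy reflecting the draft's distinction between full
    # synthesis and AI assistance (labels are illustrative).
    HUMAN = "human"
    AI_ASSISTED = "ai_assisted"
    AI_GENERATED = "ai_generated"

@dataclass
class ElementDisclosure:
    element: str          # e.g. "voiceover", "background_image"
    origin: Origin
    tool: str | None = None

# Payload an interactive icon might expose for a multimodal post:
disclosures = [
    ElementDisclosure("voiceover", Origin.AI_GENERATED, "example-tts"),
    ElementDisclosure("background_image", Origin.AI_ASSISTED, "example-editor"),
    ElementDisclosure("script", Origin.HUMAN),
]
```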
Five Commitments for Deployers
The code structures deployer obligations around five key commitments:
- Timely disclosure — label AI-generated content no later than the natural person’s first interaction or exposure
- Common icon placement — apply a standardized icon in a visible and consistent location for deepfakes and AI-generated public interest text (an interim two-letter “AI” acronym is permitted until the official EU icon is developed)
- Flagging and correction — facilitate third-party flagging of mis-labeled or unlabeled deepfakes, and fix labels without undue delay
- Regulatory cooperation — cooperate with market surveillance authorities and very large online platform (VLOP) providers
- Accessibility — ensure icons and logos conform to applicable EU accessibility requirements
Providers, meanwhile, must ensure their system outputs are marked in a machine-readable format and that their technical marking solutions are effective, interoperable, robust, and reliable.
Creative Content Gets a Lighter Touch
Article 50 carves out a meaningful exemption for artistic, creative, satirical, and fictional works. Where content is evidently part of a creative work, only minimal and non-intrusive disclosure is required — designed not to interfere with the integrity, enjoyment, or normal exploitation of the work.
In practice, this means a clearly labeled satirical video or an AI-assisted film would not need the persistent on-screen icons mandated for deepfakes. However, the exemption still requires some disclosure to protect third-party rights, preventing creators from using the artistic defense to distribute deceptive deepfakes of real people without any indication of manipulation.
What Comes Next
The feedback period on the second draft closed on 30 March 2026. Finalization is expected by early June, giving organizations roughly two months before the 2 August enforcement date. Companies operating AI systems that generate or manipulate text, image, audio, or video content in the EU market should be mapping their compliance gaps now — particularly around metadata embedding, deployer labeling workflows, and deepfake disclosure protocols.
The code does not exist in isolation. It complements the General-Purpose AI Code of Practice (which addresses model-level obligations) and aligns with the broader Digital Services Act requirements for very large online platforms. Together, these frameworks create a layered regulatory architecture that treats AI content transparency as a systemic challenge rather than a single-point fix.
Frequently Asked Questions
What is the EU Code of Practice on AI content labeling?
The Code of Practice is a voluntary framework developed under Article 50 of the EU AI Act that provides practical guidance for marking and labeling AI-generated content. It establishes a multi-layered approach combining visible icons, machine-readable metadata, imperceptible watermarks, and logging systems. While voluntary, it is expected to serve as the primary compliance benchmark when the binding transparency obligations take effect on 2 August 2026, with non-compliance fines reaching EUR 15 million or 3% of global annual turnover.
How do the deepfake-specific rules differ for audio versus video content?
Video deepfakes require a persistent visible icon throughout playback, combined with opening disclaimers and end credits. Audio deepfakes require an audible spoken disclaimer in plain language at the beginning of the content, and for formats longer than 30 seconds, the disclaimer must be repeated at intermediate stages and at the end. Both formats must also carry machine-readable metadata marking, but the user-facing disclosure methods are tailored to each medium’s consumption patterns.
Does the EU Code of Practice affect companies outside Europe?
Yes. Article 50 obligations apply to any provider or deployer whose AI system generates or manipulates content that reaches natural persons within the EU, regardless of where the company is headquartered. A company based in the United States, Asia, or anywhere else that serves EU users or distributes AI-generated content in the EU market must comply with the marking and labeling requirements or face fines of up to EUR 15 million or 3% of global turnover.
Sources & Further Reading
- Commission publishes first draft of Code of Practice on marking and labelling of AI-generated content — European Commission
- Commission publishes second draft of Code of Practice — European Commission
- What the EU’s New AI Code of Practice Means for Labeling Deepfakes — TechPolicy.Press
- Article 50: Transparency Obligations for Providers and Deployers — EU AI Act
- EU AI Act Code of Practice on marking and labelling — European Commission
- Taking the EU AI Act to Practice: Understanding the Draft Transparency Code — Bird & Bird
- Article 99: Penalties — EU AI Act