⚡ Key Takeaways

Only 3% of the consultation's 10,112 Citizen Space respondents supported the UK government's originally preferred opt-out exception for AI training data, forcing Britain to abandon the proposal and maintain existing copyright law — a decision that preserves creative industries worth £145.8 billion in annual GVA while leaving AI companies without the legal certainty they sought.

Bottom Line: The UK’s rejection of a training data exception and the EU’s licensing-first approach signal a global consensus that copyright holders should retain control over how their works are used in AI training — organizations must prepare for a licensing-based future.



🧭 Decision Radar (Algeria Lens)

Relevance for Algeria
Medium

Algeria’s media, publishing, and music sectors will face similar AI training data questions as generative AI adoption grows domestically and regionally across the Francophone and Arabic-speaking world.
Infrastructure Ready?
No

Algeria lacks robust copyright enforcement infrastructure, digital licensing platforms, and collecting societies capable of managing AI-related rights at scale. ONDA handles traditional rights but has no AI training data framework.
Skills Available?
Partial

Algeria has intellectual property legal expertise through its IP courts and INAPI, but specialized knowledge at the intersection of copyright law, AI technology, and text/data mining regulation remains limited.
Action Timeline
12-24 months

Monitor UK legislative developments, EU AI Act implementation, and US court rulings to build an evidence base before considering Algerian copyright framework updates.
Key Stakeholders
ONDA (copyright office), INAPI (IP institute), Ministry of Culture, Ministry of Post and Telecommunications, Algerian media organizations, Arabic-language publishers, local AI developers and startups
Decision Type
Educational

This article provides educational context to build understanding and inform future decisions.

Quick Take: Algeria’s creative sector — including Arabic-language publishing, music, and film — will eventually face the same AI training data questions the UK is confronting. The UK’s rejection of an opt-out exception and the EU’s licensing-first approach both point toward a global consensus that copyright holders should retain control. Algerian policymakers at ONDA and the Ministry of Culture should study these outcomes to prepare a framework that protects Algerian creators while keeping the country connected to beneficial AI tools.

What the March 2026 Report Actually Says

On March 18, 2026, the UK government published two landmark documents under the Data (Use and Access) Act 2025: a report on copyright and artificial intelligence (Section 136) and an accompanying economic impact assessment (Section 135). Jointly produced by the Department for Science, Innovation and Technology (DSIT), the Intellectual Property Office (IPO), and the Department for Culture, Media and Sport (DCMS), these documents represent the conclusion of a consultation process that attracted 11,520 responses — 10,112 via Citizen Space and over 1,400 by email.

The headline outcome is clear: the government has abandoned its previously preferred approach. Option 3 — a broad commercial text and data mining (TDM) exception with an opt-out mechanism for rights holders — is dead. Only 3% of Citizen Space respondents supported it, while 88% backed Option 1, which would require licensing for all uses of copyrighted works in AI training.

Rather than adopting any of the four consultation options outright, the government chose a cautious path: maintain the existing copyright framework (effectively Option 0), develop voluntary transparency standards, monitor the emerging licensing market, and gather more evidence before legislating. The one concrete legislative proposal is removing Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA), which grants copyright protection to computer-generated works with no human author.

The Four Options That Were on the Table

The government’s December 2024 consultation presented four distinct policy pathways for how copyright law should treat AI training data.

Option 0 — Status quo: No changes to copyright law. Commercial AI training that copies copyrighted works without permission remains potentially infringing under the CDPA. The existing Section 29A exception covers only non-commercial text and data mining research.

Option 1 — Mandatory licensing: Require AI developers to obtain licences for all uses of copyrighted works in training. This had overwhelming public support (88% of Citizen Space respondents) and aligns with creative industries’ position that copyright holders should control how their works are used.

Option 2 — Broad TDM exception: Create a new commercial text and data mining exception with no opt-out — essentially permitting unrestricted AI training on copyrighted works. This was the most AI-industry-friendly option and the most opposed by creative industries.

Option 3 — TDM exception with opt-out: The government’s originally preferred approach — a commercial TDM exception where rights holders could opt out using technical measures like robots.txt. Creative industries rejected this as inadequate, arguing that robots.txt applies only to web crawling, is not granular enough for specific works, and compliance is purely voluntary.
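The technical objection to a robots.txt-based opt-out is easy to demonstrate. robots.txt is a per-path, per-crawler convention: a compliant crawler such as OpenAI's published GPTBot user-agent checks it before fetching, but any crawler not named in the file is unaffected, compliance is voluntary, and nothing in the protocol reaches data acquired outside web crawling. A minimal sketch using Python's standard-library parser (the domain and paths are hypothetical):

```python
from urllib import robotparser

# A hypothetical site-wide directive blocking OpenAI's published
# crawler user-agent (GPTBot) — and saying nothing about anyone else.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# A compliant GPTBot instance would skip every page on the site...
print(rp.can_fetch("GPTBot", "https://example.com/novel-excerpt"))   # False

# ...but a crawler not named in the file (and with no "User-agent: *"
# fallback rule) is allowed by default.
print(rp.can_fetch("OtherBot", "https://example.com/novel-excerpt"))  # True
```

The sketch also shows why rights holders called the mechanism insufficiently granular: directives operate on URL paths, so there is no way to opt out one specific work that appears on an otherwise crawlable page, and a non-compliant crawler faces no technical barrier at all.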

The March 2026 report formally removes Option 3 from consideration and does not endorse any of the remaining options. The government is effectively operating under Option 0 while it continues gathering evidence.

Why the Voluntary Approach Failed

Before the formal consultation, the UK government had attempted a voluntary approach — convening AI developers and creative rights holders to negotiate a code of practice. The December 2025 progress statement under Section 137 of the Act acknowledged limited progress.

Three structural barriers proved insurmountable.

Transparency deadlock. Creative industries demanded that AI companies disclose which copyrighted works they used for training. AI companies resisted, citing competitive sensitivity and the technical impracticality of retroactive disclosure for models trained on billions of data points. The report notes that over 90% of consultation respondents supported mandatory transparency, but the government stopped short of requiring it by statute — instead proposing industry-led best practice with the possibility of future legislation.

Opt-out mechanism inadequacy. Rights holders wanted technically enforceable mechanisms to prevent their works from being used in AI training. AI companies offered to respect robots.txt directives, but this applies only to web crawling and does not cover data obtained through purchased datasets, licensed archives, or other channels. No agreement could be reached on a technically robust alternative.

Compensation impasse. Creative industries sought licensing frameworks or statutory fees for AI training use. AI companies argued that the value created by training is too diffuse to attribute to individual works and that licensing requirements would make AI development prohibitively expensive. The report characterizes this as a fundamental asymmetry of incentives — AI companies benefit from the status quo, while creative industries bear costs without compensation.


The Parallel Pressure: House of Lords Report

Twelve days before the government report, the House of Lords Communications and Digital Committee published its own inquiry: “AI, copyright and the creative industries” (HL Paper 267, March 6, 2026). The Lords’ conclusions were sharper than the government’s.

The Committee endorsed a “licensing-first” approach, explicitly recommending against any new TDM exception with opt-out. It called for statutory transparency obligations — not voluntary ones — requiring AI developers to disclose their training data sources. The Committee also recommended new protections against unauthorized digital replicas and harmful “in the style of” AI outputs that exploit creators’ identities.

The Lords’ report carries no binding force but exerted significant political pressure. Its publication days before the government’s statutory deadline signaled that Parliament expects stronger action than the cautious evidence-gathering the government ultimately proposed.

Where the UK Sits Globally

The UK’s decision to maintain its existing copyright framework places it in a distinctive position among major AI jurisdictions.

The EU has established a statutory framework under the Copyright Directive (2019/790). Article 3 permits text and data mining for research; Article 4 allows commercial TDM but gives rights holders the ability to opt out. The AI Act reinforces this with training data transparency requirements. The EU approach creates a de facto licensing regime for commercial AI training.

The United States relies on fair use (17 U.S.C. Section 107), with multiple lawsuits still testing whether AI training qualifies. The most significant ruling to date came in February 2025, when Judge Bibas found that ROSS Intelligence’s use of Thomson Reuters headnotes to train an AI legal research tool was not fair use — the first US court to rule definitively on this question. That case is on appeal to the Third Circuit. Meanwhile, The New York Times v. OpenAI continues in discovery, with a January 2026 order compelling OpenAI to produce 20 million ChatGPT conversation logs.

The UK now occupies a middle ground. Existing copyright law likely prohibits unauthorized commercial AI training (no specific exception exists), but there is no UK case law testing this interpretation and no statutory framework addressing AI training specifically. The government has chosen not to create new exceptions and not to mandate licensing — a posture that preserves creative industry protections in theory while leaving enforcement to the courts.

What This Means for Creative Industries and AI Companies

For the UK’s creative industries — contributing £145.8 billion in GVA to the economy in 2024 and employing approximately 2.4 million people — the outcome is a qualified victory. The feared TDM exception that would have permitted AI training without consent has been rejected. Existing copyright protections remain intact, meaning AI companies technically need permission to use copyrighted works for training.

The qualification is enforcement. Without statutory transparency requirements, rights holders cannot easily determine whether their works have been used to train AI models. Without a licensing framework, there is no standardized mechanism for compensation. Creative industries won the policy argument but still lack the practical tools to exercise their rights.

For AI companies, the outcome creates legal uncertainty. Training on copyrighted works without permission carries infringement risk under existing UK law, but no company has yet been sued in the UK for AI training, and the government has signaled no intent to pursue enforcement. The practical reality is that AI training continues while the legal framework remains untested.

The emerging licensing market may fill this gap. Several AI companies have already signed licensing agreements with publishers and news organizations, and the government has indicated it wants this market to develop organically before considering regulatory intervention.



Frequently Asked Questions

What did the UK government’s March 2026 copyright and AI report decide?

The report, published March 18, 2026 under Sections 135 and 136 of the Data (Use and Access) Act 2025, formally rejected the government's previously preferred Option 3 — a broad commercial text and data mining exception with an opt-out mechanism. Of the consultation's 11,520 responses, only 3% of Citizen Space respondents supported it, while 88% backed mandatory licensing. The government chose to maintain existing copyright law (status quo), develop voluntary transparency standards, and monitor the emerging licensing market rather than legislate immediately. The only concrete legislative proposal is removing Section 9(3) CDPA protection for computer-generated works.

How does the UK’s approach differ from the EU and US?

The EU has a statutory framework under the Copyright Directive where rights holders can opt out of commercial text and data mining, creating a de facto licensing regime. The US relies on fair use, with the first ruling against AI training (Thomson Reuters v. ROSS Intelligence, February 2025) now on appeal. The UK sits between these — existing copyright law likely prohibits unauthorized commercial AI training, but there is no specific legislation addressing it and no UK case law testing this interpretation. The government has chosen not to create new exceptions or mandate licensing, leaving the framework untested.

Why should Algeria pay attention to the UK copyright and AI debate?

The UK’s experience reveals a global pattern: voluntary negotiations between AI companies and creative industries fail due to structural incentive asymmetries, and governments are forced toward statutory solutions. Algeria’s creative industries — Arabic-language literature, music, film, and journalism — face the same risk of unauthorized AI training use. The UK’s rejection of an opt-out and the EU’s licensing approach signal that copyright protection for training data is becoming the international norm. ONDA and Algerian policymakers should begin building the technical and legal infrastructure for AI-era copyright enforcement before generative AI adoption makes the question urgent domestically.

Sources & Further Reading