Twenty-Six Words That Created the Internet
Section 230 of the US Communications Decency Act of 1996 contains what legal scholars call the most consequential sentence in internet history. This single provision created the legal foundation for user-generated content platforms. Without it, Facebook, YouTube, X, Reddit, and every other platform hosting user content would be potentially liable for every defamatory post and every illegal statement made by their billions of users. (Copyright is a separate regime: Section 230 expressly excludes intellectual property claims, which platforms instead handle under the DMCA's notice-and-takedown safe harbor.)
For three decades, Section 230 provided near-absolute immunity: platforms could host user content without being treated as the publisher of that content, and could moderate content without losing their immunity. This dual protection enabled the explosive growth of social media, marketplace platforms, and the broader user-generated web.
In 2026, this framework is under siege from every direction. The EU has imposed its first major fine under an entirely different regulatory model. The US Congress is debating bills that would sunset Section 230 entirely. AI-generated content is introducing liability questions that the drafters of a 1996 law never imagined. And countries from Australia to Brazil are rewriting their own platform liability frameworks in real time.
The EU’s Digital Services Act: From Rules to Fines
The Digital Services Act (DSA), which took full effect in February 2024, represents the EU’s answer to Section 230. Where Section 230 grants broad immunity with minimal obligations, the DSA imposes graduated responsibilities based on platform size and type.
Obligations for All Online Intermediaries
- Transparency reporting: Publish regular reports on content moderation activities, including the number of removal orders received from authorities, the number of content items removed, and the number of user complaints
- Terms of service clarity: Platform terms must be clear, publicly accessible, and enforced consistently
- Notice and action: Implement mechanisms for users to report illegal content; respond to reports in a timely manner
- Cooperation with authorities: Respond to lawful orders from national authorities to remove illegal content
Additional Obligations for Online Platforms
- Internal complaint mechanism: Users whose content is removed must have a right to appeal
- Out-of-court dispute resolution: Platforms must provide access to independent dispute resolution bodies
- Trusted flaggers: Cooperate with designated organizations whose reports of illegal content receive priority handling
- Transparency in advertising: All advertisements must be clearly labeled, and users must be able to see who paid for the ad and the key targeting parameters
Obligations for Very Large Online Platforms (VLOPs)
Platforms with 45 million or more average monthly active EU users face the strictest requirements:
- Systemic risk assessment: Annually assess risks their services pose, including dissemination of illegal content, negative effects on fundamental rights, manipulation of elections, and effects on public health and minors
- Risk mitigation measures: Implement measures to address identified systemic risks
- Independent auditing: Submit to annual independent compliance audits
- Data access for researchers: Provide qualified researchers with access to platform data for studying systemic risks
- Recommender system transparency: Allow users to choose a non-profiling-based version of recommendation algorithms
Designated VLOPs include: Meta (Facebook, Instagram), Google (YouTube, Search, Maps, Play), Apple (App Store), Amazon (Marketplace), Microsoft (LinkedIn, Bing), TikTok, X, Snapchat, Pinterest, Booking.com, Alibaba AliExpress, Temu, Wikipedia, and Zalando.
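The graduated model above can be sketched as a simple threshold check. This is an illustrative sketch only (the function name is hypothetical, and real VLOP status requires a formal European Commission designation decision, not just a user count):

```python
# Illustrative sketch of the DSA's tiered obligations, keyed on the
# 45 million monthly EU user threshold for VLOP designation (DSA Art. 33).
# Function name is hypothetical; designation is a formal Commission decision.

VLOP_THRESHOLD = 45_000_000  # average monthly active EU users

def obligation_tier(monthly_eu_users: int) -> str:
    """Return the DSA obligation tier implied by user count alone."""
    if monthly_eu_users >= VLOP_THRESHOLD:
        # Systemic risk assessments, audits, researcher data access, etc.
        return "VLOP"
    # Complaint handling, ad transparency, trusted flaggers, etc.
    return "online platform"

print(obligation_tier(50_000_000))  # VLOP
print(obligation_tier(1_000_000))   # online platform
```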
Enforcement: The First Fine Lands
The DSA is enforced by Digital Services Coordinators (DSCs) in each EU member state and by the European Commission directly for VLOPs. Fines for non-compliance can reach 6% of global annual turnover — potentially billions of euros for the largest platforms.
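The arithmetic behind "potentially billions of euros" is straightforward. A minimal sketch, using an illustrative turnover figure rather than any real company's financials:

```python
# Sketch of the DSA's fine ceiling: non-compliance fines can reach
# 6% of global annual turnover. The turnover figure below is illustrative.

def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a DSA non-compliance fine (6% of worldwide turnover)."""
    return 0.06 * global_annual_turnover_eur

# A platform with EUR 100 billion in annual turnover faces a ceiling
# of EUR 6 billion for a single non-compliance decision.
print(f"EUR {max_dsa_fine(100e9):,.0f}")  # EUR 6,000,000,000
```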
On December 5, 2025, the European Commission issued its first non-compliance decision under the DSA, fining X (formerly Twitter) EUR 120 million for three violations:
- Deceptive blue checkmark design (EUR 45 million): X’s paid verification system, which sells blue checkmarks for EUR 7/month without meaningful identity verification, violates the DSA’s prohibition on deceptive design. Users cannot reliably distinguish verified accounts from paid subscribers.
- Advertising transparency failures (EUR 35 million): X’s advertising repository lacks critical information including ad content, topics, and the legal entities paying for advertisements.
- Researcher data access failures (EUR 40 million): X’s terms of service prohibit researchers from accessing public data, directly violating the DSA’s researcher access obligations.
Beyond the X fine, the Commission had opened 14 formal investigations into VLOP compliance as of late 2025. In October 2025, preliminary findings concluded that both TikTok and Meta are in breach of the DSA's researcher data access obligations, and that Facebook and Instagram lack user-friendly mechanisms for reporting illegal content. Proceedings against AliExpress (opened March 2024) examine seller traceability, advertising transparency, and risk management of illegal content. Temu is also under investigation.
Section 230 Under Pressure in the US
While the EU is enforcing its new framework, the US is engaged in an intensifying debate about the future of Section 230.
Supreme Court Signals
In Gonzalez v. Google (2023), the Supreme Court considered whether YouTube’s algorithmic recommendation of ISIS recruitment videos was protected by Section 230. The Court ultimately declined to rule on the Section 230 question, instead remanding the case in light of its companion decision in Twitter v. Taamneh, which resolved similar claims on narrower Anti-Terrorism Act grounds. But the fact that the Court took the case signaled willingness to revisit the scope of Section 230 immunity.
The unresolved question remains: Does Section 230 protect algorithmic amplification? When a platform’s recommendation algorithm actively promotes specific content to specific users, is the platform merely hosting content (protected) or making an editorial decision about what to amplify (potentially unprotected)?
The Sunset Threat
In December 2025, a bipartisan group of senators introduced the Sunset Section 230 Act (S.3546), which would terminate Section 230 on January 1, 2027 unless Congress enacts a replacement framework. The bill is led by Senators Lindsey Graham (R-SC) and Dick Durbin (D-IL), with support from Senators Grassley, Whitehouse, Hawley, Klobuchar, Blackburn, Blumenthal, Moody, and Welch. A companion bill, the Sunset To Reform Section 230 Act (H.R.6746), was introduced in the House by Rep. Harriet Hageman (R-WY).
The bipartisan support is notable, but it masks contradictory goals: Republicans want to reform Section 230 to prevent platforms from removing conservative content, while Democrats want reform to force platforms to remove more harmful content. These opposing motivations make legislative agreement on a replacement framework difficult.
The TAKE IT DOWN Act: First Federal Content Removal Law
The most significant US legislative development is the TAKE IT DOWN Act, signed into law by President Trump on May 19, 2025. The law criminalizes the nonconsensual posting of intimate images, including AI-generated deepfakes, and requires platforms to remove such content within 48 hours of receiving a compliant removal request. The criminal prohibition took effect immediately; platforms have until May 19, 2026 to establish the required notice-and-removal processes.
This marks the first time US federal law has imposed a specific content removal obligation on platforms with defined timelines — a departure from the Section 230 model of platform immunity.
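The novelty here is the defined timeline itself. A minimal sketch of the 48-hour removal window (helper name is hypothetical; the statute's actual compliance requirements are broader than a deadline calculation):

```python
# Minimal sketch of the TAKE IT DOWN Act's removal window: platforms must
# remove qualifying content within 48 hours of a compliant request.
# Function name is hypothetical; this models only the deadline arithmetic.
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(request_received: datetime) -> datetime:
    """Latest compliant removal time for a valid takedown request."""
    return request_received + REMOVAL_WINDOW

received = datetime(2026, 5, 20, 9, 0, tzinfo=timezone.utc)
print(removal_deadline(received).isoformat())  # 2026-05-22T09:00:00+00:00
```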
Kids Online Safety Act (KOSA)
The Kids Online Safety Act was reintroduced in the 119th Congress in May 2025 and advanced through a House subcommittee in December 2025. The 2025 version requires covered platforms to implement reasonable policies to prevent specific harms to known minors, though it has been revised from earlier versions to include First Amendment protections. KOSA has bipartisan support but faces an uncertain path to final passage amid competing legislative priorities.
Other Reform Bills
Multiple additional bills have been introduced to narrow Section 230 immunity:
- EARN IT Act — would condition Section 230 immunity on compliance with best practices for preventing child sexual exploitation
- SAFE TECH Act — would narrow immunity to exclude paid content, content facilitating illegal activity, and civil rights violations
- PACT Act — would require platforms to explain content moderation policies, provide appeal systems, and respond to court orders within specified timeframes
None has passed as of early 2026.
AI-Generated Content: The New Liability Frontier
The rise of generative AI introduces platform liability questions that existing frameworks were not designed to answer.
Is AI-generated content protected by Section 230? When a user prompts an AI model to generate text and posts it on social media, who is the “information content provider” — the user, the AI company, or both? If the AI hallucinates defamatory statements about a real person, Section 230 provides no clear answer. Bills have been proposed to explicitly remove Section 230 protection for content generated by AI systems.
Are AI chatbots covered? When a platform deploys an AI chatbot that provides harmful advice — medical misinformation, instructions for illegal activity — is the platform hosting third-party content or publishing its own? Early legal analysis suggests courts may treat AI-generated content as the platform’s own speech, stripping Section 230 protection.
The scale of the problem: Since the beginning of 2025, there have been over 500 documented cases of AI-hallucinated content submitted in US courts. Over 70 copyright infringement lawsuits have been filed against AI companies, with 50+ cases pending in US federal courts.
Deepfakes and the TAKE IT DOWN Act: AI-generated intimate imagery and deepfakes drove the passage of the TAKE IT DOWN Act. But the broader liability questions around synthetic media — AI-generated disinformation, fraudulent impersonation, non-consensual digital likenesses — remain largely unaddressed by existing regulation.
The Global Landscape: Divergent Models Converging
Beyond the US and EU, countries are rapidly developing their own platform liability frameworks — and several have undergone major changes in 2025.
United Kingdom: The Online Safety Act (2023) imposes a duty of care on platforms to protect users from illegal and harmful content. Enforcement by Ofcom began in earnest in late 2025, with fines of up to £18 million or 10% of global turnover (whichever is greater) and potential criminal liability for senior executives. Ofcom issued its first fines under the Act: £20,000 against 4chan in October 2025 for failing to respond to information requests, and £50,000 against a nudification site in November 2025 for lacking age verification. Investigations into 20 additional online pornography sites are underway, and major enforcement actions regarding children’s safety are expected in 2026.
Australia: The Online Safety Act (2021) empowers the eSafety Commissioner to order removal of harmful content within 24 hours, with penalties up to AUD 49.5 million. In December 2024, Australia went further by passing the Online Safety Amendment (Social Media Minimum Age) Act, banning users under 16 from social media platforms. Effective December 10, 2025, platforms including Facebook, Instagram, TikTok, X, YouTube, and Reddit must take reasonable steps to prevent minors from creating or maintaining accounts. Australia became the first country to enforce a nationwide social media minimum age, with France, the UK, Germany, Italy, and others considering similar measures.
Brazil: The Marco Civil da Internet historically provided platform immunity similar to Section 230, with a judicial notice-and-takedown requirement. This framework changed fundamentally on June 26, 2025, when Brazil’s Supreme Court (STF) declared Article 19 of the Marco Civil partially unconstitutional in an 8-3 decision. The court ruled that conditioning platform liability solely on court-ordered removal was insufficient to prevent disinformation and criminal activity. The new fault-based liability model, inspired by the EU’s DSA, requires platforms to create self-regulation with clear moderation rules, publish annual transparency reports, maintain customer service channels, and appoint legal representatives in Brazil.
India: The Information Technology (Intermediary Guidelines) Rules (2021) require platforms to trace the “first originator” of messages flagged by authorities — a provision that undermines end-to-end encryption. WhatsApp filed a legal challenge in the Delhi High Court in May 2021, and the case remains active. In February 2026, India’s Supreme Court delivered a sharp rebuke to Meta over privacy practices, signaling continued judicial scrutiny of platform operations in the country.
China: Platforms are responsible for ensuring all content complies with Chinese law, with no safe harbor for user-generated content. Pre-screening using AI and human moderators is mandatory.
The Unsolved Problem: Scale
The fundamental challenge of platform liability is scale. Meta’s family of apps serves nearly 4 billion monthly active users generating billions of content items per day. YouTube receives over 500 hours of video upload every minute — more than 720,000 hours of new video daily. No human content moderation workforce can review this volume.
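The back-of-envelope arithmetic behind these numbers makes the point concrete. The upload rate comes from the text; the shift-length and playback-speed assumptions are illustrative only:

```python
# Arithmetic behind the scale claim: 500 hours of video uploaded per minute.
# Shift length and 1x review speed are illustrative assumptions, not facts.

UPLOAD_HOURS_PER_MINUTE = 500  # from the text
hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24
print(hours_per_day)  # 720000 hours of new video daily

SHIFT_HOURS = 8  # assumed moderator shift length
reviewers_needed = hours_per_day / SHIFT_HOURS
print(reviewers_needed)  # 90000 full shifts per day, just to watch everything once
```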
AI content moderation is improving but imperfect, making errors in both directions: removing legitimate content (over-moderation) and missing harmful content (under-moderation). Regulation that requires platforms to remove all illegal content without defining clear timelines and standards creates incentives for over-moderation — platforms err on the side of removing content to avoid liability, suppressing legitimate speech in the process.
The tension between preventing harm and preserving free expression is not a technical problem with a technical solution. It is a values question that different societies are answering in increasingly divergent ways — and 2025-2026 has been the year those answers began producing real enforcement consequences.
Decision Radar (Algeria Lens)
| Dimension | Assessment |
|---|---|
| Relevance for Algeria | High — Algeria is actively drafting platform regulation legislation targeting Facebook, TikTok, and YouTube; the DSA provides a reference model for this effort |
| Infrastructure Ready? | Partial — Algeria has regulatory bodies (ARPT, digital press authority) but lacks a dedicated digital services coordinator or platform oversight agency |
| Skills Available? | Limited — Few Algerian legal and policy experts specialize in internet regulation and platform governance; capacity building is needed |
| Action Timeline | Immediate to 6-12 months — A draft platform regulation bill is already under legislative consideration; Algeria should study DSA enforcement outcomes to inform its approach |
| Key Stakeholders | Ministry of Digital Economy, Ministry of Communication, ARPT, Algerian judiciary, National Assembly (legislative drafting), civil society organizations, Algerian tech startups |
| Decision Type | Strategic — Requires national-level policy development informed by international best practices, particularly the EU’s graduated approach |
Quick Take: Algeria is at an inflection point on platform regulation. A draft law proposed in 2025 would require major platforms to open local offices, store data locally, and remove illegal content within 24 hours — drawing directly from models like the DSA and Australia’s Online Safety Act. The EU’s graduated approach (lighter obligations for smaller platforms, heavier for VLOPs) is the most relevant template for Algeria, as it balances accountability with the reality that heavy regulation can discourage digital investment. Algeria’s practical priorities should be: (1) establishing enforceable legal pathways for content removal from global platforms that currently have no local presence, (2) ensuring the regulatory framework protects free expression and does not become a censorship tool, and (3) building institutional expertise in digital platform governance before enforcement begins.
Sources
- EU Digital Services Act — Official Text
- European Commission — DSA Fine Against X (Dec 2025)
- European Commission — TikTok and Meta Preliminary Findings (Oct 2025)
- Congress.gov — Sunset Section 230 Act (S.3546)
- Congress.gov — TAKE IT DOWN Act (S.146)
- Congress.gov — Kids Online Safety Act (S.1748)
- Gonzalez v. Google LLC, 598 U.S. ___ (2023)
- UK Online Safety Act — 2025 Enforcement Round-Up
- Australia eSafety Commissioner — Social Media Age Restrictions
- Brazil Supreme Court — Marco Civil Ruling (June 2025)
- Algeria Draft Digital Platform Regulation Bill (Oct 2025)
- Public Knowledge — Section 230 Reform Proposals in 119th Congress
- Electronic Frontier Foundation — Section 230
- CRS — Section 230: An Overview