
Child Online Safety Laws: A Global Crackdown on Big Tech in 2026

February 23, 2026

[Image: Legislative hearing room with digital display showing social media shield icon and lawmakers deliberating]

The Generation That Grew Up Online Is Paying the Price

The evidence has been accumulating for over a decade. Adolescent mental health has deteriorated significantly since the widespread adoption of smartphones and social media. In the US, depression among young people rose roughly 52% between 2005 and 2017, according to research published in the Journal of Abnormal Psychology. Suicide rates among 10-24 year-olds increased substantially over the same period. Anxiety disorders among adolescents reached what the US Surgeon General described as an epidemic — with up to 95% of teens aged 13-17 reporting social media use, and those spending more than three hours daily facing double the risk of depression and anxiety symptoms.

Correlation is not causation, and the debate over social media’s role in the adolescent mental health crisis is nuanced. But the accumulating evidence — longitudinal studies, internal platform research (leaked Meta documents showed the company knew Instagram was harming teen girls’ mental health), and neurological research on dopamine-driven feedback loops — has shifted the political consensus. Across the political spectrum, in every major democracy, legislators have concluded that the status quo — platforms designed to maximize engagement with no obligation to protect young users — is unacceptable.

The result is a global wave of child online safety legislation unlike anything in the history of internet regulation. By early 2026, the UK, EU, Australia, France, Brazil, and at least 17 US states have enacted laws imposing specific obligations on platforms regarding minors. Federal US legislation is advancing. And the implications for platform design, business models, and the broader internet are profound.


United Kingdom: The Online Safety Act in Enforcement Mode

The UK’s Online Safety Act (2023), enforced by Ofcom, is the most comprehensive child online safety law now actively being enforced. After receiving royal assent in October 2023, implementation began in phases — the first phase went into effect on March 17, 2025, and the Protection of Children Codes of Practice came into force on July 25, 2025.

Duty of care: Platforms have a legal duty to protect children from harmful content — including content that is not illegal for adults but is harmful to children. Categories include self-harm and suicide content, eating disorder content, pornography, content promoting violence, and cyberbullying.

Age assurance: Platforms likely to be accessed by children must implement “proportionate” age assurance measures. Ofcom has issued guidance on acceptable methods: age estimation (using facial analysis to estimate whether a user is a child), age verification (requiring identity documents or credit card confirmation), and account-level measures (parental controls, default safety settings for accounts identified as belonging to minors).

Algorithmic safety for children: Platforms must ensure their recommendation algorithms do not promote harmful content to children. This may require maintaining separate recommendation models for users identified as minors — filtering or deprioritizing harmful content categories.
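In practice, a minor-aware recommendation pipeline combines hard filtering of restricted categories with deprioritization of borderline material. The sketch below is a minimal, hypothetical illustration of that pattern; the category names, scores, and penalty factor are assumptions for the example, not any platform's actual system.

```python
# Hypothetical sketch: age-aware feed ranking that removes restricted
# categories entirely for minor accounts and deprioritizes borderline
# content instead of surfacing it. All names and weights are illustrative.

RESTRICTED_FOR_MINORS = {"self_harm", "eating_disorder", "violence", "adult"}
BORDERLINE_PENALTY = 0.2  # multiplier applied to borderline content scores

def rank_feed(candidates, is_minor):
    """candidates: list of dicts with 'id', 'engagement_score', 'categories'."""
    ranked = []
    for item in candidates:
        cats = set(item["categories"])
        score = item["engagement_score"]
        if is_minor:
            if cats & RESTRICTED_FOR_MINORS:
                continue  # filtered entirely for accounts flagged as minors
            if "borderline" in cats:
                score *= BORDERLINE_PENALTY  # deprioritized, not removed
        ranked.append((score, item["id"]))
    ranked.sort(reverse=True)
    return [item_id for _, item_id in ranked]
```

The key design point is that the same candidate pool feeds two different ranking policies, so compliance hinges on correctly flagging which accounts belong to minors in the first place.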

Enforcement teeth: Ofcom can fine platforms up to 18 million GBP or 10% of qualifying worldwide revenue, whichever is greater. Criminal liability applies to senior executives who fail to comply with information requests. And Ofcom has the power to require platforms to use “accredited technology” to detect and remove child sexual abuse material (CSAM) — a provision that has raised concerns about end-to-end encryption.

Already biting: By late 2025, Ofcom had launched five enforcement programmes and opened 21 investigations. In August 2025, it fined 4chan 20,000 GBP for non-compliance. In November, it fined a nudification site 50,000 GBP for inadequate age verification. In December, it fined AVS Group 1 million GBP for lack of age checks. Ofcom’s CEO has stated that large platforms popular with children are an enforcement focus for 2026, with expectations of significant fines for failures to protect children online.


Australia: The Age Ban Takes Effect

Australia took the most aggressive approach globally: in November 2024, parliament passed the Online Safety Amendment (Social Media Minimum Age) Act, establishing a minimum age of 16 for social media use. The age restrictions came into effect on December 10, 2025. Platforms must take “reasonable steps” to prevent users under 16 from creating or maintaining accounts. The obligation is on the platform, not the child or parent.

Covered platforms: Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X, YouTube, Kick, and Reddit are classified as age-restricted platforms. Messaging apps, online gaming, professional networking, education, and health support services are excluded.

Penalties: Platforms that fail to take reasonable steps face fines of up to AUD 49.5 million. The eSafety Commissioner is responsible for specifying what constitutes “reasonable steps” and taking a proportionate, risk-based approach to compliance — initially focusing on the largest platforms.

This is the first outright age ban on social media by a major democracy. The law faces significant implementation challenges:

Age verification technology: No current technology can reliably verify a user’s age without raising privacy concerns. Options include facial age estimation (AI that estimates age from a selfie — criticized for accuracy issues, bias, and surveillance implications), government ID verification (effective but creates a database linking identities to platform accounts), and device-level age signals (using device settings or mobile operator data).

Circumvention: Determined teenagers will use VPNs, false birthdates, or alternative platforms. The law’s effectiveness depends on enforcement against platforms, not individual users.

Privacy paradox: Verifying that a user is over 16 requires collecting personal information — creating new privacy risks in the name of child protection.

Despite these challenges, Australia’s ban has influenced global debate. France has followed with its own under-15 ban, and polls show majority public support in Australia and similar levels of support in the US and Europe for age restrictions on social media.


France: From Digital Majority to Outright Ban

France has been among the most aggressive European actors on child online safety, and in January 2026 it escalated further. The French National Assembly passed a bill banning social media use for children under 15, by a vote of 130 to 21. Backed by President Macron’s administration, the bill was pending Senate approval as of late January 2026, with an implementation target of September 2026.

Scope: TikTok, Instagram, Roblox, Fortnite’s chat features, WhatsApp, Telegram, and adult sites all fall under the new rules. Online encyclopedias and educational platforms are excluded. The legislation also includes a ban on mobile phones in high schools.

Age verification approach: France’s proposed model uses “double anonymity” verification through third parties — specialized firms, mobile operators, or an EU Commission app involving ID scans and facial recognition. The system would retain only age data, not full identity details, attempting to balance verification effectiveness with privacy protection.

Existing measures: France had already enacted laws requiring age verification for adult content sites (enforced by ARCOM, the audiovisual regulator, with site-blocking powers) and establishing a digital majority age of 15 for social media — requiring parental consent for younger users. The new ban goes further by placing the compliance obligation directly on platforms.

France’s approach has become a reference model for other EU member states considering similar measures.



United States: KOSA, COPPA, and a Patchwork of State Laws

The US has pursued multiple legislative tracks simultaneously — at both federal and state levels.

Kids Online Safety Act (KOSA) — Still Advancing

KOSA passed the US Senate in July 2024 with overwhelming bipartisan support (91-3 vote), but it failed to advance in the House of Representatives before the 118th Congress expired. In May 2025, the bill was reintroduced in the 119th Congress with modifications. In December 2025, the House Energy and Commerce Subcommittee advanced KOSA (13-10 vote) along with 17 other child online safety bills.

The current version requires platforms to:

  • Implement reasonable policies to prevent specific harms to known minors, including promotion of suicide, eating disorders, substance abuse, sexual exploitation, and bullying
  • Provide minors with options to protect their information, disable addictive product features (autoplay, push notifications, algorithmic recommendations), and opt out of personalized content
  • Enable the strongest privacy settings by default for users identified as minors
  • Provide parents with tools to supervise and manage their children’s platform experience

Notably, the 2025 House version drops the original “duty of care” language and applies to a narrower list of harms compared to the Senate-passed version. The FTC would enforce the law, and state attorneys general could bring enforcement actions. As of February 2026, KOSA has not yet passed the full House or been signed into law.

COPPA: Rule Update and Legislative Expansion

Two parallel tracks are updating children’s privacy protections:

FTC COPPA Rule Amendments (already finalized): The FTC published final amendments to the COPPA Rule on April 22, 2025, effective June 23, 2025, with a compliance deadline of April 22, 2026. Key changes include expanded definitions of “personal information” (now covering biometric identifiers, government-issued identifiers, and mobile telephone numbers), enhanced parental consent requirements, and stricter data retention and security obligations.

COPPA 2.0 legislation (pending): The Children and Teens’ Online Privacy Protection Act (S.836, 119th Congress) would raise the age threshold from 13 to 17 for enhanced protections, ban targeted advertising to minors, require opt-in consent for data collection from minors, and create a “digital eraser” right allowing minors to delete their data. This legislation remains pending in Congress.

State-Level Action: A Patchwork Emerges

The most dramatic US action has come from individual states. At least 17 states have enacted laws addressing minors’ access to or treatment on social media, with over 300 pieces of legislation pending across 45 states in 2025:

  • Virginia (effective January 1, 2026): Under-16 users limited to one hour per day per social media application without parental consent
  • Florida: Requires platforms to verify ages, obtain parental consent for under 18, protect minors’ data, and limit exposure to harmful content
  • California (SB 976): Regulates algorithmic “addictive feeds” for minors, requires age-determination measures by January 1, 2027
  • Nebraska (effective July 1, 2026): Requires age verification and parental consent for under 18; minors can opt out of features like infinite scroll

However, the state-level approach faces legal headwinds. Laws in Arkansas and Ohio have been permanently blocked by courts, while California, Florida, and Georgia measures are temporarily halted pending litigation — primarily on First Amendment grounds.


Brazil: The Digital ECA

Brazil joined the global wave in September 2025, enacting the Digital Statute of the Child and Adolescent (“Digital ECA”) — one of the most comprehensive child protection frameworks globally. Taking effect in March 2026, it establishes:

  • Mandatory age verification: “Effective and reliable” age verification is required — simple self-declaration (entering a birthdate) is explicitly prohibited
  • Parental consent and account linking: Mandatory for all users under 16
  • Platform obligations: Pornographic websites and social networks must block underage access and detect child accounts
  • Enforcement: Violations can result in fines up to BRL 50 million per incident, suspension, or prohibition of activities
  • Broad scope: Applies to any technology product or service targeted at or likely accessed by children (under 12) or adolescents (12-18)

The EU: Digital Services Act + GDPR + New Guidelines

The EU addresses child online safety through multiple overlapping instruments, and 2025 brought significant new guidance:

GDPR Article 8: Member states can set the age of digital consent between 13 and 16 (most have set it at 16). Children below this age require parental consent for data processing.

Digital Services Act (DSA): Very Large Online Platforms (VLOPs) must assess and mitigate systemic risks to minors’ wellbeing. Targeted advertising based on profiling of minors is prohibited. Platforms must provide clear and age-appropriate terms of service for minor users.

July 2025 DSA Guidelines on Protection of Minors: On July 14, 2025, the European Commission published detailed guidelines under DSA Article 28, applicable to all online platforms accessible to minors (except micro and small enterprises). While not legally binding, they provide a comprehensive framework addressing:

  • Age assurance (self-declaration deemed unreliable; age estimation and verification recommended)
  • Protection against grooming, harmful content, addictive behaviours, and cyberbullying
  • Bans on manipulative design practices (countdown timers, loot boxes in games)
  • Content moderation trained to identify threats like grooming
  • The upcoming EU Digital Identity Wallet (expected 2026) as an age verification tool

The Platform Response

Major platforms have responded to the regulatory wave with increasing seriousness:

Meta: In October 2025, Instagram revamped its Teen Accounts with a PG-13 content framework. Teens are now blocked from following accounts that regularly share age-inappropriate content, and search terms like “alcohol” or “gore” are filtered. The system uses PG-13 movie rating standards to determine content appropriateness, automatically hiding posts with strong language, risky stunts, and potentially harmful content. Parents can opt into an even stricter “Limited Content” mode. AI-powered conversation restrictions are planned for 2026. The rollout began in the US, UK, Australia, and Canada, with broader global deployment planned for 2026. However, early data suggests the measures are imperfect: a major report found that 60% of teens aged 13-15 still reported encountering unsafe content despite Teen Account protections.

TikTok: Maintains a default 60-minute daily screen time limit for users under 18 (overridable with a passcode), disabled push notifications after 9 PM for users under 16, restricted direct messaging for under 16, and private accounts by default for users aged 13-15. TikTok has also deployed age estimation technology.
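The age-tiered defaults described above amount to a simple policy function applied at account creation. The sketch below is a hypothetical illustration loosely modeled on those restrictions; the thresholds and field names are assumptions for the example, not TikTok's actual configuration.

```python
# Hypothetical sketch of age-tiered default account settings, loosely
# modeled on the restrictions described in the text. Field names and
# thresholds are illustrative assumptions.

def default_settings(age):
    """Return default account settings for a new user of the given age."""
    s = {
        "private_account": False,
        "direct_messages_enabled": True,
        "daily_limit_minutes": None,     # None = no screen time limit
        "push_quiet_hours_start": None,  # e.g. "21:00"
    }
    if age < 18:
        s["daily_limit_minutes"] = 60    # overridable with a passcode
    if age < 16:
        s["direct_messages_enabled"] = False
        s["push_quiet_hours_start"] = "21:00"
        s["private_account"] = True      # default-private for younger teens
    return s
```

Because the safer configuration is the default rather than an opt-in, the burden of weakening protections falls on the user (or parent) instead of the other way around, which is the design principle most of the new laws mandate.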

YouTube: YouTube Kids exists as a separate, curated app for younger children. The main YouTube platform restricts features for users identified as minors (no autoplay at bedtime, mandatory break reminders).

Snapchat: Introduced parental controls (Family Center) and restricted features for users under 16.

The fundamental tension: Platforms’ business models are built on engagement — keeping users on the platform as long as possible, returning as often as possible. The most effective engagement mechanisms (infinite scroll, autoplay, push notifications, social comparison, algorithmic content recommendations) are precisely the features that child safety advocates argue are harmful to developing minds. Genuine compliance with child safety laws may require platforms to make their products less engaging for young users — which directly conflicts with their revenue incentives. The gap between Meta’s 95% parent approval rating for its safety features and the 60% of teens still seeing harmful content illustrates this tension between stated intentions and measurable outcomes.


The Technical Challenge: Age Verification at Scale

Every child safety law ultimately depends on one capability: accurately identifying which users are minors. This is harder than it sounds:

Self-declaration (current standard): Users enter a birthdate during registration. Trivially circumvented — every teenager knows to enter a date making them 18 or older. Brazil’s Digital ECA has explicitly banned this approach, and the EU’s DSA guidelines deem it unreliable.

Government ID verification: Effective but raises privacy concerns (platforms accumulating government ID data), accessibility concerns (not all children have government ID), and equity concerns (excluding users who cannot provide ID).

Facial age estimation: AI models that estimate age from a selfie photo or video. Companies like Yoti offer age estimation independently tested by NIST (the US National Institute of Standards and Technology), achieving a 99.3% True Positive Rate for 13-17 year-olds with no discernible bias across genders or skin tones. Crucially, modern age estimation systems delete the selfie immediately after processing and store no personal identifiers — making them distinct from facial recognition technology. Yoti’s revenue grew 62% in 2025, reflecting surging demand. Privacy advocates nonetheless remain concerned about normalizing face scans to access online services.

Device-level signals: Using information from the device (mobile operator records, device settings, linked accounts) to estimate age without additional user action. Apple’s Communication Safety features use on-device machine learning to detect nudity in messages sent to or from children’s accounts.

Third-party age assurance services: Independent services that verify age and provide a token or credential to the platform without sharing the underlying personal data. France’s “double anonymity” model and the EU’s upcoming Digital Identity Wallet both follow this privacy-preserving approach, which requires building an entirely new identity infrastructure.
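The core idea behind these token-based schemes can be sketched in a few lines: a trusted verifier attests only to an age threshold, and the platform checks the attestation without ever seeing identity data. This is a simplified illustration, not any country's actual protocol; a real deployment would use public-key signatures so platforms cannot mint tokens themselves, whereas HMAC (a shared secret) is used here only to keep the example within the standard library. All function names and the demo key are hypothetical.

```python
# Minimal sketch of a privacy-preserving age token. The verifier signs a
# claim carrying only an age threshold and an expiry; the platform checks
# the signature and the claim, learning nothing about the user's identity.
import hmac
import hashlib
import json
import time

VERIFIER_KEY = b"demo-key-not-for-production"  # illustrative shared secret

def issue_age_token(over_age, ttl_seconds=3600):
    """Verifier side: sign an attestation carrying only the age claim."""
    claim = {"over": over_age, "exp": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def check_age_token(payload, tag, required_age):
    """Platform side: verify the tag, then the claim; nothing else is revealed."""
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    claim = json.loads(payload)
    return claim["exp"] > time.time() and claim["over"] >= required_age
```

The "double anonymity" property follows from the split: the verifier sees identity but not which platform the token is used on, and the platform sees the token but not the identity behind it.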

No solution is perfect. The ideal system would accurately verify age, protect user privacy, be accessible to all users, and be resistant to circumvention. No current technology achieves all four goals simultaneously — but the regulatory consensus has shifted from “wait for perfect technology” to “deploy the best available tools now.”



Decision Radar (Algeria Lens)

Relevance for Algeria: High — Algerian children use the same global platforms (TikTok, Instagram, YouTube, Snapchat); Algeria adopted the African Union Child Online Safety and Empowerment Policy in May 2024 but lacks comprehensive domestic legislation; a survey of 1,000 Algerian children aged 8-18 found 70% owned mobile phones and 41% used them to access the internet

Infrastructure Ready? Partial — Algeria established the National Authority for ICT-related crimes (Decree No. 21-439, 2021) and the personal data protection authority ANPDP (2023 law), but lacks a specialized child online safety regulator comparable to the UK’s Ofcom or Australia’s eSafety Commissioner

Skills Available? Limited — Few Algerian legal or technical professionals specialize in online child safety; civil society organizations addressing the issue are emerging but under-resourced; parental control services were available to only 60% of parents surveyed

Action Timeline: 6-12 months for an initial policy framework; 18-24 months for enforcement capability

Key Stakeholders: Ministry of National Education, Ministry of Communication, Ministry of Family, ARPT, ANPDP, Algerian parents’ associations, child protection organizations, internet service providers

Decision Type: Legislative-Educational — Requires both a regulatory framework and widespread digital literacy programs in schools and communities

Quick Take: Algeria benefits indirectly from global child safety regulation — when Meta implements Teen Accounts or TikTok restricts features for minors, Algerian children receive the same protections as children in regulated markets. However, relying solely on platform goodwill is insufficient. Algeria should develop a national framework for child online safety — drawing on the UK Online Safety Act and French models — and integrate digital citizenship into the school curriculum. The most immediate and impactful intervention is educational: digital literacy programs in Algerian schools teaching children and parents about online risks, privacy settings, and healthy technology habits, as multiple European countries have done.

