
Age Verification Online: The Global Push to Prove You’re Old Enough for the Internet

February 24, 2026


The World Decided Children Should Not Have Unrestricted Internet Access

The political consensus arrived with remarkable speed. Between 2023 and 2025, a cascade of legislation across democracies established that online platforms must verify the age of their users — or face severe consequences. The UK’s Online Safety Act, which received Royal Assent on 26 October 2023, imposes a duty on platforms to prevent children from accessing harmful content, with Ofcom empowered to require age verification as a compliance mechanism and penalties reaching 10% of global turnover for non-compliance.

Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024, passed on 29 November 2024 and enforced from 10 December 2025, banned children under 16 from social media entirely — and the results were immediate: more than 4.7 million accounts belonging to under-16s were deactivated or restricted within weeks of enforcement. France strengthened its age verification regime in May 2024 with the SREN Law, and in October 2024 the regulator Arcom (in consultation with the CNIL) published a binding standard requiring adult content sites to implement age verification under a “double anonymity” framework: the verification provider must not learn which site the user is visiting, and the site must not learn the user’s identity.

In the United States, the Kids Online Safety Act (KOSA) passed the Senate 91-3 in July 2024 but failed to clear the House before the 118th Congress ended. It was reintroduced in May 2025 and, as of early 2026, remains pending despite bipartisan support.

The legislative wave reflects genuine public concern. A 2023 Pew Research Center survey of nearly 9,000 US adults found that 71% favor requiring people to verify their age before using social media, and 81% support requiring parental consent for minors to create accounts. The Surgeon General’s May 2023 advisory on social media and youth mental health — warning that there is insufficient evidence to conclude social media is safe for children, and citing links between heavy use and increased rates of anxiety, depression, and body image disorders among adolescents — provided the medical establishment’s endorsement. In June 2024, Surgeon General Murthy escalated further, calling for mandatory warning labels on social media platforms analogous to those on cigarettes. In the UK, the case of Molly Russell — a 14-year-old whose 2022 inquest found that harmful content on Instagram and Pinterest contributed to her death — created overwhelming political pressure that cut across party lines and accelerated passage of the Online Safety Act.

The question is no longer whether governments will mandate age verification online. They already have. The question is whether it can be implemented without creating surveillance infrastructure that fundamentally changes the relationship between individuals and the internet.


The Technical Approaches: None of Them Work Perfectly

Age verification technology falls into four broad categories, each with distinct accuracy, privacy, and usability characteristics. Understanding these categories is essential because the policy debate often treats “age verification” as a single technical problem with a single solution. It is not.

Document upload verification requires users to submit government-issued ID (passport, driver’s license) to a platform or third-party verification service. This is the most established approach — used by financial services for KYC (Know Your Customer) compliance for decades. Companies like Yoti, Jumio, and Onfido provide document verification APIs that can confirm age in seconds. The accuracy is high for valid documents, but the privacy implications are severe: users must share identity documents with commercial entities, creating databases that are targets for breach. In 2024, reporting by 404 Media revealed that AU10TIX, an Israeli identity verification firm used by TikTok, Uber, X, LinkedIn, and other major platforms, had left administrative credentials exposed for over a year — giving potential access to a logging platform containing names, birth dates, nationalities, and images of identity documents, demonstrating the real-world risk of centralized identity verification databases.
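Once a document has been parsed and its date of birth extracted, the age check itself is trivial date arithmetic. A minimal sketch — the function name and logic here are illustrative, not any vendor’s actual API:

```python
from datetime import date

def is_over(dob, threshold, today=None):
    """Return True if the person born on `dob` is at least `threshold` years old."""
    today = today or date.today()
    # Completed years: subtract one if the birthday has not yet occurred this year.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= threshold

# A user born 1 March 2008, checked on 24 February 2026, is still 17:
print(is_over(date(2008, 3, 1), 18, today=date(2026, 2, 24)))  # False
```

The hard part of document verification is everything before this step — confirming the document is genuine and belongs to the person presenting it — which is where the breach risk described above arises.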

Facial age estimation uses AI to estimate a user’s age from a selfie or video capture without requiring identity documents. Yoti’s facial age estimation technology, according to its July 2025 white paper, achieves a mean absolute error of 1.1 years for 13-17 year olds, with true positive rates above 99% for correctly identifying users as under 21. The UK’s Information Commissioner’s Office (ICO) has acknowledged facial age estimation as the most widely used age estimation approach with high levels of accuracy, while noting that much of the work in this area remains in a research and development phase. The technology also raises bias concerns — the ICO specifically flagged that systems based on biometrics such as facial structure may not perform as well for people of darker skin tones or those with medical conditions or disabilities affecting physical appearance.
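Mean absolute error, the accuracy metric quoted in Yoti’s white paper, is simply the average distance in years between estimated and true ages. A minimal sketch with made-up numbers (not Yoti’s data):

```python
def mean_absolute_error(true_ages, estimated_ages):
    """Average of |estimate - truth| across all samples, in years."""
    pairs = list(zip(true_ages, estimated_ages))
    return sum(abs(est - true) for true, est in pairs) / len(pairs)

# Illustrative values only:
truth = [13, 15, 16, 17]
est = [14, 14, 17, 17]
print(mean_absolute_error(truth, est))  # 0.75
```

Note that an aggregate MAE can mask the bias concerns the ICO flags: a system can score well on average while erring systematically for particular demographic groups, which is why per-group error rates matter as much as the headline figure.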

Digital identity wallets represent the most privacy-preserving approach. The EU’s eIDAS 2.0 regulation, adopted in March 2024, requires all 27 EU member states to offer digital identity wallets to citizens by December 2026, with regulated industries required to accept them by December 2027. The wallets include an age verification attribute: users can prove they are over a specified age without revealing their actual birthdate or any other personal data. This is a zero-knowledge proof approach — the wallet cryptographically attests to a property (age >= 18) without disclosing the underlying data. The challenge is deployment: the EU Digital Identity Wallet is not yet widely available, and no equivalent exists in most non-EU countries.

Self-declaration — checking a box that says “I am over 18” — remains the default in most jurisdictions, despite being trivially circumvented by anyone who can click a button.
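Real eIDAS wallets use asymmetric signatures and selective-disclosure credentials rather than anything this simple, but the shape of the idea can be sketched: an issuer signs only the boolean claim, and the relying party verifies that signature without ever seeing a birthdate. In this deliberately simplified sketch, a symmetric HMAC stands in for the issuer’s signature:

```python
import hashlib
import hmac
import json

# Stand-in for the issuer's signing key; real systems use asymmetric keys
# and zero-knowledge or selective-disclosure schemes.
ISSUER_KEY = b"demo-issuer-secret"

def issue_attestation(over_18):
    """Issuer signs only the boolean claim -- the birthdate never leaves the wallet."""
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def verify_attestation(att):
    """Relying party checks the signature and reads only the disclosed bit."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["tag"]):
        return False  # tampered or forged attestation
    return json.loads(att["claim"])["over_18"]

att = issue_attestation(over_18=True)
print(verify_attestation(att))  # True
```

The point of the sketch is the data flow, not the cryptography: the site receives a verifiable “over 18” bit and nothing else, which is what distinguishes this approach from document upload.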



The Privacy-vs-Protection Tradeoff

Civil liberties organizations have mounted the most substantive opposition to mandatory age verification, and their arguments deserve serious engagement rather than dismissal. The Electronic Frontier Foundation (EFF), the Open Rights Group, and the ACLU have each argued that age verification requirements are, in practice, identity verification requirements that affect all internet users — not just children.

The logic is straightforward. To verify that a 14-year-old cannot access a platform, you must verify the age of every user, including adults. Any system that requires adults to prove their age to access lawful content creates a verification infrastructure that can be repurposed for surveillance, content restriction, or user tracking. This is not a speculative concern — it is a design feature. An age verification system that works is, by definition, a system that knows something verifiable about every user. The EFF’s 2025 year-in-review analysis characterized mandatory age verification as having gone from a fringe policy experiment to a sweeping reality, with half of US states now mandating it for accessing adult content or social media — and warned that the resulting infrastructure poses severe risks to privacy, anonymity, and security for all internet users.

The counterargument from child safety advocates — represented by organizations like the National Center for Missing & Exploited Children (NCMEC), the 5Rights Foundation, and the Internet Watch Foundation — is that the status quo is unacceptable. The NCMEC CyberTipline received 36.2 million reports of suspected child sexual exploitation material in 2023, and while the 2024 figure dropped to 20.5 million reports (partly due to a methodological change allowing platforms to bundle related incidents), reports involving AI-generated exploitation material surged by over 1,300%. Meta’s own internal research (disclosed during the 2021 Facebook Files revelations) showed that Instagram made body image issues worse for one in three teenage girls. The argument that adult privacy concerns should override child safety protections carries diminishing political weight in a legislative environment where child protection has become the most effective regulatory driver in technology policy.

The tension is real and likely irresolvable through technology alone. Privacy-preserving age verification (zero-knowledge proofs, digital identity wallets) can reduce the surveillance risk but cannot eliminate it — any system that gates access based on age creates an incentive for the entity controlling verification to collect, store, or monetize the data involved.


Platform Compliance and the Emerging Criticism

Social media platforms face a compliance landscape that is fragmented, technically demanding, and commercially threatening. Meta, TikTok, Snap, X (formerly Twitter), and YouTube must implement age verification systems that satisfy regulators in the UK, Australia, the EU, and potentially the US — each with different technical standards, age thresholds, and enforcement mechanisms. Meta’s response has been to push age verification responsibility onto app stores (Apple and Google), arguing that device-level age verification is more effective and less privacy-invasive than platform-level checks. Apple introduced a “Communication Safety” feature that uses on-device machine learning to detect sensitive images in children’s Messages and other apps — turned on by default for under-18s — but has resisted becoming a general-purpose age verification gatekeeper.

The compliance costs are significant, particularly for smaller operators. While implementing age verification is relatively straightforward for Meta (which already collects extensive user data), it is potentially existential for small forums, independent content creators, and open-source platforms that operate without user accounts. Ofcom’s fee regime for the UK Online Safety Act involves levies of approximately 0.02-0.03% of qualifying worldwide revenue, but the full compliance burden — including technical implementation, legal review, and ongoing monitoring — falls disproportionately on organizations without dedicated trust and safety teams.

The most provocative criticism — gaining traction in academic and civil society circles — is that age verification is a Trojan horse for universal online identity. Once the infrastructure exists to verify age, it can be extended to verify identity, nationality, or any other attribute a government wishes to condition internet access upon. France’s Arcom standard for adult content sites mandates “double anonymity” — neither the site nor the verification provider should know both the user’s identity and the site being accessed — but critics note that the trusted third party issuing the attestation still represents a point of potential compromise. The UK’s Ofcom has explicitly stated that age verification for the Online Safety Act could involve a range of methods including identity verification. The line between proving you are 18 and proving who you are is thinner than any government has been willing to publicly acknowledge.
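The “double anonymity” split can be illustrated with a hypothetical single-use-token flow. This is a sketch of the data separation only — the class and method names are invented, and a real deployment would route redemption through a blind channel so the provider cannot see which site redeems a token:

```python
import secrets

class VerificationProvider:
    """Sees the user's identity document; should never learn which site is visited."""
    def __init__(self):
        self._valid_tokens = set()

    def verify_and_issue(self, user_is_over_18):
        if not user_is_over_18:
            return None
        token = secrets.token_urlsafe(16)  # opaque, single-use, carries no identity
        self._valid_tokens.add(token)
        return token

    def redeem(self, token):
        if token in self._valid_tokens:
            self._valid_tokens.discard(token)  # single use prevents token sharing
            return True
        return False

class AdultSite:
    """Sees only an opaque token; never learns the user's identity."""
    def __init__(self, provider):
        self.provider = provider

    def admit(self, token):
        return self.provider.redeem(token)

provider = VerificationProvider()
site = AdultSite(provider)
token = provider.verify_and_issue(user_is_over_18=True)
print(site.admit(token))  # True
print(site.admit(token))  # False -- already spent
```

Even in this toy version, the critics’ point is visible: the provider holds the master set of valid tokens, making it exactly the “point of potential compromise” the Arcom standard’s opponents describe.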



🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: Medium — Algeria lacks specific age verification legislation, but child online safety is a growing concern as youth internet adoption accelerates.
Infrastructure Ready? No — Algeria has no digital identity wallet, limited biometric infrastructure, and no established age verification providers.
Skills Available? No — child online safety expertise exists in civil society, but technical age verification implementation capacity is absent.
Action Timeline: 12-24 months — monitoring international approaches before considering domestic regulation is prudent.
Key Stakeholders: Ministry of Post and Telecommunications, ARPCE, Ministry of Education, child protection NGOs, platform companies operating in Algeria.
Decision Type: Monitor.

Quick Take: The global push for age verification online is the most significant internet regulation trend since GDPR. Algeria should study international implementations — particularly the EU’s eIDAS 2.0 digital identity wallets and France’s “double anonymity” framework — to avoid the privacy pitfalls others are encountering before drafting domestic policy.



