The Architecture of the Online Safety Act
The UK’s Online Safety Act 2023 is a regulatory framework built on a tiered compliance architecture. Understanding that architecture is essential before the July 2026 categorisation register reshapes obligations for platforms operating in — or attracting users from — the UK.
The Act distinguishes between “user-to-user services” (platforms allowing user-generated content or interaction: social media, forums, dating apps, file-sharing services, messaging platforms) and “search services” (search engines). All regulated services — defined by having links to the UK, meaning UK users — must comply with baseline duties on illegal content and, for services likely to be accessed by children, child safety requirements.
The categorisation register introduces a second compliance layer above this baseline. Services that meet specific threshold conditions — determined by user numbers, functionalities, and risk levels — are placed into one of three categories. Category 1 carries the heaviest obligations; Category 2A (large search services) and Category 2B (other large user-to-user services that fall below the Category 1 thresholds) carry a smaller, defined set of additional duties. Ofcom sets the threshold conditions; services meeting them cannot opt out of categorisation.
The register has been delayed from its originally planned 2025 publication. The delay resulted from a legal challenge by the Wikimedia Foundation contesting categorisation regulations; Ofcom ran a representations process in early 2026 allowing services meeting threshold conditions to comment on provisional categorisation decisions before the register was finalised. Publication is confirmed for July 2026.
What Category 1 Means — and What It Costs to Miss
Category 1 is the most consequential classification. Services placed in Category 1 face mandatory duties in seven domains that do not apply to non-categorised services:
Transparency reporting: Category 1 services must publish detailed, Ofcom-specified transparency reports covering content moderation volume, enforcement actions, appeals outcomes, and algorithmic system descriptions. This is not voluntary disclosure — the format, timing, and scope are specified by Ofcom.
User empowerment features: Category 1 services must provide users with tools to control their own content experience — including the ability to filter or limit content from unverified accounts and to reduce exposure to certain content types. This obligation is specific: tools must be available, prominently accessible, and functional.
User identity verification: Perhaps the most significant structural obligation for platforms whose business models depend on anonymous or pseudonymous user interaction. Category 1 services must offer — though not necessarily mandate — an identity verification option for users. Users can choose whether to verify; the platform must make the option available and display verified status to other users who opt in.
Journalist and democracy protections: Category 1 services must have specific protections for journalistic content and democratic political speech, including appeals mechanisms that prevent algorithmic suppression of content from regulated news publishers and individual journalists.
Deceased child user disclosure: All categorised services (Category 1 and 2A/2B) must disclose, on request from parents or guardians, information about how a deceased child user used the platform — a provision that emerged from evidence about teen self-harm and mental health outcomes linked to platform use.
Fraudulent advertising prevention: Category 1 services that carry paid advertising must implement measures to prevent fraudulent ads, including scam and phishing content.
Enhanced risk assessments: Categorised services face more frequent, detailed risk assessment obligations compared to baseline services, with Ofcom-specified format requirements.
The penalty structure for non-compliance: fines of up to £18 million or 10% of qualifying worldwide annual revenue, whichever is greater; in the most serious cases, Ofcom can apply to the courts for orders blocking a service from operating in the UK. For a platform with £1 billion in global revenue, the 10% exposure is £100 million — a figure that concentrates compliance attention.
The Enforcement Track Record Before the Register
A common misreading of the Online Safety Act timeline is that enforcement begins when the categorisation register is published. It does not. Baseline duties on illegal content have applied since 2025, and Ofcom has been actively enforcing since that point.
As of early 2026, Ofcom has launched more than 90 investigations into platforms' Online Safety Act compliance. The active enforcement actions span a range of services, from large social media platforms to specialist content sites. Ofcom has issued fines totalling over £1 million — including £1 million against an adult website operator for inadequate age verification and £50,000 against the same operator for failing to respond to information requests. An investigation into X (formerly Twitter) is active, concerning AI-generated sexual deepfakes involving children.
Ofcom’s enforcement priority areas for 2026 are documented in its industry bulletins: child protection from sexual abuse and grooming online; effective age verification for adult content; removal of terrorist and illegal hate material; and safety measures specifically addressing online harms targeting women and girls. Facebook, Instagram, TikTok, YouTube, and Snapchat have been named explicitly in Ofcom’s published reviews as platforms required to strengthen child protection measures.
The July 2026 register publication will expand Ofcom’s enforcement toolkit — but the authority to investigate and fine has been active for over a year before that date.
A Four-Pillar Compliance Framework for Platform Operators
Platforms that may be categorised — or that must comply with baseline duties that are already in force — should structure compliance around four pillars.
1. Conduct a Scope Assessment and Categorisation Self-Analysis
Before Ofcom sends provisional categorisation notices, platforms should conduct an internal assessment of whether they meet Category 1, 2A, or 2B threshold conditions. The thresholds include a combination of user number metrics and functionality criteria. Ofcom’s published threshold regulations — available through the Online Safety Act legislation portal at legislation.gov.uk — specify the exact conditions.
The scope assessment also determines whether the platform is regulated at all. Services with no links to the UK, services providing only business-to-business communications, and services regulated under other specialist UK legislation are excluded. Getting this boundary question wrong — either over-complying expensively or under-complying riskily — is the first avoidable error.
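To make the self-analysis concrete, the sketch below structures both questions (is the service in scope at all, and if so, which tier is plausible?) as rough code. The user-number thresholds and functionality flags are illustrative placeholders rather than the figures in the published regulations, and the output is a planning signal, not a legal determination.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    """Facts a platform gathers about itself before any provisional notice arrives."""
    has_uk_links: bool               # significant UK users, UK targeting, or UK revenue
    is_b2b_only: bool                # business-to-business communications only
    is_search_service: bool
    monthly_uk_users: int
    has_content_recommender: bool    # algorithmic recommendation of user content
    allows_content_forwarding: bool  # forwarding / re-sharing of user content

# Placeholder thresholds for illustration only; the real figures are set in the
# secondary legislation published at legislation.gov.uk.
CATEGORY_1_USER_THRESHOLD = 30_000_000
CATEGORY_2B_USER_THRESHOLD = 3_000_000

def provisional_tier(profile: ServiceProfile) -> str:
    """Rough internal read on the likely categorisation tier."""
    if not profile.has_uk_links or profile.is_b2b_only:
        return "likely out of scope"       # no UK link or an excluded service type
    if profile.is_search_service:
        return "possible Category 2A"      # large search services
    if (profile.monthly_uk_users >= CATEGORY_1_USER_THRESHOLD
            and profile.has_content_recommender
            and profile.allows_content_forwarding):
        return "possible Category 1"
    if profile.monthly_uk_users >= CATEGORY_2B_USER_THRESHOLD:
        return "possible Category 2B"
    return "baseline duties only"
```

A platform would feed its own figures into ServiceProfile and treat any "possible Category 1" result as the trigger for the deeper mapping in pillar 2.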
2. Map Current Practices Against Category-Specific Duties
Once scope and probable categorisation tier are determined, map existing platform practices against each mandatory duty. For Category 1 candidates, this means: does a user identity verification option exist? Does the transparency reporting infrastructure produce the data at the granularity Ofcom requires? Are user empowerment tools accessible to the full UK user population?
Where gaps exist, prioritise based on enforcement risk. Age verification and child protection measures are Ofcom's documented enforcement priorities — platforms where children can foreseeably access harmful content face the highest near-term investigation risk regardless of categorisation tier. Age verification failures on adult content platforms have already generated £1 million in fines.
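One lightweight way to run that mapping is a gap table ordered by enforcement risk, as in the sketch below. The duty names echo the Category 1 domains described earlier; the risk weights and the "currently met" flags are assumptions a platform would replace with its own findings.

```python
# Illustrative gap map for a Category 1 candidate: (duty, currently met?, risk weight).
# Risk weights are assumptions reflecting Ofcom's published enforcement priorities.
duty_gaps = [
    ("age assurance / child protection",      False, 5),
    ("user identity verification option",     False, 3),
    ("transparency reporting infrastructure", False, 3),
    ("user empowerment tools",                True,  2),
    ("fraudulent advertising controls",       True,  2),
    ("journalistic content protections",      True,  1),
]

# Remediate the open gaps in descending enforcement-risk order.
remediation_queue = sorted(
    (item for item in duty_gaps if not item[1]),
    key=lambda item: item[2],
    reverse=True,
)
for duty, _, weight in remediation_queue:
    print(f"risk {weight}: close gap on {duty}")
```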
3. Build the Documentation Infrastructure Ofcom Requires
Online Safety Act compliance is not just operational — it is documentary. Ofcom can request that platforms produce risk assessments, content moderation records, algorithmic system descriptions, and transparency report data. Platforms that cannot produce this documentation on request — either because it was never created or was deleted — face both an evidential disadvantage in investigations and potential aggravation of any penalties.
The documentation standard also carries a named-accountability requirement: a senior responsible officer (SRO) — a named individual at director level — must sign off on risk assessments. This creates personal accountability at board level for Online Safety Act compliance status. Platforms should identify their SRO, ensure that officer has visibility of compliance gaps, and establish a reporting cadence that keeps compliance status current.
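As an illustration of what that documentation discipline can look like in practice, the sketch below models a risk-assessment record with an explicit SRO sign-off step. The field names are assumptions for the purpose of illustration, not an Ofcom-prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAssessmentRecord:
    """Illustrative risk-assessment record; field names are not an Ofcom format."""
    service_name: str
    assessment_date: date
    harms_assessed: list[str]
    mitigations: dict[str, str]      # harm -> mitigation summary
    sro_name: str = ""               # senior responsible officer (director level)
    sro_signed_off: bool = False
    sign_off_date: date | None = None

    def sign_off(self, sro_name: str) -> None:
        """Record the SRO sign-off that creates director-level accountability."""
        self.sro_name = sro_name
        self.sro_signed_off = True
        self.sign_off_date = date.today()
```

Whatever form the record takes, the operative point is the same: the assessment, its sign-off, and its retention must all be producible when Ofcom asks.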
4. Prepare the Representations Process Response
For platforms receiving provisional categorisation notices from Ofcom in early 2026, the representations process is the last opportunity to contest or clarify the categorisation before the register is published. Representations that successfully demonstrate a platform does not meet threshold conditions, or that specific functionality characteristics place it in a lower category, can meaningfully reduce the compliance burden.
Representations must be factual, specific, and documented. Generic arguments that categorisation is burdensome or unfair are not the standard; the standard is whether the platform meets the legal threshold conditions as Ofcom has interpreted them. Platforms should obtain specialist UK regulatory legal advice before submitting representations if the difference between Category 1 and 2B compliance obligations is material to their business model.
What the DSA Comparison Reveals
UK media and tech policy commentary frequently positions the Online Safety Act as the UK’s counterpart to the EU’s Digital Services Act. The comparison is partially accurate but obscures important differences that affect multinational compliance planning.
The DSA’s categorisation equivalent — the Very Large Online Platform (VLOP) designation — applies to platforms with over 45 million EU active users, triggering a specific set of enhanced obligations including mandatory algorithmic risk audits and researcher data access. VLOPs are designated by the European Commission and subject to direct Commission enforcement.
The UK Online Safety Act’s categorisation applies to platforms meeting Ofcom’s threshold conditions for UK user bases and UK-linked risk profiles. A platform that is a VLOP under the DSA is likely to be Category 1 under the OSA — but the obligations are not identical, and the enforcement authority is different (Ofcom in the UK, the Commission or national Digital Services Coordinators in the EU).
For global platforms managing both DSA and OSA compliance, the practical implication is that neither framework's compliance output satisfies the other. A DSA algorithmic risk audit does not substitute for an Ofcom transparency report. A DSA user empowerment feature does not automatically satisfy the OSA's user identity verification requirements. The frameworks must be managed in parallel, not sequentially.
Where Platforms Should Focus Before July 2026
The period between now and the July 2026 register publication is the last window in which platforms can conduct gap analysis and build compliance infrastructure without operating under active categorisation-specific obligations. After the register is published, the duties activate immediately for categorised services — there is no grace period equivalent to the DSA’s VLOP transition timeline.
Age verification is the highest-priority area: it is both a baseline duty and a categorisation-specific enhanced duty, and it is Ofcom’s demonstrated enforcement priority. Platforms whose UK users include minors who could access harmful content need functioning age assurance systems — not just age-gating by date of birth entry, which Ofcom has indicated does not meet the “highly effective” standard — before July 2026.
Transparency report infrastructure is the second priority: building the data collection and reporting systems that Ofcom requires takes time. Platforms should begin configuring reporting infrastructure against Ofcom’s published transparency report guidance, even if their categorisation tier is not yet confirmed.
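A minimal sketch of that data collection layer is below. The metrics mirror the domains named in the transparency duty (moderation volume, enforcement actions, appeals outcomes); the log format, field names, and reporting period are assumptions, and the authoritative structure is whatever Ofcom's guidance specifies.

```python
from collections import Counter
from datetime import date

def build_transparency_summary(moderation_log: list[dict]) -> dict:
    """Aggregate a moderation log into headline transparency-report figures.

    Each log entry is assumed to look like:
    {"action": "removal" | "restriction" | "label", "appealed": bool, "appeal_upheld": bool}
    """
    actions = Counter(event["action"] for event in moderation_log)
    appeals = [event for event in moderation_log if event["appealed"]]
    return {
        "reporting_period_end": date.today().isoformat(),
        "content_actioned_total": len(moderation_log),
        "actions_by_type": dict(actions),
        "appeals_received": len(appeals),
        "appeals_upheld": sum(event["appeal_upheld"] for event in appeals),
    }
```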
Frequently Asked Questions
Does the UK Online Safety Act apply to non-UK companies?
Yes. The Act applies to any service that has a “link to the UK” — defined broadly as services with a significant number of UK users, services designed for or likely to be accessed by UK users, or services that generate revenue from UK users. The geographic location of the company’s headquarters is irrelevant: a platform headquartered in the US or anywhere else outside the UK falls within scope if it has significant UK user engagement. Non-compliance is still enforceable: Ofcom can apply to UK courts for access restriction orders (blocking) against non-compliant non-UK services.
How does Ofcom determine which platforms are placed in Category 1 versus Category 2B?
Ofcom sets threshold conditions through secondary legislation — regulations that specify quantitative and qualitative criteria for each category. The conditions combine a user number threshold (the specific numbers are set in the regulations, linked to UK user volumes) with functionality criteria (whether the service allows anonymous or pseudonymous interaction, the type of content hosted, the risk profile of likely users). Platforms that believe they are near a threshold boundary can make representations to Ofcom during the early 2026 representations process, which closes before the July 2026 register publication.
What does “highly effective” age assurance mean under the Online Safety Act?
Ofcom has published guidance specifying that “highly effective” age assurance methods include: photo ID matching (comparing a submitted ID document against a live selfie), facial age estimation using certified algorithms, Open Banking (using financial institution age data), digital identity services from certified providers, and mobile network operator age checks. Methods that are NOT considered highly effective include: self-declaration of birth date, checkbox confirmations, and soft checks that allow users to simply state they are adults. Platforms using self-declaration-based age gates are not compliant with the Online Safety Act’s child safety requirements.
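For teams encoding this guidance into platform configuration, a minimal sketch follows. It simply restates the classification above as a lookup; the method keys are illustrative names rather than an Ofcom taxonomy.

```python
# Which age assurance methods meet the "highly effective" bar, per the guidance
# summarised above; the key names are illustrative, not an official taxonomy.
HIGHLY_EFFECTIVE_AGE_ASSURANCE = {
    "photo_id_matching": True,         # submitted ID checked against a live selfie
    "facial_age_estimation": True,     # certified estimation algorithms
    "open_banking": True,              # age data from a financial institution
    "digital_identity_service": True,  # certified digital ID providers
    "mobile_network_operator_check": True,
    "self_declared_birth_date": False, # self-declaration does not qualify
    "checkbox_confirmation": False,
}

def gate_is_compliant(method: str) -> bool:
    """True only if the configured age gate meets the 'highly effective' standard."""
    return HIGHLY_EFFECTIVE_AGE_ASSURANCE.get(method, False)
```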
—
Sources & Further Reading
- Online Safety Act: Categorisation Register Pushed to July 2026 — techUK
- Ofcom and the Online Safety Act in 2026 — Burges Salmon
- Online Safety Act: 2025 Key Milestones and Future Steps — CMS Law
- Online Safety in 2026: Enhancement and Enforcement in EU and UK — Taylor Wessing
- Age Verification Laws 2026: UK, EU, US, Australia Compared — AgeOnce
- Online Safety Act: Explainer — UK Government (gov.uk)






