Online Safety Act: regulation meets reality

By Ria Moody and Ben Packer
As Ofcom enforces sweeping online safety rules, platforms face tough choices over age checks, AI, and compliance
The Online Safety Act (OSA) passed into law in October 2023, with the stated intention of making the UK “the safest place in the world to be online”. Under the OSA, all providers of user-to-user and search services with links to the UK are required to assess the risk of users encountering illegal content and, where they have child users, content that is harmful to children, and to put in place proportionate systems and processes to detect and remove such content.

The compliance burden for the tech industry has been significant, even for providers which already had online safety measures in place, because the OSA is accompanied by over 3,000 pages of regulatory guidance from the regulator, Ofcom. This guidance is prescriptive, setting out detailed requirements for how risk assessments must be conducted and recommending specific measures which providers must implement based on their risk profile, or else justify an alternative approach.

The final deadline for all services to complete their first round of annual risk assessments, and to implement the corresponding safety measures, was the end of July 2025. Now that the dust has settled on this first stage of OSA implementation, what changes have providers made, and what is still ahead of them? The roll-out of age assurance, the impact of AI, and Ofcom’s enforcement strategy are three of the key areas to consider in making this assessment.
All age assurance methods are equal – but some are more equal than others
The requirement that has perhaps been the greatest step change under the OSA is Ofcom’s requirement for platforms to determine which users are adults and which are children (i.e. those under the age of 18) via so-called “highly effective age assurance” (HEAA). This is a very high bar: platforms can no longer rely on self-declaration of age, or merely prohibit users under 18 in their terms of service, but must instead use methods such as photo-ID matching, credit card checks and facial age estimation technology.
Pornography platforms in particular are required to implement HEAA to prevent children accessing the entire site. For more mainstream services, once child users have been identified, providers have a duty to protect them from encountering harmful content, including porn, bullying content, and content which encourages eating disorders, dangerous challenges or stunts. To do so, these services must detect this content and remove it from the parts of the site which can be accessed by child users, or by users whose age has not been verified. The only way to avoid implementing age assurance is to make the whole platform safe for children: that is, the service must prohibit content that is harmful to children in its terms, and then remove such content for all users (instead of maintaining different feeds or channels for over-18s, under-18s and users of unverified age).
Platforms therefore have to answer the important question of whether their terms prohibit the full range of content considered harmful to children. For many mainstream platforms, the answer will be no, in which case they must decide how to approach age assurance. The most highly effective measures, such as ID verification, or using a third-party digital identity add-on to provide facial age estimation, will introduce noticeable extra friction to the customer journey. In addition, in the case of certain platforms such as porn providers or location-based dating apps, users may be uncomfortable providing such identifying information (perhaps with good reason, given the repercussions of the Ashley Madison data leak in 2015, where the revelations about users’ online activity were linked to several tragic incidents). Alternatively, platforms may rely on less intrusive methods of age assurance, such as behavioural indicators; but depending on their reliability and fairness, these methods may not always pass Ofcom’s threshold for being highly effective.
Since the HEAA requirements were introduced, there has been a noticeable uptick in downloads of “virtual private network” (VPN) apps in the UK. VPNs conceal a user’s IP address and location, which often means that they will not be subjected to the HEAA checks applied to UK users, a phenomenon which critics of the OSA have referred to as the “VPN loophole”. However, while VPNs are not illegal under the OSA, Ofcom is clear that services must not permit content that encourages the use of VPNs to get around age checks. In addition, defenders of the OSA note that the NSPCC reports that children may inadvertently access pornography by clicking misleading links or via pop-ups, and Ofcom research indicates that 3% of 8-9 year olds have been exposed to porn; therefore, even if older children use VPNs to circumvent highly effective age assurance, these measures will still prevent younger children from stumbling upon harmful content.
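To make the mechanics of the “VPN loophole” concrete, the sketch below (purely illustrative, not any provider’s actual implementation) shows how a service might trigger age checks based on the apparent country of the request IP; the geolocation table and addresses are hypothetical stand-ins for a real IP-to-location database.

```python
# Illustrative only: gate age assurance on the apparent country of the request.
# A VPN exit node outside the UK changes the apparent country, so the check
# below never fires for that traffic; this is the essence of the "VPN loophole".

UK_COUNTRY_CODE = "GB"

# Hypothetical stand-in for a real IP geolocation database lookup.
IP_TO_COUNTRY = {
    "203.0.113.7": "GB",   # appears to be a UK user
    "198.51.100.4": "NL",  # appears to be abroad (e.g. a VPN exit node)
}

def requires_age_assurance(client_ip: str) -> bool:
    """Require highly effective age assurance only for traffic that appears to come from the UK."""
    return IP_TO_COUNTRY.get(client_ip, "UNKNOWN") == UK_COUNTRY_CODE

print(requires_age_assurance("203.0.113.7"))   # True: UK IP, checks apply
print(requires_age_assurance("198.51.100.4"))  # False: non-UK IP, checks never fire
```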
The AI effect
AI poses both an opportunity and a threat in the online safety space. AI allows platforms to proactively moderate more content faster, to block harmful content before it appears online, and, crucially, to moderate without subjecting human moderators to high volumes of distressing material. However, AI also allows bad actors to produce harmful content quickly and cheaply, and in particular to produce mis- and disinformation and fraud-facilitating content that is more convincing to users, for example deepfakes of trusted celebrities or authorities purporting to give advice. Although this is a vast and dynamic area, lawmakers and regulators are keen to address it, and regulatory changes targeting it directly are coming into force. The Data (Use and Access) Act 2025, which became law in June, creates a new offence of creating or requesting the creation of deepfake non-consensual intimate images, one area of harm which has grown exponentially as AI has developed. In addition, Ofcom is currently consulting on measures requiring platforms to use hash matching to detect and remove non-consensual intimate images (including deepfakes) before they are posted, as services are already required to do for content depicting child sexual abuse.
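For readers unfamiliar with the technique, the sketch below shows the basic shape of hash matching: compare a fingerprint of each upload against a blocklist of fingerprints of known harmful images and reject close matches. This is a simplified, assumption-laden example; real deployments use perceptual hashes (such as PhotoDNA or PDQ) that survive resizing and re-encoding and are supplied by trusted bodies, whereas the hash function and blocklist entry here are placeholders.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known harmful images.
# In practice these would be perceptual hashes supplied by a trusted body,
# not cryptographic digests computed like the stand-in below.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015",  # placeholder entry
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; real ones are robust to resizing and cropping."""
    return hashlib.md5(image_bytes).hexdigest()

def hamming_distance(a: str, b: str) -> int:
    """Count the differing bits between two equal-length hex digests."""
    return sum(bin(int(x, 16) ^ int(y, 16)).count("1") for x, y in zip(a, b))

def should_block(image_bytes: bytes, max_distance: int = 8) -> bool:
    """Reject the upload if its fingerprint is close to any known harmful image."""
    candidate = image_fingerprint(image_bytes)
    return any(hamming_distance(candidate, known) <= max_distance for known in KNOWN_HASHES)

upload = b"raw image bytes from the upload pipeline"
print("blocked" if should_block(upload) else "allowed")
```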
A regulator with its teeth bared
Ofcom is keen to demonstrate that it is ready to take swift enforcement action where services are not complying with the OSA, and as more duties come into force Ofcom is scaling up its enforcement activity accordingly. While Ofcom has been transparent that it cannot investigate every suspected failure to comply with the OSA, and will therefore have to select enforcement cases strategically according to its priorities, it has significant powers under the Act where it chooses to enforce: sanctions include fines of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater), as well as, ultimately, business disruption powers to order internet service providers to block UK users’ access to offending sites. And in the first half of 2025 Ofcom opened enforcement action against a range of services, including pornography providers which it considers do not have effective age assurance measures in place, file-sharing services which Ofcom alleges have not implemented effective protections against child sexual abuse material, and small but risky services that Ofcom believes have not complied with risk assessment duties.
Recently the complexity of enforcing online regulation crystallised when Ofcom issued the first provisional notice of contravention under the OSA to 4chan, a US-headquartered discussion board platform which failed to respond to two statutory requests for information from Ofcom, including a request for 4chan’s illegal content risk assessment. 4chan’s lawyers alleged that the provisional notice included Ofcom’s intention to impose a £20,000 fine "with daily penalties thereafter". In response to this notice, 4chan, together with far-right discussion board Kiwi Farms, has filed a complaint in federal court in Washington DC, seeking to bar Ofcom from attempting to enforce the OSA in the US. This jurisdictional tussle is likely to become a feature of Ofcom’s work, as many of the services that pose the greatest risk to adults and children are likely to have no physical presence in the UK.
What’s in store for service providers?
As content regulation has developed across the world in recent years, it has been difficult for providers of global platforms to stay on top of the burgeoning compliance requirements; nevertheless, this is necessary not just to ensure the protection of minors and removal of the most egregious online content, but also for the sake of their own bottom line, given the potential sanctions at play. And while industry may have heaved a sigh of relief on completing their first OSA risk assessments this year, Ofcom is not standing still. Not only is categorisation of the largest players expected soon, which will trigger an additional layer of compliance duties for the services in scope, but Ofcom is also currently consulting on additional recommended safety measures to add to the current lists in its codes of practice – so providers will not be able to file away their OSA to-do lists any time soon.