New law combats AI abuse at source

The UK government introduces groundbreaking legislation aimed at ensuring AI cannot be misused to create synthetic child sexual abuse images.
In response to alarming figures from the Internet Watch Foundation (IWF) showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, the UK government has unveiled new laws targeting the root of the problem. The legislation, developed in collaboration with AI industry leaders and child protection organisations, establishes a framework intended to prevent artificial intelligence technologies from being misused to create harmful content.
As Technology Secretary Liz Kendall stated, “We will not allow technological advancement to outpace our ability to keep children safe.” The new laws are designed to enable the safe testing of AI models while preventing them from being used to generate exploitative material. The IWF’s data underscores the scale of the challenge: reports leapt from 199 cases in 2024 to 426 in 2025.
A specific area of concern is the surge in images depicting infants, with reports involving 0–2-year-olds rising sharply from just 5 in 2024 to 92 this year. The newly established measures will empower designated bodies, including AI developers and child safety organisations, to rigorously evaluate AI models. This is a significant change, because existing law prohibits the creation of such imagery even for testing purposes, leaving developers unable to check whether their models can be made to produce it.
Kendall emphasised, “By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.” Jess Phillips, Minister for Safeguarding and Violence Against Women and Girls, echoed this sentiment, asserting the need to adapt laws to the evolving digital landscape: “We must make sure children are kept safe online and that our laws keep up with the latest threats.”
The government's proactive stance targets the troubling trend of offenders combining real and synthetic images in ways that undermine existing safeguards. Furthermore, the measures promise to bolster public trust in AI innovation by preventing the abusive manipulation of the technology.
Kerry Smith, Chief Executive of the IWF, praised these developments, stating, “We welcome the government’s efforts to bring in new measures for testing AI models to check whether they can be abused to create child sexual abuse.” The collaborative approach is envisioned not only to safeguard children from exploitation but also to ensure a robust response to potential AI misuse in the future.
This legislative action marks a significant milestone in the UK’s commitment to pursuing innovation safely while prioritising the protection of children in the digital age, demonstrating that technological progress and stringent safety measures can coexist.
