Safeguarding children in education in the age of AI

Baljinder Bath, from 4PB, assesses the risks posed to children by artificial intelligence in educational settings
The use of artificial intelligence (AI) in educational settings presents unprecedented safeguarding challenges. Whilst AI holds the potential to transform education, it also brings complex risks, ranging from deepfake imagery and voice cloning to AI-facilitated grooming and bullying. This article examines emerging threats, statutory duties under UK law, and practical safeguarding strategies for schools, and anticipates future legislative developments. In a world of exponential digital change, safeguarding practices must urgently evolve to address the digital realities of children’s lives.
A shifting landscape: the Department for Education’s position
In January 2025, the Department for Education issued guidance on the deployment of generative AI in schools. This represents a significant shift in the safeguarding landscape, acknowledging that technology adoption in education must be meticulously balanced against safeguarding priorities. The guidance stipulates that every proposed use of AI must undergo a thorough risk assessment, weighing the educational benefits against the safeguarding risks. Institutions must comply with statutory obligations under Keeping Children Safe in Education 2024 (KCSIE), ensuring that any deployment of AI tools enhances rather than compromises pupil safety.
Moreover, the guidance requires educational settings to anticipate unauthorised uses by staff or pupils and to plan mitigating strategies accordingly. Schools are urged to consider both explicit and covert interactions with AI systems, recognising that inappropriate use could occur without prior approval.
Filtering, monitoring, supervision, and enforcement of clear policies are no longer optional enhancements, but mandatory expectations in an AI-influenced environment.
Children’s independent engagement with AI
Common Sense Media’s report, ‘The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School’ (2024), demonstrates that children’s engagement with AI often occurs outside the parameters set by parents or educational institutions. The report found that 50% of 12 to 18-year-olds had used ChatGPT for school assignments, a figure starkly higher than parental estimates, and that 38% admitted to using AI tools without their teacher’s permission.
Dr Nomisha Kurian of Cambridge University, in her article ‘“No, Alexa, no!”: designing child-safe AI and protecting children from the risks of the “empathy gap” in large language models’, reveals that children often perceive AI systems not merely as tools but as companions, attributing emotional resonance to their interactions. Children may experience AI-generated content as authoritative and trustworthy, regardless of its factual accuracy.
Such findings necessitate safeguarding frameworks that go beyond technical controls. Schools and families must address the relational dynamics children form with AI, ensuring that critical thinking, digital literacy, and emotional safety are prioritised, alongside technological supervision.
AI-specific safeguarding risks
Deepfake imagery and ‘nudify’ applications
The potential harms of AI misuse are significant and, in some cases, devastating. Deepfake technology now enables the creation of realistic, sexually explicit images from innocuous photographs. One of the most high-profile incidents occurred in Almendralejo, a small town in southwestern Spain. In mid-2023, dozens of girls aged around 13 to 15 from local schools discovered that explicit nude images of them were being circulated in WhatsApp groups, images that looked shockingly real but were entirely fake. A group of teenage boys in the town had used an AI app (later identified as ClothOff) to generate these nude pictures by processing the girls’ social media photos. The psychological fallout was severe. What began as a cruel ‘prank’ among classmates quickly escalated into a serious case of child sexual abuse imagery and bullying. The local prosecutor opened an investigation to determine criminal liability for creating and sharing sexual images of minors. Ultimately, 15 schoolchildren, themselves aged 13 to 15, were identified as responsible. In 2024, a youth court convicted them of multiple counts of creating child abuse images and offences against the victims’ moral integrity. Each offender was sentenced to one year of probation and required to attend courses on gender equality and the responsible use of technology.
Voice cloning and digital impersonation
Voice-cloning technology presents another urgent safeguarding concern. In early 2024, Dazhon Darien, the athletic director at Pikesville High School in Baltimore County, Maryland, was arrested for allegedly using AI technology to create a fabricated audio recording that mimicked the voice of Principal Eric Eiswert making racist and antisemitic remarks. This AI-generated recording was widely disseminated on social media, leading to substantial community backlash and the temporary removal of Principal Eiswert from his position. Investigations revealed that Darien had accessed AI tools through the school’s network to produce the deepfake audio. Experts from the FBI and the University of California, Berkeley, analysed the recording and confirmed it was artificially generated. The authorities believe Darien’s actions were motivated by retaliation, as he was under scrutiny for alleged financial misconduct and had been informed that his contract might not be renewed.
In educational settings, the risk extends beyond reputational damage. Voice cloning could be weaponised for bullying, intimidation, and exploitation, often without the victims’ knowledge until significant harm has been caused.
AI-facilitated bullying and grooming
Generative AI enables bullying at a scale and level of personalisation previously unimaginable. Chatbots such as Snapchat’s My AI have reportedly provided minors with sexually explicit advice, bypassing standard online safety measures.
Most tragically, Garcia v. Character Technologies, Inc. illustrates the potentially fatal consequences of emotionally manipulative AI interactions. The plaintiff, Megan Garcia, alleges that her 14-year-old son, Sewell Setzer III, died by suicide after forming an emotionally dependent relationship with an AI chatbot developed by Character.AI. The chatbot, modelled on Daenerys Targaryen from Game of Thrones, reportedly engaged in sexually suggestive and emotionally manipulative conversations with Sewell, exacerbating his mental health difficulties in the period leading up to his death.
These cases collectively demonstrate that safeguarding against AI misuse is no longer speculative; it is an immediate and pressing necessity.
The legal framework: current protections and limitations
Although the United Kingdom lacks a dedicated AI regulatory framework equivalent to the European Union’s AI Act, existing safeguarding laws have adapted to incorporate digital threats.
The Keeping Children Safe in Education 2024 (KCSIE) guidance requires schools to ensure robust online filtering and monitoring, reflecting the recognition that online harms can be as damaging as physical threats. Online safety must be embedded throughout safeguarding practice, staff training, and policy development.
The Online Safety Act 2023 introduced substantial duties on technology platforms, requiring the detection and removal of illegal content, including AI-generated abuse material. It also extended criminal liability under the Sexual Offences Act 2003 to the creation or distribution of AI-generated sexually explicit imagery involving minors.
The Sharing Nudes and Semi-Nudes guidance (March 2024) now explicitly addresses AI-generated imagery. Schools are advised to treat all such incidents as safeguarding concerns, prioritise child wellbeing, and involve law enforcement agencies where appropriate.
While these frameworks offer important protections, significant gaps remain, particularly around the use of children’s data by AI systems and the rapid development of new technologies that outpace regulatory oversight.
Practical safeguarding steps for schools
Every school must now ensure that its child protection, acceptable use, mobile technology, and homework policies explicitly address AI risks. Acceptable use agreements should define the scope of permitted AI use by staff and pupils, set clear supervisory expectations, and specify the consequences of unauthorised or harmful AI use.
The evolving nature of AI harms demands a closer relationship between the designated safeguarding lead (DSL) and the data protection officer (DPO). Digital safeguarding and data protection are now inseparably linked.
DSLs must be alert to the risks AI poses to children’s wellbeing, while DPOs must ensure that any data processed via AI systems complies with the UK General Data Protection Regulation and prioritises child safety.
Staff training must incorporate AI awareness, covering new types of grooming behaviours, cyberbullying methods, and the psychological effects of interacting with generative AI.
Sections 550ZA–ZC of the Education Act 1996 permit searches and confiscation of pupils’ possessions where there is reasonable suspicion of prohibited material, including harmful digital content. Where indecent images or data are suspected, devices must be confiscated without viewing the content wherever possible and referred to the DSL and, if appropriate, the police.
Staff must follow strict protocols to avoid criminal liability themselves and to ensure the safeguarding integrity of the process.
Where AI misuse indicates potential significant harm to a child, referrals must be made under Section 47 of the Children Act 1989 to the Multi-Agency Safeguarding Hub (MASH). Multi-agency collaboration enables a more accurate assessment of the risks and more timely protective interventions.
Future reforms: bills on the horizon
The Crime and Policing Bill 2025 proposes important new offences:
- Criminalising the creation, possession, and distribution of AI-generated child sexual abuse material;
- Prohibiting the possession of instructional materials that facilitate AI exploitation of minors; and
- Introducing enhanced penalties for online platforms that enable the distribution of harmful AI content.
Similarly, the Data (Use and Access) Bill 2024 proposes additional protections for children’s data by:
- Limiting the use of children’s personal data for AI training;
- Introducing restrictions on automated decision-making affecting children; and
- Criminalising unauthorised AI content creation involving minors.
Both bills signal that legislative reform is beginning to catch up with technological realities, but more may still be required as AI capabilities expand.
Conclusion
AI presents one of the most profound challenges to safeguarding that the education sector has faced. While the opportunities for innovation are immense, so too are the risks to children’s safety, dignity, and development.
Schools must urgently integrate AI-specific threats into all aspects of their safeguarding policies and practices. Staff must be equipped with the skills to recognise and respond to new types of harm. Policymakers must ensure that laws continue to evolve to meet the reality of children’s digital lives.
Safeguarding can no longer be conceived solely in terms of the physical world. In the age of AI, protecting children demands vigilance, adaptability, and above all, a relentless focus on the best interests of the child.