Cyberflashing as Priority Illegal Content

The Online Safety Act 2023 elevates cyberflashing to priority illegal content, but enforcement and evidential hurdles may limit real-world protection
Cyberflashing is the digital distribution of images or film of someone’s genitals to another person without consent. The offence disproportionately affects women and girls. The Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025 (SI 2025/1352) have made cyberflashing a priority offence under the Online Safety Act 2023 (OSA 2023), adding it to Schedule 7 so that it is now treated as ‘illegal content’. User-to-User Service Providers and search services, including social media and dating platforms, must now take proactive steps to prevent users from encountering this content.
This article examines cyberflashing and its legal framework under s.66A of the Sexual Offences Act 2003 (SOA 2003) and the OSA 2023, analysing from a regulatory perspective whether the new regime effectively safeguards victims.
What is cyberflashing?
Cyberflashing became a criminal offence on 31 January 2024. Statistics from Rape Crisis show that two in five women aged 18-34 have been sent a sexual photo without their consent. These figures illustrate the prevalence of the behaviour and help explain the growing international recognition of the need to criminalise it.
Under Section 66A SOA 2003, a person ‘A’ can be found guilty of the offence of cyberflashing if:
A intentionally sends or gives a photograph or film of any person’s genitals to another person (‘B’); and either
A intends that B will see the genitals and be caused alarm, distress or humiliation; or
A sends or gives the photograph or film for the purpose of obtaining sexual gratification and is reckless as to whether B will be caused alarm, distress or humiliation.
References to a photograph or film include images made or altered by ‘computer graphics’ or in ‘any other way’ (s.66A(5) SOA 2003), wording that is wide enough to cover AI-generated or ‘deepfake’ images. ‘Genitals’ includes surgically constructed genitals (s.79(3) SOA 2003). It is less clear whether a User-to-User Service Provider could itself be convicted of the criminal offence where its own software has created a ‘deepfake’ genital image, because s.66A SOA 2003 provides that the offence is committed by a ‘person’. Even if providers could be prosecuted under s.66A SOA 2003, it is unlikely that the police would pursue prosecutions against them in individual cases, given their limited resources.
The offence is triable either way and the maximum sentence is currently two years’ custody, mirroring the maximum sentence for exposure under s.66 SOA 2003. A person convicted of the offence may also be subject to sex offender notification requirements.
The Online Safety Act 2023
The Online Safety Act 2023 has strengthened protection for victims of cyberflashing by making it a priority offence. Section 10 OSA 2023 requires regulated User-to-User Service Providers to take proportionate measures to prevent individuals from encountering cyberflashing content and to mitigate the risk of such content appearing on their services. It also requires them to limit the availability of illegal content and to remove it swiftly. These duties are, however, regulatory in nature and do not create criminal liability for platforms for the underlying offence.
To comply with the OSA 2023, User-to-User Service Providers are required to carry out risk assessments relating to illegal content (s.9 OSA 2023). The OSA 2023 further obliges them to provide reporting mechanisms for illegal content, maintain complaints procedures, and consider freedom of expression and privacy rights when implementing safety measures (s.20, s.21 and s.22 OSA 2023). User-to-User Service Providers must also ensure that their terms of service, content moderation practices, and user support systems are consistent with these statutory duties (s.7, s.10, s.12, s.71 (not yet in force) and s.72 (partially in force) OSA 2023).
In their risk assessments and systems, User-to-User Service Providers must treat content as illegal content if, having considered all reasonably available information: (a) they have reasonable grounds to infer that all elements necessary for the commission of the offence, including mental elements, are present or satisfied, and (b) they do not have reasonable grounds to infer that a defence to the offence may be successfully relied upon (s.192(5) and (6) OSA 2023). Where content is generated by a bot or other automated tool, those tests are applied to the conduct or mental state of a person who may be assumed to control the bot or tool or, depending on what the provider knows in a particular case, the actual person who controls it (s.192(7) OSA 2023). Failure to comply with the OSA 2023 may result in a User-to-User Service Provider having its services blocked, or in fines of up to 10% of qualifying worldwide revenue or £18 million, whichever is higher (Schedule 13 and s.146 OSA 2023).
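To make the s.192(5) to (7) judgement and the penalty cap more concrete, the sketch below models them as simple decision logic. It is purely illustrative: the class, field and function names are assumptions of this article rather than anything prescribed by the OSA 2023 or Ofcom’s codes of practice, and a real moderation system would draw on far richer signals than the booleans shown here.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ContentSignals:
    """Illustrative signals a provider might derive from 'all reasonably available information'."""
    intentionally_sent: bool               # A intentionally sent or gave the photograph or film to B
    depicts_genitals: bool                 # the image or film shows a person's genitals (real or computer-generated)
    grounds_to_infer_mental_element: bool  # intent to cause alarm/distress/humiliation, or sexual gratification plus recklessness
    grounds_to_infer_defence: bool         # any material suggesting a defence to the offence could succeed
    generated_by_bot: bool = False
    bot_controller: Optional["ContentSignals"] = None  # signals about the (assumed or actual) person controlling the bot


def treat_as_illegal_content(signals: ContentSignals) -> bool:
    """Sketch of the two-limb judgement in s.192(5)-(6) OSA 2023, applied to the s.66A SOA 2003 offence."""
    # s.192(7): for bot-generated content, apply the tests to the person controlling the bot or tool
    if signals.generated_by_bot and signals.bot_controller is not None:
        signals = signals.bot_controller

    # Limb (a): reasonable grounds to infer that all elements of the offence, including mental elements, are present
    elements_present = (
        signals.intentionally_sent
        and signals.depicts_genitals
        and signals.grounds_to_infer_mental_element
    )

    # Limb (b): no reasonable grounds to infer that a defence may be successfully relied upon
    return elements_present and not signals.grounds_to_infer_defence


def maximum_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    """Cap on fines under Schedule 13 OSA 2023: the greater of 10% of qualifying worldwide revenue and £18 million."""
    return max(0.10 * qualifying_worldwide_revenue_gbp, 18_000_000.0)
```

Even in this toy form, the sketch exposes the difficulty discussed in the critique below: limb (a) turns on an inference about the sender’s state of mind, which is precisely the element that is hardest to establish from content and context alone.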
Critique
It is positive that the OSA 2023 extends protection against cyberflashing into the regulatory sphere, because the reach of the criminal offence under s.66A SOA 2003 is limited and conviction rates are very low. Mancunian Matters, an online news website, reports that only 43 people were convicted of cyberflashing in 2024, a disappointing figure given the clear prevalence of the behaviour. Low conviction rates are unsurprising: the mental element in s.66A SOA 2003 requires that the image or film be sent for the purpose of obtaining sexual gratification, or with an intention or recklessness as to causing the recipient alarm, distress or humiliation. Focusing on the defendant’s state of mind rather than the recipient’s lack of consent places a heavy evidential burden on the prosecution. Requiring regulated platforms to prevent the content from appearing in the first place therefore offers victims protection that the criminal law alone has not delivered.
Nevertheless, relying on the OSA 2023 to protect victims of cyberflashing has flaws. First, it is unclear whether User-to-User Service Providers will adequately control the use of AI software on their platforms. The OSA 2023 can only protect victims if providers actually take adequate preventative measures to stop the offence from happening in the first place, and AI bots such as Grok illustrate the risk. Marketed on X as a ‘rebellious’ alternative to other AI models, Grok drew widespread criticism when users generated images that appeared to undress individuals in photos or depict them in sexually suggestive poses. Although this scandal took place immediately before cyberflashing became a priority offence, it demonstrates the ease with which users can misuse AI bots.
Second, the OSA 2023 requires User-to-User Service Providers to judge, within their risk assessments and systems, whether there are reasonable grounds to infer that all elements necessary for the offence are present or satisfied. This is problematic for the cyberflashing offence, as it is difficult to infer a sender’s intention to cause alarm, distress or humiliation, especially where an online conversation or relationship carries romantic or sexual undertones. Many senders are likely to claim that the image was a joke or misguided flirting. Some providers may therefore conclude that not all elements of the offence are present, or that a valid defence exists, leaving users exposed to the content.
Third, User-to-User Service Providers may come to different conclusions about how to mitigate the risk of cyberflashing, leading to inconsistent protection for victims. Their mitigation measures must be proportionate to their specific risk profile, and they will need to decide whether to ban all explicit photographs or film on their platforms. If they do, they will also need to consider senders’ freedom of expression rights. Some may take the view that the risk of cyberflashing is low compared with the risks of censorship or of undermining user privacy (for example, by interfering with or weakening end-to-end encryption). Bumble, a dating app, uses AI to detect and blur explicit images before delivery and allows recipients to choose whether to view them, which offers a balanced model for other User-to-User Service Providers.
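A minimal sketch of that ‘detect, blur, then ask’ pattern is set out below. It is not Bumble’s actual implementation: the classifier, threshold and field names are hypothetical, and a production system would need tuned models, appeal routes and audit logging.

```python
from dataclasses import dataclass

# Hypothetical threshold; a real deployment would tune this against false positives and false negatives
EXPLICIT_THRESHOLD = 0.8


@dataclass
class IncomingImage:
    sender_id: str
    recipient_id: str
    image_bytes: bytes


def classify_explicitness(image_bytes: bytes) -> float:
    """Placeholder for a nudity-detection model returning the probability that the image is explicit."""
    raise NotImplementedError("plug in a real classifier here")


def deliver(image: IncomingImage) -> dict:
    """Consent-gated delivery: suspected explicit images arrive blurred until the recipient opts in to view them."""
    score = classify_explicitness(image.image_bytes)
    suspected_explicit = score >= EXPLICIT_THRESHOLD
    return {
        "recipient": image.recipient_id,
        "blurred": suspected_explicit,                   # image is obscured on arrival if flagged
        "requires_consent_to_view": suspected_explicit,  # recipient chooses whether to reveal it
        "report_option_shown": True,                     # reporting route supports the s.20 OSA 2023 duty
    }
```

The design point worth noting is that the gate sits with the recipient rather than the sender: the image is not removed outright, which softens the freedom of expression concern, but it never reaches an unwilling recipient unblurred.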
Fourth, the size and capacity of the provider, and whether the judgement is made by human moderators or automated systems, are also relevant to how effective preventative measures will be (s.192(2) and (3) OSA 2023). Some User-to-User Service Providers will not have the financial means for automated systems or preventative detection software and will therefore rely on human moderators (or a mix of human moderators and technology), which may not prevent the offence from taking place on the platform in the first place. For instance, HelloTalk, a language exchange app, has inadequate procedures in place to prevent cyberflashing. This may be because it relies on human moderators and technology, rather than AI, to detect and block explicit content before it reaches recipients.
There are also serious doubts about whether Ofcom will effectively enforce the OSA 2023. As Professor Clare McGlynn of Durham University commented to Sky News in January, rape pornography and non-consensual intimate images are priority offences under the Act, yet they remain accessible online. If Ofcom does not enforce the OSA 2023, victims of cyberflashing will continue to be insufficiently safeguarded and priority offences will remain theoretical. It is therefore essential that Ofcom takes enforcement action against breaches of the priority offence duties so that victims of cyberflashing are adequately protected.
Conclusion
Overall, making cyberflashing a priority offence under the OSA 2023 is a positive step towards protecting victims, as it requires User-to-User Service Providers to assess and mitigate the risk of the offence on their platforms before it takes place. However, whilst some providers may treat the risk as high and take preventative action, others may assess and mitigate it differently, owing to time or resource constraints, leading to inconsistent or reactive approaches. It also remains to be seen whether Ofcom will take enforcement action against breaches of the OSA 2023 in relation to cyberflashing.

