Deepfakes and the criminal law: addressing the rise of AI-generated sexual material

Danielle Reece-Greenhalgh, a Partner at Corker Binning, discusses the background to the recent reforms targeting the creation of sexually explicit deepfakes
In recent years, the use of artificial intelligence (AI) and deep learning to create hyper-realistic images, videos and audio, commonly known as deepfakes, has surged. AI models can take existing (ie, ‘real’) image and audio data and replace or alter key details. Facial features, speech and movement can all be modified so seamlessly that it is often impossible to distinguish between real and computer-generated content.
Deepfakes are often discussed in criminal law in the context of harmful sexualised content, particularly pornography featuring depictions of individuals without their consent. AI models that enable the production of pornography from (a) text prompts or (b) existing (non-pornographic) images have resulted in the proliferation of both child sexual abuse material and deepfake pornography depicting real adult individuals. This, in turn, has exposed lacunae in criminal law, which the government has vowed to address.
In January 2025, the Ministry of Justice announced a ‘crackdown’ on sexually explicit deepfakes. Along with an overhaul of existing voyeurism laws (to widen the offence of taking an intimate image without consent beyond ‘upskirting’ and to criminalise the installation of equipment with intent to take such images), this crackdown includes proposed new offences of ‘creating or requesting the creation of a purported intimate image of an adult’. The specific acts to be criminalised are: (a) the creation of the purported (ie, deepfake) intimate image or (b) a request for such a creation.
Whilst the government initially announced that these proposed ‘creating’ and ‘requesting’ offences would be incorporated into the Crime and Policing Bill, they were subsequently expedited into legislation already tabled before the House of Lords, and now appear in Section 135 of the Data (Use and Access) Bill. Given the renewed focus on the creation of deepfakes and the legislative priority these provisions have been afforded, it is likely that the offences will now pass into law under the current government.
The legislative debate
To understand why these ‘creating’ and ‘requesting’ offences are being proposed now, it is necessary to trace the history of the legislative debate. For many years, the police and Crown Prosecution Service have investigated and sought to prosecute aspects of the adult ‘deepfake’ trade using non-sexual offences, for example under harassment, stalking and malicious communications laws. These attempts were only intermittently successful and, as a result, many campaigners began to advocate for specific sexual offences targeting both the creation and distribution of adult deepfake images.
These campaigns pointed to the fact that individuals who discover deepfake sexual images of themselves report a sense of violation of their bodily integrity, often likened to a sexual contact offence.
Two years ago, parliament finally responded. The Online Safety Act 2023 inserted Section 66B into the Sexual Offences Act 2003 (the ‘SOA 2003’), creating a number of offences concerned with sharing real and deepfake intimate images. But the Section 66B offences did not impose criminal liability on those who do no more than make explicit images of individuals without their consent: liability arises only if the images are shared, or a threat to share them is made.
The basis for this distinction was that the perceived harm was thought to arise only from the publication of the images (or a fear of the same) rather than the existence of the images themselves.
The decision not to criminalise the production of such images stemmed directly from the recommendations made by the Law Commission, which stated that:
‘We do not recommend that the act of “making” an intimate image without consent should fall within the scope of the criminal law. Instead, we consider that the act of sharing a “made” intimate image – such as a “nudified” image – should be the focus of the criminal law. We recognise that this may be disappointing to those who have been victims of such behaviour. However, as with a possible “possession” offence, we consider that a “making” offence would be difficult to enforce, and that the most harmful consequences of this behaviour can be captured through a “sharing” offence.’
Thus, the base offence in Section 66B(1) is of intentionally sharing photographs or film which show (or appear to show) an individual in an ‘intimate’ (not necessarily sexual) state, without the consent of the individual depicted. Sharing with intent to cause alarm, distress or humiliation to the individual depicted (under Section 66B(2)), or sharing for the purpose of sexual gratification (under Section 66B(3)), are aggravated versions of the base offence, carrying commensurately higher sentences. Section 66B(4) criminalises threatening to share real or deepfake images, but this offence can only be committed if the person intends that the victim, or a person who knows them, will fear that the threat will be carried out, or is reckless as to whether that fear arises. A prosecution for the Section 66B(4) offence can be brought regardless of whether the material that is the subject of the threat in fact exists, or indeed whether it in fact depicts (or purports to depict) the intended victim in an intimate state.
The 2025 reforms
Since these changes to the SOA 2003 came into force, calls for the intimate image provisions to be widened further to take into account a broader range of conduct, including the solicitation and creation of images, have not abated. Campaigners have argued that the existence of such images alone is sufficient to invoke feelings of vulnerability and violation.
Section 135 of the Data (Use and Access) Bill is the government’s expedited response. It would insert a new Section 66E into the SOA 2003 which, as currently drafted, reads as follows:
A person (A) commits an offence if:
(a) A intentionally creates a purported intimate image of another person (B),
(b) B does not consent to the creation of the purported intimate image, and
(c) A does not reasonably believe that B consents.
‘Purported intimate image’ is a new concept in law, defined as an image which:
(a) appears to be, or to include, a photograph or film of the person (but is not, or is not only, a photograph or film of the person),
(b) appears to be of an adult, and
(c) appears to show the person in an intimate state.
Intentionally requesting another person (either in general or specific terms) to create a purported intimate image would also be an offence. There is no requirement for the request to be explicit; it would include ‘doing an act which could reasonably be taken to be a request (such as, for example, indicating agreement in response to an offer or complying with conditions of an offer)’. The ‘requesting’ offence could, therefore, be committed with no words being exchanged at all.
Both the ‘creating’ and the ‘requesting’ offences would be triable only in the magistrates’ court, carrying a possible custodial sentence of up to six months (being the current statutory maximum) and/or an unlimited fine.
These new offences, if enacted, would certainly achieve the objectives of those who have campaigned long and hard for the creators of deepfake images to be held to account. ‘Crackdown’ is the right term, given that the offences would ensure that everyone involved in the deepfake trade, including those who request images, those who create them and those who distribute them, faces criminal sanctions. Attempts to limit the proposed offences and their consequences (for example, by way of a ‘reasonable justification’ defence or by removing the possibility of a custodial sentence) have been rejected. However, there are three potential problems with the new offences: one legal, one practical and one ethical.
Firstly, the legal issue, which is one of definition. As currently drafted, the new offences do not define ‘creation’. It will therefore be for the courts to determine whether parliament’s choice of the word ‘creation’ connotes something different from the most closely related concept in criminal law, that of ‘making’ indecent images of children contrary to Section 1(1)(a) of the Protection of Children Act 1978. ‘To make’ has been very widely defined over the years by the courts, and can include almost any action whereby an image is duplicated or stored (including photocopying, downloading and receiving unsolicited images over social media), as well as the physical creation of the root image. It is assumed (although it is not clear) that the ‘creating’ and ‘requesting’ offences will be restricted to the act of bringing a deepfake image into being, rather than some of the more passive acts that fall within the scope of the ‘making’ offence.
This definitional issue brings the second, practical problem into play, which is one of enforcement of the ‘creating’ offence. The uncertainty as to the scope of ‘creation’ may well pose difficulties for the police where a suspect is in possession of deepfake images but the police are unable to establish with any certainty who originally created them. A suspect may well say that the images were sent to them unsolicited, and that they either retained or deleted them. Assuming that ‘creation’ is limited to the act of bringing the deepfake image into being (as opposed, for example, to merely saving the image on a computer’s hard drive), the police would need to prove that the suspect had done more than simply possess the image. They would need to undertake forensic analysis of the computer (as well as the suspect’s other electronic devices) in order to attempt to reconstruct the precise history and derivation of the image.
Furthermore, by its very nature, the new ‘creating’ offence requires no one apart from the creator to see the image. The image may have been created solely for the creator’s own purposes. But even if it was created in anticipation of being distributed in the future, the image never actually has to move from the creator’s device for the creator to be criminally liable. It is, therefore, likely to be an offence that is prosecuted only: (a) where images are located as a result of other (unrelated) investigations requiring the search of a suspect’s devices or (b) where a third party accidentally or intentionally obtains access to the device of an individual in possession of such material. These difficulties in detection and enforcement may well result in a lower charge rate for this offence than was hoped by its advocates both within and outside parliament. These limitations help to explain why many campaigners consider that the offence must go hand in hand with more effective regulation of the software publicly available to create deepfake images.
The third problem might be called an ethical one. The proposed offence, as currently drafted, would criminalise those who experiment with the production of deepfake images on their computers with no intent to cause harm to the individual depicted or to publish the material: in essence, individuals who create their own adult pornography.
Whilst it may seem like an overgeneralisation, that demographic is likely to include a significant proportion of adolescent males with ready access to, and knowledge of, AI software, combined with developing sexual interests and curiosities. It will then be for each local police force and the Crown Prosecution Service to exercise their discretion over how these individuals should be treated, balancing the interests of those (typically women) depicted in deepfake imagery against the risk of over-criminalising young men who perhaps need education and rehabilitation rather than a spell behind bars.