

Navigating the deepfake dilemma in sexual harassment

Practice Notes

By Alan Collins and Hannah Hodgson

Alan Collins and Hannah Hodgson explore the dark nexus of AI and sexual harassment, unveiling the troubling rise of deepfake abuse and the urgent legal challenges it poses

Artificial intelligence (AI) presents a range of risks in relation to sexual harassment. Concerningly, we are seeing the use of AI to sexually abuse victims online occur more frequently. Although the virtual world is positive and exciting for some, this is not the case for all.

Technological advancements have made creating and sharing images easier than ever, facilitated by our ever-evolving social media platforms, smartphones, and other devices. However, as technology advances, so does the risk of sexual harassment via such platforms, and so it is essential that the laws in the UK keep up with the times.

What are deepfake sexual harassment images?

Deepfake images refer to digitally manipulated images or videos created using artificial intelligence techniques. These manipulated visuals can make it appear as though someone is doing or saying something they never actually did. In the context of sexual harassment, deepfake images can be used to create explicit or pornographic content featuring individuals without their consent.

The risks associated with deepfake images in relation to sexual harassment are significant. Deepfakes are often used as a form of "revenge porn", where someone's face is superimposed onto explicit content, which in turn can lead to severe emotional distress, reputational damage, and even blackmail.

A further alarming feature of such AI created images is that they may also be used to impersonate individuals, making it difficult to distinguish between genuine and manipulated content. This can undermine trust and lead to the spread of false information or malicious intent.

We have seen headlines about the use of deepfake images against celebrities. Only last month, Twitter/X was used to circulate vulgar, non-consensual images of Taylor Swift that were sexually explicit and pornographic in nature. According to reports, the images had 27 million views within just nine hours of being uploaded and made their way onto other websites within a short period of time.

Although the images were a clear violation of Twitter/X's policy, which states "You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm ('misleading media')", there is concern that Twitter/X and other social media platforms and websites do not have effective measures in place to monitor such content.

Other concerns

From another perspective, there is concern in the development of AI-powered chatbots or virtual assistants that may engage in inappropriate or sexually suggestive conversations. These chatbots are programmed to respond to user input, and if not properly designed and monitored, they can perpetuate or even encourage harassment. AI chatbots often feature on apps that are popular with children, so it is critical the risk of exposure to sexually explicit language and harassment is monitored to ensure that children are safe when using such apps. When it comes to children particularly, many experts fear an epidemic of child sexual abuse facilitated by AI.

Additionally, the impact of deepfake content containing manipulated images, videos, or even audio recordings falsely depicting someone engaging in sexual acts or making explicit statements is frankly unknown. There is concern that, at the rate AI is advancing, there will soon come a time when it will be close to impossible to differentiate real content from fake, which would have very serious and significant consequences for victims of online sexual harassment, as well as for conviction rates.

What is the law?

To constitute a sexual offence, current legislation requires some level of physical contact, which is clearly not present in virtually perpetrated assaults. The UK does, however, have specific laws in place to address revenge porn under the Criminal Justice and Courts Act 2015. This legislation criminalises the sharing of private sexual photographs or films with the intent to cause distress or harm. It also covers situations where the images have been altered using AI or other technologies, but as new methods are used by AI to target victims, it is hoped the legislation will be far-reaching enough to ensure justice.

We have also seen the implementation of the Online Safety Act 2023, which seeks to minimise the risks of online sexual abuse by empowering the regulator Ofcom to place new legal duties and responsibilities on online service providers to keep people safe online.

AI is a complex and constantly evolving field, which makes it challenging for the law to keep up with the latest developments and create comprehensive regulations. Additionally, AI systems can be opaque, making it difficult to understand how they make decisions or to assign legal responsibility when something goes wrong. As AI becomes more prevalent, it is crucial for legal frameworks to adapt and address the unique challenges posed by this technology.

Current issues

The difficulty of convicting someone for deepfake image sexual abuse depends on a number of factors. Some of the difficulties involved include:

  • Jurisdictional challenges: deepfake images can be created and distributed across borders, making it difficult to determine which jurisdiction's laws apply and to coordinate international cooperation in investigations.
  • Identifying the perpetrator: perpetrators of online sexual abuse often hide behind anonymous or fake identities, making it difficult to identify and locate them.
  • Technological complexity: deepfake technology is constantly evolving, making it challenging to detect and attribute the creation of deepfake images to a specific individual. Virtual tools like encryption can make it harder to track and identify perpetrators.
  • Digital evidence: online sexual abuse often leaves behind a digital trail, but gathering and preserving this evidence can be complex. It requires expertise in digital forensics to extract and authenticate the evidence, which can be challenging for law enforcement.
  • Consent and knowledge: proving that the victim did not consent to the creation or distribution of the deepfake image, and that they were unaware of its existence, can be challenging.
  • Victim cooperation: victims of sexual assault may face challenges in being believed or taken seriously due to societal biases, victim-blaming, or lack of understanding about the dynamics of online abuse. This in turn may prevent people from coming forward, and a lack of reporting can hinder evidence gathering and conviction rates.

Claiming compensation

Despite issues within criminal law, victims of deepfake and AI-generated sexual harassment images may still be able to claim compensation even if there has been no criminal conviction. To ensure the best possible prospects, victims should report the online crime as soon as possible and try to keep a virtual log of evidence. However, due to the nature of this type of abuse, it is understandable that this may not always be possible, and so seeking legal advice at the earliest opportunity is advisable.

Victims may feel that because the abuse has taken place online, the resulting harm is somehow less. This is absolutely not the case. We know the devastating and long-lasting impact online sexual abuse, such as deepfake images or online grooming, can have on a victim. It is widely recognised that it takes a tremendous amount of courage for victims to speak out about this type of abuse.

Going forward

To combat the risks of deepfake images, it is crucial to raise awareness about their existence and educate individuals on how to identify and report them. Developing advanced detection technologies and legal frameworks to address deepfake-related offences is essential.

Despite challenges, it is evident that legal systems and regulators are working to adapt and develop strategies to address the issue of deepfake image sexual abuse and sexual harassment. It is important to report such incidents to the appropriate authorities and seek legal advice. And it is essential to support victims and continuously adapt laws and investigative techniques to effectively combat online sexual abuse and sexual harassment.

Alan Collins is a partner and Hannah Hodgson is a paralegal in the Sex Abuse team at Hugh James Solicitors