Online disinformation and the law

Danielle Reece-Greenhalgh, a Partner at Corker Binning, assesses the gaps in criminal law when it comes to tackling disinformation online
The capacity to manipulate perception through digital content poses an escalating threat to democracy. In recent years, deepfakes have made global headlines: a video of Ukrainian President Volodymyr Zelenskyy apparently calling for troops to surrender was circulated during the early stages of Russia’s invasion in 2022; fabricated footage of Barack Obama was used to demonstrate how convincingly political figures can be depicted making statements they never made; and artificial intelligence (AI)-generated videos of public figures, such as Keir Starmer and Rishi Sunak, appeared in UK social media feeds ahead of anticipated elections.
While the UK has introduced regulatory mechanisms, most recently through the Online Safety Act 2023 (OSA), there remains a gap in criminal law when it comes to tackling the deliberate dissemination of false or misleading information online. Despite a growing awareness of the harms associated with disinformation, the UK’s criminal framework remains reactive and ill-equipped to address the scale of these challenges.
The rapid evolution of generative AI has made it easier than ever to create deepfakes. The Office of Communications, otherwise known as Ofcom, defines deepfakes as ‘audio-visual content that has been generated or manipulated using AI, and that misrepresents someone or something.’ It identifies three primary uses: to demean, defraud, or disinform. These categories often overlap. For instance, a deepfake sexual image or video may be used in sextortion, combining elements of humiliation and blackmail.
The OSA made initial strides through Section 188, which inserted a new Section 66B into the Sexual Offences Act 2003, criminalising the intentional sharing (or threatening to share) of images or films that appear to show someone in an intimate state, including synthetically generated images. The breadth of the statutory definition, encompassing altered or computer-generated content, criminalises deepfakes in much the same way that ‘pseudo-photographs’ of children have been criminalised under the Protection of Children Act 1978 (as amended). However, it is a narrow offence, specifically confined to intimate images, and does nothing to address the broader disinformation landscape.
The UK criminal landscape
False communications: Section 179 of the Online Safety Act 2023
The most bespoke criminal offence targeting online disinformation is found in Section 179 OSA. A person commits an offence under this section if they send a message containing information they know to be false, with the intention of causing ‘non-trivial psychological or physical harm’ to a likely audience, and with ‘no reasonable excuse’ for sending it.
This offence introduces a high threshold. It excludes misinformation (in which the content is created or disseminated without any appreciation of its falsity) and applies only to those who knowingly disseminate false information with intent to harm. In doing so, it excludes the vast majority of harmful content circulating online, especially conspiracy theories, pseudo-scientific claims, and politically motivated distortions.
There are three main factors that limit the application of Section 179. Firstly, prosecutors must establish that the defendant knew the information was false, a high evidential burden, particularly where the disinformation is ideological or belief-based.

Secondly, a person’s intention when spreading disinformation is crucial and ripe for challenge. For example, a person may accept (a) that they knew the information was false; (b) that it caused harm to its audience; and (c) that there was no reasonable excuse for sending it. However, they may reasonably say that, by sharing the information, they were merely reckless as to the harm that would be caused. That would not be sufficient for the offence to be made out: it must be shown that they intended to cause harm by sending the information.

Thirdly, the threshold of ‘non-trivial psychological or physical harm’ to a likely audience remains undefined. No statutory guidance exists on what constitutes sufficient ‘harm’ to meet the offence’s standard.
Communications offences: Section 1 of the Malicious Communications Act 1988 and Section 127 of the Communications Act 2003
Legislation concerning malicious communications involves similar hurdles with respect to the intention behind the act and the harm caused.
Section 1 of the Malicious Communications Act 1988 (MCA) requires that a person sends an ‘indecent or grossly offensive’ communication where their purpose (or one of their purposes) is that it should cause ‘distress or anxiety to the recipient or to any other person to whom he intends that it […] should be communicated.’ If the false information falls short of being ‘indecent or grossly offensive’ and/or is not intended to cause distress or anxiety (but merely to disinform), this offence is not committed.
Section 127 of the Communications Act 2003 (CA) goes some way towards remedying the deficiencies of Section 1 of the MCA. Under subsection 2, a person can be guilty of an offence where they send, or cause to be sent, using a public electronic communications network, a message that they know to be false. The sending must be for the purpose of ‘causing annoyance, inconvenience or needless anxiety to another.’
The Section 127 CA offence therefore aligns with its successor, Section 179 of the OSA, in that a defendant must be shown to have known that the information they were sharing was indeed false. However, the mens rea bar in the Section 127 CA offence is in one sense lower than that in the Section 179 OSA offence, requiring only an intention to cause ‘annoyance, inconvenience or needless anxiety’ rather than ‘non-trivial psychological or physical harm.’
However, in a world where disinformation may be used for much grander purposes, disrupting elections or undermining public health messaging, for example, this threshold is too individualistic. A defendant may consider the disinformation to be for society’s greater good: that by proliferating knowingly false information which influences public opinion in a particular direction, their intention is simply to ensure the dominance of that opinion over another, not to cause annoyance, inconvenience or needless anxiety to any one person in particular.
Disinformation in the context of UK elections
In the case of elections in the UK, there is a very niche criminal offence relating to the making or publication of a ‘false statement of fact in relation to [a] candidate’s personal character or conduct’ under Section 106 of the Representation of the People Act 1983. This offence is committed where a knowingly false statement is made before or during an election for the purpose of affecting the return of that candidate.
In the 2010 case of R (Woolas) v Parliamentary Election Court [2010] EWHC 3169 (Admin), the High Court was asked to consider granting permission to bring judicial review of the Election Court’s decision that a Labour Party candidate (Mr Woolas) had made false statements, to the criminal standard, about the personal character or conduct of the Liberal Democrat Party candidate (Mr Watkins). In finding against Mr Woolas, the Election Court had relied on the most recent authority on the issue (from 1911) that:
‘A politician for his public conduct may be criticised, held up to obloquy: for that the statute gives no redress; but when the man beneath the politician has his honour, veracity and purity assailed, he is entitled to demand that his constituents shall not be poisoned against him by false statements containing such unfounded imputations’ (The North Division of the County of Louth (1911) 6 O'M and H 103).
The High Court determined that the Election Court’s interpretation of ‘personal character or conduct’ had been too wide: a false statement could be about personal character or public character, but not both. Whilst granting permission to bring judicial review, it nevertheless upheld the criminal findings in respect of two of the false statements which went, in its view, to the personal character of Mr Watkins. In its concluding remarks, the Court made the following statement on the issues:
‘Imposing a criminal penalty on a person who fails to exercise care when making statements in respect of a candidate's political position or character that by implication suggest he is a hypocrite would very significantly curtail the freedom of political debate so essential to a democracy. It could not be justified as representing the intention of Parliament. However, imposing such a penalty where care is not taken in making a statement that goes beyond this and is a statement in relation to the personal character of a candidate can only enhance the standard of political debate and thus strengthen the way in which a democratic legislature is elected.’
Whilst this decision concerned a very narrow issue, the broader implications for the political disinformation debate are clear. To extend criminal law in a way which meaningfully proscribes the proliferation of false political information (including deepfake material) would represent a profound shift in policy by Parliament. Whilst there is, on one view, an argument to be made for such a shift in favour of a more transparent, truth-based online world, the potential implications for individuals and their free speech rights are significant.
The OSA: shifting the burden onto online platforms
Perhaps for this reason, the OSA focuses less on the responsibility of individuals and more on the responsibility of online platforms, particularly the larger social media platforms hosting vast swathes of content. It is by far the most significant legislation in the UK’s online regulation sector, and sits alongside the similar approach taken by the EU’s Digital Services Act. Key provisions include:
- Section 10, which requires platforms to remove illegal or harmful (but not false or misleading) content once they become aware of it.
- Section 71, which compels large platforms to enforce their own terms and conditions, including those addressing misinformation and disinformation.
- Section 152, which mandates the establishment of an Ofcom advisory committee on misinformation and disinformation.
These provisions, if properly enforced, would go some way towards tackling the disinformation problem. However, there is no positive obligation on internet platforms to actively work against those proliferating false information via their services. Moreover, the requirement on platforms to self-regulate is largely redundant where many of the biggest names are increasingly pulling away from their regulatory responsibilities and from any commitment to thorough fact-checking.
Meta (the parent company of Facebook, Instagram, and WhatsApp) announced in January 2025 that it would follow X’s example and abandon its independent fact-checkers in favour of ‘community notes’, placing the burden of truth verification on users. This approach may be adequate for low-stakes, individualised content moderation, but it is poorly suited to addressing coordinated, high-impact disinformation.
The UK is making incremental progress in responding to online disinformation, particularly through general platform regulation via the OSA, and the criminalisation of certain specified forms of deepfake abuse. However, the current criminal legal framework is incapable of addressing the widespread proliferation of false or misleading information in any meaningful way. On a fundamental level, this is unsurprising. It is an onerous burden for the UK (or any country) to use its domestic criminal law to effectively tackle a problem that has no regard for jurisdictional borders.