AI and the evolving risks to justice
By Sue Prince
Courts confront growing challenges as artificial intelligence reshapes litigation, raising urgent questions about reliability, ethics, and procedural fairness
The life of the law is, as Oliver Wendell Holmes observed, always evolving. However, the pace at which it now needs to adapt is unprecedented, largely due to the rapid development of technology. In particular, the scale and speed of the expansion of artificial intelligence use across all sectors of civil society is striking.
The potential applications of AI are extensive, and even at this early stage of its development the legal profession increasingly asks not whether AI should be used, but how. For courts, the challenge is likewise not whether AI will feature in litigation, but how its use can be managed to preserve the integrity of the civil justice system. Generative AI, one form of the technology, only became widely available to the public in November 2022, offering a new and efficient way to analyse and synthesise documents, files and other forms of written language. Its arrival has accelerated adoption: the proportion of firms using AI across OECD countries rose from 8.7 per cent in 2023 to 20.2 per cent in 2025 (OECD, 2026). Similarly, according to Thomson Reuters, the proportion of lawyers using generative AI increased from 14 per cent in 2024 to 26 per cent in 2025.
Sir Geoffrey Vos, Master of the Rolls, has acknowledged the significant impact of technology on the way litigants prepare for court. Speaking at the Old Bailey in February, he observed: “AI is now being used by almost every individual litigant in person and small business. The first port of call used to be a lawyer if one was available and affordable. Now the first port of call is ChatGPT or CoPilot.” This observation reflects a fundamental shift in how legal problems are approached, even by those who might previously have sought legal advice. As AI continues to develop, it is therefore essential that lawyers, judges, regulators, and other legal professionals collaborate on a cohesive approach to the future, considering whether additional rules, ethical guidance and other structural guardrails may be needed to protect the integrity of the justice system and to promote innovative design. As an example, in December 2025, the European Commission for the Efficiency of Justice (CEPEJ) published guidelines to provide a framework for the implementation of AI in the administration of justice.
For individuals who come into contact with the legal system and the courts, AI platforms can appear to offer confident and persuasive legal sources and purposeful reasoning to help with their case. This is especially so for lawyers, judges, or litigants inexperienced in their use. AI platforms may also engage in what has been described as “sycophancy”, providing overly confident or affirming responses to users’ queries. Large language models, on which generative AI platforms are built, rarely respond that they “do not know” the answer to a question. Instead, they are designed to generate responses that appear helpful and coherent. AI platforms are also known to fabricate or “hallucinate” legal arguments or case citations. They may proffer biased commentary in support of an argument. For this reason, the use of these tools by judges and the court service, beyond basic administration, remains quite limited.
Many lawyers are now aware that if such material is not carefully checked, it can lead to significant inaccuracies in documents submitted to the court. Several cases, in the UK and abroad, have already illustrated this problem. In R (Ayinde) v London Borough of Haringey [2025] EWHC 1383, submissions to the court were found to contain fabricated legal citations and a misrepresentation of a statutory provision. Dame Victoria Sharp warned that “there are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused”. Similarly, in the Canadian case of Kapahi Real Estate v Elite Real Estate Club of Toronto 2026 ONSC 1438, the Ontario Superior Court found that a lawyer challenging a costs order had included fabricated quotations in submissions to the court. In both cases, the judges referred the lawyers involved to their professional regulators to consider whether disciplinary action was appropriate.
In the United States, courts have recently addressed the implications of AI for legal privilege. In US v Heppner 25 503 (SDNY) 2026, the District Court in New York held that documents generated using an AI platform could not attract legal professional privilege. The defendant had uploaded information obtained from his lawyer into a generative AI system while preparing for his criminal case. The court found that the communications lacked the elements required for attorney-client privilege, which depend upon “a trusted human relationship with a licensed professional”. Such a relationship, the court held, cannot exist with an AI system, despite its anthropomorphic qualities. The judge also noted that the terms and conditions of most AI platforms expressly reserve the right to share inputted data with third parties, meaning they are not designed to guarantee confidentiality. The case illustrates how the emergence of AI represents, in the court’s words, “a new frontier in the ongoing dialogue between new technology and the law”.
For lawyers, the professional responsibility issues raised are growing in importance. These include concerns about confidentiality and data protection when inputting information into AI systems, the risk of bias and assumptions, and the need to verify all sources and outputs so that a “human in the loop” remains responsible for the final work product. The complexity of the law, combined with differences across jurisdictions, court hierarchies and levels of authority, can produce imprecise or misleading results when AI tools are used without sufficient expertise or care. In addition, many authoritative legal sources are located behind paywalls and therefore may not be accessible to basic generative AI platforms. Questions of reliability therefore complicate the ethical use of AI in legal practice.
Reports of cases such as these have perhaps encouraged a more cautious approach to AI and less automatic trust in its outputs among lawyers, who are mindful of their professional liability if they rely on AI without building in scrupulous checks and balances.
The risks for litigants in person are considerable: the lure of generative AI and its authoritative tone can create a minefield, especially for users unaware of the pitfalls of chatbots and similar platforms. Judicial guidance issued by the Judiciary of England and Wales in October 2025 advises that where it appears an AI chatbot may have been used to prepare submissions or other documents, judges should enquire about the checks undertaken to verify the accuracy of the material and remind litigants that they remain responsible for the content they place before the court. The point about responsibility is challenging, however. According to the Administrative Justice Council (AJC) Addressing Disadvantage Report, published in 2025, litigants in person with little experience of law generally find navigating the legal system opaque, intimidating and complex. For this reason, they are more likely to turn to accessible, alternative means to find answers.
Recent cases also illustrate the broader challenges created by other new and emerging technologies, and how the pace of change creates further dangers ahead. In UAB Business Enterprise v Oneta [2026] EWHC 543, a case before the Insolvency and Companies Court concerning company directors, the court discovered that one of the claimants was wearing smart glasses while giving evidence. Smart glasses are a form of spectacles that allow the wearer to connect to a mobile phone, access the internet, and record images while they are being worn. Although the claimant denied that the glasses influenced his testimony, the judge concluded that he was being coached through a connection between his phone and a device in the smart glasses. As a result, the court rejected his evidence in its entirety. Had it been so minded, the court could have gone further and found a contempt of court.
Smart glasses are part of a broader category of wearable technologies which are becoming increasingly available. It is predicted that Apple will introduce a version of smart glasses in 2027, bringing the technology to an even wider market. In the UAB case, the court became aware of the technology partly because of the electronic interference it caused during the hearing. As these technologies become more sophisticated and less obtrusive, it may become increasingly difficult to distinguish smart glasses from ordinary spectacles. Courts may therefore face new questions about how to respond to wearable technologies that give users access to information, microphones, or cameras during proceedings, especially if such devices become as commonplace as the mobile phone.
Historically, courts have responded negatively to new technologies perceived to threaten the administration of justice. When cameras first began to appear in courtrooms, in criminal proceedings, Parliament concluded that they posed such a risk to justice that they should be banned. Section 41 of the Criminal Justice Act 1925 therefore prohibited photography not only inside courtrooms but also in the immediate vicinity of court buildings. The purpose of the provision was to protect the integrity and dignity of judicial proceedings. Although the law has since been modified by the Crime and Courts Act 2013 to allow limited exceptions, the general prohibition remains in force. Significantly, the ban survived the emergence of later technologies, including the advent of BBC television broadcasting in 1936. However, as wearable technologies become increasingly ubiquitous, and camera recordings become more commonplace, maintaining such restrictions and enforcing an outright ban may become more challenging and require a reimagination of legal concepts.
For judges and everyone using the courts, there is growing motivation to integrate AI into various aspects of the legal system. Government policy recognises the potential efficiencies AI may bring to the justice system, and the consequent positive impact on timeliness performance indicators. The Justice Secretary, David Lammy, has announced a programme of digital modernisation designed to improve procedural justice through greater use of technology. Planned initiatives include the use of AI tools to transcribe and summarise judgments, as well as AI-assisted systems to support court listing and case management.
These developments, and the groundswell of interest in AI tools and services, underline the need for members of the legal profession to be trained in digital literacy and the responsible use of AI. One key skill will be adaptability to change, as lawyers must keep abreast of developments to ensure ethical compliance. At the same time, avoidance of AI is unlikely to remain an option, as it continues to permeate every aspect of civil society, and law firm clients and litigants in person demand its use. Judges, policymakers and members of the legal profession should consider how AI can be integrated effectively to enhance the legal system whilst maintaining continued trust in the integrity of justice. This is likely to require innovative approaches and new ways of thinking about the justice system, whilst not forgetting the foundational principles of due process, accountability, transparency and access to justice.