Generative AI in law: opportunity, risk and responsibility

AI promises efficiency in legal practice, but unchecked use risks errors, sanctions, and damage to public confidence
AI can improve the accuracy and efficiency of professional work, and its use within the legal sector has grown rapidly in recent years. Whilst the long-term impact of Generative AI on professional services is as yet unknown, many professionals, including solicitors, barristers, accountants and those in the financial services sector, are using and investing in AI tools as part of their practice, with the aim of improving their services, reducing costs for clients and meeting new client demands. A key attraction is that processes can be automated and streamlined with minimal human intervention.
AI has been used in litigation for some considerable time to assist in large-scale disclosure exercises involving voluminous and often duplicated electronic documents. The benefits in terms of time saved for lawyers, reflected in cost savings for clients, are well recognised. Generative AI is now increasingly being used to review and analyse documents, as well as for legal research and for drafting documents such as witness statements for use in court.
Although Generative AI offers new and exciting opportunities, if it is not used properly and its limitations are not respected, it brings additional risks for professionals, with serious implications when things go wrong.
In this article we explore the recent updates to the Law Society’s guidance for solicitors on the use of Generative AI, as well as the recent conjoined cases of Ayinde v the London Borough of Haringey and Al-Haroun v Qatar National Bank QPSC and another [2025] EWHC 1383 (Admin). These cases highlight the risks to the legal profession of using AI without putting in place appropriate checks and balances.
Updated Law Society guidance on Generative AI
The Law Society recently updated its guidance, “Generative AI: the essentials”, to take account of changes to the regulatory and policy landscape for AI and the legal sector, and to highlight the court’s comments in the important cases of Ayinde and Al-Haroun. The guidance is essential reading for all those involved in the legal sector.
The guidance cautions that, whilst the UK government is exploring the regulation of AI, its use is not yet regulated. Given this uncertainty about whether and how AI and Generative AI will be regulated in the future, it is important for those in the legal sector to recognise the risks in the current climate.
The Law Society guidance stresses that:
“It is important to note that Generative AI, like all forms of AI, lacks the capability to understand its output and meaning in the same way that humans do.
This means that Generative AI cannot autonomously validate or audit the accuracy of its results. It may even create false outputs.”
Legal professionals also retain their existing obligations and duties when using Generative AI, as the Law Society guidance stresses: “a solicitor’s professional duties and obligations, in particular their duties to the court and to the client, apply to work carried out by the solicitor. This is regardless of whether AI or other technologies were used to assist with that work. They apply whether AI was used by the solicitor personally or by anyone under that solicitor’s supervision.”
A solicitor’s duty to the court means that he or she must take positive steps to ensure information and documents submitted to the court are accurate and from genuine and verifiable sources. The solicitor also bears professional responsibility for the accuracy of witness statements and expert reports filed at court.
Furthermore, even if a solicitor is not being deliberately dishonest, misleading a client, the court, or anyone else is a breach of paragraph 1.4 of the SRA Code of Conduct.
The court in Ayinde and Al-Haroun highlighted both the dangers of AI-generated inaccuracies, such as the citation of non-existent cases or the application of incorrect legal principles to cases which do exist, and the importance of lawyers complying with their duties to the court.
The court’s decision in the Ayinde and Al-Haroun cases
The cases of Ayinde and Al-Haroun arose out of the actual or suspected use by lawyers of Generative AI tools to produce written arguments or witness statements which were not checked, so that false information was put before the court.
In Ayinde, Mr Ayinde brought judicial review proceedings in relation to his housing accommodation. The grounds for judicial review, settled by his barrister, included citations of five non-existent cases and an incorrect summary of legislation. The defendant’s legal representatives wrote to Mr Ayinde’s representatives indicating that they could not find the cases cited and notifying them of their intention to seek a wasted costs order. Mr Ayinde’s barrister prepared a response (reviewed and approved by his solicitors) describing the incorrect citations as a “cosmetic issue.”
In Al-Haroun, Mr Al-Haroun sought damages for the breach of a financing agreement. His solicitors submitted witness statements to the court containing citations of non-existent cases, relying on legal research provided by Mr Al-Haroun himself, which they admitted they had not verified. It later transpired that Mr Al-Haroun had used AI to compile that research.
Both these cases were referred to the Divisional Court under the court’s “Hamid jurisdiction,” which relates to the court’s inherent power to regulate its own procedures and to enforce duties that lawyers owe to the court.
The court commented that “artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained…[since] the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.”
The court recognised that “freely available artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”
It follows that “those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work.”
The court continued that “there are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities…and by those with the responsibility for regulating the provision of legal services.”
The court stressed the seriousness of professionals failing to check citations and the interpretation of legislation, and of presenting incorrect and misleading information to the court. It is essential that the courts can trust the veracity of the arguments placed before them, particularly the cases cited in support of those arguments. The court also made clear that any breach of professional standards arising from the misuse of AI to carry out legal research or to draft court documents is extremely serious and can result in sanctions for the legal professionals involved.
Responsible use of AI
AI offers huge potential, but the judgment in Ayinde and Al-Haroun, together with the Law Society’s guidance, highlights the need for vigilance when using AI. Courts will not excuse professionals who delegate judgment to machines. It is therefore essential to verify the sources of information, to check diligently that case citations are correct, and to read the full judgment of any case cited before relying on it to support legal argument. The key principle is: do not assume that information obtained from AI is correct; you must check the sources independently for yourself to ensure their accuracy.