Can artificial intelligence be trusted for legal research? Lessons from Ayinde
James Tumbridge, a Partner at Keystone Law, looks at the decision in Ayinde, R (On the Application Of) v London Borough of Haringey [2025] EWHC 1383 (Admin) (6 June 2025) concerning the duties that lawyers owe to the court and the actual or suspected use of artificial intelligence by lawyers to generate legal documents or arguments without adequate checking of the outputs
Artificial intelligence (AI) has been thought of as the solution to everything for the past couple of years. The use of AI in legal disputes presents a positive opportunity, but issues have already been spotted, resulting in various guidelines and rules being issued.
The first thing to understand is that not everything is generative AI (genAI), meaning you ask it for something and it generates an output. It is the generative product of AI that has caused most concern in legal proceedings. We have already seen embarrassment in US cases: in May 2023, a New York lawyer used an AI tool, ChatGPT, for legal research, but the results included made-up cases, and Judge Castel demanded that the legal team explain itself. There have been issues in Canada too. In April this year, in the case of Hussein v. Canada, the lawyer apparently relied on a tailored legal genAI tool called Visto.ai, designed for Canadian immigration cases, yet still ended up using fake cases in the submissions and citing real cases for the wrong points. Canada requires the disclosure of the use of AI, but that has not stopped these mistakes.
The judge ruling on the case commented:
“[39] I do not accept that this is permissible. The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law.”
“[40] In fact, the two case hallucinations were not the full extent of the failure of the artificial intelligence product used. It also hallucinated the proper test for the admission on judicial review of evidence not before the decision-maker and cited, as authority, a case which had no bearing on the issue at all. To be clear, this was not a situation of a stray case with a variation of the established test but, rather, an approach similar to the test for new evidence on appeal. As noted above, the case relied upon in support of the wrong test (Cepeda-Gutierrez) has nothing to do with the issue. I note in passing that the case comprises 29 paragraphs and would take only a few minutes to review.”
The use of AI in English courts
English courts received guidance for judges on the use of AI in December 2023. One of the key warnings to judges was that the ‘[C]urrently available LLMs appear to have been trained on material published on the internet. Their ‘view’ of the law is often based heavily on US law, although some do purport to be able to distinguish between that and English law.’
The English courts do not ban the use of AI, but both judges and lawyers have been told clearly that they are responsible for the material produced in their name. In England, AI can be used, but the human user is responsible for its accuracy and for any errors. In November 2023, the Solicitors Regulation Authority (SRA) issued guidance on the use of AI; the Bar Council published its own guidance in January 2024; and, more recently in 2025, the Chartered Institute of Arbitrators has also issued guidance. The common thread to all is that humans must check that the output is correct!
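As a practical illustration, consider what a minimal human-in-the-loop check might look like in code. The Python sketch below flags citations that return no results on the National Archives’ Find Case Law service. The Atom feed endpoint and its query parameter are assumptions about that service’s public interface and should be verified against its current documentation; a ‘not found’ result should prompt a manual check, not an automatic conclusion.

# Minimal sketch: flag citations that Find Case Law cannot locate.
# The Atom endpoint and its "query" parameter are assumptions; check
# the National Archives' current API documentation before relying on this.
import xml.etree.ElementTree as ET

import requests

FEED_URL = "https://caselaw.nationalarchives.gov.uk/atom.xml"
ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

def find_case_law_hits(citation: str) -> int:
    """Return the number of search results for a neutral citation."""
    resp = requests.get(FEED_URL, params={"query": citation}, timeout=30)
    resp.raise_for_status()
    return len(ET.fromstring(resp.content).findall("atom:entry", ATOM_NS))

citations = [
    "[2025] EWHC 1383 (Admin)",  # real: Ayinde
    "[2020] EWHC 9999 (Fake)",   # hypothetical citation, for illustration
]
for c in citations:
    status = "found" if find_case_law_hits(c) else "NOT FOUND - verify manually"
    print(f"{c}: {status}")

No tool of this kind removes the lawyer’s duty: a citation that is found must still be read to confirm it says what the submission claims.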
England has been looking to technology, and potentially AI, to help with cases for some time. Back in March 2024, algorithm-based digital decision making was already working behind the scenes in the justice system: Lord Justice Birss explained then that an algorithmic formula was already solving a problem at the online money claims service, applied where defendants accept a debt but ask for time to pay. Looking to the future, Birss LJ said: “AI used properly has the potential to enhance the work of lawyers and judges enormously.” In October 2024, the Lord Chancellor and Secretary of State for Justice, Shabana Mahmood MP, and the Lady Chief Justice, The Right Honourable the Baroness Carr of Walton-on-the-Hill, also echoed the potential of technology for the future of the courts and justice system. Nothing is perfect though, and alongside accuracy there is concern about the ethics of AI use. On ethical AI and international standards, the UK promotes the Ethical AI Initiative and the international standard ISO 42001, the AI management system standard, which may at some point be adopted in English procedure. In April 2025, the judiciary updated their guidance to judicial office holders on the use of AI. Yet all this guidance seems to have gone unheeded: the need for a clearer understanding of the rules, and for the policing of lawyers, is clear following the case of Ayinde, R (On the Application Of) v London Borough of Haringey [2025] EWHC 1383 (Admin) (6 June 2025).
The case
This important case was heard by the President of the King’s Bench Division and Mr Justice Johnson. It brought together two cases involving lawyers’ use of genAI to produce written legal arguments or witness statements which were not then checked, so that false information was put before the court.
The facts of these cases raise serious issues about the competence and conduct of the lawyers concerned. Some consider that this also means we need to examine the adequacy of the relevant training, supervision and regulation, although perhaps it is easier to ask: how do we know that a lawyer takes their duties seriously?
The importance of the case is perhaps best summarised by this quote from the judgment:
“Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained. As Dias J said when referring the case of Al-Haroun to this court, the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.”
Clearly, lawyers, whether barristers or solicitors, need to keep their existing duties in mind. The SRA Code of Conduct means solicitors are under a duty not to mislead the court or others, including by omission (Rule 1.4). They are under a duty only to make assertions or put forward statements, representations or submissions to the court or others which are properly arguable (Rule 2.4). Further relevant rules include the duty not to waste the court’s time (Rule 2.6) and the duty to draw the court’s attention to relevant cases which are likely to have a material effect on the outcome (Rule 2.7). Most importantly, a solicitor remains accountable for the work (Rule 3.5).
The court has a range of sanctions where a lawyer breaches the rules, including public admonition of the lawyer, a costs order, a wasted costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings and, if the court thinks it warranted, referral to the police.
Placing false material before the court with the intention that the court treats it as genuine may, depending on the person’s state of knowledge, amount to contempt. The difficulty is knowing what level of knowledge is needed, and how it will be proven.
In the case of Ayinde, it was submitted that the threshold for contempt proceedings was not met, because counsel did not know the citations were false.
If the judiciary are truly worried about the misuse of AI, they may need to adjust the rule so that the state of knowledge is irrelevant: strict liability would be the stick that ensures lawyers do not fail to check AI work product.
The background to Ayinde
The case originated with a claim by Mr Ayinde, represented by the Haringey Law Centre. Mr Victor Amadigwe, a solicitor, is the Chief Executive of the Haringey Law Centre; Ms Sunnelah Hussain is a paralegal working under his supervision; and Ms Sarah Forey was the barrister instructed. The grounds for judicial review were settled and signed by Ms Forey, but she used AI and made inaccurate legal submissions, misstating provisions of the Housing Act 1996 and citing five fictitious cases. This came to light when the defendant’s legal team wrote asking for copies of the cases they could not find. The errors were compounded by the explanations offered for them, which in fact explained nothing. In a hearing on wasted costs, Mr Justice Ritchie said:
“I do not consider that it was fair or reasonable to say that the erroneous citations could easily be explained and then to refuse to explain them.”
Mr Justice Ritchie then found that the behaviour of Ms Forey and the Haringey Law Centre had been improper, unreasonable and negligent. Before the Administrative Court, Ms Forey denied using AI tools to assist her with legal research and submitted that she was aware that AI is not a reliable source. She nevertheless accepted that she had acted negligently and apologised to the court.
Ms Hussain and Mr Amadigwe of the Haringey Law Centre also apologised to the court. Mr Amadigwe explained it was not their practice to check what counsel produced.
The Administrative Court’s findings
The Court was far from impressed with Ms Forey’s explanations, saying:
“Ms Forey refuses to accept that her conduct was improper. She says that the underlying legal principles for which the cases were cited were sound, and that there are other authorities that could be cited to support those principles. She went as far as to state that these other authorities were the authorities that she ‘intended’ to cite (a proposition which, if taken literally, is not credible). An analogy was drawn with the mislabelling of a tin where the tin, in fact, contains the correct product. In our judgment, this entirely misses the point and shows a worrying lack of insight. We do not accept that a lack of access to textbooks or electronic subscription services within chambers, if that is the position, provides anything more than marginal mitigation. Ms Forey could have checked the cases she cited by searching the National Archives’ caselaw website or by going to the law library of her Inn of Court. We regret to say that she has not provided to the court a coherent explanation for what happened.”
The Court went on to find that the threshold for contempt was met, but determined that counsel’s junior status, together with her having already been publicly admonished and reported to the Bar Standards Board, was sufficient sanction. Mr Amadigwe was referred to the SRA, and Ms Hussain, as a paralegal under supervision, faced no punishment.
The background to the Al-Haroun case
Mr Al-Haroun sought substantial damages for alleged breaches of a financing agreement. His solicitor was Mr Hussain of Primus Solicitors, and the defendants were Qatar National Bank and QNB Capital. The claimant’s lawyers challenged an extension of time for the defence, and their submissions caused the court concern. Mrs Justice Dias dismissed the challenge, commenting:
“The court is deeply troubled and concerned by the fact that in the course of correspondence with the court and in the witness statements of both Mr Al-Haroun and Mr Hussain, reliance is placed on numerous authorities, many of which appear to be either completely fictitious or which, if they exist at all, do not contain the passages supposedly quoted from them, or do not support the propositions for which they are cited: see the attached schedule of references prepared by one of the court’s judicial assistants. It goes without saying that this is a matter of the utmost seriousness. Primus Solicitors are regulated by the SRA and Mr Hussain is accordingly an officer of the court. As such, both he and they are under a duty not to mislead or attempt to mislead the court, either by their own acts or omissions or by allowing or being complicit in the act or omissions of their client. The administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported. Putting before the court supposed ‘authorities’ which do not in fact exist, or which are not authority for the propositions relied upon is prima facie only explicable as either a conscious attempt to mislead or an unacceptable failure to exercise reasonable diligence to verify the material relied upon. For these reasons, the court considers it appropriate to refer the case for further consideration under the Hamid jurisdiction, pending which all questions of costs are reserved.”
The submissions included 18 made-up cases, and many of the other cases cited did not support the points submitted. Mr Al-Haroun admitted that the citations were generated using publicly available AI tools, legal search engines and online sources; he submitted that he had complete but misplaced confidence in the authenticity of the material he put before the court. Mr Hussain admitted that his witness statement contained citations to non-existent authorities, based on his client’s research: an interesting approach, a solicitor relying on their client for legal research. Mr Hussain reported himself to the SRA, and the Court rightly said that its concern in this case was with the conduct of the lawyers, not the clients. The Court found that Mr Hussain and Primus Solicitors had allowed a “lamentable failure to comply with the basic requirement to check the accuracy of material” and emphasised that lawyers have a professional responsibility to ensure the accuracy of the material they put before the court. The Court then left the regulator to deal with Mr Hussain.
Conclusion
The future is clear: AI will be part of the administration of justice. What is also clear is that there is proper concern about its use. There will likely need to be procedural requirements for the disclosure of its use, and users must, in general, own the outcomes as their responsibility. AI in our justice system can only be safely used with proper human oversight and responsibility.