Generative AI and the Irreducible Duty

By Solicitors Journal Editorial
As generative AI becomes embedded across UK legal practice, courts and regulators are converging on a clear rule: liability remains human
Generative AI has moved from experimental novelty to operational reality in many UK law firms. Drafting tools, research assistants, document summarisation engines and internal knowledge bots are now embedded—formally or informally—across contentious and transactional practice. Yet as adoption accelerates, courts and regulators have converged on a simple allocation rule: liability is not outsourced to the model.
Lawyers and firms remain responsible for what they file, advise, draft and bill—regardless of whether a human, a junior, an outsourced provider or a machine produced the first draft.
That proposition is now explicit in senior judicial guidance (first issued in December 2023 and updated in October 2025), which warns that public large language models (LLMs) can fabricate cases, citations, quotations and even legislation. Accuracy must be checked before use. Confidentiality must be protected. Judges and legal representatives are personally responsible for material produced in their name.
The first wave of “AI liability” in legal practice is not, at least yet, dominated by classic client negligence claims. Instead, it is emerging through court-process sanctions: wasted costs, strike-outs, referrals to regulators and potential contempt consequences—driven by fabricated authorities, mis-citations and AI-assisted drafting failures.
The Divisional Court’s decision in Ayinde v London Borough of Haringey; Al-Haroun v Qatar National Bank (June 2025) crystallised that risk. Exercising its Hamid jurisdiction, the court treated the submission of fictitious or unverifiable authorities as extremely serious, framed it as a breach of duties to the court and the administration of justice, and directed its judgment to professional bodies with an invitation to act urgently.
For professional negligence and risk practitioners, the message is stark: AI does not reduce the duty. It may expand the evidential burden of proving reasonable care.
Regulatory expectations: an AI overlay on existing duties
The regulatory picture in England and Wales is not one of a new standalone AI rulebook. Instead, it is best understood as an “AI overlay” on existing professional obligations.
The UK’s central approach to AI regulation, articulated in the 2023 AI White Paper and reinforced in the government’s February 2024 response, is pro-innovation and principles-based. Rather than legislating immediately, the government has steered regulators to apply existing powers and publish strategic updates.
In legal services, that means competence, honesty, integrity, confidentiality and supervision remain the core binding duties. What changes is how those duties must be operationalised in an AI-enabled workflow.
The Solicitors Regulation Authority’s November 2023 Risk Outlook addressed generative models such as ChatGPT directly and acknowledged rapid adoption, including “shadow AI” use, where staff deploy tools without formal firm approval. The SRA framed its approach as outcomes-focused: technology is permitted, but accountability is required.
Its Innovate compliance materials (most recently updated in early 2026) emphasise governance frameworks, leadership oversight, risk and impact assessment, training, monitoring and evaluation. Crucially, the SRA signals that the firm’s compliance officer for legal practice (COLP) bears responsibility for regulatory compliance when new technology is introduced, with board-level oversight expected in managing technology-failure risks.
In negligence terms, governance artefacts become evidential artefacts. If an AI-driven mistake crystallises into a claim, documented approvals, risk assessments, training logs and defined use cases may influence whether the firm appears to have acted reasonably.
At the oversight level, the Legal Services Board has reinforced this direction. In April 2024, it issued statutory guidance under the Legal Services Act framework, pushing frontline regulators to promote technology and innovation while maintaining public trust. In a parallel AI-specific update, the LSB described its approach as encouraging responsible innovation that “commands public trust”.
The LSB does not create new negligence duties. But it accelerates the “standard-setting ecosystem”: regulators publish expectations; professional bodies interpret them; firms embed them; courts treat them as reflective of reasonable practice. In a principles-led environment, the standard of care can evolve rapidly.
Courts and procedural integrity: the duty to verify
Senior judicial AI guidance is unusually direct about LLM failure modes. It warns that public chatbots do not answer from authoritative legal databases; outputs can be inaccurate or biased; and tools may “hallucinate” by fabricating cases, citations, quotes or legislation.
It also addresses confidentiality: do not enter private or confidential information into public chatbots, and assume that anything entered could become public.
Most significantly, it reiterates that legal representatives are responsible for what they put before the court. Lawyers may need to be reminded to verify independently any AI-generated research or citations.
That framing moves AI risk from an internal compliance issue to a matter of procedural integrity. Failure is not merely an internal mistake; it is a potential affront to the administration of justice.
The Divisional Court’s judgment in Ayinde / Al-Haroun demonstrates how courts are now responding. The case involved multiple authorities that were fictitious or did not support the propositions advanced. The court described reliance on a client’s AI-generated research without independent verification as a “lamentable failure” to comply with the basic requirement to check accuracy of material put before the court.
The judgment’s “further steps” section is striking. The court stated that promulgating guidance alone is insufficient; more must be done to ensure guidance is followed. It directed the judgment to professional bodies and invited urgent consideration of additional measures.
Closely related is Bandla v Solicitors Regulation Authority (May 2025), where the High Court emphasised decisive action to protect process integrity in the face of non-existent authorities. Even where AI use is denied, the judicial concern is the same: fake authority in formal documents is treated as an abuse-of-process risk.
For litigants in person, courts have shown contextual leniency. In Olsen v Finansiel Stabilitet A/S (January 2025), the High Court encountered fabricated authority and considered contempt, but was not satisfied to the criminal standard that the litigants knew the material was fake. In Zzaman v HMRC (FTT, April 2025), the tribunal warned that reliance on AI without human checks is dangerous and identified verification against authoritative sources as the “critical safeguard”.
For regulated lawyers, the culpability analysis is likely to default upwards: professional obligations include competence, supervision and duties to the court, and the standard of care is correspondingly higher.
Professional bodies converge on verification and confidentiality
The Law Society’s updated “Generative AI – the essentials” guidance links the Ayinde judgment directly to professional obligations. Solicitors must take “positive steps” to ensure materials filed are accurate and from genuine sources. AI outputs must be checked against reliable, authoritative sources. Misuse may breach the SRA Code, and sanctions may include referrals and serious court consequences.
The Bar Council’s January 2024 guidance is similarly emphatic. It frames hallucinations, misinformation and confidentiality risk as core hazards. Sharing privileged, confidential or personal data in prompts may breach professional duties and trigger disciplinary or legal liability.
Across branches of the profession, the message is consistent: human verification is not optional.
Data protection: when prompts become processing
AI negligence rarely travels alone. It often intersects with confidentiality and data protection.
The Information Commissioner’s Office has made clear that AI and data protection are intertwined. Prompts and outputs can constitute personal data processing across the AI lifecycle, potentially triggering DPIAs, transparency obligations, security measures and rights-handling requirements.
The ICO’s generative AI consultation response highlights debates around lawful basis (including web-scraping), purpose limitation, output accuracy, individual rights and controllership in AI supply chains. It also signals that guidance will continue to evolve, particularly following legislative developments such as the Data (Use and Access) Act 2025.
For law firms, this intersects with privilege. Even “anonymised” prompts may contain contextual clues or metadata amounting to personal data. Supplier arrangements may involve cross-border processing.
Judicial guidance underscores the risk: entering private or confidential information into public chatbots may effectively make it public.
In negligence terms, a hallucinated authority is one problem. A confidentiality breach via prompting is potentially a much larger one—combining breach of retainer, breach of confidence, data protection exposure and reputational damage.
Negligence analysis: standard of care in an AI workflow
AI does not dilute professional duty. It often increases the surface area of what must be shown to be reasonable.
The SRA Code requires competent service, maintenance of professional knowledge and skills, and effective supervision. Supervisors remain accountable for work done through others. The Code of Conduct for Firms requires systems and controls that support ethical, competent service delivery.
Senior judicial guidance mirrors this. Accuracy must be checked before AI-provided information is relied upon. Legal representatives are responsible for what they put before the court.
In a future negligence claim, the “reasonable solicitor” benchmark is likely to be argued by reference to:
Known AI failure modes (hallucinations, non-authoritative outputs).
Available professional and judicial guidance.
The firm’s own policies and representations.
Once risk is known and guidance exists, failure to implement basic controls—verification, approved tools, client-data restrictions, supervision checks—becomes easier to plead as breach.
Delegating research or drafting to AI can be analogised to delegating to an unqualified assistant: permissible only with effective supervision, clear scope and verification.
Litigation teams are particularly exposed because of duties to the court. A fabricated authority can trigger immediate adverse consequences: strike-out, wasted costs, regulatory referral.
Transactional teams are not immune. A mis-drafted clause or a mis-summarised risk in AI-assisted work can produce quantifiable loss.
Evidential burdens will be process-heavy. Claimants are likely to seek disclosure of:
Tool identity and configuration.
Who used it, when and for what task.
Prompts, outputs and edits.
Checking steps undertaken.
Governance artefacts (policies, training logs, risk assessments).
The Law Society guidance advises documenting inputs and outputs where tools do not provide history, and emphasises fact-checking and authentication. The SRA stresses governance and monitoring.
An audit trail is not just good practice; it is potential litigation armour.
Insurance reality: MTC floor, underwriting shift
Solicitors’ professional indemnity insurance in England and Wales operates under the SRA’s Minimum Terms and Conditions (MTC). Qualifying insurance must indemnify against civil liability arising from private legal practice and operates on a claims-made basis.
This regime constrains the ability to exclude AI-specific negligence from qualifying PII. If the claim is, in substance, civil liability arising from private legal practice, it falls within the core trigger.
However, market behaviour can shift through pricing, capacity, proposal-form scrutiny and risk-management conditions.
Broker commentary indicates underwriters are increasingly asking about AI use: what tools are deployed, in which practice areas, under what policies, with what training and controls.
Insurer and broker guidance aimed at law firms emphasises managed adoption, verification culture and integration with cyber controls. AI governance is treated as correlated with multiple claim types—negligence, breach of confidence, cyber losses.
Coverage friction may still arise at the margins:
Whether a loss is a covered civil liability or a non-indemnifiable sanction.
Whether conduct is characterised as intentional or dishonest.
Boundary disputes between PII and cyber policies.
Aggregation issues if systemic AI defects affect multiple matters.
Beyond solicitors’ PII, non-PII lines (D&O, cyber, tech E&O) are experimenting with AI exclusions, sublimits and endorsements. International reporting suggests a trend towards containing systemic AI risk through tailored policy wording.
One notable gap persists: publicly available, UK-specific data on paid solicitors’ PII claims attributable specifically to generative AI is sparse. Much commentary is anticipatory rather than retrospective.
Contractual risk allocation: retainers and vendors
Professional guidance suggests AI use may need to be discussed with clients to avoid misunderstanding.
The Law Society notes that the SRA does not mandate specific generative AI disclosures, but advises that solicitors and clients should agree whether and how such tools may be used, with clear communication preventing misunderstanding.
Best practice may therefore include:
Stating whether AI tools may be used and for what tasks.
Confirming that AI-assisted work will be supervised and verified.
Restricting use of client confidential information in public tools.
Explaining how AI affects speed, cost or staffing.
Marketing language should be cautious. Absolute claims—“error-free”, “AI-checked”—may become misrepresentation fodder if something goes wrong. Process-based descriptions are safer.
Vendor contracts also matter. The Law Society recommends supplier due diligence and negotiation of warranties, indemnities and limitations. For negligence defensibility, procurement terms affect auditability, explainability and recourse if vendor defects cause systemic errors.
In transactional contexts, AI-assisted due diligence raises additional exposure. If AI review misses a red flag, loss may surface as a professional negligence claim, a warranty dispute or a warranty and indemnity (W&I) claim. Insurers may scrutinise diligence processes closely where AI is embedded.
Practical playbook: reducing claims and sanctions
Regulatory and judicial materials collectively suggest what “good” looks like.
A defensible AI governance model may include:
An approved-tools register distinguishing public from enterprise tools.
Defined permitted use cases by practice area.
“Red zone” prohibitions (no confidential data into public tools; no unverified citations in court documents).
Supervisory gates for court-facing documents.
Verification protocols against authoritative databases.
Documentation of prompts, outputs and checks.
Training and awareness programmes.
Integrated data governance and DPIA processes.
Renewal-ready documentation for insurers.
Litigation teams, in particular, should adopt explicit “verify and evidence verification” protocols. The “critical safeguard” identified by tribunals is checking outputs against authoritative sources.
AI governance must also connect to cyber governance. Threat actors are using AI to enhance phishing and social engineering. AI liability can arise from fraud as well as hallucination.
What comes next?
Three vectors merit close watch.
First, procedural reform. The Civil Justice Council’s AI working group, announced in June 2025, signals momentum towards potential procedural rule changes or explicit disclosure/verification expectations in court document preparation.
Second, regulatory follow-through. The Divisional Court’s call in Ayinde for urgent professional-body action suggests courts expect visible enforcement and adaptation.
Third, broader AI governance debates. While the UK remains committed to a pro-innovation model, parliamentary debate continues. The direction of travel is unlikely to absolve lawyers of responsibility.
For now, the liability rule is simple: AI is a workflow tool, not a duty shield. The reasonable solicitor in 2026 is one who understands known AI failure modes, embeds verification, protects confidentiality, documents process and communicates transparently.
In a profession built on trust and procedural integrity, generative AI may increase productivity. But when it comes to liability, the responsibility remains irreducibly human.

