Does AI Help Or Hinder Litigants-in-Person in the Family Court?

Generative AI is now ubiquitous in family litigation. Parents in private law children disputes, former spouses in financial remedy proceedings, and parties to cross-border cases are turning to AI chatbots and LLM drafting tools when they cannot afford, or cannot access, legal advice. Courts in England and Wales, Scotland, Canada, Australia, and the United States are all responding to this change, but not fast enough to negate some of the inequalities and abuses that can arise from litigants in person (LiPs) using AI in the Family Courts.
For LiPs, AI offers speed, access to primary and secondary legal sources, and apparent authority. It also carries the real risk of fabricated case law, procedurally incoherent or incorrect applications, and new opportunities for control and abuse. The question for the entire legal profession is whether the use of AI by LiPs in the Family Court opens up access to justice, creating a more even playing field (especially where only one party has legal representation), or provides an unstable tool that can be weaponised by the stronger, more tech-confident party to proceedings.
The limits of judicial sympathy
The clearest statement from the family jurisdiction of England and Wales comes from D (A Child) (Recusal) [2025] EWCA Civ 1570. The mother, an LiP, relied on a skeleton argument generated by AI that mixed genuine authorities, misapplied the law, and included citations that did not exist. Lord Justice Baker observed that it was “entirely understandable that LiPs should resort to artificial intelligence for help”, reflecting the reality of post-LASPO family justice where digital tools are used to plug the advice gap. At the same time, the Court stressed that sympathy for the position of parents does not lessen their responsibility to ensure that material placed before the court is accurate and reliable, and warned of the costs and delays caused when judges must unpick hallucinatory citations and confused arguments.
Guidance for the judiciary and wider profession
In England and Wales, the Judicial Guidance on Artificial Intelligence issued in October 2025 highlights hallucinations, training data bias, and the danger of putting confidential information into public tools, while accepting that AI-assisted material now appears frequently in litigation. The guidance is directed at judges but anticipates procedural responses to AI use by parties, particularly LiPs.
In Scotland, the Scottish Courts and Tribunals Service AI policy sets principles of fairness, transparency, and equality and (for the time being) excludes AI from decision-making in courts and devolved tribunals. The current emphasis is on transcription, summarisation, and translation tools, which will indirectly shape family work as they are deployed.
Other common law jurisdictions are already imposing consequences when AI goes wrong. In British Columbia, in Zhang v Chen, 2024 BCSC 285, the court imposed costs sanctions against a lawyer who unknowingly cited two fictitious ChatGPT authorities in a family application. The court accepted that there was no intention to mislead, but still required the practitioner to bear the extra costs arising from the false citations. The Canadian Judicial Council guidance issued in 2024 stresses the need for AI training and education, as well as the protection of judicial independence.
Australia has moved quickly on family-specific issues. In Helmold & Mariya (No 2) FedCFamC1A 163, a self-represented father admitted using AI to draft his Notice of Appeal and Summary of Argument, which cited non-existent cases. The Full Court endorsed the cautions given by Dame Victoria Sharp, President of the King’s Bench Division of the High Court of Justice, in Ayinde v The London Borough of Haringey [2025] EWHC 1383 (Admin), reminded all litigants of their duty not to mislead the Court, and flagged the risks of uploading family documents to public models, including possible breach of section 114Q of the Family Law Act 1975 and waiver of privilege. Subsequent guidance expects responsible AI use and permits judges to require disclosure of AI assistance.
In the United States, Mata v Avianca in the Southern District of New York remains the leading case concerning sanctions following AI-generated mistakes. Several federal and state judges now require certificates stating whether AI has been used and confirming the accuracy of citations.
How AI can help Litigants in Person – the good
Despite anxiety about hallucinations, AI delivers concrete benefits in family disputes. Those benefits are likely to grow as tools mature and courts endorse purpose-built systems rather than generic chatbots.
For LiPs, three practical features stand out:
Information access - AI assistants can explain procedural stages, suggest likely forms and help parents understand how to complete a C100 or Form A. In the United States, court-sponsored chatbots such as Pro Se Pal guide users through basic procedure and paperwork. The Ministry of Justice AI Action Plan envisages similar tools.
Translation and plain language - AI translation and “legalese decoder” tools can turn orders and directions into simpler English or into a litigant’s first language. This improves understanding of case management directions and welfare reports. The UK AI Action Plan highlights AI-assisted interpretation as a priority.
Preparation and organisation - Summarisation systems can digest long bundles, produce timelines and generate draft statements. Used carefully, they help a parent present events coherently and pick out key points from extensive messaging. Tools that analyse bank statements and financial material already assist financial remedy work.
There is also a psychological dimension. For many LiPs, an AI assistant offers guidance that feels less intimidating than an appointment and is available outside office hours. For some, “a chatbot is better than nothing”, provided its limits are understood.
The bad
The idea that AI is neutral and unbiased requires caution. Training data can embed existing bias, and family outcomes are heavily fact-specific. At the same time, a system that gives both parents access to the same procedural information may help counter situations where one party has enjoyed privileged access to advice or has historically controlled information flows.
…and the ugly
The most troubling aspects of AI in family proceedings arise when it becomes a tool of abuse. Domestic abuse and coercive and controlling behaviour have long included technology-facilitated conduct. Courts increasingly encounter digital surveillance, message manipulation, and online harassment, and AI only magnifies these patterns.
Deepfakes, synthetic audio, and fabricated text can create material that appears compelling. US practitioners have already faced doctored recordings in custody disputes, and family lawyers in England report misuse of apps that generate convincing fake message threads. Combined with existing coercive control, these tools can be used to intimidate, isolate and discredit victims.
Controlling parties may also exploit the appearance of technical fluency. A polished position statement can lend respectability to allegations that form part of wider gaslighting or narrative control. Under Practice Direction 12J (supported by the Domestic Abuse Act 2021), judges must focus on behaviour patterns and their impact on children, which becomes harder where patterns are partly concealed behind sophisticated documents.
Courts and regulators are only beginning to address evidential consequences. Forensic IT expertise can expose manipulation, but carries a cost that sits uncomfortably with access to justice. Some commentators argue for panels of court-appointed forensic experts to test high-stakes audiovisual material. Others emphasise judicial training so that fact‑finders are alive both to synthetic evidence and to the ‘liar’s dividend’ where genuine material is dismissed as fake.
Practitioners and court officials must now:
Probe the provenance of digital material on which either party relies.
Consider directions for disclosure of original files and metadata.
Stay alert to patterns of technological control as part of the broader abuse picture.
The tech knowledge gap
The risks to justice extend beyond mere access to AI tools. The advantage lies with the party best able to use them effectively. When both parties are LiPs but only one has the skills or confidence to deploy AI, there is a risk that one party is in a far better position to collate evidence and argue their case.
Research on the digital divide for litigants in person in England and Wales highlights uneven access to information, procedural disadvantages, and unfair outcomes in cases where one party lacks competence with technology. LiPs who speak English as a second language are particularly affected, with many not receiving information in a form they genuinely understand, even when AI is used to translate documents.
The Family Procedure Rules require courts to deal with cases justly, including placing parties, so far as practicable, on an equal footing. Active case management enables judges to refine the issues, question LiPs where appropriate, and provide focused procedural explanations to support that objective. Interpreting and language services address some aspects of the imbalance between represented and unrepresented parties, but not all. The growing use of generative AI by LiPs has a mixed impact: it can improve understanding and document quality, yet it can also introduce inaccuracies and unfocused material that may undermine effective case management.
Concluding comments
Here is where we are at present.
The overriding message from the judiciary, both in the UK and worldwide, is that humans are ultimately responsible for anything presented to the Court, whether it is generated by AI or not. Just as ignorance of the law is no defence in the criminal context, placing full faith in machine-generated documents that lack human oversight will result in sanctions.
When it comes to the deeper issues of access to information and the potential for AI tools to be used for abuse, the Government currently treats AI in the justice system as an operational issue. The Ministry of Justice AI Action Plan adopts a “scan, pilot, scale” approach. It identifies child arrangements and civil money claims as early test areas, alongside back‑office tools for summarisation and triage.
However, as we know from our experience with the negative aspects of social media, a great deal of harm can be done whilst the law plays catch-up. And this is where we must prevent history from repeating itself.

