When AI writes the complaint: responding to machine-generated requests

By Vicki Bowles
AI-generated complaints and information requests are increasing in volume and complexity—but often lack clarity, accuracy, and legal grounding
Solicitors, like many other professionals, cannot escape conversations about AI. AI has seeped into courtroom documents drafted by lawyers, is used to generate summaries of meetings, and is even being used by individuals in place of formal legal advice.
In my field (information law and privacy), we are seeing a sharp increase in requests for information drafted by public AI tools. These requests are often linked to complaints or grievances, which are themselves also being generated by AI. In this article, we look at the issues with these generated requests, and explore what we – as legal advisors – can do to assist our clients in dealing with them.
The rise of AI-generated requests
One of the key issues with AI-generated correspondence is that it is often overly complicated and long. What could be said in a few short sentences is often spread across pages of background and contextual information that does not add anything to the substance of the request, and usually has the effect of obscuring the main issue.
Where appropriate, the simplest way to deal with this is often to speak directly to the individual. This enables a two-way conversation to clarify what is being asked and what the real issue is, without third-party intervention.
Where a direct conversation is not possible – either because the client does not have the necessary relationship with the individual or because the individual refuses to engage – clarification should be sought in writing as soon as possible.
When seeking clarification, it can be helpful to set out what you and your client believe the issues or questions to be, based on what you have been sent and any relevant context involving the individual.
The letter can then ask the individual to confirm that this interpretation is correct. It can also be helpful to give a reasonable deadline for a response, with a statement that if no reply has been received by a particular date, you will proceed on the basis that your interpretation is correct.
This gives a clear message to the individual that if your understanding is not correct, they have to take action in order to get what they are seeking. It can also be helpful where you are dealing with a request that has a statutory deadline. In the case of a subject access request under the UK GDPR, for example, the statutory time limit pauses whilst you are waiting for clarification.
Giving a deadline for clarification gives your client certainty on the statutory deadline for a response if no further correspondence is received from the individual.
Managing inaccuracies and expectations
Another issue with requests generated by public AI tools is that the information used to formulate the request is not necessarily accurate or correct. In practice, I have seen references to EU law that no longer applies in the UK, as well as references to statutes that apply only in US states.
Many of the free tools also exhibit a form of confirmation bias: they respond to questions and prompts in the way the tool predicts the individual wants them answered, supporting a particular standpoint even where it is incorrect. Some AI tools will invent a statute or case law to support a particular argument – and an individual without legal training will often lack the skills to verify whether those statutes and cases are genuine.
An AI-generated letter will only be as good as the prompts used to produce it, and whilst this technology is still relatively new, many individuals will not yet have the skills to craft prompts in a way that overcomes the issues identified above.
All of these issues have the effect of raising requestors' expectations, giving them unrealistic or incorrect advice on the strength of their case or argument, and on the likely outcome.
Unfortunately, once an individual has been told what they want to hear, it can be very difficult to persuade them otherwise. It is still important, however, to attempt to address any inaccuracies and to make clear the basis on which you are responding to their request.
As an example, a client may get a request under the Freedom of Information Act 2000 that looks like this: “Under the Act I am entitled to see all emails you hold, in full and without redaction, leading up to the decision on 3 May 2025 to implement the new policy because emails can never be confidential under the Public Email Systems Order of 1992.”
There are three immediate inaccuracies in this request which you may want to consider rectifying in your acknowledgement:
1. The Freedom of Information Act 2000 gives a right to information and not documents, so there is no automatic right to the emails themselves, only to the information contained in them.
2. Emails can be confidential under the common law – confidentiality is determined by the content of the email and not the format of the information.
3. The legislation quoted does not exist.
Even if the individual does not agree with your position, if there is likely to be a complaint, it can be helpful to demonstrate to the regulator (in this case, the Information Commissioner’s Office or ICO) that you have tried to manage expectations.
Clarification and refusal strategies
Finally, if you have a request or complaint that you and your client cannot understand and the individual does not want to engage, there may be ways of refusing to deal with the request. Under access to information legislation, a request might be considered manifestly unreasonable or vexatious if the burden of dealing with the request outweighs any public value in the information sought. Under the UK GDPR, a request may be manifestly unfounded or excessive, and could be refused on that basis. These options should not (and cannot) be applied just because AI has generated a request, but they may assist in the more extreme cases where the purpose cannot be ascertained from the text provided.
Looking ahead, AI-generated requests are likely to increase as individuals become more confident with the technology. Over time, individuals will learn how to craft prompts that elicit targeted and accurate requests, but whilst the current issues with the technology persist, clients and their legal advisors will have to navigate this new wave of requests through clarification and expectation setting, and by considering refusal in more extreme cases.