AI and public law: risks, duties and compliance

AI promises efficiency and savings, but public sector use must comply with legal principles to safeguard fairness and accountability
The use of AI is on the increase, including by the UK Government and other public bodies. AI can do much good: increasing efficiency, improving productivity, reducing costs and reducing human error in some tasks.
But its use also raises concerns. Most obviously, what if someone receives an adverse decision from a public authority without knowing how it was reached, or is troubled by what might have been input into the underlying model?
Compliance with public law principles
Public sector use of AI in decision-making raises a general question as to whether human dignity requires that decisions affecting individuals be taken by humans. Public authorities must comply with the UK General Data Protection Regulation and the Data Protection Act 2018. But there are also challenges as to how public law principles will handle AI, and risks for public authorities wishing to comply with the law.
A starting point is that, save where the law allows delegation, a decision must be made by the person or body on whom the power is conferred. If a court perceives that a computer has in substance taken the decision, there is a risk it will be regarded as an unlawful delegation. Delegation to AI is also restricted under the UK GDPR, which limits solely automated decision-making with legal or similarly significant effects.
Further challenges arise where the use of AI lacks transparency. This most obviously raises issues of procedural fairness. Depending on the context, the law might impose a fair process requirement that the recipient of a decision be given sufficient information to understand how it has been reached.
If someone cannot understand how AI contributed to a decision, how can they assess whether a policy has been applied appropriately? And how can the authority comply with any duty to give reasons for its decisions?
Whether AI reduces bias, rather than introducing different biases, will depend on the design and operation of the AI process. Discrimination, including under the Equality Act 2010, is a key risk, particularly if algorithms perpetuate existing biases in the underlying data. In one AI case, R (Bridges) v Chief Constable of South Wales Police, which concerned live facial recognition, the Court of Appeal concluded that a public authority had failed to comply with the Equality Act.
Challenges arise in relation to public authorities' exercise of discretion. This raises principles including non-fettering of discretion, whereby public authorities should not apply a policy inflexibly and must consider individual circumstances.
If an AI-driven decision appears odd or inexplicable on its face, it has the potential to engage principles including irrationality, unreasonableness and procedural fairness. It might be unsupported by evidence or affected by a material mistake of fact. An AI-driven decision that treats those in similar circumstances unequally could also be characterised as irrational.
A framework for compliance
The best outcome in relation to the use of AI is lawful and accurate decision-making, rather than decisions that carry a greater risk of non-compliance with public law principles.
Parliament could legislate to provide a framework specifically for public sector use of AI. A Private Member's Bill has been put forward, but currently no legislation specifically addresses the issue. The UK could also adopt international instruments or consider comparative measures.
Good practices might avoid or mitigate the risks. The Government has, in particular, launched an AI Playbook, which sets out ten principles to be followed when using AI, and public bodies have been producing their own guidance.
Finally, various organisations and commentators including NGOs have put forward views which warrant consideration.
A challenge for judicial review
The most obvious route of court challenge to public authorities' decisions based on AI is judicial review (although sometimes legislation will provide other routes of challenge).
It remains to be seen how procedural rules and practices, in particular the burden of proof, the use of expert evidence, the duty of candour in judicial review disclosure, and the effect of short time limits for bringing claims, will evolve to deal with decisions involving AI.
Such evolution will be important to ensure effective remedies and also to enable public authorities to know where they stand when seeking to defend decisions.
Overall, the use of AI by public authorities is increasing, and it is important to address the risks and challenges in order to secure high standards and legality in decision-making.