AI under the radar: governance lessons from the West Midlands Police failure

The increasing use of everyday AI tools without oversight is creating new governance risks for public bodies
Artificial intelligence has infiltrated every aspect of our lives, to the extent that we don’t always know when we’re using it.
Take a simple Google search, which has fundamentally changed with AI-generated overviews, while Microsoft has installed Copilot plug-ins within its Edge internet browser to summarise page results.
These are examples of “under-the-radar” AI, which is increasingly being used by staff without organisational oversight or training.
Such tools are often helpful for finding and summarising information, but failure to understand this transformation in how we receive information can lead to serious errors, as evidenced by a well-publicised AI mishap involving West Midlands Police. Public authorities must develop thorough AI frameworks, risk assessments and training to avoid the same outcomes.
AI-generated false intelligence
This case revolved around the force’s controversial decision to ban Maccabi Tel Aviv fans from a football match at Aston Villa’s stadium in Birmingham in November 2025.
An interim report from Sir Andy Cooke, published on 14 January 2026, found that the Chief Constable, Craig Guildford, had compiled inaccurate intelligence using AI. It said Copilot “hallucinated” by referencing a fictitious football match between West Ham United and Maccabi Tel Aviv from two years earlier.
Though it was a single detail in a larger dossier, and other concerns were identified, the blunder became a headline-grabbing failure that helped undermine the decision and damaged confidence in West Midlands Police.
The Home Secretary, Shabana Mahmood, said she had lost confidence in Guildford, who two days later announced his retirement.
Lessons on everyday AI use
This mishap shouldn’t lead to a pause on the integration of new technologies within public bodies, but it does expose numerous issues surrounding AI governance.
While many organisations are actively building AI into their processes and systems, not all recognise the way in which AI has already crept into their workplaces.
Many of their employees will already use AI in everyday life and import it into working life without oversight. Updates or plug-ins from providers introduce new AI functionality overnight. Without an AI framework, these changes happen without any preparation to address concerns such as AI bias or “hallucinations”.
This is what we mean by AI under the radar, and why it’s important for public bodies and businesses to implement AI governance, training and oversight now – whether or not they have intentionally deployed AI.
Strong governance concerns how an organisation processes information to make decisions in a robust and transparent manner.
Given the potential for public law challenges via judicial reviews or scrutiny from elsewhere – such as councillors, mayors or government committees – it’s crucial that the evidence underpinning decisions is consistently robust.
Failure to sufficiently support any decision with uniformly valid information can often result in organisations having their broader decision-making questioned and undermined, whether in courts of law or of public opinion. This is why humans must figure prominently in any AI adoption.
What does AI readiness look like?
Organisations focusing solely on the procurement, contracting and rollout phases are setting themselves up for difficulties. The most successful AI deployments share a common characteristic: they recognise the AI already embedded in existing systems and work with it, then apply rigorous strategic planning focused on organisational outcomes, before any specifications or tenders are drafted.
A question like “what are we actually trying to achieve?” can ensure technology is effectively targeted at the most pressing needs. This will rarely lead to either an outright ban or a dash for AI – rather, it will allow organisations to find areas in which existing and new technology delivers on priorities.
The right data governance infrastructure, information security protocols and internal expertise for effective oversight of AI vendors are other foundational issues in supporting AI deployment.
If the personal data of individuals or groups of individuals is being processed by a UK-based organisation, it must comply with the UK General Data Protection Regulation in terms of how it processes, stores and shares this information. Conducting a data protection impact assessment is a good starting point for identifying and minimising the data protection risks of an AI project.
A dose of humanity goes far, too. Employees, service users and the wider community may have legitimate concerns about new technologies being integrated into their daily lives. Listening to, and addressing, these valid questions can deliver assurance.
Perhaps the most significant mindset shift required is recognising that AI projects don’t end at go-live. Unlike traditional IT systems that remain relatively static once deployed, AI systems and their uses evolve. This creates fertile ground for improvement and innovation, but demands effective and evolving governance.
As the West Midlands Police case highlights, AI governance is needed for even the most basic applications. It is also a continuous programme, and one that will become a necessary part of any organisational resilience strategy.
