Police use of AI to predict crimes can “amplify human bias”

By Hannah Gannagé-Stewart, Deputy Editor, Solicitors Journal

Artificial intelligence (AI) used by UK police to predict crimes may “replicate and in some cases amplify” existing biases, according to new research.

The report by the Royal United Services Institute (RUSI) found that although the use of data analytics and algorithms in policing has numerous potential benefits, it also carries significant risks, including those relating to bias.

“This could include unfair discrimination on the grounds of protected characteristics, real or apparent skewing of the decision-making process, or outcomes and processes which are systematically less fair to individuals within a particular group”, the report said.

Indeed, one police officer interviewed for the report commented: “Young black men are more likely to be stop and searched than young white men, and that’s purely down to human bias. That human bias is then introduced into the datasets, and bias is then generated in the outcomes of the application of those datasets.”

Another officer said: “We pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there’s more policing going into that area, not necessarily because of discrimination on the part of officers.”

The report calls for a new draft code of practice, specifying clear responsibilities for policing bodies regarding scrutiny, regulation and enforcement of these new standards.

“While various legal frameworks and codes of practice are relevant to the police’s use of analytics, the underlying legal basis for use must be considered in parallel to the development of policy and regulation. Moreover, there remains a lack of organisational guidelines or clear processes for scrutiny, regulation and enforcement”, the report noted.

Police forces, the report warned, could become over-reliant on AI to predict future crimes and discount other relevant information.

“It is essential that the correct balance is struck to ensure due regard is paid to the insights derived from analytics, without making officers over-reliant on the tool and causing them to disregard other relevant factors. Adequate training focused on cognitive bias and fair decision-making would appear essential to ensure officers are able to consistently achieve the correct balance”, the report concluded.

More sophisticated AI systems have been developed since predictive policing tools were first used by police in 2004, with facial recognition and video analysis, mobile phone data extraction, social media intelligence analysis, predictive crime mapping and individual risk assessment all becoming more common.

The report can be downloaded from the RUSI website.