
Lexis+ AI
Lui Asquith

Associate, Russell-Cooke


The balancing act over AI

Practice Notes
Lui Asquith explores the importance of upholding fundamental rights and innovation when regulating artificial intelligence

Artificial intelligence (AI) is of paramount importance to our future, and its use, along with the opportunities it brings, continues to grow. This in turn increases the amount and variety of personal data created and processed.

We know that governments and companies are already deploying AI to assist in making decisions that can have major consequences for the freedoms of individual citizens and societies, through surveillance and the replacement of independent thought and judgement with automated control. How AI is used, and how to ensure it operates lawfully, fairly and without discrimination, must be a central consideration as the UK government makes changes to our data protection regime. Ensuring our fundamental rights are protected under new data protection laws, while allowing room for innovation, is not an easy task for the legislator, but it is one that must be tackled effectively.

The government has emphasised that simplifying the UK’s data protection regime is needed to help unlock economic growth by boosting organisations’ profits. The UK data protection regime currently comprises the UK GDPR (that is, the retained EU law version of the General Data Protection Regulation ((EU) 2016/679)), along with the Data Protection Act 2018 (DPA 2018) and the Privacy and Electronic Communications (EC Directive) Regulations 2003 (SI 2003/2426) (PECR). However, in the government’s view, some elements of current data protection legislation, namely the UK GDPR and the DPA 2018, “create barriers, uncertainty and unnecessary burdens for businesses and consumers.”

On 8 March 2023, Michelle Donelan, the Secretary of State for Science, Innovation and Technology, introduced the Data Protection and Digital Information Bill (No. 2) (DPDI Bill) with an intention to “update and simplify the UK’s data protection framework with a view to reducing burdens on organisations while maintaining high data protection standards.”


Ensuring the new DPDI Bill will at the very least maintain, if not improve, privacy and data protection rights can only happen if it keeps pace with information-gathering technology, something we have not always managed well. As the Grand Chamber of the Strasbourg court said in S and Marper v United Kingdom (2009) 48 EHRR 50: “the protection afforded by art.8 of the Convention would be unacceptably weakened if the use of modern scientific techniques in the criminal-justice system were allowed at any cost and without carefully balancing the potential benefits of the extensive use of such techniques against important private-life interests … any state claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance in this regard.”

We know the DPA 2018 specifically has been used to effectively protect an individual’s data protection rights in the context of AI. For instance, the case of Ed Bridges v South Wales Police [2020] EWCA Civ 1058 was the first to consider the use of automated facial recognition (AFR) technology. AFR involves the extraction of a person’s biometric data from an image of their face and the comparison of this data with the facial biometric data from images contained in a database. The claimant (Ed Bridges) contended that the use of AFR had profound consequences for privacy and data protection rights; specifically, part of the claimant’s argument was that the use of AFR breached data protection law and that its impact had not been adequately assessed.

AFR, and the infringement of rights that can accompany it, is not unique. With technology constantly advancing, it is reasonable to expect the obtaining and processing of data, including biometric data, will only increase. We know, for instance, that the UK government already uses algorithms and big data to make decisions across a vast range of areas, including tax, welfare, criminal justice, immigration and social care. While AI and automated decision making (ADM) are not synonymous, ADM is frequently utilised in AI systems to power decision-making processes. These forms of processing involve huge amounts of personal data and, as such, would clearly engage Article 8 of the Convention and data protection rights as they currently exist.

Automated decision making (ADM)

The DPDI Bill is in its (second) initial draft and will be subject to parliamentary scrutiny. We can see it is the government’s intention to make amendments to the UK GDPR and the DPA 2018, which suggests the prospective act aims to supplement the existing framework.

ADM is specifically considered within the DPDI Bill, which arguably seeks to relax the rules that already apply to ADM under Article 22 of the UK GDPR. It could signal an intention to lean towards a less regulated AI landscape. In its current form, the bill allows organisations to make use of ADM as long as they implement certain safeguards, including:

- providing the data subject with information about the decision;
- allowing the data subject to make representations about the decision;
- enabling the data subject to request human intervention by the organisation in relation to the decision; and
- allowing the data subject to contest the decision.

As drafted, the bill would allow organisations to rely on the wide-ranging lawful bases for data processing contained in Article 6 of the UK GDPR. This would include processing data where there is a ‘legitimate interest’ and would, in practice, lower the bar that organisations must meet to legitimise their data processing. It is difficult to see how such an approach maintains ‘high data protection standards’, and it makes for a stark contrast with the proposed EU Artificial Intelligence Act, which would be the first law on AI by a major regulator anywhere.

We know that the capabilities of AI technologies continue to advance at a rapid pace. Organisations are increasingly seeking ways to utilise them, and they certainly present huge opportunities. But a key aim of the prospective act must be to ensure that AI is developed in line with the principles of safety and respect for privacy.

Lui Asquith is an associate at Russell-Cooke LLP
