
Reconciling AI and EU GDPR


By Dr. Matthias Artzt, Senior Legal Counsel, Deutsche Bank AG

Dr. Matthias Artzt explores potential conflict zones and areas of reconciliation between AI and EU GDPR

Artificial intelligence (AI) is the latest in a long line of disruptive technologies, offering significant benefits but also creating risks if deployed in an uncontrolled manner.

Recognizing this, many countries aim to build legal frameworks to regulate and manage AI so that it supports their socio-economic development.

The EU’s AI Act, which is currently in draft stage and not expected to come into force before the end of 2024, sets out horizontal rules for the development, commodification and use of AI-driven products, services and systems.

It is worth noting that the AI Act is the world's first major legislative package in this area. The AI Act is accompanied by the AI Liability Directive, which will enable individuals to bring claims against those deploying AI systems. Although the UK GDPR is similar to EU GDPR, the UK’s approach to AI regulation seems likely to be rather different from that of the EU’s AI Act.

GDPR provides meaningful rights to individuals who are subject to or affected by AI systems underpinned by processing of personal data. For example, under Article 15 they can access a detailed description of how they were profiled, and, where automated decision-making is involved, they can obtain human intervention, express their point of view and contest the decision.

Additionally, providers and users of AI systems have obligations from the outset when they act as ‘controllers’ under the GDPR: to name just a few, they must implement data minimization, storage limitation, purpose limitation, confidentiality and, most relevant here, fairness requirements in how they collect and use personal data.

However, there are tensions between some of these principles and the use of AI systems, particularly with regard to the proposed AI Act.

Principle of transparency

Pursuant to Article 5 section (1)(a) GDPR, personal data has to be processed lawfully, fairly and in a transparent manner in relation to the data subject. Notably, the data subject has the right to obtain from the controller confirmation as to whether his or her personal data is being processed.

It remains unclear how to reconcile this right with the opaque way in which many AI systems process personal data. Indeed, the issue is what kind of information is actually required to enable the individual to make an informed decision on whether and how to enforce their data subject rights set out in the GDPR. There has been controversial debate on whether it is legally required to deliver an ‘information overload’ to the individual concerned, or whether it suffices to convey only the kind of information the requestor can easily understand.

Principle of purpose limitation

Article 5 section (1)(b) GDPR states that personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. Any further processing beyond that requires the consent of the data subject.

The principle of purpose limitation sits uneasily with the fundamental functionality of AI systems, since it is practically impossible to determine an AI system's purposes already in the design phase. In particular, the AI system may identify its own purposes when processing personal data.

From the data protection perspective, any data processing operation for “unknown” or hypothetical purposes which are not determined beforehand is clearly not admissible. One possible solution is to interpret the legal term “purpose” very broadly. However, this approach is highly contestable, since the controller must determine the purpose of a particular data processing operation from the outset.

Data minimization

The principle of data minimization pursuant to Article 5 section (1)(c) GDPR means that only data which is necessary to meet the purpose determined by the controller can be collected and processed. This partly overlaps with the principle of purpose limitation. Notably, the data minimization principle requires a controller to ensure that the period for which the personal information is stored is limited to a strict minimum.

In an AI environment, an enormous volume of data (not necessarily personal data) is collected. The way an AI system works clashes with the data minimization principle, which forces the controller to strictly limit the volume of personal data being processed.

From the GDPR perspective, it is not proportionate to substantially increase the amount of personal data in the training dataset in exchange for only a slight improvement in the performance of the AI system. More data will not necessarily improve the performance of machine learning models; on the contrary, more data can introduce more bias.
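This proportionality assessment can be made empirical: a controller can measure how much each additional tranche of training data actually improves a model before deciding whether collecting more personal data is justified. Below is a minimal sketch in Python, assuming scikit-learn; the public demo dataset, the logistic regression model and the tranche sizes are illustrative assumptions, not anything mandated by the GDPR or the AI Act.

```python
# Illustrative sketch (not a legal standard): train on growing fractions
# of the available data and record the marginal accuracy gain, to help
# document whether collecting more personal data is proportionate to the
# improvement it delivers.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Public demo dataset standing in for a training set of personal data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

prev_score = 0.0
for fraction in (0.1, 0.25, 0.5, 0.75, 1.0):
    n = int(len(X_train) * fraction)
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    score = model.score(X_test, y_test)
    print(f"{fraction:>4.0%} of data ({n:3d} records): "
          f"accuracy {score:.3f} (gain {score - prev_score:+.3f})")
    prev_score = score
```

If the final tranches yield only negligible accuracy gains, the resulting learning curve could serve as documented evidence that collecting further personal data would be disproportionate under Article 5 section (1)(c) GDPR.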

Human oversight

Most notably, the AI Act and the GDPR appear to be misaligned regarding a key aspect when it comes to the implementation of “high-risk” AI systems as defined in the AI Act. The latter mandates that high-risk AI applications are designed and developed in such a way that there is effective human oversight during the period in which the AI system is in use. The concept of 'human oversight', however, does not seem to be in sync with the notion of 'human intervention' as used in Article 22 section 3 GDPR.

Pursuant to Article 22 section 1 GDPR, it is prohibited to rely solely on automated processing of personal data to reach decisions that produce legal or similarly significant effects for individuals. As an exception to this rule, decisions based on automated data processing without human intervention are only permitted if:

- the individual has explicitly consented;
- the decision is necessary for entering into or performing a contract with the individual concerned; or
- EU or Member State law authorizes the decision-making.

The European data protection authorities have emphasized in their guidance that human involvement in this context cannot simply be 'fabricated'. Human intervention should therefore be carried out by someone who has the authority and competence to change the decision, where required.

Contrary to the specific human intervention requirement in the GDPR, 'human oversight' as set out in Article 14 of the proposed AI Act appears to have a much broader function, aimed at preventing or minimizing the risks to natural persons and their fundamental rights that could emerge when a high-risk AI system is used. Whereas the human intervention requirement in the GDPR empowers individuals to take action in respect of automated decision-making that harms their interests, the AI Act's 'human oversight' obligations focus on risk-mitigation measures that providers or users of such AI systems must implement.

Issues of practical application are likely to arise when a high-risk AI system triggers compliance requirements under both the AI Act and the GDPR. For instance, it is, for the time being, unclear whether a user of a high-risk AI application who qualifies as a controller under the GDPR will be able to argue that a decision based on automated data processing is authorized by EU law (Article 22 section (2)(b) GDPR). Here, Article 14 of the proposed AI Act may kick in, with the result that the prohibition under Article 22 section 1 GDPR does not come into play.

If that argument is followed, the user of a high-risk AI system which involves automated decision-making will only need to demonstrate that they fall under the regime of Article 14 of the proposed AI Act and, hence, they are not legally obliged to honor the rights of individuals to obtain human intervention, to express their point of view and to contest the decision made by an algorithm.

The prohibition set out in Article 22 section 1 GDPR can certainly apply to AI systems that are not considered ‘high-risk’ under the AI Act. While this is welcome, it would hardly be a logical outcome if implementing high-risk AI systems can rule out the Article 22 GDPR prohibition to the detriment of the people affected by such AI applications.

In this context, Article 14 of the proposed AI Act does not remedy the situation, as it only requires providers to incorporate features that enable human oversight, not to ensure human oversight by default. Moreover, providers and users acting as controllers have no obligation to ensure human intervention or the option to contest AI-powered decision-making when implementing high-risk AI systems. It remains to be seen whether this gap will be closed during the ongoing legislative process with regard to the AI Act.

Dr. Matthias Artzt is a senior legal counsel at Deutsche Bank AG Frankfurt and a data protection practitioner. He is the editor of the Handbook of Blockchain Law (published 2020) and of the International Journal of Blockchain Law.