
The EU AI Act: comprehensive regulation and implications for AI stakeholders


Paul Kavanagh (Partner, Dechert), Dylan Balbirnie and Anita Hodea provide an overview of the EU’s new AI Act and its requirements for different AI categories.

The EU Artificial Intelligence Act (the AI Act) has been described as ‘the world’s first comprehensive AI law.’ Its requirements vary significantly depending on the intended use of an AI system.

The AI Act affects various stakeholders in the AI ecosystem. It primarily targets ‘providers’ of AI systems, broadly defined as organisations supplying AI under their own brand. Providers are subject to the AI Act if: (a) they put their AI system on the market in the EU, or (b) the output of their AI system is used in the EU. The AI Act also applies to ‘deployers’ – users of AI systems – who are subject to it if: (a) they are located or established in the EU, or (b) the AI system’s output is used in the EU.

The AI Act will also affect ‘importers’ – organisations located or established in the EU that offer AI systems in the EU under the brand of a non-EU organisation – and ‘distributors’ – anyone else in the supply chain (other than a provider or importer) that makes AI systems available on the EU market.

What AI systems does the AI Act regulate?

The AI Act’s definition of ‘AI system’ aligns with the OECD’s definition of AI. Key aspects of the definition of ‘AI system’ are that the system operates with some degree of autonomy and infers from inputs how to generate outputs.

Certain systems and activities are generally excluded from the scope of the AI Act, including AI systems intended solely for scientific R&D; R&D and testing activities relating to AI systems or models conducted outside real-world conditions and before they are placed on the market; and AI systems used exclusively for military, defence and national security purposes.

Risk-Based Framework

Obligations for AI systems vary by type of system and intended use. Many AI systems will be largely unregulated by the AI Act, but providers and deployers should consider the impact of existing laws, such as the GDPR. In addition, providers and deployers of all AI systems must ensure a sufficient level of AI literacy amongst staff interacting with AI.

The key categories of AI regulated by the AI Act (and discussed below) are:

  • Prohibited AI systems – which are banned.
  • High-risk AI systems – which are subject to prescriptive compliance requirements.
  • Chatbots and generative AI – particular use cases are subject to tailored requirements.
  • General-purpose AI models – which are subject to specific requirements, particularly where the model has ‘systemic risk’.

AI systems that fall within multiple categories must meet the requirements of each applicable category.

Prohibited AI

AI uses that are prohibited entirely include:

  • Emotion recognition in the workplace/education – AI systems used to detect emotional states in workplace and education settings.
  • Untargeted image scraping for facial recognition databases – AI systems creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Subliminal techniques and manipulation causing significant harm – AI systems that deploy subliminal, manipulative or deceptive techniques to distort a person’s or group’s behaviour, leading the AI system to cause (or be likely to cause) that person or others significant harm.
  • Social scoring – AI systems evaluating individuals based on their social behaviour or assumed characteristics, leading to detrimental or unfavourable treatment of people in an unrelated context, or treatment that is unjustified or disproportionate.
  • Biometric categorisation – AI systems using biometric data to deduce or infer a person’s race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation.
  • Predictive policing – profiling individuals to predict the likelihood that they will commit crime.
  • ‘Real-time’ identification for law enforcement – using ‘real-time’ remote biometric identification systems in public spaces for law enforcement (except in specified circumstances).

High-Risk AI

Certain AI systems are considered to pose significant risks to health, safety or fundamental rights and are categorised as ‘high-risk’. Providers of these systems face prescriptive compliance obligations, while importers, distributors and deployers have more limited obligations.

Certain AI systems in the following fields will be deemed ‘high-risk’ unless the AI system in fact does not pose significant risks to the health, safety or fundamental rights of individuals:

  • Biometrics – AI systems used for emotion recognition, certain biometric identification systems and biometric categorisation based on sensitive attributes.
  • Critical infrastructure – AI systems used as safety components in managing and operating critical infrastructure, such as water or electricity supply, or road traffic.
  • Education and vocational training – AI systems used in admissions, assessments/evaluations, access to education/training, or detecting cheating in tests.
  • Employment – AI systems for recruitment, evaluations or decisions relating to work-allocation, promotion, or termination.
  • Public services – AI systems used to assess eligibility for state benefits and healthcare, or to classify emergency calls.
  • Credit/Insurance – AI systems used for credit rating or for risk assessment and pricing in life and health insurance.
  • Law enforcement, migration and border control – AI systems with various specific functions in the fields of law enforcement, migration and border control.
  • Administration of justice – AI systems used to assist courts and tribunals in determining cases.
  • Democratic processes – AI systems used to influence the outcome of elections.

Additionally, an AI system will be ‘high-risk’ if it is used as a safety component of a product (or is itself a product) subject to specified EU product safety legislation (such as regulations governing vehicles and toys).

Providers of high-risk AI systems face extensive compliance obligations. They must establish risk management systems and implement appropriate data governance and management practices. They need to maintain certain technical documentation and ensure the AI system can automatically generate logs throughout its lifetime.

Providers must maintain sufficient levels of transparency and implement human oversight measures. They should achieve appropriate levels of accuracy, robustness and cybersecurity. Additionally, providers must affix a CE marking to indicate conformity with the AI Act and register the system in an EU database of high-risk AI systems. Ongoing monitoring of the AI system’s compliance is required, and providers must report serious incidents to regulators within prescribed timeframes.

Providers must also make a ‘declaration of conformity’ with the AI Act (and with the GDPR where personal data is processed). Generally, this can be based on self-assessment; however, for certain biometric AI systems, compliance must be assessed by an independent certified body.

If a provider of a high-risk AI system is established outside the EU, it must appoint an authorised representative in the EU.

While most obligations for high-risk AI fall on the provider, importers and distributors also have obligations, such as conducting due diligence to ensure the AI system complies with the AI Act and not putting the AI system on the EU market if there is reason to consider that it does not comply.

Importers or distributors can become directly responsible for the AI system’s compliance (typically the responsibility of the provider) if they put their own brand on the high-risk AI system or make substantial changes to the AI system.

Deployers of high-risk AI systems must use the AI system in accordance with the provider’s instructions for use. They must assign human oversight to competent individuals and ensure that input data supplied is relevant and sufficiently representative.

Deployers must also monitor the operation of the AI system and report, without undue delay, to the provider and regulators any risks to health, safety or fundamental rights beyond what is considered reasonably acceptable, as well as any serious incidents.

Deployers must keep logs automatically generated by the AI system and comply with transparency obligations where high-risk AI systems are deployed in the workplace or used to make decisions about individuals. Additionally, upon request, they must provide a reasoned explanation of decisions made using AI that have a significant effect on an individual.

Public bodies, and deployers using high-risk AI systems for credit checks, life insurance quotes or public services, must assess the impact on individuals’ fundamental rights.

Deployers using high-risk AI systems for emotion recognition or biometric categorisation must inform affected individuals who are subject to the system and comply with the GDPR.

Chatbots and Generative AI

For AI that is not prohibited, high-risk or a general-purpose model, the AI Act’s provisions focus on transparency.

Where AI systems interact directly with individuals (such as chatbots), providers must ensure it is clear to those individuals that they are interacting with AI. Providers of systems that create AI-generated content must ensure that the content is marked in a machine-readable manner to indicate that it is AI-generated.

Deployers must disclose that ‘deep fakes’ are artificially generated or manipulated. Deployers must also disclose that informational content on matters of public interest is artificially generated or manipulated (unless there is sufficient human review/control).

General-Purpose AI Models

General-purpose AI models are characterised by their ability to perform a wide range of distinct tasks and integrate into various downstream systems and applications.

Providers of general-purpose AI models must:

  • keep technical documentation up to date;
  • provide and maintain information/documentation for AI system providers planning to incorporate general-purpose AI models into their systems;
  • implement a policy to comply with EU copyright law;
  • publish a comprehensive summary of the training data used for the general-purpose AI model; and
  • appoint an authorised representative in the EU (if the provider is established outside the EU).

Particularly powerful general-purpose AI models that create ‘systemic risk’ face further obligations and are required to:

  • implement an adequate level of cybersecurity protection for the general-purpose AI model and its physical infrastructure;
  • perform model evaluations in accordance with standardised protocols and tools to identify and mitigate systemic risk, and continuously assess and mitigate such risk;
  • assess and mitigate potential systemic risks associated with the development, placing on the market, or use of the general-purpose AI model; and
  • document and report any ‘serious incidents’ to the appropriate authorities.

Penalties and Enforcement

The maximum penalties vary based on the obligation breached. Overall, the maximum penalty is the greater of €35m or 7 percent of a group’s total worldwide annual turnover for the preceding financial year, but for many obligations the maximum fine is the higher of €15m or 3 percent of worldwide annual turnover.
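To illustrate the arithmetic: a group with €2bn of worldwide annual turnover could face a maximum fine of €140m for the most serious breaches (7 percent of turnover exceeding the €35m figure), whereas for a group with €300m of turnover, 7 percent amounts to only €21m, so the €35m figure would apply as the higher amount.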

EU member states will designate their AI regulators. It is unclear whether new AI-focused regulators will be created or if existing authorities, such as data protection regulators, will enforce the AI Act. Enforcement may be divided amongst different regulatory bodies. The European Commission will have exclusive enforcement powers for general-purpose AI models.

Implementation Timeline

Following the AI Act’s entry into force, there will be a transition period, with most obligations taking effect after two years. The rollout will be phased:

  • 6 months: bans concerning prohibited AI systems become applicable.
  • 9 months: codes of practice from the newly established European AI Office should become available.
  • 12 months: requirements concerning general purpose AI will become applicable.
  • 24 months: most rules in the AI Act will take effect.
  • 36 months: obligations relating to AI systems that are ‘high-risk’ because they are subject to specified EU product safety legislation will become applicable.

Next Steps

The AI Act introduces a new regulatory layer of AI governance that businesses will need to evaluate alongside existing legal frameworks (such as data privacy, intellectual property and anti-discrimination law) when offering or deploying AI with a relevant connection to the EU.

The first step for most businesses will be to evaluate how they use AI, and the types of AI they use or offer, to assess how the AI Act categorises the relevant AI systems. ‘High-risk’ uses will involve the most significant regulatory burden under the AI Act, and for many applications other laws, such as data protection law, might be more pertinent. Providers of ‘high-risk’ AI systems, and developers of AI systems considering ‘high-risk’ use cases, should develop their AI governance strategies at an early stage to ensure their products can comply with the AI Act without late-stage remediation.
