
Feature

AI governance in the EU: pioneering a global standard

By Jan Stappers, Director of Regulatory Solutions, Navex

Jan Stappers discusses the need for organisations to prioritise compliance with evolving AI regulations

While artificial intelligence (AI) is certainly not a new technology, it has come to dominate workplace conversations in recent years. The significant strides made with AI have the potential to revolutionise – and are already revolutionising – most industries, from healthcare to finance and everything in between. However, with its increasing use comes the need for regulatory governance to prevent adverse effects.

As usual, the European Union (EU) is leading the way

This pattern of EU-driven regulatory leadership is known as the ‘Brussels effect’, and it shapes global regulatory compliance and governance priorities in areas well beyond AI. Simply put, it describes the phenomenon whereby, when the EU enacts regulations, the rest of the world often follows.

The pattern has repeated across many regulatory areas, from the General Data Protection Regulation (GDPR) to environmental standards. While the EU has long been at the forefront of developing regulations in general, for the purposes of this article we will outline how the EU is leading the way on AI guidance and striving to create a global standard for its governance. Here are some important considerations in how the EU is pioneering AI regulation, and the resulting impacts:

  • The introduction of wide-reaching AI regulation: The EU is pioneering the regulation of AI, setting a global standard with the Artificial Intelligence Act, aimed at ensuring responsible AI use.
  • The components of the AI Act: Highlights of the Act include bans on specific AI applications, rules for high-risk AI systems, transparency requirements, and provisions for consumer rights and penalties for violations. Prohibited uses of AI include biometric categorisation based on sensitive characteristics, untargeted scraping of facial images, emotion recognition in workplaces and schools, social scoring and manipulation of human behaviour. Consumer rights include the right to lodge complaints and receive ‘meaningful explanations’. Violations of the Act can incur fines of up to €35 million or 7% of a company’s global turnover, whichever is higher, depending on the size of the company and the nature of the infraction.
  • The global regulatory landscape: While Europe advances on AI regulation, other nations, including the US and China, are also grappling with governance approaches, indicating a global trend towards AI regulation.
  • Controversy and industry response: While generally accepted as necessary legislation, the EU AI Act has faced criticism from industry and consumer groups over its potential financial burdens, competitive disadvantages and inadequacies in protecting consumers.
  • The road ahead and final approval: Despite the milestones already achieved, the Act still requires final approval from EU member states, and full implementation is not expected until 2026. Its success will depend on diligent execution, oversight and collaboration with standard-setting bodies.

More on AI regulations from the EU

Now that we have highlighted the basics of the Act, let’s take a closer look at the key takeaways from the rules and regulations outlined by the EU.

The regulations are extensive

The EU’s AI regulations cover a broad range of applications, including high-risk AI systems, such as facial recognition, biometric identification and autonomous vehicles. They set out requirements for transparency, including information on data collection and processing, as well as the use of algorithms. These steps are critically important in ensuring AI systems are developed and used ethically, thus promoting human-centric AI that respects fundamental rights and values.

We are already seeing related laws outside the EU, such as New York City’s Local Law 144, which prohibits employers and employment agencies from using automated employment decision tools unless a bias audit is completed and required notices are provided. This is closely tied to the human-centric principles that should be prioritised to ensure that using AI does not unintentionally or unfairly lead to employment discrimination.
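To make the bias-audit idea concrete, the sketch below shows the kind of disparate-impact calculation such audits typically involve: selection rates per candidate category, and each category’s impact ratio relative to the highest-rate category. The data and category names are hypothetical, and a real audit would follow the methodology the applicable law prescribes.

```python
# Hypothetical disparate-impact check for an automated screening tool.
from collections import Counter

# (candidate_category, selected_by_tool) pairs -- illustrative data only
results = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = Counter(cat for cat, _ in results)             # candidates per category
selected = Counter(cat for cat, sel in results if sel)  # selections per category

# Selection rate: fraction of each category's candidates the tool advanced
rates = {cat: selected[cat] / totals[cat] for cat in totals}

# Impact ratio: each category's rate relative to the most-selected category
best = max(rates.values())
for cat, rate in sorted(rates.items()):
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

A low impact ratio for any category is the kind of signal that would prompt closer review before the tool is used in hiring decisions.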

Ethical standards and secure by design principles

The regulations are designed to ensure AI systems are developed and used ethically; this is particularly important given the potential for AI to be used in discriminatory ways.

The ‘secure by design’ approach, articulated in guidance led by the US Cybersecurity and Infrastructure Security Agency (CISA), requires developers to integrate security measures into the design of AI systems from the outset, ensuring they are safe and secure from the start. It rests on three key principles:

  • Take ownership of customer security outcomes;
  • Embrace radical transparency and accountability;
  • Lead from the top.

These principles were created to urge software manufacturers to ensure their products do not unintentionally harm customers, and they are well suited to the development of new AI technology.

Global standards

The EU’s regulations are often seen as a global standard to strive towards. With the AI Act, other countries are expected to follow suit, using the EU’s regulations as a base for new laws.

As other countries adopt AI regulation, it is imperative for organisations to plan for compliance by taking a principles-based approach, which may include aligning with standards that do not apply to them – yet. Because these regulations will evolve in how and where they are adopted and in what they encompass, going beyond the minimum compliance requirements of the moment helps organisations play the long game. After all, AI is not going away, nor is the need to govern its use.

Key challenges for AI governance

Though the need for AI governance and compliance practices is pressing, and the technology is revolutionary for most organisations, staying compliant amid a changing landscape can be challenging. With new laws cropping up regularly, understanding which apply and how to implement compliance standards means tracking a moving target. Obstacles organisations face in meeting the requirements of AI regulations include:

Lack of expertise: Does your company have experts in AI, specifically in safety, security and compliance? For many, the answer is no, and those organisations may lack the expertise to implement AI systems that comply with the regulations. This could be due to a shortage of skilled professionals, a lack of understanding of the regulations themselves, or simply because recent developments in AI technology have outpaced the laws that govern it, ushering in a new era of compliance and governance needs.

Cost and complexity: Another obstacle is the cost of compliance, which can be significant, particularly for smaller organisations – but the price of non-compliance is far greater. Organisations will need to invest in new technology or additional staff to ensure they have adequate resources to stay compliant. Further, the regulations in development are complex and difficult to navigate, particularly for organisations operating in multiple jurisdictions: those active in several states or countries may face numerous laws with varying thresholds. This applies to everything from hiring practices to software and product development, and everything in between.

Since AI technology touches products, services, hiring and the internal use of AI (such as large language models (LLMs), which need policies, procedures and governance principles), ‘AI compliance’ is not any one thing. This complexity means organisations will need resources and expertise to manage the many variables.

Compliance’s role in overcoming AI obstacles

As with any regulation, compliance is a journey, not a destination. Compliance leaders will not achieve their goals in one fell swoop, but through ongoing effort and best practice. Let’s look at 10 best practices for compliance leaders to overcome the obstacles associated with AI regulations.

  1. Stay informed: Regularly monitor and stay updated on relevant AI regulations, guidelines and best practices at local, national and international levels. This includes keeping track of updates from regulatory authorities and industry associations. Setting up alerts for new developments based on keywords or the locale can be an effective way to watch for news in this space.
  2. Develop and update policies: Clear and comprehensive AI compliance policies that align with relevant regulatory requirements are a must. Regularly review and update these policies as new regulations emerge or existing ones change.
  3. Train and educate employees: Provide regular training and education to employees on AI compliance policies and procedures, ensuring they understand their roles and responsibilities in maintaining compliance.
  4. Collaborate with internal teams: Work closely with internal teams, such as IT, legal and data privacy, to ensure a coordinated approach to AI compliance. This includes sharing information on regulatory updates and collaborating on the development and implementation of compliance policies.
  5. Conduct risk assessments: Periodically assess your organisation’s AI systems and processes to identify potential compliance risks. Then, develop and implement risk mitigation strategies to address these, working cross-functionally to ensure all stakeholders are involved.
  6. Implement monitoring and auditing processes: Establish ongoing monitoring and auditing processes to ensure adherence to AI compliance policies and regulatory requirements. This includes tracking performance where AI is used, reviewing data collection and processing practices, and identifying potential compliance gaps (a minimal sketch of such an audit trail appears after this list).
  7. Establish clear reporting mechanisms: Create clear reporting mechanisms for employees to report potential AI-related compliance issues. Encourage open communication and transparency within the organisation.
  8. Remediate non-compliance promptly: If instances of non-compliance are identified, take swift and appropriate action to remediate the issue and prevent future occurrences. This may involve updating policies, retraining employees or making changes to how AI systems are used.
  9. Engage with external stakeholders: Collaborate with regulators, industry associations and other external stakeholders to stay informed about emerging AI regulations and best practices. Participate in industry forums and contribute to the development of AI regulatory standards.
  10. Foster a culture of compliance: Promote a culture of compliance within the organisation, emphasising the importance of adhering to AI regulations and prioritising the ethical use of AI technologies. Encourage employees to take responsibility for maintaining compliance and to actively contribute to the organisation’s compliance efforts.
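As an illustration of practice 6, here is a minimal sketch of an append-only audit log for AI-assisted decisions. The field names, file location and review workflow are assumptions made for illustration, not requirements drawn from any regulation.

```python
# Hypothetical append-only audit trail for AI-assisted decisions.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_decision_audit.jsonl"  # illustrative location

def log_ai_decision(model_id: str, input_text: str, output: str,
                    reviewer: str | None = None) -> None:
    """Append one decision record; raw inputs are hashed, not stored."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None marks decisions with no human review
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def unreviewed_decisions() -> list[dict]:
    """Return logged decisions that never received human review."""
    with open(LOG_PATH, encoding="utf-8") as f:
        return [r for r in map(json.loads, f) if r["human_reviewer"] is None]

log_ai_decision("screening-model-v2", "candidate CV text", "advance", reviewer="j.doe")
log_ai_decision("screening-model-v2", "another CV", "reject")
print(len(unreviewed_decisions()), "decision(s) pending human review")
```

Even a simple log like this gives auditors a trail to review and surfaces decisions that bypassed human oversight, which is often the first compliance gap worth closing.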

Final words

The EU’s AI regulations are setting the stage for a global standard for AI use that will continue to be expanded upon and adopted internationally.

Compliance officers need to be aware of these regulations and their requirements to keep their organisations compliant. While overcoming the obstacles associated with compliance can be challenging, doing so is essential to ensuring AI is developed and used ethically and safely. Given the sheer power of this technology, best practices will help protect consumers, employees and employers alike. By prioritising compliance with AI regulations, organisations can reap the benefits of AI while also protecting the rights and values of individuals.