
Alison Berryman

Partner - Head of UK, Biztech Lawyers


The why, what, where and when of AI regulation

Alison Berryman explores steps governments have taken to regulate AI technology


There’s no denying that Artificial Intelligence (AI) is the hot ticket in technology right now. The technology isn’t new - it has been hard at work in fields as diverse as credit scoring, recruitment search and self-driving cars (to name but a few) for well over a decade - but over the past few months, and most famously with the release of ChatGPT in November 2022, AI technology has taken a big leap forward both in its capabilities and in public awareness. 

AI presents many potential benefits and opportunities. However, many have also highlighted the associated risks. Professor Stephen Hawking famously warned that "unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation." It is now becoming increasingly important to develop appropriate safeguards.

Governments around the world have been working to address the issues, although the progress of legislation is much slower than the progress of the technology. China has implemented some regulations (albeit limited in scope at present), and more general AI-focused regulations are anticipated in the EU and Brazil this year. Other jurisdictions seem less keen to legislate on AI just yet, but there are plenty of discussions taking place about what controls are needed, and pre-legislation frameworks and guidance have been published in countries including the UK, USA, Australia, Canada, Japan and Singapore.

What are the risks?

•	Safety: The main concern is that we have no idea whether AI will always prioritise human safety and wellbeing. The potential for malfunctions of dangerous equipment presents risks to human life, the distortion of human behaviour via illicit manipulation presents risks to the stability of political regimes, and there is even the existential threat of an AI system having so much power that it could wipe out humanity entirely. Many of the proposed laws address this with a human-centric, risk-based approach.

•	Privacy and security: AI systems can collect and process large amounts of personal data, which could be used to violate an individual's privacy or endanger security. For example, AI-powered devices in our homes and workplaces can capture all manner of information about us, which could be misused or subject to a security breach. Existing regulations address these risks in many jurisdictions and would apply to AI as they would to any other technology, but such protection is not yet available worldwide.

•	Bias and discrimination: Much like a human, if AI is trained on information that includes bias, the AI will assimilate that bias. We are already seeing this in, for example, facial recognition software that fails to recognise black faces, and reports of adverts and articles featuring pictures of women in sportswear being suppressed by social media algorithms, apparently because they were identified as 'racy', while male equivalents received no such treatment.

•	Accountability: It can be difficult to determine who is responsible for the actions of an AI system. A complex system with many contributors, a huge range of training materials, and often one or more human operators will have many possible points of failure. It may be impossible to identify who was at fault if something goes wrong, for example when a self-driving car causes an accident. In the EU at least, specific regulations will likely be created to determine where liability will sit in specified scenarios.

•	Transparency: A common factor underpinning each of the other risks is that we don't have visibility of what an AI system will do in any given situation. Whether it is a lender's algorithm approving or denying someone for credit, a recruitment platform suggesting who should be offered a job, or a self-driving car in a difficult situation deciding who to avoid and who to run over, the problem is the same: because the AI has the capability to learn and evolve beyond the programming given to it by its human creators, it is by definition out of human control. It is unsurprising, then, that alongside rules that completely prohibit certain types of AI and extensive rules on quality control, transparency around an AI's coding and training is one of the key aspects of many of the proposed AI frameworks and regulations.

Regulating AI businesses

While the details are not yet all defined, it is clear that there will be additional burdens on creators of AI technology in many jurisdictions. Most governments (including the UK) recognise the need to support the tech industry, and have stated specifically that their rules will be mindful of the need to sustain innovation. However, obligations relating to governance, quality control, record keeping, reporting and transparency will inevitably have time and cost implications. This is particularly likely to be the case for those creating AI that is considered 'high risk', meaning that the AI system can have a significant effect on the lives of humans.

Transparency obligations will also likely have an impact. It will be interesting to see whether legislators and regulators follow a similar path to the various privacy regulations around the world to ensure transparency of AI and, if so, whether this will relate to all AI products or just specific applications that have been deemed 'high risk'. We may soon see 'AI Notices' alongside the current Privacy Notices: website pages dedicated to explaining a company's AI decision-making practices.

The enforcement of AI regulation will inevitably vary widely between jurisdictions, but some of those that have published guidance to date have indicated that responsibility for regulating AI would sit mostly with existing supervisory bodies, each within its own remit - financial services regulators would be expected to oversee the use of AI in the finance sector, data privacy regulators would ensure that systems complied with privacy laws, and so on.

Conclusion

Whether you are intrigued or terrified by AI, there is no doubt that it is here to stay and is developing at an impressive rate. 

Concerns over safety are justified, but are unlikely to halt progress. Fortunately, both creators and regulators seem alive to the risks posed by this step change in technological advancement. Around the world, laws, frameworks and ethical models are being proposed and discussed to ensure that AI is used to benefit humans and that we don't end up living in the Matrix - or, more realistically, in a world where life-changing decisions can be made by a machine that is neither fair nor caring.

With some regulations already implemented, and others close to their final form, it feels as if we are on the cusp of a whole new area of law. And many would say not a moment too soon, as we launch into a new era of technology.

Alison Berryman is a senior technology lawyer at Biztech Lawyers: biztechlawyers.com