How do you regulate AI?

Feature

By Helen Simm, Partner, Browne Jacobson

Helen Simm examines what direction the government might take on AI in a post-Brexit Britain 

Sundar Pichai, CEO of Alphabet and its subsidiary Google, said: “Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.” Governments and regulators face a plethora of challenges in the struggle to keep up with the rapid development of artificial intelligence (AI). AI has been identified globally as an area of significant strategic importance and is considered key to economic development.

However, it is clear a balance needs to be struck between promoting technological innovation and protecting the public’s rights, in order to increase public trust in this constantly developing field. There is no consensus on the approach that should be taken to the regulation of AI.

This has been highlighted by the European Commission’s (EC) white paper, On Artificial Intelligence – A European Approach To Excellence And Trust, which sets out the EC’s precautionary approach to AI regulation, specifically its proposal for a European regulatory framework. This stands in stark contrast to the US’s traditional laissez-faire approach.

Following the UK’s departure from the EU and looking towards the end of the post-Brexit transitional period on 31 December 2020, the question now is: what direction will the UK government take?

A European perspective

One of the key messages outlined in the EC’s white paper is the requirement for a European regulatory framework.

The need for this stems, in the EC’s opinion, from the inability of current, non-specific legislation to deal adequately with features of AI such as opacity, which in turn makes enforcement action difficult. The white paper identifies lack of trust as one of the principal barriers to a broader uptake of AI.

To combat this, the EC’s proposed AI-specific European regulatory framework is intended to help grow an “ecosystem of trust” by imposing stricter legal standards on the use and development of AI. A collaborative approach is also encouraged in the form of a “network of national authorities”, which member states would appoint to share best practice and aid in the application of, and compliance with, the AI legal framework.

The direction expressed in the EC’s white paper requires considerable further development. Although far from becoming legislation, the paper clearly identifies the EC’s intentions for the future regulation of AI, which are increasingly distinguishable from the traditional views held by other key global players, such as the US.

A transatlantic AI alliance

The US’s AI strategy is to create a flexible regulatory environment that promotes AI development and innovation rather than suffocating this nascent field through overregulation. The key concern of the Office of Science and Technology Policy (OSTP) is regulatory overreach, which it fears may force AI innovation overseas; it has advised Europe to “avoid heavy handed innovation-killing models”.

In the OSTP’s view, AI should be considered “a tool with innumerable uses” and therefore should not be regulated “as a phenomenon in and of itself”. Existing legislation is applicable to AI and should be used to govern it. Rather than a government-led regime, the US advocates federal engagement with the private sector to establish a standardised regulatory approach.

The US’s AI strategy is sector-specific and is predominantly led by the large tech corporations, which were traditionally advocates of self-regulation. Interestingly, there appears to have been a recent shift in the attitude of these market leaders: Mark Zuckerberg, CEO of Facebook, for instance, has called for greater regulation of AI as a means of building public trust.

Pichai has commented that “there will inevitably be more challenges that no one company or industry can solve alone”. He acknowledges that existing legislation, for example the General Data Protection Regulation 2016/679, can serve as a strong foundation for AI regulation and that government involvement will be key. In addition, Washington State is on the brink of becoming a pioneer in AI regulation.

The Washington State Legislature has proposed the Artificial Intelligence Profiling Act (HB2644), which seeks to restrict the use of AI-enabled profiling technology and provide more comprehensive consumer privacy protections. This shift in attitude could signal a change in the US’s approach to AI regulation. Although regulation appears likely to continue to be sector-specific, without the creation of an AI regulator, leading tech giants seem keen to see further regulation in this field and are willing to collaborate with government and regulators.

The UK’s stance

A number of the ideas outlined in the EC’s white paper have already been considered by the House of Lords Select Committee on AI in its 2018 report, AI In The UK: Ready, Willing And Able? The report concluded that “AI-specific regulation was not appropriate” and that “existing regulators are best placed to regulate AI in their respective sectors”.

So the UK appears to be more aligned with the US’s sector-specific approach as opposed to being an advocate of an overarching regulator. Current legislation, particularly in relation to data protection and competition, is considered sufficient to regulate AI.

Passing AI-specific legislation could risk overregulation in this sector, which in turn may harm innovation in the UK. Echoing the views of the White House, The Alan Turing Institute noted that “AI is embedded in products and systems, which are already largely regulated”, so it follows that AI-specific regulation is not necessary at this stage.

We can look to the 2020 budget for some indication of the UK government’s future intentions towards the technology and regulatory sectors. The government plans to invest heavily in innovation and has pledged £22bn per year for public research and development. Without doubt, a proportion of this will be invested in the AI industry to enable the UK to compete in the global technology-driven economy.

This will inevitably spark further debate regarding AI regulation. The budget has also shown the government’s commitment to ensuring the UK remains “a dynamic and competitive regulatory environment” following our exit from the EU. To do this, the government is launching its Reforming Regulation Initiative, through which businesses and the public can share ideas for regulatory reform. It is envisaged that regulatory barriers will be reduced and regulators’ capacity increased so that the potential of emerging technologies can be realised while proportionate regulation is maintained.

The government is also promising to invest £10m in the Regulators’ Pioneer Fund. It is evident the government is committed to investing in innovative technologies and to shaping the regulatory environment for them by turning to businesses and the public for their input.

Equally, AI-specific legislation is unlikely to be passed, or an AI regulator created, in the UK in the near future.

A new path

It seems unlikely we will see a global consensus on AI regulation any time soon. The EC has set out Europe’s approach, of which further details are awaited. Given that the US’s strategy continues to be led by the private sector, its approach could shortly take a different turn as market leaders’ attitudes move further towards stricter regulation of AI, albeit far from the scale proposed by Europe.

We are unlikely to see any EU legislation on this topic before the end of the transitional period, so the UK will be free to decide its own path. The opinions set out in the 2018 report remain relevant and have been supported by the announcements in the 2020 budget. These suggest that the UK will continue on its own course rather than following the EC’s views on AI regulation.

Helen Simm is a partner at Browne Jacobson (BrowneJacobson.com)