UK’s strategy for safe artificial intelligence
Katharine Stephens takes a look at the UK’s AI White Paper and the developments needed
In his speech to London Tech Week 2023, the Prime Minister set out his vision to lead on artificial intelligence (AI) safety at home and overseas, while making the UK the best place in the world for AI technology. His vision included not only carrying out cutting-edge safety research in the UK, but also making the UK the intellectual and geographical home of global AI safety regulation.
The UK White Paper
Earlier in the year, the government published a White Paper entitled ‘A pro-innovation approach to AI regulation.’ The proposal is to build on existing regulatory regimes, while establishing a central coordinating function to oversee the regulatory landscape. The government’s plan is that the regulators, making use of their domain-specific expertise, should produce the regulatory guidance and advice. The White Paper sets out five cross-sectoral principles (safety, transparency, fairness, accountability and contestability) which will initially form a non-statutory framework, but will lead to the introduction of a statutory duty on regulators requiring them to have due regard to these principles.
The government has received largely encouraging responses from the Information Commissioner’s Office (ICO) and the Competition and Markets Authority (CMA). But although the government intends to continue with its implementation work, it needs to move faster. As pointed out in a joint report by Tony Blair and William Hague, published to coincide with Tech Week, the risks are profound and the time to shape the technology is now. They warn that significant changes are needed, and very considerable investment must be made, if the UK is to become a leader in the development of safe, reliable and cutting-edge AI.
As a backdrop to this activity, the European Parliament voted on 14 June to approve its version of the AI Act, wanting to show the world that the EU is the first major jurisdiction to have mandatory rules governing AI. The Act still has to be finalised, which will not happen until at least the end of the year, and it will then not take effect for another couple of years. But Margrethe Vestager, the EU’s digital chief, is already reported as saying the Act should “pave the way” for the United Nations to come up with a response to AI.
In contrast to the EU’s top-down, risk-based approach, the UK’s proposal is much less prescriptive. Although the UK’s light-touch approach has the potential to be more attractive to AI businesses, trade barriers caused by differing international regulatory frameworks may in practice mean that those businesses have to comply with more exacting standards imposed elsewhere. Recent analysis by the Department for Digital, Culture, Media and Sport highlights that 51 per cent of UK AI businesses (representing 40 per cent of AI sector revenues) are exporters. The EU’s AI Act may therefore become the de facto standard for all AI products.
Then there is the lack of any proposals to help AI businesses navigate intellectual property (IP) issues, despite the clear recommendation by Sir Patrick Vallance in his ‘Pro-innovation Regulation of Technologies Review’ that the government should announce a clear policy position on the relationship between IP law and generative AI to provide confidence to innovators and investors. This followed last year’s recommendation, later pulled, by the Intellectual Property Office (IPO) to change the text and data mining (TDM) rules in the UK to make it easier to access the data required to train AI systems without infringing copyright and database rights. Currently, the UK has more prescriptive rules than the EU, the USA and Japan.
The White Paper records that the right balance needs to be struck between rightsholders and the AI sector. But instead of grasping the legislative nettle, the government, in response to a question posed by the opposition during Tech Week, stated that the IPO is working with the AI sector and the creative industries to produce a code of practice, including on the licensing of copyright-protected materials.
When the IPO conducted its consultation, users reported mixed experiences with seeking licences, some pointing out that licensing was costly and unworkable when mining content from large numbers of individual rightsholders. A code of practice might help, but it lacks the clarity that a broader TDM exception, or even a compulsory licensing regime, would provide.
To conclude, a light-touch approach to AI regulation could have considerable benefits for a burgeoning UK AI industry, but it needs to be backed up with solutions on how to balance the industry’s needs for access to training data with the protection of copyright works.
Katharine Stephens is a partner at Bird & Bird