Urgent talks on AI cyber risks

British regulators are assessing the risks posed by Anthropic's new AI model in talks with major banks and the government's cyber security agency
British financial regulators are in urgent discussions with the government's cyber security agency and major banks to assess the risks posed by the latest artificial intelligence model from Anthropic, as reported by the Financial Times on Sunday. The Bank of England, Financial Conduct Authority, and Treasury officials are collaborating with the National Cyber Security Centre to scrutinise potential vulnerabilities in critical IT systems that have been highlighted by Anthropic's AI advancements.
Fiona Phillips, who leads Marks & Clerk's AI & Cyber Security legal advisory practice, stated: "The launch of Claude Mythos Preview to selected vendors is a game changer for the cyber industry, especially for defenders, who will now face a surge in patching demands as the ongoing race between attackers and defenders intensifies in the effort to protect organisations from cybercrime." She further noted: "At the same time, the Mythos release highlights concerns around AI regulation. Anthropic has taken a cautious approach by limiting public access to the model due to its potentially dangerous capabilities. This raises the issue of whether other companies will act with similar restraint, and to what extent governments or even international bodies should step in to regulate the development of AI systems powerful enough to have wide-ranging impacts on the global economy."
These discussions come at a crucial juncture, as concerns about AI regulation grow alongside the technology's rapid development, underscoring the need for an approach that balances technological advancement with cyber security.