AI, data and breaches: security in the age of machine learning

As AI transforms business operations, it also magnifies data breach risks—demanding smarter, AI-specific security strategies from both customers and suppliers
As artificial intelligence (AI) becomes deeply embedded in everyday business operations, companies are gaining transformative capabilities—but also facing new and evolving risks to personal data. From model inversion to data poisoning, the digital frontier is not just smarter—it’s more fragile.
While the GDPR and UK GDPR already set clear obligations around data security, these frameworks were not designed with AI-specific threats in mind. Yet Article 32 requires organisations to have a process for “regularly testing, assessing and evaluating” the effectiveness of their security measures. If your business is deploying AI—whether to process personal data or automate customer interactions—those risk assessments must now include novel, AI-specific breach scenarios.
Some of the dangers are obvious: poor design, lack of oversight, and vulnerabilities in third-party AI components. But others are more opaque. A chatbot hallucinating personal medical information, or a corrupted image classifier that falsely identifies someone as a criminal, could each fall under the legal definition of a personal data breach. And that means serious regulatory and reputational consequences.
The Illusion of Intelligence, The Reality of Risk
The complexity of AI models makes them harder to secure. Model inversion attacks, where outsiders extract sensitive training data from AI systems, are no longer theoretical. In fact, they pose a very real threat to data privacy: they can reveal identities, medical conditions, or behavioural traits from supposedly anonymised data.
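To make the threat concrete, the sketch below shows the core idea behind a basic model inversion attack in a white-box setting. It assumes PyTorch and uses a toy, untrained classifier as a stand-in for a deployed model; the attacker optimises an input until the model is highly confident it belongs to a chosen class, and that reconstructed input can leak characteristics of the training data behind the class. It is an illustration of the mechanism only, not a production attack or defence tool.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small classifier standing in for a deployed model the attacker can
# query with access to gradients (white-box setting).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

target_class = 1  # hypothetical sensitive label, e.g. "has condition X"

# Start from random noise and optimise the *input* so the model becomes
# highly confident it belongs to the target class. The result approximates
# what the model "believes" a member of that class looks like, which can
# reveal traits of the underlying training data.
x = torch.randn(1, 16, requires_grad=True)
optimiser = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimiser.zero_grad()
    logits = model(x)
    loss = -logits[0, target_class]  # gradient ascent on class confidence
    loss.backward()
    optimiser.step()

print("Reconstructed feature vector:", x.detach().numpy().round(2))
```

Published research has used variants of this approach to recover recognisable training images from face recognition models using little more than the model's own outputs, which is why even supposedly anonymised training data still carries breach risk.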
Equally troubling is the opacity of IT supply chains. Many companies depend on third-party AI tools or frameworks, often open-source, which dramatically expands the attack surface. A vulnerability in a single component can cascade throughout the system, and determining responsibility can be difficult when the roles of “customer” and “supplier” blur along the chain.
In this environment, traditional data breach plans are not enough. Businesses need a new playbook—one that considers both familiar risks and AI-specific ones.
Practical Steps for Businesses: Prevention Over Panic
First, organisations must maintain a live inventory of AI tools they’ve deployed, including those in test phases. From there, they should carry out AI-specific risk assessments that go beyond generic DPIAs. This means analysing how AI systems are trained, what data they touch, and where vulnerabilities lie—both for personal and non-personal data.
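What such an inventory captures will vary by organisation, but as a purely illustrative sketch (in Python, with made-up field names rather than any prescribed standard), a single register entry might record the supplier, deployment status, the categories of personal data the tool touches, its training data sources, and whether it has been risk-assessed:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIAssetRecord:
    """One entry in a live inventory of deployed AI tools (illustrative only)."""
    name: str                      # internal name of the tool or model
    supplier: str                  # vendor, open-source project, or "in-house"
    status: str                    # e.g. "production", "pilot", "test"
    personal_data_categories: List[str] = field(default_factory=list)
    training_data_sources: List[str] = field(default_factory=list)
    dpia_completed: bool = False
    last_risk_assessment: Optional[date] = None

# Hypothetical example entry, including a tool still in its test phase.
register = [
    AIAssetRecord(
        name="customer-support-chatbot",
        supplier="example-vendor",
        status="pilot",
        personal_data_categories=["contact details", "support history"],
        training_data_sources=["anonymised ticket archive"],
        dpia_completed=False,
    ),
]

# Flag anything touching personal data that has not yet been risk-assessed.
overdue = [r.name for r in register
           if r.personal_data_categories and not r.dpia_completed]
print("Needs AI-specific risk assessment:", overdue)
```

Even a lightweight record like this, kept current, makes it far easier to scope the AI-specific risk assessments described above and to answer a regulator's questions quickly after an incident.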
Incident response plans must be adapted to reflect AI-specific threats. If an employee misuses an AI model, or if a tool generates a harmful hallucination, who is responsible? Who reports the breach, and to whom? These questions must be clearly answered before an incident occurs.
Suppliers, meanwhile, must do more than build clever tools. They need to demonstrate privacy by design, provide usage guidance, and contractually commit to data minimisation and security updates. Suppliers must also guard their models against customer-side threats, especially if a client system is compromised and used as a vector for attack.
Contracts and Cooperation Matter More Than Ever
The lines between supplier and customer are increasingly fluid. In some cases, both may be joint controllers under data protection law. This shared responsibility makes it essential to have clear contractual terms around breach notification, minimum security standards, and liability in the event of a data leak. It also demands a collaborative spirit in risk assessment and incident response—particularly for high-risk AI deployments subject to the incoming EU AI Act.
As global regulatory regimes for AI begin to diverge—between the EU’s strict AI Act, the UK’s more principles-based approach, and looser regimes in the US or Asia—the risk of compliance gaps across borders is growing.
For smaller companies, accessing AI expertise may be difficult. But resources exist: the ICO and European Data Protection Board (EDPB) regularly publish guidance; government-backed tools like the UK’s AI Management Essentials are also in development. What matters is that businesses use these tools now—not after an incident.
Conclusion: Shared Systems, Shared Responsibility
AI is not just another IT tool. It is a complex, adaptive system that reshapes how data is collected, processed, and potentially leaked. Businesses and their suppliers must work together—from procurement to post-deployment—to safeguard privacy and preserve trust.
While the UK and EU are leading on AI-specific data regulation, companies operating globally must also track emerging laws in the US (such as state-level privacy bills) and Asia-Pacific, where AI deployment is accelerating under varied legal frameworks.
In a fragmented global AI environment, smart businesses won’t wait for a breach—they’ll build cross-border resilience before innovation outpaces responsibility.