SJ Interview: Lilian Edwards
For the October 2025 volume, Lilian Edwards speaks to the Solicitors Journal.
Lilian Edwards, Emerita Professor of Law, Innovation and Society at Newcastle University and now Director of Pangloss Consulting Ltd, has been at the forefront of internet law since the 1990s. Her career has spanned the rise of online platforms, the advent of the GDPR, and now the challenges of regulating AI. In this interview, she reflects on the major legal shifts over three decades, the contested future of data governance, and why lawyers should take Black Mirror seriously.
You’ve been at the forefront of internet law since the 1990s. Looking back, what have been the most profound legal shifts in the regulation of the digital world?
In the early days, when we were still calling them ISPs or online intermediaries, there was this sense they were benevolent. They were giving us all these lovely services for free, what The Register once called “the chocolate factory.” The language of the e-Commerce Directive reflected that. Platforms were seen as neutral middlemen, stuck between a rock and a hard place. If users posted bad content, it wasn’t really the platform’s fault, and the law sympathised with that.
So, broadly, we felt quite good about them. Google and Facebook, as it then was, were seen as innovative, helpful companies. But slowly, certainly by 2003–2005, the perception started to change. People began to see platforms as exercising uncontrollable, opaque power over society. That power is destabilising in many ways, most notably for democracy, through things like dark ads and targeted political advertising, and also in relation to children, with endless debates about social media, screen time, and mental health.
What changed? A few things. First, vertical integration: they became behemoths doing everything. They don’t just host content; they control media, news, search, and now even how we write and draw, through AI tools that many people never asked for. Second, concentration: in much of the West, the market consolidated into four or five giants. And third, the rise of algorithmic management. Platforms don’t just host; they decide what we see. Those algorithms are optimised for outrage, because outrage sells advertising.
So we’ve moved from platforms as benevolent intermediaries to platforms as non-neutral, profit-maximising, opaque, global powers. That’s the profound legal and social shift of the last 25 years.
Your work on “data trusts” has been influential. How do you see this model fitting into the current debates around data governance and AI regulation?
My feelings about data trusts have shifted a lot. Originally, I hoped they would provide collective redress for harms that are individually small but socially significant, like spam emails or targeted ads. What are the damages for getting three spam emails? Tiny. But collectively, these harms matter. A trust model could give people a way to act together.
In practice, though, the idea was taken in a different direction, toward “data altruism.” The notion was that people might donate their data, especially medical data, for socially beneficial projects. That’s not a bad idea, and Europe has even provided a legal structure for it under the Data Governance Act. But I’ve seen very little evidence of it producing transformative projects or sustainable intermediaries.
The more interesting development has been around collective action in other guises. Look at the Uber drivers, organised by James Farrar and Workers Info Exchange. Because Uber didn’t provide transparency about pay, drivers began using apps to collect their own trip data. That allowed them to bargain collectively. It’s not exactly a “data trust,” but it’s the same idea: collective empowerment through data.
So while I still like the principle, I no longer think data trusts, in the legal-technical sense, are the way forward. They’re hard to explain, hard to implement legally, and tied up in debates about whether data is property. But collective data rights, yes, that’s still vital.
The EU has finalised its AI Act. Do you think the UK is missing an opportunity by not following a similar comprehensive regulatory route?
The AI Act is technically done, though not everything has come into force. The rules for general-purpose AI, including large language models, started this August. The high-risk AI provisions, covering areas like hiring, welfare, creditworthiness, and even automated judicial decisions, kick in about a year later.
One thing that’s often misunderstood: the AI Act doesn’t regulate “all of AI.” It’s actually very limited. A few things are banned outright: certain uses of facial recognition, for instance. A defined set of high-risk uses are heavily regulated at the design stage. There are some rules about labelling chatbots and deepfakes. And that’s it. Everything else, including social media, algorithmic advertising and search, is outside its mandatory scope.
When the Act was finalised, I was frustrated the UK wasn’t aligning. The Conservative government at the time was desperate not to follow Europe. The rhetoric was all pro-innovation, anti-regulation. They wanted a libertarian “new Singapore,” which is ironic because Singapore actually has a lot of regulation.
That seemed misguided to me. The UK isn’t a major AI market. We like to say we’re “world-leading,” but successful UK AI companies usually get bought by American firms. What we do well is deployment: law firms using ChatGPT, for instance. A friend runs a property law firm in Glasgow and was very excited about using ChatGPT to check land titles. I had to warn him: are you sure you’re supervising that properly? It could go very wrong.
But the point is: we’re mostly deployers. And if you’re selling AI into Europe, you have to comply with the AI Act anyway. So why not harmonise? That was my view, until geopolitics intervened. The “Brussels effect” worked with GDPR, but now we’re seeing a “Mar-a-Lago effect”: Trump, Musk, Zuckerberg and others pushing back against EU regulation. Industry bodies are already trying to pause or water down the AI Act.
So ironically, the UK may benefit from not signing on. We could market ourselves as a friendlier home for US platforms. That wasn’t strategy; it was politics. But it may turn out to be lucky. We’ll see.
How can the legal sector, and society at large, balance the tension between innovation in AI and the need for safeguards to protect rights and prevent harm?
First you have to ask: what is “innovation”? It’s a word governments love, but almost never define. I’ve read through endless white papers on AI from successive governments, and they all chant “innovation, innovation, innovation,” but without clarity. Sometimes they mean support for SMEs. Sometimes open-source. Sometimes productivity growth. It’s vague.
The UK’s approach has been deregulation. Don’t pass the AI Act. Don’t pass the Digital Services Act. Keep rules minimal, except when children are involved and the press gets upset. The gamble is that AI might produce growth, but in the process they’re willing to risk established sectors, like the creative industries, which currently make far more money than AI. For example, the debate over text and data mining pits the AI industry’s hunger for training material against the interests of authors, musicians, and artists. That’s not neutral. That’s picking sides, and they’re having enormous trouble with it.
The mantra that regulation stifles innovation is wrong. Larger companies actually like regulatory certainty. A senior Google lawyer once told me: “We don’t care what the European rules are, we just want them the same everywhere. We don’t want to have to learn 27 different copyright laws.” That’s true of most multinationals. Harmonisation promotes markets.
Regulation also builds trust. Think about cars: nobody would buy them if they regularly burst into flames. Regulation makes consumers confident. The same is true of AI. If people think AI harms children, lies constantly, or hallucinates, they’ll avoid it. That kills markets.
So I’d say regulation, sensibly designed, isn’t the enemy of innovation. It’s the condition of it.
Do you believe existing data protection frameworks like the GDPR are sufficient for the AI era, or do we need an entirely new legal architecture?
My feelings have varied, but broadly, I think GDPR is still fit for purpose. At heart, it’s just a sensible best-practice manual: have a reason for collecting data, don’t repurpose it, don’t keep it forever, keep it accurate. If you asked a group of ordinary people to design rules for handling personal data, they’d probably come up with something like GDPR.
There are two big problems, though. One is cross-border data transfers. European law says you can’t send personal data to countries without “adequate” protections. The US surveillance regime makes that extremely difficult. Attempts like Privacy Shield have collapsed. Nobody really knows how to fix this, although the latest CJEU decision is more positive towards the US.
The other problem is enforcement. The rules are reasonably fine; the political will to enforce them is lacking. Ireland is the main regulator for most big US tech firms in Europe, and its enforcement has been timid. The UK is the same: governments want investment, so they go easy on regulation. That undermines both rights and certainty.
So no, we don’t need a whole new architecture. GDPR works in principle. But without serious enforcement, it’s toothless.
In your experience advising regulators and governments, what is the most common blind spot policymakers have when legislating on emerging technologies?
I think there’s a structural problem in the UK political system. Ministers are rarely skilled in the areas they’re appointed to oversee.
This means government has a persistently weak understanding of technology policy. We have very few scientists in Parliament, very few MPs or ministers with real experience in technology. We used to have more people, even from industry backgrounds, but that has diminished. This isn’t fatal in itself, of course. Ministers can be advised by civil servants or external experts. But if you don’t have a base level of knowledge, you’re heavily reliant on others, and that makes you especially vulnerable to lobbying. Ministers believe what industry says and don’t listen hard enough to other voices, such as civil society.
The other problem is that, lacking in-depth expertise, policymakers are too easily blinded by hype. That’s a recurring pattern. I’ve watched successive governments chase one “next big thing” after another: Bitcoin, blockchain, the metaverse, virtual reality, and now AI. Each time, the playbook is the same: fling large amounts of money at it, make grand claims about Britain being “world-leading,” and hope it delivers growth. There’s an air of desperation about it. And so far, it hasn’t worked.
What we need is more sustained, specialised expertise inside government. Advisors on digital policy who aren’t just political appointees, but who actually know the field. Less politicisation, more grounding in industry and science. It would be refreshing to see more people with practical experience in IT or digital industries shaping these strategies, and more integration of specialised digital legal expertise at policy level. Without that, policymaking will continue to be reactive, hype-driven, and superficial.
You’ve combined serious scholarship with cultural perspectives through initiatives like GikII. How important is it for lawyers to engage with the cultural and societal dimensions of technology?
If you’re a lawyer trying to understand technology’s impact, you can’t just read legislation or case law: you need to pay attention to the cultural narratives surrounding it. Popular fiction, for instance, often captures what people are most anxious or excited about. It doesn’t necessarily predict the future with accuracy, but it does give you a lens into the fears, hopes, and intuitions of society.
Take Black Mirror. I think it has been extraordinarily good at identifying themes that resonate with the public: universal surveillance, the creation of virtual avatars of people we know (or even of those who have died, which I’ve written about myself), the ubiquity of targeted advertising, or the all-pervasive reach of social media profiling. These episodes don’t just entertain; they crystallise abstract issues in a way that makes them accessible. People frequently cite Black Mirror as a way of articulating their unease with technology, and that in itself tells us something important about where regulation and law might need to pay attention.
The episode Nosedive is a good example. Every action you take affects your personal rating, and your entire social existence depends on that score. We may never arrive at that exact scenario, but it struck a cultural chord because it captured, in a clear metaphor, what many people already feel about social validation online and algorithmic judgment. It showed how technological systems could creep into daily life in ways that shape behaviour and identity, and that’s something regulators, policymakers, and lawyers need to take seriously.
From a teaching perspective, I’ve found these cultural references invaluable. When I tell students or policymakers, “This is like Minority Report’s pre-crime system,” or “This is like Black Mirror’s Nosedive,” it clicks instantly. The metaphors give people a way to frame complex issues, whether it’s predictive policing, surveillance capitalism, or the erosion of privacy.
And beyond pedagogy, I think there’s a genuine utility here. Culture gives us a shorthand for the emotional and social dimensions of law and technology. It’s not just entertainment; it’s a mirror of our anxieties, a guide to public concerns, and a tool for engaging both lawyers and the wider society in the conversation.