Algorithmic power and the fragility of accountability

Algorithmic governance promises efficiency, but its opacity, bias and constitutional implications demand far stronger oversight and legal scrutiny
We find ourselves at an inflection point where the rhetoric of technological progress races ahead of the slower, steadier disciplines of law and regulation. Algorithmic systems, from machine learning and predictive analytics to automated decision-making and decision-support tools, have been embraced with remarkable enthusiasm by policymakers eager to demonstrate efficiency and innovation. Yet, as with previous moments of digital transformation, from early internet governance debates to the rise of platform power, we would be unwise to confuse enthusiasm with legitimacy, or automation with authority.
The promise of algorithmic regulation is seductive. Its champions speak of data-driven precision, neutrality, and the removal of human frailty from administrative decision-making. This vision, so often marketed with the glossy optimism found in certain corners of Silicon Valley, obscures a more uncomfortable reality. Algorithmic systems entrench human judgments, institutional aims, and historical biases, encoding them in a mathematical grammar that lends them the appearance of inevitability. Cathy O’Neil’s Weapons of Math Destruction remains a stark reminder that the authority of numbers can be as political as the authority of law.
Legal systems are, of course, no strangers to complexity. But they are premised on intelligibility. A citizen may disagree with a law; they may dislike a regulatory outcome; but the system is designed, normatively and institutionally, to allow them to understand it, challenge it, and seek redress. Algorithmic governance disrupts this equilibrium. A decision taken by an automated welfare assessment tool or predictive policing model can feel, to the individual subject to it, both inscrutable and unassailable. The Dutch SyRI judgment illustrates the tension: when the rationale for decision-making disappears into proprietary code and inferred correlations, reviewers, including judges, struggle to grasp the object of scrutiny.
This is not merely a technical problem; it is a constitutional one. The rule of law requires more than outcomes – it requires processes that are visible, contestable, and reasoned. Yet opacity, whether due to commercial secrecy or model complexity, has become the quiet default of algorithmic regulation. The UK Information Commissioner’s Office has repeatedly emphasised, in its Guidance on AI and Data Protection, that transparency is not an optional extra. But we must recognise that even well-intentioned regulatory guidance risks being overwhelmed by the scale and speed of contemporary computational systems.
The temptation is to imagine that we can solve these challenges with a technical fix: a new auditing protocol here, an explainability tool there. Yet this misunderstands the problem. Algorithmic systems are not neutral instruments; they are political artefacts. Their design choices – what data to prioritise, what risks to detect, what “accuracy” means – are forms of regulatory judgment. As the EU’s AI Act moves through its phases of implementation, it implicitly acknowledges this by treating high-risk AI systems almost as constitutional actors. But the Act’s success will depend not on its drafting but on the capacity of institutions to operationalise its principles.
If we take accountability seriously, three commitments follow. First, algorithmic systems must be built within the orbit of public law, not adjacent to it. They must be designed for scrutiny, not merely adopted and later retrofitted with oversight. Second, regulators need to be equipped with technical and investigative authority, not simply guidance-issuing powers, if they are to engage with these systems on an equal footing. The work of bodies like the Digital Regulation Cooperation Forum is a start, but only that. Third, we must confront the behavioural dimension of algorithmic governance. Systems that dynamically shape user behaviour – what Karen Yeung has called “hypernudging” – blur the line between regulation and manipulation. They demand a democratic debate, not simply an engineering solution.
Ultimately, the challenge posed by algorithmic regulation is not whether we can integrate machines into governance – we already have, and we will continue to do so. The challenge is to ensure that in doing so we do not erode the foundational, human commitments that give law its authority. The rule of law has endured precisely because it resists opacity, resists inevitability, and resists the concentration of power without justification. If algorithmic systems are to play an expanding role within public administration, they must be held to these same standards.
Innovation is not the enemy of accountability. But neither is it a substitute for it. Our task, then, is to ensure that the future of regulation is built with constitutional sensibilities intact: that technological ambition is matched with legal humility, and that the systems we build continue to serve the public, rather than the other way around.

