

Exterminate, exterminate


By Richard Easton

Could time be up for killer robots even before they become a reality? Richard Easton considers the legal consequences of lethal autonomous weapons systems

In his 1942 short story Runaround, sci-fi master Isaac Asimov proposed the 'Three Laws of Robotics'. The Three Laws are now a standard feature of fictional androids' programming. But should the first of Asimov's Three Laws, that 'a robot may not injure a human being', be enshrined in international humanitarian law?

Killer robots

'Yes,' says Human Rights Watch in its April report on the military use of robots, 'Mind the Gap: The Lack of Accountability for Killer Robots'.

If robotics continues to develop apace, lethal autonomous weapons systems (LAWS) - 'killer robots' able to sense, select, and engage targets without human input or supervision - will stalk the battlefields of the future. With faster-than-human reactions, LAWS would allow powerful states with 'casualty-averse' populations to wage war without endangering actual soldiers.

But what if future killer robots go on a Geneva-Convention-defying rampage? What will the laws of LAWS be?

Enter Human Rights Watch, with fears that killer robots would create 'accountability-free period[s]' during which no one could be held criminally liable for the algorithmic savagery of compassionless droids potentially unable to distinguish between civilians and enemy combatants.

LAWS would create a jurisprudential minefield for international criminal law, according to Human Rights Watch. Killer robots are not natural persons and, as the Nuremberg Tribunal concluded, 'crimes against international law are committed by men, not by abstract entities'. Even if legal personhood were conferred on LAWS, the Rome Statute (the foundation document of the International Criminal Court), for instance, limits liability to natural persons.

And, while autonomous weapons' actions could constitute the actus reus elements of an offence, their processing of data and consequent decision-making lack intentionality and, therefore, do not accord with the law's concept of mens rea. Even if capable of crime, mechanoids are incapable of suffering and cannot, therefore, experience punishment.

Would culpability simply shift to human operators? Not necessarily, frets Human Rights Watch. A human deploying LAWS with the specific intention that the robots should commit grave offences would be criminally liable.

However, evidential difficulties would abound when trying to prove maleficent deployments, especially as autonomous weapons could decide to 'launch independently and unforeseeably an indiscriminate attack against civilians and those hors de combat'.

Might the doctrine of command responsibility, then, fill the accountability vacuum? A superior military officer can be held liable where he effectively controls lawless subordinates but fails to take necessary and reasonable steps to punish or prevent their criminality. However, whether a sentient 'thing' that lacks personhood and is doli incapax can legally commit a 'crime' for which a superior officer could be indirectly responsible is a barbed question.

And can a superior officer without a roboticist's understanding of AI technologies be said to have actual or constructive knowledge of an autonomous machine's murderous capacities? What steps could be taken in the fog of war to override robots with quicker responses than any human? Unless a model of robot known to have 'malfunctioned' by defying international law were recklessly deployed, it is unlikely a superior officer would be held accountable.

And, in the absence of accountability, there would exist, Human Rights Watch fears, impunity for international crimes involving killer robots. War crimes would become mere glitches.

Mission creep

But why not simply limit LAWS' use to operations in civilian-free areas? Human Rights Watch points to similar limitations on the use of cluster munitions, which were developed for exclusive use against military targets or in unpopulated deserts, but were eventually used by 'generally responsible militaries…in populated areas'. Mission creep might similarly see LAWS initially being used against isolated enemy strongholds and later in 'cluttered' environments such as cities.

Human Rights Watch, therefore, argued along with other NGOs at last month's second multilateral meeting of the members of the 1980 Convention on Conventional Weapons in Geneva that an absolute ban on the use of LAWS should be promulgated before such droids are actually created. LAWS would then join blinding laser weapons, which were banned in 1995 under Protocol IV of the 1980 Convention, as an example of military hardware prohibited in advance of its development.

Human Rights Watch's 'Mind the Gap' report is a rarity in legal thinking: a pre-emptive strike. While science pushes forward into the future, law is an inherently backward-looking discipline tied to precedent rather than innovation. Rather than wait for killer robots' development before considering the implications of such technology for the law, Human Rights Watch has produced a futuristic legal briefing.

As cyberpunk novelist William Gibson said, 'the future is already here - it's just not evenly distributed'. And when it comes to killer robots, the law may receive its share of the future early. SJ

Richard Easton is a solicitor at Sonn Macmillan Walker

@SMW_Law