Automated Robotic Weapons

House of Commons

In the House of Commons there are two red lines separating the two sides of the House. The lines are two sword-lengths apart. They are there to underline the fact that, in the House, disputes must be settled by discussion, not swordfights.

But it has been many years since two combatants needed to be within a sword-length of each other to fight. Over the years technology has continually increased the distance between combatants. Today a military target in Pakistan can be taken out by a soldier sitting in Lincolnshire, a soldier who will later go home for his supper.

Unmanned Aerial Vehicle

We have all become familiar with the idea of the ‘unmanned aerial vehicles’ (UAVs, or ‘drones’) used in Afghanistan and Pakistan during the ‘war on terror’. These drones are mostly used for surveillance. But increasingly they carry a ‘payload’ with the kinetic force to disable and destroy. This payload is currently ‘delivered’ by a human being using remote control. But it may only be a matter of time before these robotic weapons are fully automated: the whole targeting process (or ‘kill chain’), having been programmed by a human, will be executed by the robot.

But how do we feel about a robot’s having its – er – finger on the trigger?

Robotic Weapons

We might think this is fine. After all, we may say, the robotic weapon will have been programmed to obey international humanitarian and human rights laws. It will not deliver excessive force (which would be illegal) or fail to discriminate between combatants and non-combatants (ditto). Surely it can only be good to avoid the risks that the military is currently forced to take?

Child with Gun

But such a response might rest on an over-estimation of the powers of artificial intelligence. Will it ever be possible to programme a robot to distinguish between a prone sniper and a wounded combatant? Or between a child with a toy gun and a soldier with a real one? Will it ever be possible to programme a robot to decide whether, in blowing up an enemy installation, the risk to children at a nearby school is ‘acceptable’? Or to weigh up the proportionality of taking any risk to civilians?

And how do we feel about a robot being programmed to kill anyone? Human beings are capable of compassion, of mercy and pity. Robots are programmed. If they have been told to kill, they kill. They do not consider extenuating circumstances. Maybe AI can solve this?

Maybe. In the meantime there will be a discussion about how to regulate automated robotic weapons. Perhaps you should make a contribution?

Further Reading:

Robo Wars (The Oxford Martin School), a paper by Alex Leveringhaus and Gilles Giacca, which formed the basis for this blog post.

The Internet Encyclopaedia of Philosophy entry on Just War.


About Marianne

Marianne is Director of Studies in Philosophy at Oxford University's Department for Continuing Education
