In the House of Commons there are two red lines separating the two sides of the House. The lines are two sword-lengths apart. They are there to underline the fact that, in the House, disputes must be solved by discussion not swordfights.
But it has been many years since two combatants needed to be within a sword-length of each other to fight. Over the years technology has continually increased the distance needed between combatants. Today a military target in Pakistan can be taken out by a soldier sitting in Lincolnshire; a soldier who will later go home for his supper.
We have all become familiar with the idea of the ‘unmanned aerial vehicles’ (UAVs or ‘drones’) used in Afghanistan and Pakistan during the ‘war on terror’. These drones are mostly used for surveillance. But increasingly they carry a ‘payload’ with the kinetic force to disable and destroy. This payload is currently ‘delivered’ by a human being using remote control. But it may only be a matter of time before these robotic weapons are fully automated: the whole targeting process (or ‘kill chain’), having been programmed by a human, will be executed by the robot.
But how do we feel about a robot’s having its – er – finger on the trigger?
We might think this is fine. After all, we may say, the robotic weapon will have been programmed to obey international humanitarian and human rights laws. It will not deliver excessive force (which would be illegal) or fail to discriminate between combatants and non-combatants (ditto). Surely it can only be good to avoid the risks that the military is currently forced to take?
But such a response might rest on an over-estimation of the powers of artificial intelligence. Will it ever be possible to programme a robot to distinguish between a prone sniper and a wounded combatant? Or between a child with a toy gun and a soldier with a real one? Will it ever be possible to programme a robot to decide whether, in blowing up an enemy installation, the risk to children at a nearby school is ‘acceptable’? Or to weigh up the proportionality of taking any risk to civilians?
And how do we feel about a robot’s being programmed to kill anyone? Human beings are capable of compassion, of mercy and pity. Robots are programmed. If they have been told to kill, they kill. They do not consider extenuating circumstances. Perhaps advances in AI will one day change this?
Perhaps. In the meantime, there will be discussion about how to regulate automated robotic weapons. Maybe you should make a contribution?