Drones, Robots and the Ethics of Armed Conflict in the 21st Century

Wow!

Robo Wars (The Oxford Martin School)

Did you know there are machines out there capable, once programmed, of searching out their target and delivering their lethal ‘payloads’ without further human intervention? In other words, the ‘decision’ to kill belongs to the machine itself.

Alex Leveringhaus argues that there is a case for banning such machines and insisting there should always be a human involved at the crucial moment. Alex’s case is that whereas a human is always capable of changing his or her mind, of exercising mercy and compassion, a machine just delivers, mindlessly and efficiently.

Abu Ghraib

Wil Wilson argues, on the other hand, that no human can be expected to analyse and act on all the information generated in today’s armed conflicts – the task must be automated. Machines, he insists, are more efficient, more capable of doing the job and less likely to make mistakes. Whilst recognising that only humans can be merciful and compassionate, he points out that humans can also be malicious and vindictive. The Abu Ghraib atrocities were not, after all, committed by machines.

Two soldiers fighting

Alex’s claim was vividly illustrated for me the following day when, discussing the weekend with a friend, he told me about a friend of his father’s who, during the Second World War, was about to kill a German soldier when he recognised him as his old German teacher. The German recognised him too, and they decided instead to toss a coin to see who would take the other prisoner. The Englishman won. No-one died.

No machine could have recognised the German combatant as his old teacher, and seen that as a reason to hold his fire. Surely this is an excellent reason to insist that it is always a human who pulls the trigger?

On the other hand, if the soldier hadn’t been his old German teacher, he would have shot him and – well – that’s the end of it. One soldier kills another. Surely that’s what war is all about? Would the introduction of machines change that? Wouldn’t it just mean that the soldiers on each side are safer, both physically and mentally? After all, if machines do the dirty work, who cares if they get blown up? No machine will ever suffer post-traumatic stress, or burst into tears many years later at the thought of what it did during the war (as, in his eighties, my father did).

Robot

There is another problem, of course, with fully automated weapons. Will they ever be sufficiently free of risk to be worth it? They will have to learn, of course, and once a machine can learn its behaviour will not be fully predictable. Might a machine be capable of the sort of atrocity a human is? Could a robot run amok and kill all the villagers it is supposed to protect? If so, who is to blame? Not the machine – machines are not moral agents. The programmer perhaps? Alex believes that the problem is not the ‘responsibility gap’, because the programmer can be blamed. But risk is a problem – will we ever be sure enough that a machine will not do this? Wil has faith in technology – he believes that we will reach a stage where the risk – albeit still there – will be worth taking for the sake of the humans who would otherwise be risking their lives.

This was an extraordinary weekend. Very enjoyable and extraordinarily thought-provoking. The weekend came about because, after one of my open day lectures, I was approached by a man who told me he was one of the officers contributing to army policy on fully automated weapons (or ‘killer robots’, as the tabloids would have it). We had a fascinating conversation and I asked him if he would do a weekend school for us.

Normally, of course, my speakers are philosophers, but this seemed too good an opportunity to miss. Paul Gittins agreed. In the event, however, Paul was posted to the Gulf and instead he sent Wil Wilson, who did an excellent job.

During the weekend one of the sessions was a ‘conversation’ between Wil and Alex, and many people found it extremely illuminating. I shall certainly consider incorporating that into future weekends.

In the meantime what do you think – should we ban fully automated weapons or do you think they should be permitted?

Here is some extra reading:

Inside the Pentagon’s Effort to Build a Killer Robot (Time Magazine)

Human Rights Watch Campaign to Stop Killer Robots

Do Killer Robots Violate Human Rights? (The Atlantic)

About Marianne

Marianne is Director of Studies in Philosophy at Oxford University's Department for Continuing Education

11 Responses to Drones, Robots and the Ethics of Armed Conflict in the 21st Century

  1. Scarlatti says:

The injustice of the drones is unconnected to the human activity at the moment of killing. Someone could always recognize the target a week ahead of time as an old chum, and so sabotage the thing long in advance. What difference would that make? Besides, the remote pilots are nowhere near the targets when they hit that kill button and vaporize that man.

Only in the case that the drone was selecting the target itself, through its own resources, would a new situation arise. For instance if it were able to gather intelligence and add targets to the kill list on its own, obeying only a pre-programmed criterion or algorithm. But, if the names are already marked by humans in advance, it makes absolutely no difference how the killing is brought to a finish.

This is rather analogous to the automated car issue. Some people claim that the cars, as they must be programmed in advance [they are not AI, but rather ‘embedded intelligence’ vehicles, with little learning capacity], will be making decisions that lead to deaths – in the case of saving two people by running a car with one driver off the road to (near) certain death, for instance.

In that case it seems that since we already choose to inflict some tens of thousands of deaths by the choice of allowing driving (tout court), and since there will be fewer deaths with the automated cars, if they perform as claimed, only a kind of superstition can oppose them on the basis of ethical considerations. [Of course, some people are also opposed for the simple reason that people like to drive, and that once enough automated cars get on the road it will be like the end of the horse and wagon, forcibly obviated.]

    • Marianne says:

Hi Scarlatti, sorry I took so long to approve and reply to this. During the weekend we were talking only of the situation (which has not yet arisen) where the drones would be fully automated (i.e. no human involved in the ‘kill chain’). No decision has yet been made on the legality of this, and the weekend was intended to contribute to the debate about whether it should be permitted or not.

      Some would disagree with you that it makes no difference whether a human being is involved in the final command to kill. I am not sure whether names are always involved (and suspect not). I agree with you that we allow people to be killed on the roads with impunity and it seems unlikely that automated cars will add significantly to this. Possibly there will even be (as you suggest) fewer deaths.

      Marianne

  2. Scarlatti says:

    To be clear, my post also concerned only the ‘fully automated’ case. “Find, fix and finish,” is the slogan. That presupposes a name, i.e., a human-authorized target. Again, automated adding of names would change the issue greatly.

The most rigorous argument against would be the potential opprobrium connected with ‘war without war’, no casualties on our side, raised to the level of an outrageously automated slaughter. The moral or non-mercenary consideration would be that it is inhumane and cruel.

    There are no serious arguments concerned with the statistical efficacy or the mechanics of the matter.

    True, as Alfred North Whitehead remarked, it is the business of the future to be dangerous. We do not truly know without the experience, and, presumably, there are those who would not enjoy that experience.

    • Marianne says:

Hi again Scarlatti, thanks for coming back, and for clarifying that you were also talking about fully automated cases. I see what you mean about a human-authorised target. That certainly makes me feel better about it. But even so, I can imagine situations in which the ‘human-authorised target’ is, when found, cuddling his granddaughter. Can we allow for this? There is something deeply uncomfortable about the fact that there would be no casualties on our side – that, indeed, those who watch the killing then go home for supper.

      I like the Whitehead quote.

  3. Scarlatti says:

Allowing that the following also falls under Whitehead’s remark, I believe there is a clarification that is helpful:

There is an ambiguity in the debate based on a general lack of understanding about the extreme sophistication of the technology involved. The algorithms and sensors (heat, radar, laser-topographic mapping, etc.) are decisively superior (compared to the human pilot) in accuracy of identifying life, i.e., the granddaughter, etc. Statistically there would surely be fewer non-targeted deaths. But individually, specifically, there would be computer-caused non-targeted deaths.

    This is what I meant by my assertion: There are no serious arguments concerned with the statistical efficacy or the mechanics of the matter.

    • Marianne says:

Thanks for your remarks Scarlatti – I think the issues are very difficult, and that we are quite a way from solving them. Must the drone have been programmed NOT to kill in the event it detects its target embracing another body? What if the other body is the second in command rather than the granddaughter?

      • Scarlatti says:

        The “second in command” would have to be, already, present on that kill list in the simplest case.

        It seems to me that it is likely that with a human behind the trigger the temptation would be to violate that list.

With respect to unintentional deaths, to reiterate, I believe the human remote-controller would surely cause more such deaths. And that the statistical superiority of the so-called embedded intelligence will only be marred by the fact that there would be individual cases of non-target deaths that are caused by those machines. For example, a wall falls over, leading to a domino effect that was especially hard to predict. The result: a schoolroom on the other side of the block is crushed and a class of children killed. Such deaths would be anecdotal (statistically anomalous) from a calculative statistical standpoint, but perhaps only from a calculative statistical standpoint.

        Personally, it seems to me, that what is very questionable is the terror produced by the very presence of such so-called remote-piloted vehicles in the air of one’s city or country. Something similar, however, could be said for the presence of piloted planes. It is questionable whether David Cameron, e.g., can be called a respectable world leader, if he promotes the terrorization of large populations in such a manner.

        This last consideration just goes to show that only the statistics can be taken clearly and calculatively. But it is somewhat useful to my mind to make that clear to oneself.

  4. Jorgen Larsen says:

Imagine, if you will, a fully automated capability, programmed to the task of searching out the facts and proof concerning certain events and finally, in consequence thereof, delivering and effecting its fully automated response, i.e. the verdict and punishment. Hey presto, we have a fully automated police, judge, jury and henchman all rolled into one fell machine. Yet even here, in this 1984/Judge Dredd fantasy, there is a physical button to be pressed by some wary biological entity at some point. There, my friends, lies the crux of the matter. Happy New Year!

  5. Chris Matthews says:

    Hi Marianne,

    I have enjoyed your lectures and the warmth of your gracious and patient sense of being. Thank you for sharing so much of your self and your values and points of view.

I find it more alarming (than the actual advances in technology) that we ‘choose’ as a civilisation to exploit new technology fully for military purposes first (above all other considerations); and then, as some kind of afterthought and by-product, allow these technologies to be used for communal benefit, e.g. autonomous vehicles.

This to me is the larger problem. The ability of newer military devices/machines to evaluate, acquire and engage a target is not a large progression from the ability of a cruise missile from the 1980s to self-navigate to its target and even make course corrections, or a heat-seeking missile from the same era.

    These older forms of technology could be considered more dangerous, because once they have been launched they cannot be recalled or stopped from pursuing their target.

    Back to the (smaller) problem of whether or not killer robots or autonomous military drones could cause more harm and be more dangerous – I think it is a fair and valid question and has many serious ethical implications beyond any malfunctions or problems caused by the technology itself.

For example, the mere existence of such a technology and the concentration of (military) power and the imbalance it can create on the world stage (i.e. a unipolar world instead of a multipolar collaborative world). Secondly, the concentration of power of the so-called 1%, represented by their agency in NGOs, our governments and international corporations, versus the wider world’s population and citizens, or the common purposes and ‘good’ of humanity.

Therefore, in my view these smaller problems are only a manifestation of a larger problem: the deeper issues are not really about the technology but about the human condition.

However, when looking at the ‘larger’ problem(s) that this new technology represents, it can be said, first, that all of the problems discussed and raised about the nuclear bomb and nuclear warfare in the 1950s essentially recur here; and secondly, that the introduction of this new autonomous robot and drone technology and warfare is a progression towards a greater intensification and concentration of power and wealth for 1% of the world’s elites.

And this is the main point of my concern: that the advances of technology have been focused almost exclusively on increasing the concentration of power and wealth of a smaller group of elites and less employed for the benefit of humanity as a whole; and that the overall progression and use of these technologies have not changed since the introduction of nuclear fission and the atom bomb.

    • Marianne says:

      Dear Chris, I am sorry for taking so long to reply – shorter posts are likely to get quicker replies (especially during term)! Thank you for your kind words about my stuff – I enjoy the means by which I can spread philosophy these days. I suspect military technology has always driven non-military technology to some extent. Anyone who thinks the job of government is defence (and many people think this is their sole job, not just their first job) is likely to find this unsurprising. I agree that drone technology creates an imbalance. The last thing we expect is that one of OUR senior leaders will suddenly be taken out from a street in London. But this could happen, should the ‘other side’ catch up technologically-speaking. Marianne
