When Drones Decide to Kill on Their Own
The Diplomat ^ | October 1, 2012 | J. Michael Cole

Posted on 10/02/2012 6:14:25 PM PDT by Altariel

It’s almost impossible nowadays to attend a law-enforcement or defense show that does not feature unmanned vehicles, from aerial surveillance drones to bomb-disposal robots, as the main attraction. This is part of a trend that has developed over the years in which tasks traditionally handled in situ are now performed remotely, minimizing the risk of casualties while extending the duration of operations.

While military forces, police and intelligence agencies, and interior ministries have set their sights on drones for missions spanning the full spectrum from terrain mapping to targeted killings, today’s unmanned vehicles remain reliant on human controllers who are often based hundreds, and sometimes thousands, of kilometers from the theater of operations. Consequently, although drones substantially increase operational effectiveness (and, in the case of targeted killings, add to the emotional distance between perpetrator and target), they remain primarily an extension of, and are regulated by, human decision-making.

All that could be about to change, with reports that the U.S. military (and presumably others) has been making steady progress in developing drones that operate with little, if any, human oversight. For the time being, developers in the U.S. military insist that when it comes to lethal operations, the new generation of drones will remain under human supervision. Nevertheless, these unmanned vehicles will no longer be the “dumb” drones in use today; instead, they will have the ability to “reason” and will be far more autonomous, with humans acting more as supervisors than controllers.

Scientists and military officers are already envisaging scenarios in which a manned combat platform is accompanied by a number of “sentient” drones conducting tasks ranging from radar jamming to target acquisition and damage assessment, with humans retaining the prerogative of launching bombs and missiles.

It’s only a matter of time, however, before the defense industry starts arguing that autonomous drones should be given the “right” to use deadly force without human intervention. In fact, Ronald Arkin of Georgia Tech contends that such an evolution is inevitable. In his view, sentient drones could act more ethically and humanely, without their judgment being clouded by human emotion (though he concedes that unmanned systems will never be perfectly ethical). Arkin is not alone in thinking that “automated killing” has a future, if the guidelines established in the U.S. Air Force’s Unmanned Aircraft Systems Flight Plan 2009-2047 are any indication.

In an age when printers and copy machines still jam, the idea that drones could start making life-and-death decisions should be cause for concern. Once that door is opened, the risk that we are on a slippery ethical slope with potentially devastating results seems all too real. One need not envision the nightmare scenario of an out-of-control Skynet of Terminator fame to see where things could go wrong.

In this day and age, battlefield scenarios are less and less the meeting of two conventional forces in open terrain, and instead increasingly take the form of combatants engaging in close-quarters firefights in dense urban areas. This is especially true of conflicts pitting modern military forces (the very forces most likely to deploy sentient drones) against a weaker opponent, such as NATO in Afghanistan, the U.S. in Iraq, or Israel in Lebanon, Gaza, and the West Bank.

Israeli counterterrorism probably provides the best examples of the ethical problems that would arise from the use of sentient drones with a license to kill. While it is true that domestic politics and the thirst for vengeance are both factors in the decision to attack a “terrorist” target, in general the Israel Defense Forces (IDF) must continually apply proportionality, weighing the operational benefits of launching an attack in an urban area against the attendant civilian collateral damage. The IDF has faced severe criticism over the years for what human rights organizations and others have called “disproportionate” attacks against Palestinians and Lebanese. In many instances, such criticism was justified.

That said, what often goes unreported are the occasions when the Israeli government didn’t launch an attack because of the high risks of collateral damage, or because a target’s family was present in the building when the attack was to take place. As Daniel Byman writes in a recent book on Israeli counterterrorism, “Israel spends an average of ten hours planning the operation and twenty seconds on the question of whether to kill or not.”

Those twenty seconds make all the difference, and it’s difficult to imagine how a robot could make such a call. Undoubtedly, there will be times when hatred exacerbates pressures to use deadly violence (e.g., the 1982 Sabra and Shatila massacre, carried out while the IDF looked on). But equally there are times when human compassion, or the ability to think strategically, restrains the use of force. Unless artificial intelligence reaches a point where it can replicate, if not transcend, human cognition and emotion, machines will not be able to act on ethical considerations or to imagine the consequences of action in strategic terms.

How, for example, would a drone decide whether to attack a Hezbollah rocket launch site or depot in southern Lebanon located near a hospital or schools? How, without human intelligence, would it be able to determine whether civilians remain in the building, or recognize that schoolchildren are about to leave the classroom and play in the yard? Although humans were ultimately responsible, the downing of Iran Air Flight 655 by the U.S. Navy in 1988 nevertheless shows that, for now, only humans have the ability to avert certain types of disaster. The A300 civilian airliner, with 290 people on board, was shot down by the USS Vincennes after its operators mistook it for an Iranian F-14 and warnings to change course went unheeded. Without doubt, today’s more advanced technology would have ensured the Vincennes made visual contact with the airliner, which wasn’t the case back in 1988. Had such contact been made, U.S. naval officers would very likely have called off the attack. Absent human agency, whether a fully independent drone would make a similar call would be contingent on the quality of its software, a not-so-comforting thought.

And the problems don’t end there. It has already become clear that states regard the use of unmanned vehicles as somewhat more acceptable than human intrusions. From Chinese UAVs conducting surveillance near the border with India to U.S. drones launching Hellfire missiles at suspected terrorists in places like Pakistan, Afghanistan, or Yemen, states regard such activity as less intrusive than, say, U.S. special forces taking offensive action on their soil. Once drones start acting on their own and become commonplace, the level of acceptability will likely rise, further absolving their users of responsibility.

Finally, by removing human agency altogether from the act of killing, the restraints on the use of force risk being further weakened. Technological advances over the centuries have consistently increased the physical and emotional distance between an attacker and his target, resulting in ever-higher levels of destructiveness. As early as the 1991 Gulf War, critics were arguing that the “videogame” and “electronic narrative” aspects of fixing a target in the crosshairs from an aircraft flying at 30,000 feet before dropping a precision-guided bomb had made killing easier, at least for the perpetrator and the public. Things were taken to a greater extreme with the introduction of attack drones, with U.S. Air Force pilots no longer even having to be in Afghanistan to launch attacks against extremist groups there, drawing accusations that the U.S. conducts an “antiseptic” war.

Still, at some point, a human has to make a decision whether to kill or not. It’s hard to imagine that we could ever be confident enough to allow technology to cross that thin red line.


TOPICS: Miscellaneous
KEYWORDS: donutwatch; drones; skynet; terminator
H-Ks: coming soon to a future near you?

Where are John and Sarah Connor when you need them?

1 posted on 10/02/2012 6:14:27 PM PDT by Altariel

To: Altariel

SKYNET ......


2 posted on 10/02/2012 6:16:28 PM PDT by The Sons of Liberty ("Get that evil, foreign, muslim, usurping bastard out of MY White House!" FUBO GTFO!)

To: Altariel

“Still, at some point, a human has to make a decision whether to kill or not. It’s hard to imagine that we could ever be confident enough to allow technology to cross that thin red line.”


I don’t see much difference between being killed by an ‘intelligent’ drone and being killed by a dumb landmine.

The landmine is much more likely to kill non-combatants than the drone.


3 posted on 10/02/2012 6:29:24 PM PDT by UCANSEE2 ( If you think I'm crazy, just wait until you talk to my invisible friend.)

To: Altariel

(Thanks for posting the article)

” Without doubt, today’s more advanced technology would have ensured the Vincennes made visual contact with the airliner, which wasn’t the case back in 1988.”


Baloney. If you wait until you can see the ‘enemy’, you are dead. And that is even more true today.


4 posted on 10/02/2012 6:32:45 PM PDT by UCANSEE2 ( If you think I'm crazy, just wait until you talk to my invisible friend.)

To: UCANSEE2
The difference being that the landmine, of necessity, stays in the exact same position, and non-combatants, hopefully, are aware of the presence of the minefield and will avoid the area.

The drone, in contrast, is actively patrolling.

It would seem scenes like this will soon not be limited to science fiction:


5 posted on 10/02/2012 6:40:24 PM PDT by Altariel ("Curse your sudden but inevitable betrayal!")

To: Altariel
In my headache-debilitated stupor I misread the title as "When Drones Decide to Kill Their Own." Which does raise the spectre of anti-drone drones, programmed to seek out other flying automatons and destroy them.
6 posted on 10/02/2012 6:45:40 PM PDT by ctdonath2 ($1 meals: http://abuckaplate.blogspot.com)

To: Altariel
and non-combatants, hopefully, are aware of the presence of the minefield and will avoid the area.

Unless there is a drone chasing them and they are fleeing for their lives.

7 posted on 10/02/2012 6:52:25 PM PDT by UCANSEE2 ( If you think I'm crazy, just wait until you talk to my invisible friend.)

To: Altariel

Oh my God, we’ve created the Cylons.


8 posted on 10/02/2012 7:17:03 PM PDT by ROCKLOBSTER (Celebrate "Republicans Freed the Slaves" Month)

To: The Sons of Liberty

Let's start building Counter-Drones...!!!!

send up our own drones to shoot down the Pre-killer Drones.

make it the same size...with Lipstick and Nylons!!!!


9 posted on 10/02/2012 7:58:18 PM PDT by LtKerst

To: Altariel
Most of those ethical questions disappear once operations are carried out without concern for the lives of locals. Somalia, Yemen, and Afghanistan all present such opportunities. The drone arrives in the zone of operations, the weapons interlock is disabled, and the drone looks for targets and eliminates them one by one until all its ammunition is expended. Then the drone returns to base. This is the method of operation of the machines in the Terminator movies. The only difference is that there is a human mind somewhere far behind the machines.

This is a dead end. You cannot simply kill anyone who wanders outside. But the task of correctly identifying a military target is complex enough that even humans cannot do it reliably. It's even more complicated in lands like Afghanistan, where the dividing line between a civilian and a combatant does not exist. A wise general would use only two tools: containment and annihilation. The tool of conquest, with retention of most of the population, will not be effective (as we see every day in Afghanistan). Therefore drone operations aimed at selective elimination of opponents will not be effective either. You leave them alone (with a wall around the country) or you nuke it from orbit. There is no middle ground.

10 posted on 10/02/2012 8:56:40 PM PDT by Greysard

To: ROCKLOBSTER

11 posted on 10/02/2012 9:05:47 PM PDT by firebrand

To: Altariel

How long until our enemies are able to field drones over CONUS? Russia, China, the Taliban?

There you are, late one night, sound asleep in your bed somewhere in Western Nebraska, when a stealthy Taliban drone slipping silently overhead detects a Bible on your dresser in the moonlight and decides to take out the filthy infidel below. Fifteen seconds later a smart bomb flies down your chimney and your little house on the prairie is no more.

In other words, if we can do it to them, won’t they endeavor, forever, and by assisting each other to that end, to do it to us in spades someday? And doesn’t that mean we should utterly and mercilessly destroy them now while we still can, before that day arrives? Just wondering.


12 posted on 10/03/2012 1:12:25 AM PDT by LibWhacker

To: LibWhacker

Utterly and mercilessly destroy Russia, China and the Taliban?

That IS one approach to warfare. Let’s see, that would require killing somewhere upwards of 1.5B people.

Two groups of which have lots of nuclear weapons with which to respond.

You might want to rethink your approach to strategy.


13 posted on 10/03/2012 3:55:05 AM PDT by Sherman Logan (Perception wins all the battles. Reality wins all the wars.)

To: Sherman Logan
So to utterly and mercilessly destroy a regime, you would argue it was necessary to kill everyone living under it?

Shame... SHAME... SHAME!!!

No, I think you can see now that there is no such requirement. Not even close.

I will, however, concede your point, and any other obvious points you wish to make, that Russia and China are fully armed nuclear powers and we'd better be careful about attacking them. But let's assume, for sake of argument, that the brains in the Pentagon won't go full Alzheimer's on us and forget them.

The question remains, how do we prevent that day arriving when our enemies can fly stealthy drones over our heads? If you know a way other than utter destruction of these regimes, I'm all ears. Something like the fall of the Soviet Union is not utter destruction and is obviously insufficient.

14 posted on 10/03/2012 7:15:53 AM PDT by LibWhacker

To: LibWhacker
Utterly and mercilessly destroy Russia, China and the Taliban?

I'll grant you that the Taliban is not Afghanistan. But Russia and China refer to nations, not regimes. What do you think I'm supposed to assume you mean when you call for their utter and merciless destruction?

Let us assume we "utterly destroy" the present regimes in these countries, but do not exterminate the populations.

Do you have any logical reason at all to believe whatever regimes replace them would not resent the "utter and merciless destruction?"

Which of course would put us back to where we are now.

The Romans and Mongols had the right idea with regard to enemies. There are only two ways to deal with them that don't pile up trouble for the future.

Exterminate them.

Or turn them into friends.

Of course the second is not something we get to decide on our own. The enemy may not WANT to be our friend.

15 posted on 10/03/2012 7:26:26 AM PDT by Sherman Logan (Perception wins all the battles. Reality wins all the wars.)

To: Sherman Logan
Do you have any logical reason at all to believe whatever regimes replace them would not resent the "utter and merciless destruction?"

WWII seemed to take the starch out of Nazi Germany. That's logical, historical and military reason all wrapped up into one nice pretty little package for you. :-)

And there are more like it. Lots of them.

It's a matter of definition -- what you mean by "utter destruction" of a regime. If there is some residual institutional resentment in the new government, I'd argue the old one hadn't been fully and sufficiently destroyed; i.e., hadn't been fully removed from power.

16 posted on 10/03/2012 8:54:11 AM PDT by LibWhacker

