When Drones Decide to Kill on Their Own
Posted on 10/02/2012 6:14:25 PM PDT by Altariel
It's almost impossible nowadays to attend a law-enforcement or defense show that does not feature unmanned vehicles, from aerial surveillance drones to bomb-disposal robots, as the main attraction. This is part of a trend, developed over the years, in which tasks that were traditionally handled in situ are now performed remotely, minimizing the risk of casualties while extending the length of operations.
While military forces, police/intelligence agencies and interior ministries have set their sights on drones for missions spanning the full spectrum from terrain mapping to targeted killings, today's unmanned vehicles remain reliant on human controllers who are often based hundreds, and sometimes thousands, of kilometers away from the theater of operations. Consequently, although the use of drones substantially increases operational effectiveness and, in the case of targeted killings, adds to the emotional distance between perpetrator and target, they remain primarily an extension of, and are regulated by, human decision-making.
All that could be about to change, with reports that the U.S. military (and presumably others) has been making steady progress developing drones that operate with little, if any, human oversight. For the time being, developers in the U.S. military insist that when it comes to lethal operations, the new generation of drones will remain under human supervision. Nevertheless, unmanned vehicles will no longer be the "dumb" drones in use today; instead, they will have the ability to reason and will be far more autonomous, with humans acting more as supervisors than controllers.
Scientists and military officers are already envisaging scenarios in which a manned combat platform is accompanied by a number of sentient drones conducting tasks ranging from radar jamming to target acquisition and damage assessment, with humans retaining the prerogative of launching bombs and missiles.
It's only a matter of time, however, before the defense industry starts arguing that autonomous drones should be given the right to use deadly force without human intervention. In fact, Ronald Arkin of Georgia Tech contends that such an evolution is inevitable. In his view, sentient drones could act more ethically and humanely, without their judgment being clouded by human emotion (though he concedes that unmanned systems will never be perfectly ethical). Arkin is not alone in thinking that automated killing has a future, if the guidelines established in the U.S. Air Force's Unmanned Aircraft Systems Flight Plan 2009-2047 are any indication.
In an age when printers and copy machines continue to jam, the idea that drones could start making life-and-death decisions should be cause for concern. Once that door is opened, the risk that we are on a slippery ethical slope with potentially devastating results seems all too real. One need not envision the nightmare scenario of an out-of-control Skynet of Terminator fame to see where things could go wrong.
In this day and age, battlefield scenarios are less and less the meeting of two conventional forces in open terrain; increasingly, they take the form of combatants engaging in close-quarters firefights in dense urban areas. This is especially true of conflicts pitting modern military forces, the very same forces that are most likely to deploy sentient drones, against a weaker opponent, such as NATO in Afghanistan, the U.S. in Iraq, or Israel in Lebanon, Gaza, and the West Bank.
Israeli counterterrorism probably provides the best examples of the ethical problems that would arise from the use of sentient drones with a license to kill. While it is true that domestic politics and the thirst for vengeance are both factors in the decision to attack a terrorist target, in general the Israel Defense Forces (IDF) must continually weigh proportionality, balancing the operational benefits of launching an attack in an urban area against the costs of the attendant collateral damage to civilians. The IDF has faced severe criticism over the years for what human rights organizations and others have called disproportionate attacks against Palestinians and Lebanese. In many instances, such criticism was justified.
That said, what often goes unreported are the occasions when the Israeli government didn't launch an attack because of the high risk of collateral damage, or because a target's family was present in the building when the attack was to take place. As Daniel Byman writes in a recent book on Israeli counterterrorism, Israel spends "an average of ten hours planning the operation and twenty seconds on the question of whether to kill or not."
Those twenty seconds make all the difference, and it's difficult to imagine how a robot could make such a call. Unarguably, there will be times when hatred will exacerbate pressures to use deadly violence (e.g., the 1982 Sabra and Shatila massacre, carried out while the IDF looked on). But equally there are times when human compassion, or the ability to think strategically, imposes restraints on the use of force. Unless artificial intelligence reaches a point where it can replicate, if not transcend, human cognition and emotion, machines will not be able to act on ethical considerations or to imagine the consequences of an action in strategic terms.
How, for example, would a drone decide whether to attack a Hezbollah rocket launch site or depot in Southern Lebanon located near a hospital or with schools in the vicinity? How, without human intelligence, would it be able to determine whether civilians remain in a building, or recognize that schoolchildren are about to leave the classroom and play in the yard? Although humans were ultimately responsible, the downing of Iran Air Flight 655 in 1988 by the U.S. Navy is nevertheless proof that only humans have the ability to avoid certain types of disaster. The A300 civilian airliner, with 290 people on board, was shot down by the USS Vincennes after operators mistook it for an Iranian F-14 and warnings to change course went unheeded. Without doubt, today's more advanced technology would have ensured that the Vincennes made visual contact with the airliner, which wasn't the case back in 1988. Had such contact been made, U.S. naval officers would very likely have called off the attack. Absent human agency, whether a fully independent drone would make a similar call would be contingent on the quality of its software, a not-so-comforting thought.
And the problems don't end there. It has already become clear that states regard the use of unmanned vehicles as somewhat more acceptable than human intrusions. From Chinese UAVs conducting surveillance near the border with India to U.S. drones launching Hellfire missiles at suspected terrorists in places like Pakistan, Afghanistan or Yemen, states regard such activity as less intrusive than, say, U.S. special forces taking offensive action on their soil. Once drones start acting on their own and become commonplace, the level of acceptability will likely increase, further absolving their users of responsibility.
Finally, by removing human agency altogether from the act of killing, the restraints on the use of force risk being weakened further. Technological advances over the centuries have consistently increased the physical and emotional distance between an attacker and his target, resulting in ever-higher levels of destructiveness. As far back as the Gulf War of 1991, critics were arguing that the "videogame" aspect of fixing a target in the crosshairs of an aircraft flying at 30,000 feet before dropping a precision-guided bomb had made killing easier, at least for the perpetrator and the public. Things were taken to a greater extreme with the introduction of attack drones: U.S. Air Force pilots no longer even have to be in Afghanistan to launch attacks against extremist groups there, drawing accusations that the U.S. conducts an "antiseptic" war.
Still, at some point, a human has to make the decision whether to kill or not. It's hard to imagine that we could ever be confident enough to allow technology to cross that thin red line.
Where are John and Sarah Connor when you need them?
“Still, at some point, a human has to make the decision whether to kill or not. It's hard to imagine that we could ever be confident enough to allow technology to cross that thin red line.”
I don’t see much difference between being killed by an ‘intelligent’ drone and being killed by a dumb landmine.
The landmine is much more likely to kill non-combatants than the drone.
(Thanks for posting the article)
“Without doubt, today's more advanced technology would have ensured that the Vincennes made visual contact with the airliner, which wasn't the case back in 1988.”
The drone, in contrast, is actively patrolling.
It would seem scenes like this will soon not be limited to science fiction:
Unless there is a drone chasing them and they are fleeing for their lives.
Oh my God, we’ve created the Cylons.
Let's start building Counter-Drones...!!!!
send up our own drones to shoot down the Pre-killer Drones.
make it the same size...with Lipstick and Nylons!!!!
This is a dead end. You cannot simply kill anyone who wanders outside, but the task of correct identification of a military target is complex enough that even humans cannot do it reliably. It's even more complicated in lands like Afghanistan, where the dividing line between a civilian and a combatant does not exist. A wise general would use only two tools - containment and annihilation. The tool of conquest, with retention of most of the population, will not be effective (as we see every day in Afghanistan.) Therefore drone operations aimed at selective elimination of opponents will not be effective either. You leave them alone (with a wall around the country) or you nuke it from orbit. There is no middle ground.
How long til our enemies will be able to field drones over the CONUS? Russia, China, the Taliban?
There you are, late one night, sound asleep in your bed somewhere in Western Nebraska, when a stealthy Taliban drone slipping silently overhead detects a Bible on your dresser in the moonlight and decides to take out the filthy infidel below. Fifteen seconds later a smart bomb flies down your chimney and your little house on the prairie is no more.
In other words, if we can do it to them, won’t they endeavor, forever, and by assisting each other to that end, to do it to us in spades someday? And doesn’t that mean we should utterly and mercilessly destroy them now while we still can, before that day arrives? Just wondering.
Utterly and mercilessly destroy Russia, China and the Taliban?
That IS one approach to warfare. Let’s see, that would require killing somewhere upwards of 1.5B people.
Two groups of which have lots of nuclear weapons with which to respond.
You might want to rethink your approach to strategy.
No, I think you can see now that there is no such requirement. Not even close.
I will, however, concede your point, and any other obvious points you wish to make, that Russia and China are fully armed nuclear powers and we'd better be careful about attacking them. But let's assume, for sake of argument, that the brains in the Pentagon won't go full Alzheimer's on us and forget them.
The question remains, how do we prevent that day arriving when our enemies can fly stealthy drones over our heads? If you know a way other than utter destruction of these regimes, I'm all ears. Something like the fall of the Soviet Union is not utter destruction and is obviously insufficient.
I'll grant you that the Taliban is not Afghanistan. But Russia and China refer to nations, not regimes. What do you think I'm supposed to assume when you call for their utter and merciless destruction?
Let us assume we "utterly destroy" the present regimes in these countries, but do not exterminate the populations.
Do you have any logical reason at all to believe whatever regimes replace them would not resent the "utter and merciless destruction?"
Which of course would put us back to where we are now.
The Romans and Mongols had the right idea with regard to enemies. There are only two ways to deal with them that don't pile up trouble for the future.
Or turn them into friends.
Of course the second is not something we get to decide on our own. The enemy may not WANT to be our friend.
WWII seemed to take the starch out of Nazi Germany. That's logical, historical and military reason all wrapped up into one nice pretty little package for you. :-)
And there are more like it. Lots of them.
It's a matter of definition -- what you mean by "utter destruction" of a regime. If there is some residual institutional resentment in the new government, I'd argue the old one hadn't been fully and sufficiently destroyed; i.e., hadn't been fully removed from power.