One Congressman's Crusade to Save the World From Killer Robots
Posted on 07/17/2014 6:38:32 PM PDT by markomalley
If a robot soldier commits a war crime, who is held accountable?
You can't punish a collection of parts and coding algorithms. But can you blame a human commander who gave a legal order, only to see the robot carry it out incorrectly? And what about the defense manufacturers, which are often immune from the kind of lawsuits that would plague civilian outfits if their products cost lives?
The culpability question is one of a host of thorny moral dilemmas presented by lethal robots. On the one hand, if effective, robot soldiers could replace ground troops and prevent thousands of American casualties. And robots aren't susceptible to many of the weaknesses that plague humans: exhaustion, sickness, infection, emotion, indecision.
The Massachusetts Democrat, Rep. Jim McGovern, is part of a crusade for an international ban on killer robots: machines that can decide, without human input, whom to target and when to use force.
The only way to stop killer robots, said McGovern and a series of panelists he assembled for a Capitol Hill briefing this week, is to ban them before they even exist. Much like drones, once someone gets a killer robot, it's only a matter of time before everyone else is racing to catch up. And despite some countries' commitment to evaluating the technology responsibly, good intentions never won an arms race.
"The only thing harder than getting a ban in place is getting a ban in place after something is developed," McGovern said.
McGovern is racing technology, but he believes he has time: He thinks it will take another two to three decades before the technology is available.
McGovern's Tuesday panel is part of an ongoing effort by anti-robot activists to raise awareness about the issue. They hope lawmakers will share their concerns and join their push for a worldwide ban. "The U.S. should show leadership on this," said Human Rights Watch's Steve Goose. "If the U.S. were able to get out in front, it would lead the way for many other nations."
So why is it so important that robots never see the battlefield? For some of the panelists, the issue is a moral one. "Do we really want to establish a precedent where it's OK for machines to take the lives of human beings?" said Dr. Peter Asaro, a founder of the International Committee for Robot Arms Control.
For most, though, the chief worry is judgment, and humans' innate ability to read context. "Soldiers have to rely on intention or subtle clues," said Bonnie Docherty, an arms expert at Human Rights Watch and a lecturer at Harvard Law. "We have serious concerns that a fully autonomous weapon could ever reach that level."
Especially on battlefields where soldiers aren't always wearing distinguishing uniforms, the ability to read other humans' actions becomes important. Even in cases where a robot can tell friend from foe, it might have trouble recognizing whether an enemy is surrendering or wounded.
Media depictions like Terminator have anthropomorphized warrior robots, which "implies a level of cognitive ability that these machines do not have," said Paul Scharre, who has worked on the Defense Department's autonomous-weapon policies. "Images from science fiction are not very accurate or very helpful."
Killer robots won't look like humans, and they probably won't act like them either. "What [robots] really lack is a meaningful understanding of context and situation," Asaro said. "It's hard to believe that a machine could be making those kinds of meaningful choices about life and death."
Other concerns include the possibility of malicious hackers taking over a robot army. And then there's the possibility of a "flash war" starting over a mistake. If one robot malfunctions and fires, robots on the other side could return fire automatically, starting a conflict at the speed of circuitry before a human could intervene.
The arms-race worry is very real, Asaro said. Unlike nuclear weapons, which require extreme technical sophistication, killer robots won't be hard to replicate. "Once these technologies are in existence, they'll proliferate widely," he said. "There are even software systems that could be implemented through the Internet."
Despite all these concerns, robot advocates say the rush to ban the technology outright is ill-conceived. While preaching caution on development, they also say it's important to test the systems' limits before crafting policy.
They fear a ban based on imaginations of an android toting a machine gun could interfere with lifesaving technologies like rapid-response air-defense missiles. And while context recognition remains a huge challenge, advocates say it's at least worth exploring whether robot warriors can actually reduce civilian casualties in some circumstances.
There's also the challenge of enforcement. Even if a ban were enacted, it would be hard to tell whether a drone fired a missile on its own, or whether some other weapons system was operating under the commands of a human or an algorithm.
It's not that anyone has killer-robot plans just yet. In fact, the panelists agreed the U.S. has been thoughtful and responsible in approaching the issue. The Defense Department even issued a policy statement on the technology in late 2012 that established a five- to 10-year moratorium on developing killer robots.
But an American stand-alone policy might not be enough. According to Scharre, at least 23 countries have joined the race to build armed drones. It's not hard to imagine a similar push to build machines that could replace combat soldiers, with or without U.S. involvement.
Meanwhile, the issue will only get trickier. We won't make the jump from a flesh-and-blood soldier to a T-1000 overnight, but some combat systems could gradually phase in more and more autonomy.
Some robots will have in-the-loop systems, where human operators monitor actions and can override at any point. The longer-term prospect is an out-of-the-loop robot, one that carries out missions with minimal supervision and no possibility for human control.
Panelists agreed that the best chance for a ban will probably come wrapped in language other than "robot ban." They hope to persuade countries to agree to something in more positive language: that their autonomous weapons will have a human operator monitoring them and ready to take over at any time.
Regardless of just what is allowed, it's important that militaries know where to draw the line before they have the technology to build killer robots. A treaty "frees up weapons developers to know what they're allowed to do," Scharre said.
As robots get more complex, and better able to read and respond to human cues, it's likely some advocates will argue they deserve a more prominent place in combat. But for McGovern and his allies, such weapons would have to meet a challenge they now deem impossible: Can you build a robot not only with a brain but with a soul?
At times like this, I sure miss Quix.
It’s best to get some Old Glory Robot Insurance.
No Quisp or Quake either?
We already have killer robots in a sense. Munitions with proximity fuses, and nuclear ICBMs, are such robots. However their criteria for performing lethal/destructive actions are rather crude. My personal impression is that something more complex probably won’t even be desirable, let alone deployed. How does a robotic guard, for example, know a raccoon from a human?
Maybe he should start by saving the world from Chevrolet.
Technology is neither good nor evil. It just IS, like nature.
Maybe we can ban killer robots, just like we did the atomic bomb.
The mistake is to assume that the owner of the robot cares.
I was thinking about those self-driving cars.
Reprogrammed they could be a pretty lethal weapon.
McGovern’s sugar daddy died many years ago; since then he has become a pair of lips in search of an actual sentence
there was never a mention of “coherence”
in the will
Shades of the Robocop reboot.
Is an autotracking mechanism a “robot”?
How about a landmine?
A traffic light IS a “robot” and called such in South Africa.
killer robots are everywhere!
If you haven’t seen Robocop, the new one has a lone critic of the drone program being challenged on tv by the black “conservative” host played by Samuel L. Jackson (racist).
And ultimately the corporation behind them is shown to be corrupt (as are the police who outsourced law enforcement, and our military is hapless).
heavy handed. And very cold. Not the fun film the first one was (that touched on some of the same subjects).
Ultimately it becomes a live action first person shooter video game (too much CGI cartooning). Yawn.
I haven’t seen the retro-’70s X-Men film yet but understand that it too touches on killer robots (this time urged by a homophobic, I mean mutant-phobic, legislator...).
I frankly can’t take Jim McGovern seriously on this issue, because he’s so vehemently pro-abortion.