
Trust me, I'm a robot
Economist.com ^ | Jun 8th 2006 | No specific individual credited

Posted on 06/12/2006 6:18:24 PM PDT by annie laurie

IN 1981 Kenji Urada, a 37-year-old Japanese factory worker, climbed over a safety fence at a Kawasaki plant to carry out some maintenance work on a robot. In his haste, he failed to switch the robot off properly. Unable to sense him, the robot kept on working, and its powerful hydraulic arm pushed the worker into a grinding machine. His death made Urada one of the first recorded people to die at the hands of a robot.

This gruesome industrial accident would not have happened in a world in which robot behaviour was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer. The laws first appeared in Asimov's 1942 story “Runaround” and were collected in “I, Robot”, a book of short stories published in 1950 that inspired a recent Hollywood film. But decades later the laws, designed to prevent robots from harming people either through action or inaction (see below), remain in the realm of fiction.

Asimov's Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Indeed, despite the introduction of improved safety mechanisms, robots have claimed many more victims since 1981. Over the years people have been crushed, hit on the head, welded and even had molten aluminium poured over them by robots. Last year there were 77 robot-related accidents in Britain alone, according to the Health and Safety Executive.

With robots now poised to emerge from their industrial cages and to move into homes and workplaces, roboticists are concerned about the safety implications beyond the factory floor. To address these concerns, leading robot experts have come together to try to find ways to prevent robots from harming people. Inspired by the Pugwash Conferences—an international group of scientists, academics and activists founded in 1957 to campaign for the non-proliferation of nuclear weapons—the new group of robo-ethicists met earlier this year in Genoa, Italy, and announced their initial findings in March at the European Robotics Symposium in Palermo, Sicily.

“Security, safety and sex are the big concerns,” says Henrik Christensen, chairman of the European Robotics Network at the Swedish Royal Institute of Technology in Stockholm, and one of the organisers of the new robo-ethics group. Should robots that are strong enough or heavy enough to crush people be allowed into homes? Is “system malfunction” a justifiable defence for a robotic fighter plane that contravenes the Geneva Convention and mistakenly fires on innocent civilians? And should robotic sex dolls resembling children be legally allowed?

These questions may seem esoteric but in the next few years they will become increasingly relevant, says Dr Christensen. According to the United Nations Economic Commission for Europe's World Robotics Survey, in 2002 the number of domestic and service robots more than tripled, nearly outstripping their industrial counterparts. By the end of 2003 there were more than 600,000 robot vacuum cleaners and lawn mowers—a figure predicted to rise to more than 4m by the end of next year. Japanese industrial firms are racing to build humanoid robots to act as domestic helpers for the elderly, and South Korea has set a goal that 100% of households should have domestic robots by 2020. In light of all this, it is crucial that we start to think about safety and ethical guidelines now, says Dr Christensen.

Stop right there

So what exactly is being done to protect us from these mechanical menaces? “Not enough,” says Blay Whitby, an artificial-intelligence expert at the University of Sussex in England. This is hardly surprising given that the field of “safety-critical computing” is barely a decade old, he says. But things are changing, and researchers are increasingly taking an interest in trying to make robots safer. One approach, which sounds simple enough, is to try to program them to avoid contact with people altogether. But this is much harder than it sounds. Getting a robot to navigate across a cluttered room is difficult enough without having to take into account what its various limbs or appendages might bump into along the way.
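To see why even the “just avoid people” approach is harder than it sounds, consider a minimal sketch of a proximity-stop behaviour in Python. Everything here, from the sensor reading to the thresholds, is a hypothetical illustration rather than any real robot's interface:

    # Proximity-stop sketch (all interfaces and thresholds are invented).
    # The robot slows as it nears the closest obstacle and halts inside
    # a fixed safety margin.

    SAFETY_MARGIN_M = 0.5   # full stop if anything is closer than this
    SLOWDOWN_M = 1.5        # begin slowing inside this range
    MAX_SPEED = 1.0         # normalised forward speed

    def forward_speed(nearest_obstacle_m: float) -> float:
        """Scale speed down linearly as the nearest obstacle approaches."""
        if nearest_obstacle_m <= SAFETY_MARGIN_M:
            return 0.0              # inside the margin: stop
        if nearest_obstacle_m >= SLOWDOWN_M:
            return MAX_SPEED        # clear ahead: full speed
        span = SLOWDOWN_M - SAFETY_MARGIN_M
        return MAX_SPEED * (nearest_obstacle_m - SAFETY_MARGIN_M) / span

Note what the sketch leaves out, which is exactly the article's point: a single “nearest obstacle” distance says nothing about where the robot's own limbs are, so a real controller has to run a check like this for every appendage against every obstacle in the room.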

Regulating the behaviour of robots is going to become more difficult in the future, since they will increasingly have self-learning mechanisms built into them, says Gianmarco Veruggio, a roboticist at the Institute of Intelligent Systems for Automation in Genoa, Italy. As a result, their behaviour will become impossible to predict fully, he says, since they will not be behaving in predefined ways but will learn new behaviour as they go.
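A toy example makes Dr Veruggio's point concrete. Even the simplest learning scheme ends up with behaviour that depends on the robot's entire history of experience, so the designer cannot enumerate in advance what it will do. This sketch of a tabular learner is purely illustrative; the states, actions and rewards are invented:

    import random
    from collections import defaultdict

    # Toy value-learning sketch: action preferences are a function of
    # every reward the robot has ever received, not of rules fixed at
    # design time.
    q = defaultdict(float)      # (state, action) -> learned value
    ALPHA = 0.1                 # learning rate

    def act(state, actions, epsilon=0.1):
        if random.random() < epsilon:       # occasional exploration
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])

    def learn(state, action, reward):
        # Nudge the stored value toward the observed reward.
        q[(state, action)] += ALPHA * (reward - q[(state, action)])

Two copies of this machine, given different experiences, will choose different actions in the same situation; that divergence is the unpredictability Dr Veruggio describes.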

Then there is the question of unpredictable failures. What happens if a robot's motors stop working, or it suffers a system failure just as it is performing heart surgery or handing you a cup of hot coffee? You can, of course, build in redundancy by adding backup systems, says Hirochika Inoue, a veteran roboticist at the University of Tokyo who is now an adviser to the Japan Society for the Promotion of Science. But this guarantees nothing, he says. “One hundred per cent safety is impossible through technology,” says Dr Inoue. This is because ultimately, no matter how thorough you are, you cannot anticipate the unpredictable nature of human behaviour, he says. Or to put it another way: no matter how sophisticated your robot is at avoiding people, people might not always manage to avoid it, and could end up tripping over it and falling down the stairs.
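Dr Inoue's redundancy point is easy to sketch, and the sketch also shows its limits. A common pattern is to run duplicate sensors and act only on a majority vote, dropping into a safe state when the copies disagree. The interfaces below are invented for illustration:

    # Redundancy sketch: three duplicate sensors and a majority vote
    # (all names and thresholds here are hypothetical).

    def vote(a, b, c, tolerance=0.05):
        """Return a trusted reading if any two sensors agree, else None."""
        for x, y in ((a, b), (a, c), (b, c)):
            if abs(x - y) <= tolerance:
                return (x + y) / 2
        return None                     # no quorum: trust nothing

    def control_step(a, b, c):
        trusted = vote(a, b, c)
        if trusted is None:
            return "ENTER_SAFE_STATE"   # e.g. stop the motors immediately
        return f"ACT_ON:{trusted:.3f}"

Notice what the redundancy does not cover: a person tripping over the robot is outside the fault model entirely, which is precisely Dr Inoue's objection.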

Legal problems

So where does this leave Asimov's Three Laws of Robotics? They were a narrative device, and were never actually meant to work in the real world, says Dr Whitby. Quite apart from the fact that the laws require the robot to have some form of human-like intelligence, which robots still lack, the laws themselves don't actually work very well. Indeed, Asimov repeatedly knocked them down in his robot stories, showing time and again how these seemingly watertight rules could produce unintended consequences.

In any case, says Dr Inoue, the laws really just encapsulate commonsense principles that are already applied to the design of most modern appliances, both domestic and industrial. Every toaster, lawn mower and mobile phone is designed to minimise the risk of causing injury—yet people still manage to electrocute themselves, lose fingers or fall out of windows in an effort to get a better signal. At the very least, robots must meet the rigorous safety standards that cover existing products. The question is whether new, robot-specific rules are needed—and, if so, what they should say.

“Making sure robots are safe will be critical,” says Colin Angle of iRobot, which has sold over 2m “Roomba” household-vacuuming robots. But he argues that his firm's robots are, in fact, much safer than some popular toys. “A radio-controlled car controlled by a six-year-old is far more dangerous than a Roomba,” he says. If you tread on a Roomba, it will not cause you to slip over; instead, a rubber pad on its base grips the floor and prevents it from moving. “Existing regulations will address much of the challenge,” says Mr Angle. “I'm not yet convinced that robots are sufficiently different that they deserve special treatment.”

Robot safety is likely to surface in the civil courts as a matter of product liability. “When the first robot carpet-sweeper sucks up a baby, who will be to blame?” asks John Hallam, a professor at the University of Southern Denmark in Odense. If a robot is autonomous and capable of learning, can its designer be held responsible for all its actions? Today the answer to these questions is generally “yes”. But as robots grow in complexity it will become a lot less clear cut, he says.

“Right now, no insurance company is prepared to insure robots,” says Dr Inoue. But that will have to change, he says. Last month, Japan's ministry of trade and industry announced a set of safety guidelines for home and office robots. They will be required to have sensors to help them avoid collisions with humans; to be made from soft and light materials to minimise harm if a collision does occur; and to have an emergency shut-off button. This was largely prompted by a big robot exhibition held last summer, which made the authorities realise that there are safety implications when thousands of people are not just looking at robots, but mingling with them, says Dr Inoue.
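Of the three requirements, the emergency shut-off sounds the most trivial, but it has a design subtlety worth showing: the stop must latch and override every normal command, rather than depend on the ordinary control logic behaving well. A minimal sketch, again with invented interfaces:

    import threading

    # Emergency-stop sketch (hypothetical wiring): the flag is set by a
    # button handler and latches until a human explicitly resets it.
    estop = threading.Event()

    def on_button_press():          # imagined hardware button callback
        estop.set()

    def control_cycle(command):
        if estop.is_set():
            return "MOTORS_OFF"     # latched: ignore all normal commands
        return command

In practice an emergency stop usually cuts motor power in hardware as well, so that safety does not rest on a software check alone.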

However, the idea that general-purpose robots, capable of learning, will become widespread is wrong, suggests Mr Angle. It is more likely, he believes, that robots will be relatively dumb machines designed for particular tasks. Rather than a humanoid robot maid, “it's going to be a heterogeneous swarm of robots that will take care of the house,” he says.

The area of robotics most likely to prove controversial is the development of robotic sex toys, says Dr Christensen. “People are going to be having sex with robots in the next five years,” he says. Initially these robots will be pretty basic, but that is unlikely to put people off, he says. “People are willing to have sex with inflatable dolls, so initially anything that moves will be an improvement.” To some this may all seem like harmless fun, but without any kind of regulation it seems only a matter of time before someone starts selling robotic sex dolls resembling children, says Dr Christensen. This is dangerous ground. Convicted paedophiles might argue that such robots could be used as a form of therapy, while others would object on the grounds that they would only serve to feed an extremely dangerous fantasy.

All of which raises another question. As well as posing physical danger, might robots also be dangerous to humans in less direct ways, by bringing out their worst aspects, from warfare to paedophilia? As Ron Arkin, a roboticist at the Georgia Institute of Technology in Atlanta, puts it: “If you kick a robotic dog, are you then more likely to kick a real one?” Roboticists can do their best to make robots safe—but they cannot reprogram the behaviour of their human masters.


TOPICS: Science
KEYWORDS: ai; asimov; ethics; future; robot; robotics; science; technology; threelaws
To: jwh_Denver

That's BRILLIANT! I love it! Could just program some "test motivations" into Charlie and then see what would happen. Genius on the "public domain"! I only thought as far ahead as simple inter/intranets - er... "cybernets". Now the "PD" is truly exponential and purposeful. Love it!

LAWNMOWER MAN is like "Flowers for Algernon", which would also be fun to see again.

Email me for brews! Bill


21 posted on 06/15/2006 5:05:01 PM PDT by LittleBillyInfidel ("Hello Mullah. Hello Fatwa. Little Billy. Not Sinatra.")

To: LittleBillyInfidel

Out of all the Star Trek movies, the one I believe had the best and most fascinating plot overall was the first one. This tiny, utterly inadequate computer is sent on a mission to collect data. After hundreds of years in space it comes across a totally mechanized planet, whose machines see this Voyager as a little inept at fulfilling its programming and build a planet-sized vessel to do the job right, all the while working under the programming of the Voyager.

With our posts I see that the movie leaves us at a point where, after the couple merges with Voyager, it lets us imagine what happens afterwards. I haven't done much thinking on it yet, but the first thing I would say is that "it" becomes aware.


22 posted on 06/16/2006 6:47:32 PM PDT by jwh_Denver (I'm politicked off!)


