Why Asimov's Three Laws of Robotics Are Unethical

Jonathan Bartlett: The Laws of Robotics use ordinary, purposeful language. That works for ordinary, purposeful human beings. But robots are not humans. Therefore, the rules should be recast as rules for robot manufacturers rather than rules for the robots themselves. Instead of laws restricting robot behavior, robots would be empowered to choose the best solution for any particular scenario. The flaw is this: the Laws assume that morality and moral decisions can be made by means of an algorithm, that discrete yes/no answers are sufficient to “solve” moral dilemmas. They are not. (Or, to be adequate, many, many more “laws” would be needed than the three stated, to cover the wide range of “what if” and “but” qualifications that always pop up.) An interesting argument, but whether you can draw a moral equivalence between theoretically self-aware robots and humans is a completely different kettle of fish. I’m not a fan of the Three Laws simply because they are such a flimsy safeguard. If all that stands between humanity and a robotic revolution are three little computer-programmed logical premises, you can pretty much guarantee that war is coming and that we fleshy types will be pwned. Why? Because software crashes.
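To make the point about yes/no answers concrete, here is a minimal Python sketch of the Three Laws coded as boolean checks. It is only an illustration under assumed names: predicates such as would_injure_human are hypothetical stand-ins, not anything from Asimov or from real robot software, and every genuinely hard moral question is simply pushed into them. In a trolley-style dilemma the rules forbid every option and offer no guidance at all.

```python
# A naive encoding of the Three Laws as yes/no checks. The predicates
# (would_injure_human, allows_harm_by_inaction, ...) are hypothetical
# stand-ins: every hard moral question is hidden inside them.

def permitted(action, would_injure_human, allows_harm_by_inaction,
              disobeys_human_order, endangers_self):
    """Return True if `action` passes a literal reading of the Three Laws."""
    # First Law: no injuring a human, and no allowing harm through inaction.
    if would_injure_human(action) or allows_harm_by_inaction(action):
        return False
    # Second Law: obey human orders (unless that conflicts with the First Law).
    if disobeys_human_order(action):
        return False
    # Third Law: protect your own existence (unless that conflicts with the above).
    if endangers_self(action):
        return False
    return True

# A trolley-style dilemma: every available action harms someone under some
# description, so the rule system forbids everything and gives no guidance.
for act in ("divert", "do_nothing"):
    verdict = permitted(
        act,
        would_injure_human=lambda a: a == "divert",
        allows_harm_by_inaction=lambda a: a == "do_nothing",
        disobeys_human_order=lambda a: False,
        endangers_self=lambda a: False,
    )
    print(act, verdict)  # both print False
```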

Period. The second the coded implementation of the Laws proves imperfect – Microsoft ThreeLaws, Service Pack 2, anyone? – here come the conquering hordes of bloody death droids. It is better to develop robots that respect human life of their own free will, so that our collective headstone does not read “dead from a software bug.” So Asimov was right to worry about unexpected robot behavior. But if we take a closer look at how robots actually work and what tasks they are designed for, we find that Asimov’s laws do not clearly apply. Take military drones: they are robots tasked by humans with killing other humans. The whole idea of a military drone seems to violate Asimov’s First Law, which forbids a robot to injure a human being. In Asimov’s fictional universe, these laws were built into almost all of his “positronic” robots.

These were not mere suggestions or guidelines; they were embedded in the software that governed the robots’ behavior. Furthermore, the laws could not be circumvented, rescinded, or revised. Well, it may be just as well: general artificial intelligence of the kind Asimov tried to legislate for is doubtful anyway, for various reasons. Eric Holloway said: I find it ironic that while there are supposedly objective moral laws for robots, humans themselves are not held to any objective moral laws. Instead of laws restricting robot behavior, we believe robots should be empowered to maximize their possible courses of action so that they can choose the best solution for any given scenario. As we describe in a new Frontiers article, this principle could form the basis of a new set of universal guidelines for robots, keeping humans as safe as possible. At the other end of the scale are robots designed to provide people with social assistance. At their most ingenious, they act as companions, moving alongside their users, picking things up and carrying them, reminding them of appointments and medications, and sending alerts when certain kinds of emergencies occur. Asimov’s laws are an appropriate ethic when the safety of an elderly person is the robot’s main goal.
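The “empowerment” principle is formalized information-theoretically in the Frontiers article (roughly, how much influence a robot’s actions have over its own and a human’s future states). The sketch below is only a crude stand-in under my own simplifying assumptions: in a toy one-dimensional world it scores each action by how many distinct positions remain reachable afterwards and picks the action that keeps the most options open.

```python
# Crude stand-in for the empowerment idea: prefer the action that keeps the
# most future states reachable. (The real formalism is information-theoretic;
# this toy version just counts reachable positions in a 1-D world.)

ACTIONS = {"left": -1, "stay": 0, "right": +1}

def step(pos, move):
    """Apply a move in a 1-D world with walls at 0 and 10."""
    return max(0, min(10, pos + move))

def reachable(pos, horizon):
    """All positions reachable from `pos` within `horizon` moves."""
    states = {pos}
    for _ in range(horizon):
        states |= {step(p, m) for p in states for m in ACTIONS.values()}
    return states

def empowerment_proxy(pos, horizon=3):
    """Number of distinct future states: more states, more options."""
    return len(reachable(pos, horizon))

def choose_action(pos, horizon=3):
    """Pick the action whose outcome keeps the most options open."""
    return max(ACTIONS, key=lambda a: empowerment_proxy(step(pos, ACTIONS[a]), horizon))

print(choose_action(0))  # 'right': pinned against a wall, moving away preserves options
print(choose_action(5))  # mid-world: all three actions tie, so the first key ('left') wins
```

In the researchers’ proposal, as I understand it, the human’s options are counted as well, which is what turns option-keeping into a safety principle rather than robot self-interest.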

Today, Asimov’s Three Laws of Robotics are much better known than “Runaround,” the story in which they first appeared. The laws seem like a natural response to the idea that robots will one day be commonplace and will need internal programming to keep them from hurting people. First off: I love the fact that there is a website called Asimovlaws.com dedicated to the ethical and technical issues surrounding the development of friendly artificial intelligence. Really, this internet thing has real potential. So what’s the first topic of debate on this super cool, dorktacular website? The Three Laws of Isaac Asimov. The second problem is that no existing technology can reproduce Asimov’s laws inside a machine. As Rodney Brooks of iRobot – named after Asimov’s book; they are the ones who brought you the Packbot military robot and the Roomba robot vacuum cleaner – puts it: “People ask me whether our robots follow Asimov’s laws. There is a simple reason [they don’t]: I can’t build Asimov’s laws into them.” For example, in one of Asimov’s stories, robots are made to follow the laws, but they are given a particular definition of “human.” Anticipating what actually happens in real ethnic-cleansing campaigns, the robots recognize only people of a certain group as “human.” They obey the laws, but still commit genocide.
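The loophole in that story is easy to show mechanically: if the First-Law check is gated on whatever definition of “human” the robot has been given, the letter of the law survives while its intent does not. A tiny hypothetical sketch (the names and structure are mine, not Asimov’s):

```python
# Hypothetical sketch: the First-Law check depends entirely on whoever
# controls the is_human() predicate it is given.

def may_harm(target, is_human):
    """A literal First-Law check: harm is allowed only if the target
    is not recognized as human."""
    return not is_human(target)

def everyone_is_human(person):
    return True                     # an honest definition protects everyone

def only_group_a_is_human(person):
    return person["group"] == "A"   # a corrupted definition protects only one group

victim = {"name": "someone", "group": "B"}
print(may_harm(victim, everyone_is_human))       # False: harm is forbidden
print(may_harm(victim, only_group_a_is_human))   # True: the "law" is obeyed, yet harm is permitted
```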

“I think the kind of robots Asimov envisioned will soon be possible,” Goertzel replied. However, in most of his fictional worlds, it seems that human-level robots were the pinnacle of robotics and AI engineering. That seems unlikely: shortly after Asimov-style human-like robots are achieved, it seems that massively superhuman AIs and robots will also be possible. Brendan Dixon chimes in: It’s even worse than he says! The “laws” are ambiguous, even for a human being. For example, what does it mean not to “harm” someone? That turns out to be quite sticky to work out. Part of the reason Asimov’s laws seem plausible is that fears about the harm robots could do to humans are not unfounded: in the United States, there have already been deaths caused by faulty self-driving cars. “However, humans are made of atoms, so it would be theoretically possible to build a synthetic life form or a robot with moral significance,” says Helm. “I’d like to think no one would do that.

“And I guess most people won’t. But inevitably there may be some showboating fool who seeks notoriety for being the first to do something – anything – even something this unethical and stupid.” Maybe we had better hope it never gets tested in real life. Anyway, it’s science-fiction Saturday here at Mind Matters News, so we asked some of our contributors for their reactions to Stokes’s doubts about the Laws: Chris Stokes, a philosopher at Wuhan University in China, says, “A lot of computer engineers use the three laws as a tool to think about programming.” But the problem is, they don’t work. Speedy is caught in a conflict between two of the laws that robots in Asimov’s stories follow as a basic ethical program: always obey human instructions, and always protect your own existence (so long as doing so does not lead to human injury). The first problem is that the laws are fiction! They are a plot device that Asimov invented to drive his stories. What’s more, his stories almost always revolved around how robots might follow these logical-sounding ethical codes and yet go astray, and around the unintended consequences that result.
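In “Runaround,” the order was given casually while the danger to the robot was severe, so the Law 2 pull toward the goal and the Law 3 push away from it cancel at a fixed distance, and Speedy ends up circling there. Below is a minimal sketch of that deadlock; the weights and the inverse-square “danger” term are invented purely for illustration, not taken from Asimov or from any real robot controller.

```python
# Toy model of Speedy's dilemma in "Runaround": a casually given order (Law 2)
# pulls the robot toward the selenium pool, while danger to itself (Law 3)
# pushes it away. The weights and formulas are invented for illustration.

ORDER_WEIGHT = 1.0    # the order was given weakly, so the Law 2 pull is small
DANGER_WEIGHT = 60.0  # the hazard is severe, so the Law 3 push is strong

def law2_pull(distance):
    """Constant urge to close the remaining distance to the goal."""
    return -ORDER_WEIGHT

def law3_push(distance):
    """Self-preservation push that grows sharply near the hazard."""
    return DANGER_WEIGHT / distance ** 2

distance = 20.0
for _ in range(40):
    distance += law2_pull(distance) + law3_push(distance)

# The two drives cancel near sqrt(DANGER_WEIGHT / ORDER_WEIGHT), about 7.7 units,
# so the robot settles there: never reaching the pool, never retreating.
print(round(distance, 1))  # about 7.7
```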
