Robo-Morality: Can philosophers program ethical codes into robots?
The science fiction canon is filled with stories of robots rising up and destroying their human masters. From its beginnings in Frankenstein, up through the stories of Isaac Asimov and Philip K. Dick, to The Terminator, The Matrix, and beyond, popular culture brims with fear of humanity’s hubristic creations. The more intelligent they are, the scarier they become.
This is one reason why the Office of Naval Research (ONR) grant of $7.5 million to university researchers at Tufts, Rensselaer Polytechnic Institute, Brown, Yale, and Georgetown to build ethical decision-making into robots is simultaneously comforting and eerie. The goal of this interdisciplinary research—carried out by specialists in Artificial Intelligence (AI), computer science, cognitive science, robotics, and philosophy—is to have a prototype of a moral machine within the next five years.
The questions of whether a machine has moral agency or can exhibit intelligence are interesting, albeit esoteric, topics for philosophers to ruminate over. The aim of this research is not to puzzle over abstract conundrums but to identify the considerations that ordinary humans weigh when making moral decisions, and then to implement those considerations in machines. As Steven Omohundro, a leading AI researcher, points out in a May 13 article at the news site Defense One, “with drones, missile defense, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions.” The military currently prohibits fully autonomous armed systems. But military technology is growing ever more sophisticated, and in scenarios where lives are at stake, machines capable of weighing moral factors will be important.
Matthias Scheutz, a researcher at Tufts who will take the lead on this research, gives the example of a robot medic en route to deliver supplies to a hospital. On the way, the robot encounters a wounded soldier who needs immediate assistance. Should the robot abort the mission in order to save the soldier? Modern robots cannot weigh factors like the level of pain the wounded soldier is experiencing, the importance of the current mission, or the moral worth of saving a life; they merely carry out what they were programmed to do. A robot with a moral decision-making system, however, would be able to weigh factors like these and reach a moral, rational decision.
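To make the idea concrete, here is a minimal, purely hypothetical sketch of what “weighing factors” might look like in code. The factor names, scores, and weights below are invented for illustration; nothing here reflects the ONR project’s actual design.

```python
# Hypothetical sketch of weighted moral-factor scoring for the robot-medic
# dilemma described above. All names and numbers are invented for illustration;
# they are not part of any real system.

def score_option(factors, weights):
    """Combine normalized factor scores (0 to 1) into a single weighted total."""
    return sum(weights[name] * value for name, value in factors.items())

# Option A: abort the supply mission and treat the wounded soldier.
treat_soldier = {"pain_relieved": 0.9, "mission_importance": 0.2, "lives_saved": 1.0}

# Option B: continue to the hospital as originally tasked.
continue_mission = {"pain_relieved": 0.0, "mission_importance": 0.8, "lives_saved": 0.4}

# The weights encode how much each consideration matters: precisely the kind of
# value judgment a human designer would have to make in advance.
weights = {"pain_relieved": 0.3, "mission_importance": 0.3, "lives_saved": 0.4}

options = {"treat the soldier": treat_soldier, "continue the mission": continue_mission}
best = max(options, key=lambda name: score_option(options[name], weights))
print(f"Chosen action: {best}")
```

Even in this toy version, the crucial moral judgments live in the numbers the designer chooses before the robot ever rolls out, a point that becomes important below.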
The applications of this research are not limited to the military. Autonomous machines are becoming more and more prevalent in daily life as technological progress marches on. Paul Bello, director of the cognitive science program at the ONR, points out in the same Defense One article by Patrick Tucker that “Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications.” To clarify, the cars are currently legal in four states and the District of Columbia, and prototypes are only just beginning to be tested, but many anticipate that fully autonomous cars will be in widespread use a few decades from now.
It is not hard to imagine a real-life “trolley problem,” the famous thought experiment in which a runaway trolley hurtles toward a group of five people and the only way to stop it is to push someone else into its path, saving the five but killing one. The scenario is contrived, as thought experiments often are, but as self-driving cars become more sophisticated they may be able to perform the moral calculus necessary to make decisions in comparably difficult situations.
Now, of course, lying just beneath the surface of this seemingly utopian technological progress is the troubling notion that machines cannot actually think (a question that has long been contested, taken up famously by Alan Turing). They simply have a set of moral rules or considerations programmed into them by their designers, which they carry out exactly. It seems that when we call a human action “moral” we are taking into account that it was willed, that it was performed with intentionality, that the human could have done otherwise. A robot following a strict program doesn’t seem to have any of these things going for it, so we might hesitate before calling it any more “moral” than a tree or a rock.
That’s all really an issue of semantics. More troubling is the fact that humans are far from consensus about which moral code to follow, so the designers of these military robots will get to determine the moral system that gets programmed into them. And, as history has repeatedly shown, the military isn’t exactly the most moral of institutions. For example, moral robots could be programmed to profile enemy combatants as any male of military age within a specified combat zone, as human-operated drones already do. The issue here is not the new technology that will allow robots to make more complex decisions, but the operational morality already in place. These new and improved robots will be no more and no less moral than the people who design them.
In a recent editorial lambasting the ONR project (“The Three Laws of Pentagon Robotics”), David Swanson cites Isaac Asimov’s three laws of robotics, the first of which is “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” I think many people would agree that not killing or hurting people is a good starting place for morality. However, the U.S. military’s job as defender of the nation entails killing people who threaten or harm us, and so with these new robots something far more sinister replaces Asimov’s utopian vision.
Some of the most famous philosophical moral systems—the utilitarian ethic of maximizing pleasure while minimizing pain, for example, or the Kantian categorical imperative’s emphasis on only performing actions that you could will everyone to perform—have no place in war. One would hope that this revolutionary technology will be used to make robots moral despite the ravages of war, but more likely than not it will be used to make robots “moral” for the purpose of victory at any cost. As Omohundro warns, “the military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.” Robots that can make moral decisions will fundamentally change the landscape of combat. At the same time, they won’t be all that different from human soldiers, who have to make moral decisions all the time.
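To see how starkly these frameworks can diverge once encoded, consider another deliberately crude, hypothetical sketch, using the trolley scenario from earlier as the test case. The outcome descriptions, numbers, and the “universalizable” flag are invented for illustration and are not drawn from the ONR project or any real ethical-AI system.

```python
# Hypothetical contrast between two textbook moral frameworks mentioned above.
# This encoding is an illustrative toy, not a claim about how anyone would
# actually formalize ethics in a robot.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    pleasure: float        # aggregate benefit produced by the action
    pain: float            # aggregate harm produced by the action
    universalizable: bool  # could we will everyone to act on this maxim?

def utilitarian_choice(outcomes):
    """Pick the action that maximizes net pleasure minus pain."""
    return max(outcomes, key=lambda o: o.pleasure - o.pain)

def kantian_permissible(outcomes):
    """Keep only actions whose maxim could be willed universally."""
    return [o for o in outcomes if o.universalizable]

options = [
    Outcome("push one person to save five", pleasure=5.0, pain=1.0, universalizable=False),
    Outcome("do nothing and let five die", pleasure=0.0, pain=5.0, universalizable=True),
]

print("Utilitarian pick:", utilitarian_choice(options).description)
print("Kantian-permissible:", [o.description for o in kantian_permissible(options)])
```

The point of the toy is the one already made above: whichever framework gets encoded, and however the numbers and flags are set, those choices belong to the designers, not to the machine.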
It seems very likely that this project will result in a level of robotic sophistication never before achieved. And while technology has the potential to do a lot of good, from saving wounded soldiers to making driving to work a lot easier and less dangerous, a healthy dose of cynicism is needed regarding its use in the military because it also has the potential to make the already twisted ethics of combat even more ruthlessly efficient.