Whose life should your car save?
Source: Azim Shariff, Iyad Rahwan and Jean-François Bonnefon
The widespread use of self-driving cars promises to bring substantial benefits to transportation efficiency, public safety and personal well-being. Car manufacturers are working to overcome the remaining technical challenges that stand in the way of this future. Our research, however, shows that there is also an important ethical dilemma that must be solved before people will be comfortable trusting their lives to these cars.
As the National Highway Traffic Safety Administration has noted, autonomous cars may find themselves in circumstances in which the car must choose between risks to its passengers and risks to a potentially greater number of pedestrians. Imagine a situation in which the car must either run off the road or plow through a large crowd of people: Whose risk should the car’s algorithm aim to minimize?
This dilemma was explored in studies that we recently published in the journal Science. We presented people with hypothetical situations that forced them to choose between “self-protective” autonomous cars that protected their passengers at all costs, and “utilitarian” autonomous cars that impartially minimized overall casualties, even if it meant harming their passengers. (Our vignettes featured stark, either-or choices between saving one group of people and killing another, but the same basic trade-offs hold in more realistic situations involving gradations of risk.)
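For readers who think in code, here is a minimal, hypothetical sketch (ours, not taken from the study) of how the two decision rules diverge on the same forced choice. The Outcome structure, the function names and the harm counts are illustrative assumptions, not the actual vignettes used in the research.

```python
# Hypothetical illustration: a "self-protective" policy and a "utilitarian"
# policy applied to the same forced choice between two bad outcomes.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Outcome:
    passengers_harmed: int   # occupants of the car
    pedestrians_harmed: int  # people outside the car

def self_protective(options: list[Outcome]) -> Outcome:
    # Protect passengers at all costs; break ties by fewer pedestrian harms.
    return min(options, key=lambda o: (o.passengers_harmed, o.pedestrians_harmed))

def utilitarian(options: list[Outcome]) -> Outcome:
    # Impartially minimize total casualties, even at the passengers' expense.
    return min(options, key=lambda o: o.passengers_harmed + o.pedestrians_harmed)

# A stark either-or vignette: swerve (harm one passenger) or stay the course
# (harm ten pedestrians).
options = [Outcome(passengers_harmed=1, pedestrians_harmed=0),
           Outcome(passengers_harmed=0, pedestrians_harmed=10)]

print(self_protective(options))  # picks the option with 0 passengers harmed
print(utilitarian(options))      # picks the option with 1 total casualty
```

Both rules are trivial to state; the dilemma our respondents faced was not which rule is computable, but which one they would be willing to ride in.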
A large majority of our respondents agreed that cars that impartially minimized overall casualties were more ethical and were the type they would like to see on the road. But most people also indicated that they would refuse to purchase such a car, expressing a strong preference for buying the self-protective one. In other words, people would refuse to buy the very car they judged to be more ethical.
This is a version of the classic “tragedy of the commons”: People acting in their self-interest behave contrary to what everyone knows is necessary for the common good. One solution to such dilemmas is for the government to enforce regulations. But our research suggests that when it comes to self-driving cars, Americans balk at having the government force cars to use potentially self-sacrificial algorithms.
Car manufacturers, for their part, have generally remained silent on the matter. That changed recently when an official at Mercedes-Benz indicated that in those situations where its future autonomous cars would have to choose between risks to their passengers and risks to pedestrians, the algorithm would prioritize passenger safety. But the company reversed course soon after, saying that this would not be its policy.
Mercedes was confronting the same dilemma suggested by our research. Carmakers can either alienate the public by offering cars that behave in a way that is perceived as unethical or alienate buyers by offering cars that behave in a way that scares them away. In the face of this, most car companies have found that their best course of action is to sidestep the question: Ethical dilemmas on the road are exceedingly rare, the argument goes, and companies should focus on eliminating rather than solving them.
That’s a commendable goal, but the widespread adoption of driverless cars will happen only when people are comfortable with carmakers’ solutions to these ethical dilemmas, however seldom they arise. Last June, the first fatal accident in a driverless car drew considerable media attention, and the victim was a passenger; imagine the level of public interest in the first driverless car accident that harms someone not in the car.
This is why, despite its mixed messages, Mercedes-Benz should be applauded for speaking out on the subject. The company acknowledges that to “clarify these issues of law and ethics in the long term will require broad international discourse.” Bill Ford Jr., the executive chairman of the Ford Motor Co., recently called on the auto industry to engage in “deep and meaningful conversations” with the public on the subject.
To promote such a discussion, we have created an online platform, which we call the Moral Machine, and which allows people all over the world to share their intuitions about what algorithmic decisions they see as most ethical. So far, more than 2 million people from more than 150 countries have participated.
The sooner driverless cars are adopted, the more lives will be saved. But taking the psychological challenges of autonomous vehicles as seriously as the technological ones will be necessary to free us from the tedious, wasteful and dangerous system of driving that we have put up with for more than a century.
Azim Shariff is an assistant professor of psychology at the University of California, Irvine. Iyad Rahwan is an associate professor of media arts and sciences at the MIT Media Lab. Jean-François Bonnefon is a research director at the Toulouse School of Economics. They wrote this for The New York Times.