The widespread use of self-driving cars promises to bring substantial benefits to transportation efficiency, public safety and personal well-being. Car manufacturers are working to overcome the remaining technical challenges that stand in the way of this future. Recent research led by Azim Shariff, assistant professor of psychology at the University of California, Irvine, shows that there is also an important ethical dilemma that must be solved before people will be comfortable trusting their lives to these cars.
Shariff echoes the National Highway Traffic Safety Administration in noting that autonomous cars might find themselves in circumstances in which the car must choose between risks to its passengers and risks to a potentially greater number of pedestrians.
Imagine a situation in which the car must either run off the road or plow through a large crowd of people: Whose risk should the car’s algorithm aim to minimize?
One suggested solution to the dilemma is government regulation. But the research suggests that when it comes to self-driving cars, Americans balk at having the government force cars to use potentially self-sacrificial algorithms.
The dilemma was explored in a series of studies recently published in the journal Science. Shariff and his team presented participants with hypothetical situations that forced them to choose between “self-protective” autonomous cars that protected their passengers at all costs, and “utilitarian” autonomous cars that impartially minimized overall casualties, even if it meant harming their passengers.
It has been argued that the studies, while valuable as hypotheticals, exaggerate the problem and should not be taken up by the courts or treated as an ethical priority for the government. Proponents of this argument add that the probability of a car naturally finding itself in such a situation is far lower than the probability of a malfunction causing the accident, and a malfunction-prone vehicle would never reach the open market.
It is more likely the car would be programmed to preventatively slow down, stop or avoid a collision.
A large majority of the respondents in the study agreed that cars that impartially minimized overall casualties were more ethical, and were the type they would like to see on the road.
But most people also indicated that they would refuse to purchase such a car, expressing a strong preference for buying the self-protective one. In other words, people refused to buy the car they found to be more ethical.
Car manufacturers, for their part, have generally remained silent on the matter. That changed last month when an official at Mercedes-Benz indicated that in situations where its future autonomous cars would have to choose between risks to their passengers and risks to pedestrians, the algorithm would prioritize passenger safety. But the company reversed course soon after, saying that this would not be its policy.
Carmakers can either alienate the public by offering cars that behave in a way that is perceived as unethical, or alienate buyers by offering cars that behave in a way that scares them away.
In the face of this, most car companies have found that their best course of action is to sidestep the question. The argument is that ethical dilemmas on the road are exceedingly rare, and that companies should focus on eliminating such situations rather than solving them.