Programming Cars to Kill
By Samuel Greengard | Posted 2016-03-28
What happens when a mechanical part fails or there's a landslide, and a self-driving car must choose between saving its passenger or a motorist in another car?
The MIT Technology Review recently published a story titled "Why Self-Driving Cars Must Be Programmed to Kill." Although the topic seems to careen into sensationalism, it represents a very real and disturbing dilemma for the companies manufacturing these products: There's a growing need to embed ethical decision making into systems that rely on artificial intelligence (AI) and algorithms.
Self-driving vehicles, as the article points out, are at the nexus of this technology conundrum. As automakers embed automatic and autonomous functions in cars and trucks—automatic braking, automated steering and self-parking, for instance—there's a need to think about what happens during a truly unavoidable accident (as opposed to the human negligence we typically describe as an "accident").
For example, what happens when a mechanical part fails or a landslide takes place, and the car must choose between saving its passenger or a motorist in another car? How does the motor vehicle steer, brake and sense the environment around it? Which safety systems spring into action, and how do they work?
It's a given that manufacturers will embed features and capabilities that make autos and driving safer. Heck, simply removing phone- and food-wielding humans from the equation is a huge step forward. And while there's a clear need to understand liability laws and design products that operate in an ethical and legally permissible way in a digital world, there's also a gray area that is completely unavoidable.
And that's where the rubber hits the proverbial road. As the article points out: "If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation."
Unfortunately, there are no clear answers, and right and wrong are highly relative terms in this context.
When researchers at the Toulouse School of Economics in France presented the question of how autonomous vehicles should operate to several hundred Amazon Mechanical Turk participants, the results were fairly predictable: Cars should be programmed to minimize death tolls. However, respondents also expressed strong reservations about these systems. Simply put: People were in favor of cars that sacrifice the occupant to save other lives, but they don't want to ride in such a vehicle themselves.
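The rule the respondents endorsed in the abstract—minimize the expected death toll—can be reduced to a few lines of code. The sketch below is purely illustrative: the maneuver names and casualty estimates are invented assumptions, and no automaker has disclosed logic of this kind.

```python
# Hypothetical sketch of the utilitarian rule survey respondents favored:
# when a crash is unavoidable, pick the maneuver with the lowest expected
# death toll. All names and numbers are illustrative assumptions, not any
# manufacturer's actual decision logic.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_fatalities: float  # estimated deaths if this maneuver is taken


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Return the option that minimizes the expected death toll."""
    return min(options, key=lambda m: m.expected_fatalities)


options = [
    Maneuver("swerve into barrier (sacrifice occupant)", 1.0),
    Maneuver("continue straight (hit pedestrians)", 3.0),
]
print(choose_maneuver(options).name)
```

Note that this rule selects the swerve, sacrificing the occupant—precisely the outcome people endorsed in surveys but said they would not accept as passengers.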
As we wade deeper into robotics, drones, 3D printing and other digital technologies, similar questions and ethical conundrums will arise. It may not be long until every organization requires a chief ethics officer to sort through the moral and ethical implications of technology.