
Published on March 20th, 2017 | by Carolyn Fortuna

How Self-Driving Cars Will Sometimes Kill To Save — Teslas Included


Self-driving cars that kill to save? This is an ethical dilemma known as the “trolley problem.” A runaway trolley will kill people on whichever of two tracks it takes. Do you pull the lever and send it down the track where it kills fewer people, or leave it alone and let it kill more? Either way, people die, and you have made the choice of who lives and who dies.

How can a person decide between two terrible options? By choosing what’s called the lesser of two evils, that’s how. A recent article in Wired magazine applies the trolley problem to the not-so-distant world of self-driving cars.


In some brightly lit room full of cubicles, a programmer is right now teaching a machine how to choose, in a desperate driving situation, which people to save and which to sacrifice. When multiple pedestrian fatalities are possible, it is likely the driver who would be killed in order to save the greatest number of people.
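To make the dilemma concrete, here is a minimal, purely hypothetical sketch of the kind of utilitarian rule such a programmer might encode: pick the maneuver expected to kill the fewest people, with no special weight given to the car’s own occupants. Nothing here reflects Tesla’s actual software; the names and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the harm it is expected to cause (illustrative only)."""
    maneuver: str
    expected_fatalities: float  # occupants and pedestrians combined

def least_harm(outcomes: list[Outcome]) -> Outcome:
    """A purely utilitarian rule: choose the maneuver expected to kill the fewest people."""
    return min(outcomes, key=lambda o: o.expected_fatalities)

if __name__ == "__main__":
    options = [
        Outcome("stay in lane", expected_fatalities=3.0),        # hits a group of pedestrians
        Outcome("swerve into barrier", expected_fatalities=1.0),  # likely kills the driver
    ]
    print(least_harm(options).maneuver)  # -> "swerve into barrier"
```

Under this rule, the car sacrifices its own driver whenever doing so lowers the expected body count, which is exactly the trade-off Rahwan’s research (discussed below) finds buyers are reluctant to accept.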

Tesla CEO Elon Musk is an innovator who is attracted to “things that change the world or that affect the future, and wondrous, new technology where you see it, and you’re like, ‘Wow, how did that even happen? How is that possible?’” He professes great faith that his companies are change agents. Tesla’s Autopilot system, for example, is often treated as the benchmark for driver automation, and autonomous driving features are gradually becoming standard on most new vehicles.

Last year, Musk pledged that, by the end of 2017, he’d produce a Tesla that can drive itself from Los Angeles to New York City with no human driver assistance necessary. To reach that Level 5 autonomy goal, Tesla has been accumulating data from the fleet of cars it already has on the road, comparing drivers’ reactions to those of the machine. The practice is known, somewhat eerily, as Shadow Mode.
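As a rough illustration of what that comparison might look like, here is a self-contained sketch of the general shadow-mode idea: the autonomy software runs silently alongside the human driver, and moments where the two diverge are flagged for later review. The class and function names and the steering threshold are assumptions made up for this example, not Tesla’s actual implementation.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Frame:
    """One moment of logged driving data (purely illustrative)."""
    timestamp: float
    human_steering: float   # what the driver actually did, in radians
    model_steering: float   # what the silent autonomy stack would have done

def find_disagreements(frames: Iterable[Frame], threshold: float = 0.1) -> List[Frame]:
    """Return the frames where the 'shadow' model diverged from the human driver."""
    return [f for f in frames
            if abs(f.human_steering - f.model_steering) > threshold]

if __name__ == "__main__":
    log = [
        Frame(0.0, 0.02, 0.03),  # close agreement, nothing to learn here
        Frame(0.1, 0.00, 0.25),  # the model would have swerved; worth reviewing
    ]
    print(len(find_disagreements(log)))  # -> 1
```

The value of such a scheme is that every mile driven by a human becomes a test case: disagreements can be replayed to ask whether the machine or the person made the better call.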

Musk has acknowledged that “worldwide [self-driving] regulatory approval will require something on the order of 6 billion miles.” Yet no matter how many comparative test miles Tesla accumulates, one fact about self-driving capability stays out of most conversations: deaths will still occur, even with driver assistance like Tesla’s Autopilot.

Car crashes kill more than 30,000 people in the US annually. Think of all the factors that affect a human driver: physical (tired), emotional (angry), psychological (confused), and intellectual (distracted) states all come into play when a person gets behind the wheel. “The foundation is laid for cars to be fully autonomous, at a safety level we believe to be at least twice that of a person, maybe better,” Musk has been quoted as saying. He finds it “disturbing” that the media reports on nearly every self-driving car road accident yet fails to engage in discourse around the “1.2 million people that die every year in manual crashes.”

“If, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle, you’re killing people,” Musk adds. Ah, another ethical dilemma. …

And it’s not just Musk who attests to the future safety potential of machines that can drive themselves. In September 2016, the US Department of Transportation released the Federal Automated Vehicles Policy, a set of proactive safety guidelines that recognizes that “automated vehicles hold enormous potential benefits for safety, mobility, and sustainability.”

But the public may not be convinced of these “potential benefits” of autonomous driving. In a March 2017 survey, AAA found that 78% of people polled were afraid of riding in a self-driving car, a statistic unchanged from the previous year. Yet although most people surveyed said they fear traveling in a fully self-driving car, 59% also said they would like autonomous technology in their next vehicle. “Convincing the public must begin with understanding what the public is worried about and what the psychological mechanisms involved are,” says Iyad Rahwan of the MIT Media Lab. Rahwan’s research has so far found that most people wouldn’t buy a self-driving car that could choose to kill them in a dangerous traffic situation.

Elon Musk has moved beyond the question of whether vehicles should have self-driving capabilities at all. “People may outlaw driving cars because it’s too dangerous,” he has predicted. “You can’t have a person driving a two-ton death machine.” And the day is coming, in the not-so-distant future, when Tesla’s Autopilot will be approximately 10 times safer than the US vehicle average, according to Musk, and that’s when “the beta label will be removed.”

That should be a fascinating day for Tesla and for a general public that professes both to fear autonomous driving and to want it in its own vehicles. In all likelihood, we’ll all be a little safer then.

Photo by PeterThoeny via Foter.com (CC BY-NC-SA)







About the Author

Carolyn grew up in Stafford Springs, CT, home of the half-mile tar racetrack. She's an avid Formula One fan (this year's trip to the Monza race was memorable). With a Ph.D. from URI, she draws upon digital media literacy and learning to spread the word about sustainability issues. Please follow her on Twitter, Facebook, and Google+.



  • Damien

    Read and heard a lot about that argument and I think it’s a false debate because humans will always be worse and have slower reaction times than computers. The human reaction in a situation to kill one person or several would be like “ah crap…” not enough time to make that decision as humans are way too slow.

    • Steve Hanley

      That may be so, but the article is not about what human drivers might or might not do, it is about what self driving cars are being programmed to do. And the implications go far beyond operating a vehicle. They expand outward in a dizzying array of permutations that impact the whole sphere of Artificial Intelligence.

      Brave New World Part Deux……
