Autonomous vehicles are about much more than physics equations in which moving objects negotiate rules of the road. They symbolize and embody an accepted psychology of how humans interact while operating motor vehicles. Those commonly accepted understandings have been culturally transmitted over the last century across generations of human drivers and pedestrians who have moved together in the same public spaces.
Central to this perception is that a driver is rational: through experiences of practice and repetition, a driver deliberately — and often subconsciously — follows the rules of the road. And now we have come to expect that autonomous vehicles, too, will apply logic and rationality as they supplant human drivers in negotiating those tenuous decisions that emerge from everyday driving situations.
I remember when my mom took me on Sunday afternoons to a deserted local parking lot and coached me as I attempted to learn how to drive. She told me that figuring out how the car worked wasn’t what was difficult in driving: it was anticipating the decisions of other drivers on the road and reacting accordingly. Now, so many years later, I understand what she meant. I’ve encountered drunk drivers, drivers avoiding an unexpected obstacle like a deer or an escaped trashcan, tentative drivers who don’t travel at the expected traffic speed, showy drivers zipping in and out of moving traffic, drivers experiencing road rage, cars veering unexpectedly due to black ice — like you, I’ve experienced many driving situations that have required me to make instantaneous, deliberate decisions. I’ve had to consider — nearly instantaneously — multiple layers of options that potentially included physical harm to me and others as well as damage to my vehicle, others’ vehicles, and property.
The Trolley Problem for a New Generation of Technology Applications
Last month on Gas2, we discussed how, in the era of autonomous driving capabilities, programmers must teach machines to make a choice in desperate driving situations to save some people while killing others. In cases of multiple possible pedestrian fatalities, it would, in all likelihood, be the driver who’d be killed in order to save the greatest possible number of people.
That ethical scenario is known as “the trolley problem,” in which an operator must choose which track a runaway trolley will take, opting for the route with the fewest fatalities. Automakers have confronted such age-old transportation conundrums since the introduction of gasoline-powered vehicles on city streets. As early as 1908, auto accidents in Detroit were a menacing problem. In two months that summer, 31 people were killed in car crashes, and so many were injured that totals went unrecorded.
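Stripped to its bare utilitarian logic, the "fewest fatalities" rule described above can be sketched in a few lines of Python. The option names and casualty counts here are purely hypothetical, and no real system reduces the decision to a single lookup like this; the sketch only illustrates the minimization at the heart of the trolley problem.

```python
def choose_track(options):
    """Pick the option with the fewest predicted fatalities.

    `options` maps an action name to its predicted fatality count.
    Ties are broken by whichever option was listed first.
    """
    return min(options, key=options.get)

# Hypothetical runaway-trolley outcomes:
outcomes = {
    "stay_on_course": 5,    # trolley continues toward five people
    "divert_to_siding": 1,  # switching tracks endangers one person
}

print(choose_track(outcomes))  # → divert_to_siding
```

The hard part, of course, is everything this sketch assumes away: producing those fatality estimates in real time, and deciding whether minimizing the count is even the right rule.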
Today, automakers attempt to minimize media stories about crashes, yet the likely ubiquity of self-driving cars in the very near future inspires attorneys to have fuzzy and soothing dreams of endless litigation. Regardless of their points of view, most automakers, ethicists, and attorneys concur that the highest duty is not to kill. Automakers, particularly, take the proverbial high road (pun intended), rationalizing that self-driving technology will help drivers find their better angels by letting the machine make the best possible navigating decisions based on a series of pre-programmed variables, while also trusting drivers to know when to assume control from an autonomous vehicle in emergencies.
And, of course, as historian David Mindell reminds us, “autonomy” is, actually, a myth. As with drones, humans will be in control of and responsible for autonomous vehicles.
The Challenges that Autonomous Vehicles Face
An autonomous vehicle’s sensors accumulate data on nearby objects and their characteristics, such as size and speed. Then the system groups and categorizes these objects according to predetermined variables of likely behaviors. Not all autonomous vehicle systems are the same. Tesla, for example, opts for a computer vision-based vehicle detection system over the lidar technology currently preferred by other automakers and technology companies. The Center for Global Policy Solutions states that about 30 companies are working on autonomous vehicle technology in 2017.
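The grouping-and-categorizing step described above can be sketched very roughly in code. The object classes, size and speed thresholds, and behavior notes below are invented for illustration and don’t reflect any automaker’s actual perception system, which would fuse far more signals (shape, trajectory, sensor confidence, map data) than size and speed alone.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    size_m: float      # approximate longest dimension, in meters
    speed_mps: float   # measured speed, in meters per second

def categorize(obj: DetectedObject) -> str:
    """Assign a detected object a likely-behavior category.

    Thresholds are illustrative only, not taken from any real system.
    """
    if obj.size_m < 1.0 and obj.speed_mps < 3.0:
        return "pedestrian"   # small and slow: expect unpredictable paths
    if obj.size_m < 2.5 and obj.speed_mps < 12.0:
        return "cyclist"      # give extra lateral clearance
    return "vehicle"          # assume it follows rules of the road

print(categorize(DetectedObject(size_m=0.6, speed_mps=1.4)))  # → pedestrian
```

Each category then feeds the planner’s expectations: a “pedestrian” label, for instance, widens the range of motions the vehicle must anticipate.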
But not everyone is convinced that autonomous vehicles are the best and most efficient path to the transportation future. For one thing, a machine interprets a highway differently based on weather conditions like full sun, fog, or twilight; autonomous vehicles must filter out road features regardless of driving conditions. According to John Leonard, a roboticist at MIT, snow is particularly difficult for autonomous sensors, which are matched with maps to help the machine create meaning behind road features. Updated maps are crucial, he says, as “modern algorithms run on data — it’s their fuel.”
Work zones are particularly difficult because their codes supersede the regular markers by which the vehicles are taught to navigate. With no warning, an autonomous vehicle enters a construction area where cones become the de facto road markers, where humans with flags point to temporary lanes, and where traffic lights blink endlessly and helplessly yellow. Perhaps humans will be available in call centers to assist autonomous vehicles to navigate unexpected road obstacles. However, the increasing sophistication of remote diagnostics and the use of software downloads to correct equipment issues may minimize the need for human technicians to troubleshoot road hazards. On-demand real-time actionable data from vehicle connectivity may assuage current concerns about difficult traffic situations.
Autonomous vehicles also rely on contextual clues, but body language, for example, isn’t easy to code, and human behavior is not consistently predictable. Delegating responsibilities back to the driver will call for a machine’s ability to assess situations, which might be challenging when a human has been asleep or is otherwise disoriented.
Some people argue that the advent of autonomous vehicles will remove the enjoyment of driving. They say that luxury car sales, which are based on driving pleasure, will become obsolete when systems default to autonomous operation. And black boxes like those proposed by Germany’s Bundestag, or information-reporting systems like those already in effect at Tesla, compromise individual rights to privacy and raise additional concerns in the autonomous vehicle discussion.
How Automakers Describe Their Autonomous Vehicle Programming Decisions
Laws and technology don’t always co-exist easily. Makers of autonomous vehicles of all sorts are keenly aware of the 2009 automatic train-control system malfunction that led to 21 lawsuits and 84 out-of-court claims. As it stands right now, some states allow autonomous vehicles on public roads if a human is ready to assume control behind the steering wheel. Other states are testing pilot autonomous vehicles, while still others out-and-out won’t even hear of it. European testing of self-driving cars and tractor-trailers has begun, while Japan permitted an autonomous vehicle road test as early as 2013.
So, at least for the time being, the law assumes that a human being is the driver in control. To reinforce this basic understanding, for example, Tesla warns its drivers with beeps and other sounds to maintain contact with the steering wheel. Tesla released its Autopilot in 2015: a semi-autonomous technology that permits vehicles to operate with very little driver steering input. Often seen as the industry benchmark, Tesla leads the way in self-driving technology.
Other automakers are close at hand, however, with Mercedes-Benz ready to release the next generation of its Drive Pilot system in summer 2017 in the nonpareil S-Class. Activated by a button on the steering wheel, the system maintains lane speed and placement. Although it does not require the driver to keep hands on the wheel, it does need a driver acknowledgment every 10 seconds, given via a pair of capacitive-touch buttons on the steering wheel. If the acknowledgment is ignored, the system escalates from a visual notification to a continual chime.
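The escalating attention prompt described above amounts to a simple timer-driven state machine. Here is a minimal sketch of that idea: the 10-second acknowledgment interval comes from the article, but the stage names and the 5-second escalation window are invented for illustration and are not Mercedes’ actual timings.

```python
ACK_INTERVAL_S = 10.0   # driver must acknowledge this often (per the article)
ESCALATE_AFTER_S = 5.0  # hypothetical grace period before the audible alert

def alert_stage(seconds_since_ack: float) -> str:
    """Map time since the last driver acknowledgment to an alert level."""
    if seconds_since_ack < ACK_INTERVAL_S:
        return "none"     # driver recently confirmed attention
    if seconds_since_ack < ACK_INTERVAL_S + ESCALATE_AFTER_S:
        return "visual"   # e.g., an icon on the instrument cluster
    return "audible"      # continual chime until acknowledged

print(alert_stage(3.0))   # → none
print(alert_stage(12.0))  # → visual
print(alert_stage(30.0))  # → audible
```

A production system would run this check continuously and reset the timer whenever the capacitive-touch buttons register a press.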
On its website, Mercedes/Daimler discusses the technology involved in its autonomous vehicle research alongside the issues involved in bringing that technology to mainstream streets and roads.
“The required sensors and cameras have long been used in series production vehicles and undertake increasing numbers of tasks on the driver’s behalf. Today’s discussion no longer revolves around whether the technology will deliver on its promise but whether people want what the technology can deliver. And whether society and legislators are ready for this ‘revolution in automobility.’”
Interestingly, Mercedes also alludes on its website to the major social changes that autonomous driving will bring and the associated psychological and legal barriers that will need to be confronted in order for that technology to gain social acceptance. The company asserts that a
“shared combination of liability among the driver, owner and manufacturer offers a balanced distribution of risks, protects victims and has proven itself in practice. The liability model also provides a reasonable basis for new systems and for the next stages of automated driving.”
That perspective about spreading out the liability responsibility among constituent groups seems pervasive among automakers, regardless of the style of the vehicle or the location of the manufacturer. The issue of liability for self-driving vehicles is too scary for automakers to take lightly.
Needless to say, as other automakers play catch-up with Tesla and Mercedes in their autonomous vehicle capabilities, they, too, face programming choices and subsequent legal scrutiny. Will automakers decide that egalitarianism is preferable and share data with their competitors and regulators to better the overall autonomous vehicle industry? What’s at stake, ultimately, are the choices that must be made when machines replace humans as we as a society wrestle with relinquishing control and agency in our lives.
The Ethicists Weigh in on Autonomous Vehicles
Much of the current discussion about autonomous vehicles concerns the high incidence of human driver error. Law professor Bryant Walker Smith says, “Am I concerned about self-driving cars? Yes. But I’m terrified about today’s drivers.”
“When you drive down the street, you’re putting everyone around you at risk,” Ryan Jenkins, a philosophy professor at Cal Poly, told The Business Insider. “When we’re driving past a bicyclist, when we’re driving past a jogger, we like to give them an extra bit of space because we think it’s safer; even if we’re very confident that we’re not about to crash, we also realize that unexpected things can happen and cause us to swerve, or the biker might fall off their bike, or the jogger might slip and fall into the street.” That sense of guardianship is essential to the driving experience.
A study published in Science surveyed people about their existing mental models regarding how autonomous vehicles should act. More than 75 percent of participants in one part of the survey said that they would lean toward a scenario that sacrifices one passenger rather than kills 10 pedestrians. But, overall, the study found people prefer to ride in a driverless car that protects the occupants at all costs.
Some individuals feel that, as part of the transition to autonomous vehicle technology, the public has the right to know which vehicles on the road are being operated strictly by humans and which are not, as well as what ethical priorities have been pre-programmed. The Consumer Watchdog group sent representative Wayne Simpson to testify before NHTSA in 2016. He argued,
“The public has a right to know when a robot car is barreling down the street whether it’s prioritizing the life of the passenger, the driver, or the pedestrian, and what factors it takes into consideration. If these questions are not answered in full light of day … corporations will program these cars to limit their own liability, not to conform with social mores, ethical customs, or the rule of law.”
Autonomous vehicles may introduce a new generation to roadways with fewer traffic accidents and snarls. Fleets of shared autonomous vehicles could help us reduce pollution. They may allow significantly greater mobility for people whose stamina or intellectual or physical capacities aren’t sufficient for operating a vehicle. They could increase productivity. And, if the technology is distributed equally across multiple socioeconomic groups, then living standards could improve as a result.
But, as with any new technology, multiple issues need to be worked out before practical implementation can occur. Reducing ethical dilemmas to algorithms a computer can follow may be quite difficult. Workers displaced by autonomous-vehicle technology may eventually find new jobs, or they may not, in what is likely to be a rapid transition to autonomous vehicles.
Moreover, the ethical issues surrounding autonomous vehicles are profound. Failures such as technology errors, poor maintenance, improper servicing, and security vulnerabilities will occur with autonomous vehicles. Multiple interested groups are already anticipating a whole slew of major disruptions and new harms in an attempt to avoid problems with autonomous vehicles in the future.
Ethical innovations are part of a larger picture as we attempt to replace what has been the common driving experience for generations with what will likely be a safer, easier, and more freeing experience with autonomous vehicles. Have you ever wondered how you’d program a machine to navigate complex traffic situations? Try MIT’s Moral Machine, an online exercise in which players make decisions that incorporate moral judgments about vehicles, pedestrians, and choices. Your responses may surprise you.
Photo credit: Foter.com