Tesla Still Not Sure What Caused Fatal Model S Crash

Tesla representatives have informed the staff of a Senate subcommittee that the company is still trying to understand what led to the fatal crash that killed Joshua Brown on a Florida highway on May 7. Tesla says either the radar and camera that make up the hardware portion of its semi-autonomous Autopilot system failed to detect the tractor trailer crossing the roadway, or they detected the truck but misclassified it as an overpass or overhead road sign. A month ago, Elon Musk tweeted that “Radar tunes out what looks like an overhead road sign to avoid false braking events.”
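To make that concrete, here is a purely illustrative Python sketch of how a filter like the one Musk described might behave. It is not Tesla's code; every name, field, and threshold in it is an assumption invented for this example.

```python
# Purely illustrative sketch, not Tesla's code. All names, fields, and
# thresholds below are assumptions invented for this example.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float            # distance to the reflection, in meters
    elevation_m: float        # estimated height of the reflection above the road
    closing_speed_mps: float  # how fast the car is approaching it

# Hypothetical cutoff: returns estimated to sit higher than this are treated
# as overpasses or overhead signs and ignored to avoid false braking events.
OVERHEAD_CUTOFF_M = 2.0

def is_braking_candidate(ret: RadarReturn) -> bool:
    """Decide whether a radar return should be considered for braking."""
    if ret.elevation_m > OVERHEAD_CUTOFF_M:
        # Classified as an overhead structure and tuned out, even if it is
        # actually the raised side of a tractor trailer crossing the road.
        return False
    return ret.closing_speed_mps > 0

# The high, flat side of a trailer can look like an overhead sign to
# such a filter:
trailer = RadarReturn(range_m=60.0, elevation_m=2.3, closing_speed_mps=25.0)
print(is_braking_candidate(trailer))  # False: filtered out as "overhead"
```

The trade-off such a filter embodies is the crux of the problem: reject too little and the car brakes for every highway sign, reject too much and it can tune out a real obstacle.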

On Friday, Tesla told committee staff members that the Autopilot system did not fail but that the automatic emergency braking system may not have performed as expected. Tesla distinguishes between the two systems even though the average driver may think of them as one and the same. Technically, the emergency braking function is supposed to operate whether or not Autopilot is activated. Data retrieved from the car after the collision shows that the brakes were never applied, either by Mr. Brown or by the car’s computer.
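A minimal sketch of that architectural point, again with invented names and thresholds rather than anything from Tesla's software, looks like this: the emergency braking decision depends only on what the perception system reports, not on whether Autopilot happens to be engaged.

```python
# Minimal sketch of the distinction Tesla draws, with invented names and
# thresholds rather than anything from Tesla's software.

def should_emergency_brake(obstacle_confidence: float,
                           time_to_collision_s: float) -> bool:
    """Automatic emergency braking decision.

    Note what is absent: the function takes no Autopilot state at all.
    It depends only on what the perception system reports.
    """
    return obstacle_confidence > 0.9 and time_to_collision_s < 1.5

def steering_assist_active(autopilot_engaged: bool) -> bool:
    # Lane keeping and steering, by contrast, are gated on the driver
    # having engaged Autopilot.
    return autopilot_engaged

# If perception never flags the obstacle with enough confidence (for
# example, because it was filtered out as an overhead structure), the
# emergency brakes are never commanded, Autopilot on or off:
print(should_emergency_brake(obstacle_confidence=0.4, time_to_collision_s=1.0))
# False
```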

Tesla continues to insist that its Autopilot system, even though it is not foolproof, is safer than a human driver. In a blog post for the RAND Corporation, author Shawn McKay makes several relevant points. “It [the death of Joshua Brown] is a chilling reminder that our evolving relationship with our increasingly robotic motor vehicles needs to be a partnership, an undertaking with humans and machines managing the risks.”

McKay goes on to say, “With many companies making significant investments in automated vehicle technology, your next car may likely relieve you of many everyday driving tasks while traveling on the freeway and, eventually, navigating city streets. Your vehicle is still a machine with a childlike sheen, observing and reacting to the world in a very simple way. So you will need to be the adult in the relationship.”

“Automated vehicles will continue to misinterpret the world in which they operate. Both operators and bystanders will need to understand the characteristics and limitations of the automated vehicles of the future. Unfortunately, the mechanisms that Tesla uses to keep the driver engaged, including sensing hands on the steering wheel and specific warnings within the user manual, were not enough to prevent this accident. Proper human engagement with the machine, also known as the human-to-machine interface, will continue to be a critical area of development for automated vehicles.

“To manage this partnership, humans must remember that these highly technical vehicles are still only machines that will invariably struggle to understand the context of the world they operate in. With that in mind, the engagement becomes more sophisticated and perhaps safer. Sure, the machine will relieve humans of mundane tasks, but it will also require humans to be more vigilant as we exercise higher levels of cognition in interpreting and reacting to the decisions our vehicles will be making.”

McKay’s thoughts are quite appropriate today as we stand on the verge of a new era of self-driving cars. We expect machines to be perfect, even when the evidence suggests they are not. It’s a human failing, but as McKay points out, for the time being and into the foreseeable future, we remain ultimately responsible for our own safety and that of others on the road, regardless of how sophisticated the autonomous features of our cars become. The time when we can take our hands off the wheel and our eyes off the road is still quite far in the future.

Source: New York Times

Steve Hanley

Closely following the transition from internal combustion to electricity. Whether it's cars, trucks, ships, or airplanes, sustainability is the key. Please follow me on Google+ and Twitter.