The National Transportation Safety Board (NTSB) released its preliminary analysis of the March incident in which a self-driving Uber car struck and killed 49-year-old Elaine Herzberg while she was crossing a street in Arizona.
Although the fatality happened in another state, it had far-reaching implications for California and the Bay Area, which remain the hot spot for self-driving vehicle technology and its trials on public roadways.
NTSB cautions that this initial report “by its nature does not contain probable cause” and that the investigation will continue until the agency reaches a robust conclusion about who, if anyone, was at fault.
The brief analysis does include several key insights into what happened and how self-driving tech operates on the roads. Among other things:
- The car detected the pedestrian well before the collision: “According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact.” Since the car was traveling at 43 miles per hour at the time, detection happened when roughly 378 feet separated the vehicle from what it initially registered as an “unknown object.”
- A pedestrian walking a bike at night may have temporarily confused the system: “System software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle, with varying expectations of future travel path.” The report doesn’t say whether the car’s failure to classify the obstruction as a pedestrian was significant, or whether it affected the system’s predictions of her path.
- The car was under human control at the moment of the collision, but only for the final second: “At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed. [...] The vehicle operator intervened less than a second before impact by engaging the steering wheel. [...] The operator began braking less than a second after the impact.”
- While in autonomous mode, the car wasn’t able to brake in time to prevent the fatality: “Emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.”
- In fact, many systems are restricted when the car is in autonomous mode: “The systems included a collision avoidance function with automatic emergency braking, known as City Safety, as well as functions for detecting driver alertness and road sign information. All these Volvo functions are disabled when the test vehicle is operated in computer control.”
- The collision was not the result of malfunction: “All aspects of the self-driving system were operating normally at the time of the crash, and that there were no faults or diagnostic messages.”
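The distance figures above follow directly from the speeds and timings in the report. As a back-of-the-envelope check (the helper function and its name are illustrative, not from the NTSB report), converting 43 mph to feet per second and multiplying by the time window reproduces both the detection distance and how little road remained when the system called for emergency braking:

```python
# Back-of-the-envelope check of the distances implied by the NTSB figures:
# the car was traveling 43 mph; detection came ~6 seconds before impact,
# and the braking decision came 1.3 seconds before impact.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def distance_traveled_ft(speed_mph: float, seconds: float) -> float:
    """Feet covered at a constant speed over a given time window."""
    return speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR * seconds

# Distance at first detection (~6 s out): roughly 378 feet.
print(round(distance_traveled_ft(43, 6)))    # 378

# Distance when the system decided braking was needed (1.3 s out): ~82 feet.
print(round(distance_traveled_ft(43, 1.3)))  # 82
```

At 43 mph the car covers about 63 feet every second, which is why the final 1.3 seconds left so little margin for a human operator to react.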
NTSB will continue investigating and eventually produce a more detailed report. Uber has suspended its self-driving car program on public roads, at least for the time being.
In California, the most recent traffic collision involving a self-driving car happened when a Mountain View driver rear-ended a Google-owned Waymo car on April 6. Nobody was seriously hurt. So far, only a handful of self-driving crashes in California have significantly injured anyone.