Column: Time to talk expectations for self-driving cars
In the weeks since an Uber autonomous test vehicle struck and killed Elaine Herzberg in Tempe, Ariz., last month, more information about this tragic incident has come to light.
Video reveals that Herzberg had been walking her bicycle outside a crosswalk on the four-lane highway in the evening hours; yet the Uber test vehicle’s lidar and radar sensors should have detected her in sufficient time to stop before striking her. Additional video of the crash site also suggests that Herzberg would have been clearly visible to a human driver.
According to a report by Reuters, Uber had reduced the number of safety sensors on its test vehicles when it changed its self-driving fleet from Ford Fusions to Volvos in 2016. The Fusions employed seven lidar sensors, which use laser light rather than radio waves to detect objects, as well as seven radar units and 20 cameras. The Volvos use a single, roof-mounted lidar sensor along with 10 radar units and seven cameras.
Velodyne, the company that manufactures the Volvos’ lidar unit, reports that the roof-mounted sensor leaves a 3-meter blind spot around the vehicle. As the investigation continues, it is unclear whether the lidar configuration played a definitive role in the accident. Late last month, however, Herzberg’s husband and daughter reached an undisclosed financial settlement with Uber.
The first recorded pedestrian death involving an autonomous vehicle has had serious repercussions for the many companies participating in state-level test pilot programs. By the end of March, Arizona Gov. Doug Ducey had indefinitely suspended Uber from further autonomous car testing (although his decision did not affect testing by other companies).
Further, Uber announced that it did not plan to renew its permit to test autonomous vehicles in California for the month of April. Toyota likewise suspended public testing of its autonomous vehicles in Arizona, and Nvidia Corp. is suspending operation of its autonomous fleet in California and New Jersey.
Yet, in contrast to competitors now reconsidering their options, Waymo, Alphabet’s self-driving car division, recently announced that it would purchase 20,000 autonomous, all-electric Jaguar I-Pace sport utility vehicles from Jaguar Land Rover. The vehicles are scheduled for deployment by 2020 as part of a robot taxi service the company plans to launch in Tempe, Arizona, later this year.
The Society of Automotive Engineers has developed a six-level taxonomy of driving automation (subsequently adopted by the National Highway Traffic Safety Administration), ranging from Level 0 (no automation) through Level 1 (driver assistance) to Level 5 (full automation).
Today, Level 2 vehicles manufactured by Tesla, Volvo, Mercedes-Benz and Cadillac can manage both your speed and steering under certain conditions, such as highway driving. Audi recently announced that its new A8 model is the first production vehicle to attain Level 3 status, in which the car (when the system is engaged), rather than the driver, actively monitors the driving environment.
Autonomous vehicle companies have a reputation for hyperbole about the advent of the commercially available driverless vehicle: effectively a Level 4 (high automation) production model that generally does the driving itself but retains a steering wheel and pedals for a human driver.
The next few years, however, will require companies to focus on safely integrating Level 3 technology into production vehicles. The public policy arena is now focusing on the negative effects that information technology companies’ policies and practices have on American society, and autonomous vehicles are now part of this emerging “techlash” by consumers and their elected representatives.
With Level 3 vehicles now reaching the commercial market, it is time for an active, public discussion of the safety expectations for autonomous vehicles tested on public roadways, as few Americans understand the full repercussions of this technology for the future of their society.
The National Safety Council estimated there were 40,200 vehicular deaths in the U.S. during calendar year 2016, and the U.S. Department of Transportation reports that 94 percent of fatal vehicular crashes are due to human error. The American public needs to recognize that autonomous vehicles will eliminate “human error,” as the artificial intelligence guiding these systems will improve overall vehicular performance over time.
However, there will be a human cost to weigh against the societal benefit of dramatically fewer vehicular deaths. This is where state and federal public policy and regulatory processes, with broad stakeholder involvement, will need to focus institutional efforts: developing transparent and effective safety standards for autonomous vehicle technology.
Thomas A. Hemphill is a professor of strategy, innovation and public policy in the School of Management at the University of Michigan-Flint. He wrote this for InsideSources.com.