It was still dark on a Friday morning in November when a California Highway Patrol officer started following a Tesla Model S on Route 101 between the San Francisco International Airport and Palo Alto. The gray sedan was going 70 miles per hour with a turn signal blinking, cruising past multiple exits. The officer pulled up alongside and saw the driver in a head-slumped posture. Lights and sirens failed to rouse him. The car, the officer guessed, was driving itself under the control of what Tesla calls Autopilot.

Every Tesla is equipped with hardware that the automaker says will enable its vehicles to drive themselves on entire trips, from parking space to parking space, with no input from the driver. For now, the company limits its cars to a system that can guide them from on-ramp to off-ramp on highways. The system is smart enough, it seems, to keep a Tesla driving safely even with a seemingly incapacitated driver, but not yet smart enough to obey police sirens and pull over.

This case appears to be the first time law enforcement has stopped a vehicle on an open road under the control of an automated system. There was no way for police to commandeer the driving software, so they improvised a way to manipulate Tesla’s safety programming. A highway patrol car blocked traffic from behind while the officer following the Tesla pulled in front and began to slow down until both cars came to a stop.

The incident encapsulates both the high hopes and deep anxieties of the driverless future. The Tesla’s driver failed a field sobriety test, according to the police, and has been charged with driving under the influence; a trial is scheduled for May. The car, which seems to have navigated about 10 miles of nighttime highway driving without the aid of a human, may well have saved a drunken driver from harming himself or others. Neither Tesla nor the police, however, is ready for people to begin relying on the technology in this way.

Robots can’t take control of the roads until automakers, engineers, lawmakers and police work through a series of thorny problems: How can a cop pull over an autonomous car? What should robot drivers do after a collision? How do you program a vehicle to recognize human authorities?

Michigan state trooper Ken Monroe took Ford engineers on ride-alongs around Flint last summer. The engineers were especially curious about what he wanted drivers to do as he came up behind them with lights flashing, and how those responses differed depending on whether he was pulling over a car or trying to get past.

“While I was responding to an emergency, they said, ‘OK, you’re approaching this vehicle here. What is the best-case scenario that you can find for that vehicle to do?’” They spoke at length, Monroe says, about how an autonomous vehicle could recognize when it was being pulled over. “The biggest cue that we came up with was just the length of time that the police vehicle was behind the AV.”

In addition to its testing in Miami and Washington, Ford has been working with police in Michigan for nearly two years as part of preparations for the rollout of autonomous ride-hailing and delivery cars scheduled for 2021.

Teaching autonomous cars to pull to the right is a relatively straightforward task. The point of the lights and sirens, after all, is to be noticed from far away. “If it’s salient to a human, it’s probably salient to a machine,” says Zachary Doerzaph, director of the Center for Advanced Automotive Research at the Virginia Tech Transportation Institute. The greater challenges come when police and other first responders are outside their vehicles: “It’s all of these other cases where that last 10 percent of development could take the majority of the time.”

Source: https://www.detroitnews.com/story/business/autos/mobility/2019/02/25/someday-self-driving-car-will-pull-police/39107733/