Washington — Automakers are working to put self-driving cars on U.S. roads as quickly as they can, but developers are still grappling with questions about whether they can be hacked or tricked into making driving errors.
Safety advocates have argued that self-driving cars are prime targets for hackers who specialize in computer takeovers. Others have pointed out that autonomous cars rely heavily on sensors and mapping devices to read traffic signs — and that could make them susceptible to sabotage if signs are altered in certain ways.
“The current state of vehicles on the road today — the new, modern car, not even self-driving — have become rolling computers,” said John Simpson, Consumer Watchdog’s privacy project director.
Simpson cited a 2015 hack of a non-self-driving 2014 Jeep Cherokee by security researchers in a real-world test that included disabling the SUV’s engine and controlling its air conditioning, locks and radio. The demonstration led Fiat Chrysler Automobiles NV to recall 1.4 million cars, SUVs and pickups to fix the security flaw.
“The vulnerability has been demonstrated and I think it’s only going to get worse with autonomous vehicles,” Simpson said.
The debate about the potential vulnerability of self-driving cars to hackers is occurring as automakers and technology companies move to show they can operate cars without human drivers. Google’s self-driving car spinoff, Waymo, announced last week that driverless Chrysler Pacifica minivans will roam the Phoenix area for ride-hailing trips with engineers sitting in the back seat.
Automakers have taken pains to show they are committed to addressing potential cyber vulnerabilities. They point to the 2015 formation of the industry-run Automotive Information Sharing and Analysis Center that allows car manufacturers to confidentially share information about potential cyberattacks.
“Like many industries, auto engineers use ‘threat modeling’ and simulated attacks with the latest methods to test security and to help design controls to enhance data integrity,” said Gloria Bergquist, vice president of communications and public affairs for the Alliance of Automobile Manufacturers, which lobbies for major automakers.
Researchers at the University of Michigan, University of Washington, Stony Brook University and University of California, Berkeley, have shown that the camera-based image-recognition systems self-driving cars use to read road signs could be tricked. In their tests, stop signs altered with black and white stickers and words like “love” and “hate” were enough to convince the systems they were seeing 45 mile-per-hour speed limit signs.
Study authors cautioned their findings do not mean all self-driving cars are vulnerable to hackers and saboteurs.
“Our work does not demonstrate any vulnerabilities in any autonomous vehicles currently being developed,” Ivan Evtimov, a Ph.D. student in computer science at the University of Washington who worked on the study, said in an email. He noted that the study only tested “one part of a long control pipeline.”
With all that said, Evtimov warned that engineers and others should be aware of these vulnerabilities and take steps to ensure security.
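The sticker attack the researchers describe belongs to a broader class known as adversarial examples: small, deliberate changes to an input that flip a machine-learning model’s answer. The following is a minimal, hypothetical sketch of the idea using a toy linear “sign classifier” — the weights, images and labels here are illustrative assumptions, not the study’s actual pipeline, which used deep neural networks:

```python
# Toy linear "sign classifier": score > 0 -> "stop sign", else "speed limit".
# The weights and inputs below are hypothetical; real perception systems
# use deep networks, but the attack's mechanics are the same in spirit.
w = [0.9, -0.4, 0.7, -0.2, 0.5, 0.8, -0.6, 0.3]   # classifier weights

def score(img):
    return sum(wi * xi for wi, xi in zip(w, img))

def label(img):
    return "stop sign" if score(img) > 0 else "speed limit"

def sign(v):
    return 1.0 if v > 0 else -1.0

# A benign input the model confidently calls a stop sign.
x = [0.05 * sign(wi) for wi in w]

# Gradient-sign-style attack: for a linear model the gradient of the score
# with respect to the input is just w, so subtracting eps * sign(w) from
# every "pixel" is the strongest perturbation of per-pixel size at most eps.
eps = 0.06
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(label(x))       # -> stop sign
print(label(x_adv))   # -> speed limit
```

The point of the sketch is that each pixel changes by only 0.06 — analogous to a few stickers on a sign — yet the classifier’s output flips entirely, which is why Evtimov cautions that every stage of the control pipeline needs to be hardened, not just the classifier.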
The issue appears to be resonating with drivers. A recent Cox Automotive study on consumer perceptions of self-driving cars showed 40 percent of drivers said concerns about potential software hacks are the biggest barrier to acceptance.
The cybersecurity issue has roiled debate about legislation in Congress. Under legislation that was approved by the U.S. House and is pending in the U.S. Senate, automakers and technology companies would each be allowed to sell thousands of self-driving cars per year.
The House self-driving bill requires automakers to develop cybersecurity plans within 180 days of the measure becoming law. The Senate’s bill gives carmakers 18 months to craft those plans.
Supporters of the self-driving bills in Congress have argued that they contain provisions to address cybersecurity concerns.
“The technology behind self-driving vehicles is developing rapidly, and these vehicles’ capabilities will grow exponentially in just a matter of years,” U.S. Sen. Gary Peters, D-Bloomfield Township, said in a statement. “Testing these technologies to ensure they are working safely and are secure from possible cyberattacks is a critical step to preparing self-driving vehicles for our roads.”
U.S. Rep. Debbie Dingell, D-Dearborn, added: “Concerns about cybersecurity are real, which is why we worked hard in crafting the SELF DRIVE Act to set up a process that requires rigorous detection and response practices to protect against potential attacks.”
U.S. Sen. Ed Markey, D-Mass., has introduced legislation to require the National Highway Traffic Safety Administration and the Federal Trade Commission to establish federal standards for securing cars and protecting drivers’ privacy.
“These vehicles are obviously already computers on wheels and they’re going to continue to accelerate in that direction as the technology deploys,” Markey said in a Senate hearing on self-driving trucks in September. “Obviously there are going to be vast opportunities for cyber threats to be launched against these vehicles since they’ll just be computers for all intents and purposes.”
Former National Highway Traffic Safety Administration chief David Strickland, who is now general counsel for the Self-Driving Coalition for Safer Streets lobbying group in Washington, said a tremendous amount of preemptive work has been done by manufacturers and government agencies.
Strickland cautioned, however, that when it comes to hackers, “You’re dealing with criminals.”
“It isn’t like something where you can have a static environment where you can study it to death,” he said. “The notion of creating a cyber-proof system is not reality.”