UM working to stop hackers of robot cars
University of Michigan researchers working with the Mcity robotic-car testing facility have developed a tool to determine how vulnerable self-driving cars are to hackers who might want to take control of a car or lock its systems for ransom.
The vulnerabilities could also extend to networks that will connect with autonomous vehicles, such as financial networks that process payments for tolls and parking, or road sensors for cameras and traffic signals, according to developers of the tool known as the Mcity Threat Identification Model.
“Automated cars rely so much on sensor input,” said Andre Weimerskirch, lead author of a report released Thursday. “Obviously if you can forge into it, manipulate it or do a denial of service, that’s a huge issue. There’s also cases like ransom, where someone can hack into a car and say ‘$100 or your car won’t start.’ ”
Weimerskirch, who leads Mcity’s cybersecurity working group, also is vice president of cybersecurity for Lear Corp.
The release of the tool comes as automakers and technology companies move to show they can operate cars without human drivers, and Congress wants to give them wide latitude to do so. Google’s self-driving car spinoff, Waymo, announced in October that driverless Chrysler Pacifica minivans will roam the Phoenix area for ride-hailing trips with engineers sitting in the back seat.
Automakers have taken pains to show they are committed to addressing potential cyber vulnerabilities. They point to the 2015 formation of the industry-run Automotive Information Sharing and Analysis Center that allows car manufacturers to confidentially share information about potential cyberattacks.
Gloria Bergquist, vice president of communications and public affairs for the Alliance of Automobile Manufacturers, which lobbies for major carmakers in Washington, dismissed warnings about self-driving cars being vulnerable to potential hacking as hyperbole intended to help sell Mcity’s tool, although she said automakers respect the work going on at Mcity.
“Many groups are seeking to sell their services or products today and for them, it is useful to suggest little is happening or they have a special product no one else has,” Bergquist said in an email. “In fact, automakers are already anticipating an increasingly interconnected future and have been taking actions to prepare for it.”
Safety advocates have argued that self-driving cars are prime targets for hackers who specialize in computer takeovers. Some have pointed out that autonomous cars rely heavily on sensors and mapping devices to read traffic signs — and that could make them susceptible to sabotage if signs are altered in certain ways.
“Today’s cars have already become computers on wheels and woefully little attention has been paid to ensuring their cybersecurity,” John Simpson, Consumer Watchdog’s privacy project director, said Thursday. “Too many manufacturers of robot cars hype the supposed benefits the vehicles might someday offer, without adequately addressing the security, public policy and ethical questions the vehicles raise.”
He pointed to a 2015 hack of a non-self-driving 2014 Jeep Cherokee by security researchers in a real-world test that included disabling the SUV’s engine functions and controlling the air conditioning, locks and the radio. That led to a recall of 1.4 million Fiat Chrysler cars, SUVs and pickups to fix the security flaw.
Simpson said automakers have given short shrift to cybersecurity concerns in their rush to put self-driving cars on the road.
Weimerskirch said automakers will likely have to take additional steps to protect self-driving cars from hackers.
“I think the car has to be resilient,” he said, noting that automakers may consider allowing self-driving cars to operate in a “safe mode” similar to that on computers, or notifying drivers about potential attacks and asking them how to proceed.
“The real concern,” he concluded, “is denial of service, where it might not do what you require from your car.”