
A sporty black sedan speeds dangerously close to a cliff on a road winding through an arid landscape.

The car recovers and swerves back onto the cracked asphalt, but another sharp turn is coming. It straddles the edge of the cliff, its tires spinning through sand. Then it falls. Sage brush and rock outcroppings blur past as it plummets.

No driver emerges from the car. No police show up.

The crash occurred in a modified “Grand Theft Auto” video game, an example of the virtual simulations researchers at the University of Pennsylvania are running to evaluate autonomous vehicles.

“We can crash as many cars as we want,” said Rahul Mangharam, an associate professor in the University of Pennsylvania’s department of electrical and systems engineering.

Mangharam and his team of six are pursuing what they describe as a “driver’s license test” for self-driving cars, a rigorous use of mathematical diagnostics and simulated reality to determine the safety of autonomous vehicles before they ever hit the road.

Complicating that task is the nature of the computer intelligence at the heart of the car’s operation. The computer is capable of learning, but instead of eyes, ears, and a nose, it perceives reality with laser sensors, cameras, and infrared. It does not see or process the world like a human brain. Working with this mystery that scientists call “the black box” is a daunting, even spooky, element of the work at Penn.

“They’re not interpretable,” Mangharam said. “We don’t know why they reached a certain decision; we just know they reached a certain decision.”

Mangharam describes autonomous vehicles as continuously executing a three-step process. The first step is perception, the system’s attempt to understand the world around it. Then the data gathered is used to make a plan, which starts with the destination, formulates a route, and then decides how to navigate that route. The third step is the driving itself, the application of brakes, gas, and steering to get the vehicle where it is directed to go.
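That perceive-plan-drive cycle can be sketched in a few lines of Python. This is only an illustration of the loop Mangharam describes; every class and method name below is hypothetical, not part of Penn’s actual software.

```python
# Illustrative sketch of the perceive-plan-drive loop described above.
# All class and method names here are hypothetical, not Penn's actual code.

class AutonomousVehicle:
    def __init__(self, sensors, planner, controller):
        self.sensors = sensors        # cameras, laser sensors, infrared
        self.planner = planner        # route and maneuver planning
        self.controller = controller  # steering, gas, brakes

    def drive_step(self, destination):
        # 1. Perception: build a model of the surrounding world from raw sensor data.
        world_model = self.sensors.perceive()

        # 2. Planning: pick a route to the destination, then a maneuver for this moment.
        route = self.planner.plan_route(world_model, destination)
        maneuver = self.planner.plan_maneuver(world_model, route)

        # 3. Driving: translate the maneuver into steering, gas, and brake commands.
        self.controller.apply(maneuver)
```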

The Penn scientists run the autonomous driving software, called Computer Aided Design for Safe Autonomous Vehicles, through both mathematical diagnostics and the virtual reality test drives on “Grand Theft Auto” to see where the system fails. The video game is particularly useful because the autonomous driving system can be rigged to perceive it similarly to reality and because the virtual environment can be perfectly controlled.
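Because the virtual environment can be controlled exactly, a test rig can sweep scenario conditions and log every failure without wrecking a real car. A minimal sketch of that idea follows; the simulator interface, parameter names, and values are assumptions for illustration, not Penn’s actual tooling.

```python
# Hypothetical test harness: sweep controlled scenario parameters in a simulator
# and record which combinations end in a crash. The Simulator.run_scenario
# interface and the parameter ranges are assumptions, not Penn's actual code.
import itertools

def sweep_scenarios(simulator, vehicle_software):
    curvatures = [0.01, 0.05, 0.10]      # how sharp the turns are
    lighting = ["noon", "dusk", "night"]
    friction = [1.0, 0.7, 0.4]           # dry, damp, sandy asphalt

    failures = []
    for curve, light, mu in itertools.product(curvatures, lighting, friction):
        result = simulator.run_scenario(
            software=vehicle_software,
            road_curvature=curve,
            lighting=light,
            surface_friction=mu,
        )
        if result.crashed:
            failures.append((curve, light, mu, result.log))
    return failures
```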

Because of the uncertainty about how the robot driver is identifying objects, researchers are concerned that it might come to the right answer, but for the wrong reasons.

Mangharam used the example of a tilted stop sign. Under normal circumstances, the computer could recognize a stop sign correctly every time. However, if the sign were askew, that could throw off the features the computer uses to recognize it, and the car could drive right past it. Scientists need to understand not just what the car does wrong, but also at what stage of the driving process the error happens.

“Was the cause of the problem that it cannot perceive the world correctly and made a bad decision,” Mangharam said, “or did it perceive the world correctly and make a bad decision?”
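In simulation, that question can be answered mechanically, because the test rig knows the ground truth the car should have perceived. A rough sketch of the comparison follows; the data structures and matching rule are illustrative assumptions, not how Penn’s diagnostics actually work.

```python
# Hypothetical error attribution: compare what the car perceived against the
# simulator's ground truth to decide whether a failure began in perception
# or in planning. The object lists and safe-maneuver check are assumptions.

def attribute_failure(ground_truth_objects, perceived_objects,
                      planned_maneuver, safe_maneuvers):
    # Perception check: did the car detect everything that was really there,
    # and nothing that wasn't?
    missed = [obj for obj in ground_truth_objects if obj not in perceived_objects]
    phantom = [obj for obj in perceived_objects if obj not in ground_truth_objects]

    if missed or phantom:
        return "perception error"   # a bad world model led to the bad decision

    # Planning check: the world model was right, so the decision itself was wrong.
    if planned_maneuver not in safe_maneuvers:
        return "planning error"

    return "control error"          # perceived and planned correctly, executed badly
```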
