Google Reports Some Close Calls in Tests of Self-Driving Cars

Google's fleet of 53 driverless cars, currently being tested on roads in California and Texas, has never been at fault in an accident. But in 13 cases the vehicles came close enough to crashing that the test driver had to step in, according to a new company report on the California tests.

The report stated that on 272 occasions in a 14-month span, drivers took control of autonomous vehicles because the software was failing. In 69 other incidents, the test drivers chose to take control to ensure that the vehicles operated safely.

The new data shows that autonomous cars are making progress, Google said. But experts cautioned that the company's report doesn't provide enough information to definitively say whether the technology is safe.

Google's test drives have been closely watched because they put driverless cars on real roads for the first time. Even minor incidents between human drivers and Google's cars have drawn media scrutiny because of the huge interest in the technology.


The report, which was required by California rules, is the most detailed to date on how the cars are performing. Google is also testing the technology in Austin, but Texas did not require the company to release similar data.

The report shows an overall decline since the fall of 2014 in incidents in which the technology failed.

"We're really excited about these numbers. It seems to be a pretty good sign of progress," Chris Urmson, who leads Google's self-driving-car project, said in an interview with The Washington Post.

Experts cautioned that the findings should be taken with a grain of salt.

"It's not going to be reflective on the quality of the system," said Alain Kornhauser, chairman of Princeton University's autonomous-vehicle engineering program. "From an evaluation standpoint, I don't think there's anything you can read into it in the end."

How good the cars' performance looks can be skewed by the situations they face, Kornhauser said. Favorable road conditions will make a car look much more impressive than tough situations.

"It's informative, but it shouldn't be treated as a true measure of the vehicle's safety," said Aaron Steinfeld, a Carnegie Mellon professor who researches human-robot interaction.

The most significant improvement in the report is the rate at which the cars detect a system failure and request the test driver to take over - incidents that Google and regulators call "disengagements." These situations happened once every 785 miles in late 2014 but once every 5,318 miles in the fourth quarter of 2015.
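
The rate Google reports is simple arithmetic: autonomous miles driven in a period divided by the number of software-detected disengagements in that period. A minimal sketch of the calculation, using hypothetical mileage and disengagement counts chosen only to reproduce the reported rates (the report itself gives just the resulting figures):

```python
# Miles per disengagement: autonomous miles driven in a period
# divided by the software-detected disengagements in that period.
# The totals below are hypothetical, chosen only to reproduce the
# rates cited in the report; they are not Google's actual figures.

def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    """Average number of autonomous miles between disengagements."""
    if disengagements == 0:
        return float("inf")  # no failures observed in the period
    return miles_driven / disengagements

late_2014 = miles_per_disengagement(miles_driven=58_875, disengagements=75)
q4_2015 = miles_per_disengagement(miles_driven=132_950, disengagements=25)
print(f"late 2014: one disengagement every {late_2014:,.0f} miles")
print(f"Q4 2015: one disengagement every {q4_2015:,.0f} miles")
```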

The measure is an indicator of the stability of the overall system. Urmson said he was pleased that the rate improved even as his engineers focused on adding new capabilities to the software. He said stability will become the focus before the technology is released to the public.

While the rate at which test drivers chose to take control of the cars decreased in early 2015, it took an upward turn late in 2015. Google says that's because the cars were pushed into more difficult circumstances.

"If you only drove on Sunday afternoon, you might get the software to the point where you don't have any of the disengagements," Urmson said. "But then [if] you throw it into rush-hour traffic on Monday morning, the driving environment is just that much more challenging."

He cited recent rain in the San Francisco Bay Area and roads with a dense fog of vehicle exhaust as tougher challenges the cars have faced recently.

According to the report, the most common reason test drivers had to take control of the autonomous vehicles was a perception discrepancy - essentially an error in how the car saw the world.

For example, the car might think another vehicle has turned 10 degrees in its lane when it is really proceeding straight down the lane. Or the car might stop because it sees trash on the road, which a human driver wouldn't stop for.

The second most common reason the report cites for test drivers intervening was what Google calls software discrepancies. These can be very slight deviations in how the software is operating the car, such as a measurement from a sensor arriving every 11 milliseconds instead of every 10.
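
To make that timing example concrete, here is a hypothetical watchdog (an illustration, not Google's actual software) that flags sensor messages arriving off their expected schedule:

```python
# Hypothetical monitor that flags sensor messages whose inter-arrival
# time drifts beyond a tolerance of the expected 10 ms period.
EXPECTED_PERIOD_MS = 10.0
TOLERANCE_MS = 0.5

def off_schedule(timestamps_ms: list[float]) -> list[int]:
    """Return indices of messages that arrived outside the tolerance."""
    flagged = []
    for i in range(1, len(timestamps_ms)):
        interval = timestamps_ms[i] - timestamps_ms[i - 1]
        if abs(interval - EXPECTED_PERIOD_MS) > TOLERANCE_MS:
            flagged.append(i)
    return flagged

# One message arrives a millisecond late (an 11 ms gap), as in the example:
stream = [0.0, 10.0, 20.0, 31.0, 41.0]
print(off_schedule(stream))  # [3]
```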

© 2016 The Washington Post
