Self-driving cars are the future of transportation. According to some reports, 10 million vehicles will hit the road by 2020. They’ll ferry passengers from place to place, like driverless taxis. They’ll transport packages and raw materials from city to city. And they’ll deliver groceries, meals, and packages to homes and apartments across the country.
But for all the optimism surrounding autonomous cars, there’s an equal amount of skepticism — and concern.
In a pair of surveys published by the American Automobile Association in January and Gallup in May, 63 percent of people reported feeling afraid to ride in a fully self-driving vehicle and more than half said they’d never choose to ride in one.
Those sentiments haven’t changed much. Three separate studies this summer — by the Brookings Institution, infrastructure firm HNTB, and the Advocates for Highway and Auto Safety (AHAS) — found that a majority of people aren’t convinced of driverless cars’ safety. More than 60 percent said they were “not inclined” to ride in self-driving cars, almost 70 percent expressed “concerns” about sharing the road with them, and 59 percent expected that self-driving cars will be “no safer” than human-controlled cars.
That’s despite the fact that about 94 percent of car crashes are caused by human error, and that in 2016 the top three causes of traffic fatalities were distracted driving, drunk driving, and speeding. According to the National Safety Council, Americans’ odds of dying in a car crash are one in 114. In 2016, motor vehicle crashes claimed 40,000 lives.
So what will it take to convince a skeptical public that autonomous cars are ready for the road? In short: a lot more testing.
A typical self-driving setup relies on three primary kinds of sensors: distance-measuring lidar, color cameras, and radar. Onboard computers fuse the data streams and apply machine learning models that turn the raw measurements into a picture of the road. The goal is for the system to distinguish a pedestrian from a bicyclist, a four-way intersection from a roundabout, dogs from small children, and millions of other objects, road types, and driving styles.
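As a loose illustration of the fusion step — the sensor names, confidence values, and voting rule here are entirely hypothetical, not any vendor's actual pipeline, and production systems use learned models rather than a simple score sum — combining per-sensor detections might be sketched as:

```python
# Hypothetical late-fusion sketch: each sensor reports an object hypothesis
# with a confidence score, and the fused result is the class with the
# highest combined confidence across sensors.
from collections import defaultdict

def fuse_detections(detections):
    """detections: list of (sensor, object_class, confidence) tuples."""
    scores = defaultdict(float)
    for sensor, object_class, confidence in detections:
        scores[object_class] += confidence
    # Pick the class with the highest combined score across all sensors.
    return max(scores, key=scores.get)

frame = [
    ("lidar", "pedestrian", 0.6),   # shape inferred from distance returns
    ("camera", "pedestrian", 0.8),  # visual classification
    ("camera", "cyclist", 0.3),     # a competing, lower-confidence guess
    ("radar", "pedestrian", 0.5),   # radar confirms a slow-moving object
]
print(fuse_detections(frame))  # → pedestrian
```

The point of fusing modalities is robustness: a camera struggles in glare or darkness, lidar in fog, radar with fine shape detail, so agreement across sensors is worth more than any single reading.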
To “teach” the systems how to navigate public highways and city streets safely, Waymo, Uber, and others recruit safety drivers for self-driving test rides, during which they make note of mistakes and unexpected behaviors. But because no two drives are the same, they also put the systems through millions of virtual, highly customizable trials.
Waymo calls its simulation platform Carcraft, after the popular World of Warcraft video game series. At any given time, more than 25,000 virtual self-driving cars drive digitalized streets of Phoenix, Mountain View, Austin, and other cities where Waymo has deployed self-driving cars, as well as test tracks. To date, the fleet has driven 5 billion simulated miles.
That’s a start, but a far cry from the baseline some researchers believe self-driving cars need to reach. The RAND Corporation, for example, estimates they’ll have to rack up 11 billion miles before we’ll have reliable statistics on their safety. For the sake of comparison, the 20 companies testing self-driving cars in California collectively logged just over 1 million miles in two years.
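The reasoning behind figures like RAND's can be made concrete with a standard zero-failure bound: if failures follow a Poisson process, the chance of seeing none in N miles is exp(-rate × N), so ruling out a given failure rate at a given confidence takes N ≥ -ln(1 - confidence) / rate miles. The sketch below uses that textbook bound with a roughly accurate U.S. fatality rate; RAND's own report models several scenarios and arrives at larger numbers, so treat this as illustrative arithmetic, not their methodology.

```python
import math

def miles_needed(target_rate_per_mile, confidence):
    """Failure-free miles required to show, at the given confidence,
    that the true failure rate is below target_rate_per_mile,
    assuming failures arrive as a Poisson process."""
    return -math.log(1 - confidence) / target_rate_per_mile

# U.S. human-driver fatality rate: roughly 1.1 deaths per 100 million miles.
human_fatality_rate = 1.1 / 100_000_000

# Merely matching human drivers, at 95% confidence, already takes
# hundreds of millions of fatality-free miles.
print(f"{miles_needed(human_fatality_rate, 0.95) / 1e6:.0f} million miles")
```

Demonstrating that a fleet is *better* than human drivers, rather than merely no worse, pushes the requirement into the billions of miles — which is why simulation and the real-world mileage gap both matter.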
Sanjoy Baruah, a professor of computer science and engineering at Washington University in St. Louis, said that the question of whether self-driving car technology is ready for deployment is “highly debatable” right now.
“The more data they collect, the better [they’ll] become, but we have to take an incremental approach,” he said. “It’s asking too much for a zero percent accident rate, but it’s true that we don’t have enough information about how rare traffic and weather events, for example, should be modeled or understood.”
The firm that’s arguably the furthest along — Waymo — hit 8 million real-world miles in July, and in a February report said that its cars travel about 5,600 miles between disengagements, instances in which a human safety driver has to take control. In other words, if it gave one of its self-driving cars to someone with a 10-mile commute, that person would need to take over roughly once a year.
Cruise, the driverless car division of General Motors, recorded 131,000 miles in California in the first few months of 2018 and managed to reduce disengagements from once every 35 miles last year to once every 1,250 miles.
Every other company reported a disengagement rate of no better than once every 160 miles.
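As a rough back-of-the-envelope, the rates above can be translated into how often a typical commuter would have to intervene — assuming, hypothetically, a 20-mile round trip per day (the figures below are the miles-per-disengagement numbers cited in the preceding paragraphs):

```python
# Days a commuter could expect between disengagements, given a fleet's
# reported miles-per-disengagement and a hypothetical 20-mile daily drive.
def days_between_disengagements(miles_per_disengagement, daily_miles=20):
    return miles_per_disengagement / daily_miles

for company, miles in [("Waymo", 5600), ("Cruise", 1250), ("others", 160)]:
    days = days_between_disengagements(miles)
    print(f"{company}: about {days:.0f} days between interventions")
```

The spread is stark: the best-reported system would demand intervention a few times a year, while a once-per-160-miles system would demand it roughly every week of commuting.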
Chuck Price, partner and vice president of product at TuSimple, a three-year-old company with 20 autonomous trucks operating in Arizona, California, and China, said metrics like miles aren’t all they’re cracked up to be. But he conceded that experience — both simulated and real-world — is one of the best ways to drive down error rates.
“Operating on a highway is almost a social thing,” he said. “Systems have to understand merging and changing lanes and mimic those sorts of human behaviors to the extent that they can.”
Rushing self-driving to market can have tragic consequences.
In March, an Uber prototype — a self-driving Volvo XC90 — fatally struck a pedestrian in Tempe, Arizona. That same month, a Tesla Model X operating in semi-autonomous Autopilot mode rammed into a concrete barrier, killing 38-year-old Apple software engineer Wei Huang.
Extenuating factors reportedly contributed to both crashes — Uber had disabled Volvo’s built-in collision avoidance system, and Tesla said that Huang ignored automated warnings earlier in the drive. But at least one accident involving a self-driving system was attributable to a systems failure: a Model S that collided with a truck in Florida in May 2016. The National Highway Traffic Safety Administration said that the circumstances of the crash were “beyond the performance capabilities of [Autopilot].”
Even basic assistive technologies have been shown to be unreliable. In August, the Insurance Institute for Highway Safety warned in a report that the automatic braking, adaptive cruise control, and active lane guidance systems in cars from Tesla, BMW, Mercedes-Benz, and Volvo “made mistakes that could be fatal without driver intervention.” A few of their failures included drifting out of their lanes, hitting stationary objects even when they were within sensor range, and slowing down unexpectedly.
Jack Weast, principal architect of autonomous driving at Intel, said it comes down to a lack of training, sensors, and redundant systems.
“One of the challenges involved in developing self-driving systems is … ensuring that they accurately perceive the environment,” he said. “It’s capturing hundreds of millions of miles of video, feeding it through an algorithm, and testing the system for accuracy. As a company, you have to think, ‘How can I try to prove to a consumer that my vehicle is always going to make safe decisions and never get into an accident?'”
Complicating matters, most self-driving systems are “a black box,” said Austin Russell, CEO of lidar provider Luminar.
“When the rubber hits the road, consumers will feel more comfortable if they’re shown [self-driving vehicles] are safer than they themselves are as drivers. Nobody has solved … [it],” he said. “We’re not even remotely close to being able to be truly autonomous in diverse conditions.”
The Governors Highway Safety Association on Wednesday published a report authored by former senior NHTSA official Dr. Jim Hedlund outlining the questions that automakers, startups, and original equipment manufacturers need to address before self-driving cars hit the road in large numbers.
It suggested that states seek to encourage “responsible” autonomous car testing and deployment while “protecting public safety,” and that lawmakers “review all traffic laws” for changes that might be needed to accommodate that testing.
Currently, 21 states — Alabama, Arkansas, California, Colorado, Connecticut, Florida, Georgia, Illinois, Indiana, Louisiana, Michigan, New York, North Carolina, North Dakota, Pennsylvania, South Carolina, Tennessee, Texas, Utah, Virginia, and Vermont — and the District of Columbia have passed laws regulating the deployment and testing of self-driving cars. And governors in 10 states — Arizona, Delaware, Hawaii, Idaho, Maine, Massachusetts, Minnesota, Ohio, Washington, and Wisconsin — have issued executive orders related to them.
Unfortunately, the various pieces of legislation aren’t consistent with one another. As Brookings points out, state laws spell out at least three different definitions for “vehicle operator”: in Texas, it’s the “natural person” riding in the car, while in California, teleoperators also fall under that definition.
The NHTSA released guidelines for self-driving vehicles in September in its Vision for Safety 2.0 guidance — the follow-up to its 2016 guidance. It provides a template for legislatures and state highway officials to follow but also clarifies that carmakers and self-driving startups don’t need to wait for federal legislation to test or deploy their systems.
Robert Brown, director of public affairs at TuSimple, thinks it’s a solid start.
“A 50-state solution is the perfect, ideal solution for us, but almost all of the states are in favor of deployment,” he said. “We’re working closely with them. It’s been a collaborative effort.”
But Baruah argues it doesn’t go far enough. He advocates for a rigorous vetting system akin to the Federal Aviation Administration’s flight licenses.
“The government could give self-driving companies certificates through a very rigorous testing procedure. They’d have to prove their systems’ safety.”
The only viable alternative in the short term is self-regulation, he said.
“As long as every company agrees to abide by a set of industry practices, it might be a reasonable alternative [to government intervention],” he said. “There’s social pressure to avoid deploying an unsafe system. No one’s going to buy your car if they perceive that it’s going to harm them. There’s a lot to lose.”
Most companies aren’t waiting for Uncle Sam to make a move.
Waymo will pilot a self-driving car service in Phoenix later this year, and Cruise said it plans to launch an autonomous taxi service in San Francisco in 2019. Meanwhile, startups like Drive.ai, NuTonomy, and Optimus Ride have deployed driverless prototypes in Frisco, Texas, and Boston.
“Safety, reliability, and the experience the technology will enable are the key pillars to developing trust,” Sherif Marakby, CEO of Ford Autonomous Vehicles, said in a letter to Transportation Secretary Elaine Chao earlier this year. “Developing self-driving vehicles is not simply about the technology — it is about earning the trust of our customers and of those cities and businesses that will ultimately use it.”
Given the challenges that lie ahead for self-driving cars, that trust is likely a ways off.