
This morning, a coalition of 11 companies — Aptiv, Audi, Baidu, BMW, Continental, Daimler, Fiat Chrysler Automobiles, Here, Infineon, Intel, and Volkswagen — published a whitepaper (“Safety First For Automated Driving”) describing a framework for the development, testing, and validation of “safe” autonomous vehicles. The members claim it’s the broadest representation across the industry to date, and they say that the report — which runs 146 pages — is the largest to offer “clear traceability” proving autonomous vehicles to be “safer than the average driver.”

“‘Safety First For Automated Driving’ combines the expertise from key companies in the automaker, supplier, and technology industries to help direct development of safe automated vehicles,” the companies wrote in a jointly issued press release. “With [its] publication, authors and experts from each of the participating partners will present the group’s work at industry and technology conferences internationally over the next several months.”

Conspicuously absent from the list of contributors is Alphabet’s Waymo, which recently launched a commercial driverless taxi service that now serves over 1,000 riders with a fleet of more than 600 cars. GM’s Cruise Automation, whose self-driving car prototypes racked up 450,000 autonomous miles in California last year, also opted not to participate. Nor did veritable AV powerhouses like Zoox, Tesla, Amazon-backed Aurora, Nvidia, or Yandex’s driverless car division.

A coalition spokesperson told VentureBeat that the paper was “equally open” to any party who asked to participate and said that those who signed on did so of their own volition. “Due to the nature and objectives of the Safety First for Automated Driving whitepaper, we welcome additional companies to participate and see this as a living document that will continue to grow,” they added.


Waymo, Cruise, Tesla, and Nvidia didn’t immediately respond to requests for comment.

The lack of consensus puts into sharp relief the competitiveness of the global self-driving car market, which HTF Market Intelligence estimates will generate $173.15 billion in revenue by 2023. According to marketing firm ABI, as many as 8 million driverless cars will be added to the road in 2025, and Research and Markets anticipates that there will be some 20 million autonomous cars in operation in the U.S. by 2030.

Ford, Lyft, Uber, Volvo, and Waymo have a coalition of their own — the Self-Driving Coalition for Safer Streets — that launched in April 2016, with the stated goal of “work[ing] with lawmakers, regulators, and the public to realize the safety and societal benefits of self-driving vehicles.” And in what might be perceived as a preemptive shot across the bow, Nvidia yesterday announced that it has been tapped to lead the European Association of Automotive Suppliers (CLEPA) working group on highly connected automated vehicles, where it says it will examine autonomous vehicle audit assessment, track testing, real-world testing, and simulation.

Self-driving standards

There’s nothing overtly objectionable about Safety First For Automated Driving (SaFD) — at least not at first glance. The abstract notes that it’s intended as a summary of “widely known” level 3 and level 4 automated driving, ostensibly with an eye to developing a “generic baseline” that might become an industry-wide standard. (The Society of Automotive Engineers defines level 3 cars as those that can manage driving with only occasional human intervention, and level 4 as vehicles operating safely without oversight in select conditions.)

To this end, SaFD advocates 12 guiding principles of automated driving:

  1. Safe operation
  2. Operational design domain
  3. Vehicle operator-initiated handover
  4. Security
  5. User responsibility
  6. Vehicle-initiated handover
  7. Interdependency between vehicle operators and automated driving systems (ADS)
  8. Safety assessment
  9. Data recording
  10. Passive safety
  11. Behavior in traffic
  12. Safe layer

The coalition recommends that if safety-related functions or system components become hazardous for any reason, ADS be capable of compensating and transferring the vehicle to a safe state while ensuring sufficient time for drivers to take over. It also prescribes engagement and disengagement mechanisms that require explicit driver interaction, protect against security threats, and recognize a driver’s state in order to keep them informed about their responsibilities and driving mode transitions.
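The handover and fallback behavior described above — request an explicit driver takeover on a fault, and fall back to a safe state if the driver doesn’t respond in time — can be illustrated as a toy state machine. This is a sketch of the general idea, not anything from the whitepaper: the mode names, the `next_mode` function, and the time-budget parameter are all hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()             # driver in control
    AUTOMATED = auto()          # ADS in control
    TAKEOVER_REQUESTED = auto() # ADS asking the driver to take over
    MINIMAL_RISK = auto()       # vehicle brought to a safe state by the ADS

def next_mode(mode, *, fault=False, driver_confirms=False, time_left_s=0.0):
    """Toy transition rules: engagement and disengagement require explicit
    driver interaction; a fault triggers a takeover request; if the driver
    fails to respond before the time budget expires, the system executes a
    minimal-risk maneuver to reach a safe state."""
    if mode is Mode.MANUAL:
        return Mode.AUTOMATED if driver_confirms else Mode.MANUAL
    if mode is Mode.AUTOMATED:
        return Mode.TAKEOVER_REQUESTED if fault else Mode.AUTOMATED
    if mode is Mode.TAKEOVER_REQUESTED:
        if driver_confirms:
            return Mode.MANUAL
        if time_left_s <= 0.0:
            return Mode.MINIMAL_RISK
        return Mode.TAKEOVER_REQUESTED
    return mode  # MINIMAL_RISK is terminal until the system is serviced
```

The key property the paper asks for is visible in the third branch: the vehicle never simply hands back control on a fault — it either gets an explicit driver confirmation or degrades itself to a safe state.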

The whitepaper’s coauthors go on to describe maneuvers intended to minimize risk in the event a driver doesn’t comply with a takeover request, along with verification and validation tests intended to ensure that certain safety goals are met. They propose that automated vehicles record data pertaining to status in a privacy-compliant fashion and that they behave predictably in a way that respects road rules and is easy for nearby drivers to understand.

With respect to cybersecurity in ADS, a subject about which two-thirds of Americans expressed concern in a survey conducted by Morning Consult, SaFD recommends Secure Development Lifecycle (SDL), a process for “building in” security tailored to fit product development lifecycles while considering things like risk treatment strategy, system state, and risk treatment manifestation. The coalition also advises that ADS enable localization through sensors, map data, and sensor fusion algorithms so as to prevent autonomous driving in areas where it’s restricted.

SaFD expresses support for the adoption of Safety of the Intended Functionality (SOTIF), a paradigm that seeks to avoid unreasonable risks that can arise even when all of a vehicle’s components are operating correctly, such as when an AI system misidentifies a traffic sign or road signal. The standard is currently being developed by the International Organization for Standardization, and the coalition believes it will reduce both known and unknown potentially hazardous behaviors to “an acceptable level of … risk.”

Another way risk might be reduced, the coalition believes, is by ensuring that vehicles’ perception sensors — including cameras, lidar, radar, ultrasonic sensors, and microphones — capture “all relevant external information” about surroundings, including pedestrians, obstacles, traffic signs, and acoustic signals. The group also suggests validating simulations — the digital environments employed by companies like Waymo, Uber, and Cruise to recreate tens of thousands of driving scenarios each day — by testing a subset of corner cases against real-world experience.
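The corner-case validation idea — checking a subset of simulated scenarios against real-world outcomes — can be sketched in a few lines. The function name, the metric (a single scalar outcome per scenario, such as stopping distance in meters), and the tolerance are illustrative assumptions, not details from the whitepaper.

```python
def validate_simulation(sim_results, real_results, tolerance=0.05):
    """Compare simulated vs. recorded outcomes for a shared set of
    corner-case scenarios, flagging scenarios that are missing from the
    simulator or whose relative error exceeds the tolerance."""
    mismatches = []
    for scenario, real in real_results.items():
        sim = sim_results.get(scenario)
        if sim is None:
            mismatches.append((scenario, "missing from simulation"))
        elif abs(sim - real) / max(abs(real), 1e-9) > tolerance:
            mismatches.append((scenario, f"sim={sim} real={real}"))
    return mismatches

# Example: two scenarios agree within 5%, one was never simulated.
report = validate_simulation(
    sim_results={"cut_in": 31.0, "ped_crossing": 12.0},
    real_results={"cut_in": 30.0, "ped_crossing": 12.1, "debris": 5.0},
)
```

A passing comparison on a sampled set of corner cases is evidence (not proof) that the simulator’s behavior transfers to the road — which is exactly why the coalition pairs simulation with track and real-world testing.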

The strategies, taken as a whole, are in service of what SaFD defines as “safety by design,” an analytical engineering approach that begins with “scenario-based” automated driving technologies and ends with analyses of the systems’ performance in the real world. “To achieve the balance between fail-safe and availability, the design is analyzed and built from the top down,” write the paper’s coauthors. “The first analysis is carried out irrespective of the generic logical architecture … Ultimately, this evolves into a safety concept, defining safety mechanisms to support … safety goals.”

According to SaFD, level 3 and level 4 vehicles face formidable challenges no matter how carefully they’re designed, chief among them “statistically” demonstrating safety and a “positive risk balance” without a human driver ready to take the wheel. The coalition notes that they’ll also have to pass tests involving driver interaction — i.e., situations where drivers are forced to take control — and prove that they’re capable of coping with “scenarios not currently known” in traffic. Moreover, every component of automated systems, which come in a range of configurations, will need to be fully verified, SaFD says, and the core parts reliant on machine learning will have to be tested with “new validation methods” adapted to ensure the safety of the entire system.

“Long-term effects of prolonged use of an automated driving system may also desensitize the situational awareness of the driver,” write the coauthors. “[T]hese systems require a much more thorough consideration of the automated driving system’s ability to safely perform the driving function itself. This greatly increases the number of possible scenarios and implies the need to include statistical considerations in the overall safety argumentation.”

The coalition cautions that the whitepaper isn’t meant as a one-off but as a “first version” and says that the next version will be put forward as a proposal for international standardization. Only time will tell if that will be enough to convince a skeptical public.

Three separate studies last summer — by the Brookings Institution, think tank HNTB, and the Advocates for Highway and Auto Safety (AHAS) — found that a majority of people aren’t convinced of driverless cars’ safety. More than 60% said they were “not inclined” to ride in self-driving cars, almost 70% expressed “concerns” about sharing the road with them, and 59% expected that self-driving cars will be “no safer” than human-controlled cars.

These concerns are not without reason. In March 2018, Uber suspended testing of its autonomous Volvo XC90 fleet after one of its cars struck and killed a pedestrian in Tempe, Arizona. Separately, Tesla’s Autopilot driver-assistance system has been blamed for a number of collisions, including one in which a Tesla Model S collided with a parked Culver City fire truck. (Tesla temporarily stopped offering “full self-driving capability” on select new models in early October 2018.) The Rand Corporation estimates that autonomous cars will have to rack up 11 billion miles before we’ll have reliable statistics on their safety — far more than the roughly 2 million miles the dozens of companies testing self-driving cars in California logged last year.

“There will still be some residual risks,” SaFD concedes, adding that it’s impossible to guarantee absolute safety with “100%” confidence. “Field monitoring is obligatory in order to iteratively learn and improve the systems,” the report concludes.
