While autonomous cars still face tremendous technical challenges before they become commonplace, the ethical decisions they pose loom as even bigger obstacles. Engineers seem to be making systematic progress on the technical side, but designing a framework for addressing the social issues remains problematic.

Five years after the creation of the Moral Machine, an online quiz designed to gauge responses to the decisions that self-driving cars face, there is more discussion around the ethics, but still no clear framework for reaching social consensus on those problems, according to one of the project's co-creators, Jean-François Bonnefon, a French researcher at the Toulouse School of Economics and the Artificial and Natural Intelligence Toulouse Institute.

“These are common decisions we need to make together as a community,” he said during a presentation at the Minds & Tech Conference held October 9 in Toulouse, France. “These decisions cannot be left to the car makers. We have to design ways for people to have a voice.”

The Moral Machine project, a collaboration with the Massachusetts Institute of Technology, made a big splash last year when it published some of its initial results in the scientific journal Nature. The site proposes a series of tradeoffs, each with some gruesome options, and asks the viewer to select the choice they would make in the given situation.

As of last year, the site had counted 40 million decisions in ten languages from people in 233 countries and territories. The fundamental findings were not especially surprising: people chose to save more lives when possible, to spare children over adults, and to spare people over animals. Since the study was published, the number of responses has climbed to 100 million.

But designing the quiz required drastic simplification, and it barely touches on the seemingly infinite number of life-and-death decisions autonomous vehicles will face, Bonnefon said.

“When you include more and more and more actors, you get to a level of complexity that’s quite daunting,” Bonnefon said. “There are 1 million options. How do you write a survey that includes 1 million options? It’s impossible.”

So the Moral Machine is an instructive, but imperfect and limited, substitute. Yes, people want to save the greater number. But what if the smaller group includes a pregnant woman or someone pushing a stroller? At some point, regulators and the engineers designing a vehicle's decision-making systems will need to agree on answers so those systems can be programmed accordingly.

“It’s a terrible decision,” Bonnefon said. “I’m sure some of you are feeling uncomfortable with the idea that we should save the greater number. When we complicate the situations, we may get into situations where it’s not clear that saving the greater number is the preferred option.”

Still, the experiment has been a success to the extent that it has fueled a broader global conversation around these issues. This summer, for instance, a coalition of 11 companies from the automotive and technology industries published a white paper called “Safety First for Automated Driving” that proposes a framework for the standards that would determine whether autonomous vehicles can be considered safe.

But Bonnefon emphasizes that the Moral Machine is not meant to be a substitute for an actual method that would allow communities and governments to determine what is socially acceptable. Regulators need not only to be aware of society's views, but to create a way to capture those sentiments in a meaningful way. That includes answering questions about who gets to have input, how the results are communicated to the public, and what to do when neither experts nor the public can reach clear agreement on the “right” decision.

“We do have this data,” he said. “Now that we know, what do we do? Our intention has never been to make this a global democratic exercise. It would be a terrible idea. But governments need to know what people won’t accept.”