Every commercial airplane carries a “black box” that preserves a second-by-second history of everything that happens in the aircraft’s systems as well as of the pilots’ actions, and those records have been priceless in figuring out the causes of crashes.
Why shouldn’t self-driving cars and robots have the same thing? It’s not a hypothetical question.
Federal transportation authorities are investigating a dozen crashes involving Tesla cars equipped with its “Autopilot” system, which allows nearly hands-free driving. Eleven people died in those crashes, one of whom was hit by a Tesla while he was changing a tire on the side of a road.
Yet every car company is ramping up its automated driving technologies, and the push extends beyond automakers: Walmart is partnering with Ford and Argo AI to test self-driving cars for home deliveries, and Lyft is teaming up with the same companies to test a fleet of robo-taxis.
But self-directing autonomous systems go well beyond cars, trucks, and robot welders on factory floors. Japanese nursing homes use “care-bots” to deliver meals, monitor patients, and even provide companionship. Walmart and other stores use robots to mop floors. At least a half-dozen companies now sell robot lawnmowers. (What could go wrong?)
And more daily interactions with autonomous systems may bring more risks. With those risks in mind, an international team of experts — academic researchers in robotics and artificial intelligence as well as industry developers, insurers, and government officials — has published a set of governance proposals to better anticipate problems and increase accountability. One of its core ideas is a black box for any autonomous system.
“When things go wrong right now, you get a lot of shoulder shrugs,” says Gregory Falco, a co-author who is an assistant professor of civil and systems engineering at Johns Hopkins University and a researcher at the Stanford Freeman Spogli Institute for International Studies. “This approach would help assess the risks in advance and create an audit trail to understand failures. The main goal is to create more accountability.”
The new proposals, published in Nature Machine Intelligence, focus on three principles: preparing prospective risk assessments before putting a system to work; creating an audit trail — including the black box — to analyze accidents when they occur; and promoting adherence to local and national regulations.
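The paper stops short of prescribing a format, but to make the audit-trail idea concrete, here is a minimal sketch of what a black-box recorder for an autonomous system might look like. Everything here is an illustrative assumption rather than the authors’ design: the event fields, the class name, and the use of a hash chain to make after-the-fact tampering detectable.

```python
import hashlib
import json
import time


class BlackBoxRecorder:
    """Append-only, time-stamped event log for an autonomous system.

    Illustrative sketch only: the governance paper does not specify an
    implementation. The hash chain simply makes edits or deletions detectable.
    """

    def __init__(self) -> None:
        self._events: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, source: str, event_type: str, payload: dict) -> None:
        """Append one time-stamped event from a sensor, planner, or operator."""
        entry = {
            "timestamp": time.time(),
            "source": source,          # e.g., "lidar", "planner", "operator"
            "event_type": event_type,  # e.g., "obstacle_detected"
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._events.append(entry)

    def verify(self) -> bool:
        """Recompute the chain to confirm no entry was altered or dropped."""
        prev = "0" * 64
        for event in self._events:
            body = {k: v for k, v in event.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if prev != event["hash"]:
                return False
        return True


recorder = BlackBoxRecorder()
recorder.record("lidar", "obstacle_detected", {"distance_m": 4.2})
recorder.record("planner", "emergency_stop", {"reason": "obstacle"})
assert recorder.verify()
```

An investigator replaying such a log could reconstruct, second by second, what the system sensed and decided, which is exactly the kind of postmortem trail the authors have in mind.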
The authors don’t call for government mandates. Instead, they argue that key stakeholders — insurers, courts, customers — have a strong interest in pushing companies to adopt their approach. Insurers, for example, want to know as much as possible about potential risks before they provide coverage. (One of the paper’s co-authors is an executive with Swiss Re, the giant reinsurer.) Likewise, courts and attorneys need a data trail to determine who should or shouldn’t be held liable for an accident. Customers, of course, want to avoid unnecessary dangers.
Companies are already developing black boxes for self-driving vehicles, in part because the National Transportation Safety Board has alerted manufacturers about the kind of data it will need to investigate accidents. Falco and a colleague have mapped out one kind of black box for that industry.
But the safety issues now extend well beyond cars. If a recreational drone slices through a power line and kills someone, it wouldn’t currently have a black box to unravel what happened. The same would be true for a robo-mower that runs amok. Medical devices that use artificial intelligence, the authors argue, need to record time-stamped information on everything that happens while they’re in use.
The authors also argue that companies should publicly disclose both their black box data and the information obtained through human interviews. Allowing independent analysts to study those records, they say, would enable crowdsourced safety improvements that other manufacturers could incorporate into their own systems.
Falco argues that even relatively inexpensive consumer products, like robo-mowers, can and should have black box recorders. More broadly, the authors argue that companies and industries need to incorporate risk assessment at every stage of a product’s development and evolution.
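What risk assessment “at every stage” might look like in practice is, again, not specified in the paper. The sketch below uses a conventional likelihood-times-severity risk register, with hypothetical hazards for a robo-mower, purely as an illustration.

```python
from dataclasses import dataclass


@dataclass
class Hazard:
    description: str
    likelihood: int  # 1 (rare) to 5 (frequent) -- hypothetical scale
    severity: int    # 1 (negligible) to 5 (catastrophic)
    mitigation: str

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.severity


# A prospective register drafted before deployment and revisited at each
# design, testing, and update stage. Entries are invented for illustration.
register = [
    Hazard("Blade contact with a person or pet", 2, 5,
           "Lift/tilt sensors cut blade power"),
    Hazard("Mower leaves the designated perimeter", 3, 3,
           "Boundary wire plus GPS geofence"),
    Hazard("Thrown debris strikes a bystander", 2, 4,
           "Enclosed deck; reduced blade speed near edges"),
]

# Review the highest-risk items first
for hazard in sorted(register, key=lambda h: h.risk_score, reverse=True):
    print(f"[{hazard.risk_score:2d}] {hazard.description} -> {hazard.mitigation}")
```

Pairing a register like this with a black-box log would cover two of the paper’s three principles: assessing risks before deployment and leaving an audit trail afterward.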
“When you have an autonomous agent acting in the open environment, and that agent is being fed a whole lot of data to help it learn, someone needs to provide information for all the things that can go wrong,” he says. “What we’ve done is provide people with a road map for how to think about the risks and for creating a data trail to carry out postmortems.”
Edmund L. Andrews is a contributing writer for the Stanford Institute for Human-Centered AI.
This story originally appeared on hai.stanford.edu. Copyright 2022.