Following a Maricopa County grand jury decision, the woman behind the wheel of a semi-autonomous Uber vehicle was charged last month with negligent homicide in the 2018 death of Elaine Herzberg. The case against the backup driver in the first known pedestrian fatality involving an autonomous vehicle promises to be a landmark one, with the power to shape the future of artificial intelligence in the U.S.
Determining fault when AI plays a role in a person’s injury or death is no easy task. If AI is in control and something goes wrong, when is the attending human at fault, and when can you blame the AI? That question is the focus of a recent paper published in the Boston University Law Review, in which UCLA assistant professor Andrew Selbst argues that AI creates tension with existing negligence law and will require intervention by regulators. A preprint draft of the paper was first published in early 2020, and the final version was updated with analysis of the Arizona negligent homicide case.
Selbst says the Uber case could go either way. A judge or jury could find it unreasonable to place liability on a person dealing with a semi-autonomous vehicle. Or responsibility could be assigned to a human actor who had limited control over the automated or autonomous system. This is what cultural anthropologist Madeleine Elish calls a “moral crumple zone.” When machines and humans are considered in tandem but the law fails to take machine intelligence into account, humans can absorb responsibility and become “liability sponges.”
“If negligence law requires a higher standard of care than humans can manage, it will place liability on human operators, even where the average person cannot prevent the danger,” Selbst writes. “While the Uber case seems to point in the direction of moral crumple zones, it is also easy to imagine the reverse — finding that because the average person cannot react in time or stay perpetually alert, failing to do so is reasonable. Ultimately, what AI creates is uncertainty.”
Selbst says legal scholars tend to draw a distinction between fully autonomous vehicles and semi-autonomous machines that work alongside humans, like the vehicle involved in the Uber crash. While fully autonomous vehicles or artificial general intelligence (AGI) may shift responsibility to the hardware maker or AI system, the answer is far less clear when a human uses AI to make a decision based on a prediction, classification, or assessment. Selbst expects this to present new challenges for businesses, governments, and society.
The vast majority of AI available today is designed to augment human decision-making. Examples range from the algorithms judges use to assess recidivism risk to AI-powered tools medical professionals use to devise a treatment plan or make a diagnosis. These include systems that detect patterns in medical imagery to help clinicians diagnose diseases like breast, lung, and brain cancer, as well as efforts to diagnose COVID-19 from X-rays.
Selbst says that while technology is a key driver of change in negligence law, the way humans and AI rely on each other to make decisions sets AI apart. Complicating matters further, humans may accept automated decisions without scrutiny, ignore AI if they suffer alert fatigue from too many notifications, or rely on AI to recognize patterns in data too complex for a person to follow.
In a world full of humans and AI systems making decisions together, Selbst says governments need to consider reforms that give negligence law the chance to catch up with rapidly emerging technology.
“Where society decides that AI is too beneficial to set aside, we will likely need a new regulatory paradigm to compensate the victims of AI’s use, and it should be one divorced from the need to find fault. This could be strict liability, it could be broad insurance, or it could be ex ante regulation,” the paper reads.
Various models have been proposed to address this issue, like Andrew Tutt’s “FDA for algorithms,” a federal agency that would vet algorithms much as the FDA investigates pharmaceutical drugs. There is also the idea of algorithmic impact assessments, akin to environmental impact assessments, as a way to increase oversight and public disclosure.
“Ultimately, because AI inserts a layer of inscrutable, unintuitive, statistically derived, and often proprietary code between the decision and outcome, the nexus between human choices, actions, and outcomes from which negligence law draws its force is tested,” the paper reads. “While there may be a way to tie some decisions back to their outcomes using explanation and transparency requirements, negligence will need a set of outside interventions to have a real chance at providing redress for harms that result from the use of AI.”
Doing so might give negligence law standards time to catch up with advances in artificial intelligence before future paradigm shifts occur and standards fall even further behind.
The paper also explores the question of what happens when algorithmic bias plays a role in an injury. Going back to the autonomous vehicle question, research has shown that computer vision systems do a better job detecting white pedestrians than Black pedestrians. Accepting the use of such systems could reduce vehicle fatalities overall while sanctioning worse outcomes for Black pedestrians.
Without regulatory intervention, Selbst says, there’s a danger that AI could normalize adverse outcomes for certain groups while denying them any recourse. That has the potential to magnify the helplessness people already feel when they encounter algorithmic bias or experience harm online.
“The concern is that while AI may successfully reduce the overall number of injuries, it will not eliminate them, but it will eliminate the ability of the people injured in the new regime to recover in negligence,” the paper reads. “By using a tool based in statistical reasoning, the hospital prevents many injuries, but from the individual standpoint it also creates an entirely new set of victims that will have no recourse.”
Secrecy in the AI industry is a major hurdle when it comes to accountability. Negligence law typically evolves over time to reflect common definitions of what constitutes reasonable behavior on the part of, for example, a doctor or driver accused of negligence. But corporate secrecy is likely to keep AI shortcomings that result in injury hidden from the public. As with Big Tobacco, some of that information may come into public view through whistleblowers, but a lack of transparency leaves people exposed in the interim. And AI’s rapid development threatens to overwhelm the pace of changes to negligence or tort law, further exacerbating the situation.
“As a result of the secrecy, we know little of what individual companies have learned about the errors and vulnerabilities in their products. Under these circumstances, it is impossible for the public to come to any conclusions about what kinds of failures are reasonable or not,” the paper states.
Alongside data portability and the freedom to reject the use of AI in decision-making, avenues of recourse are an essential part of the algorithmic bill of rights AI experts proposed last year. In another recent initiative aimed at helping society adapt to AI, Amsterdam and Helsinki last month launched beta versions of algorithm registries that let residents inspect risk and bias assessments, identify the datasets used to train AI systems, and quickly find the city official and department responsible for deploying a given system.