Amazon wants to minimize bias and address issues of transparency and accountability in AI. To that end, the Seattle company today announced that it will partner with the National Science Foundation (NSF) to commit up to $10 million in research grants over the next three years to develop systems focused on fairness in AI and machine learning.

“With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry,” wrote Prem Natarajan, VP of natural understanding in the Alexa AI group, in a blog post. “Here at Amazon, the fairness of the machine learning systems we build to support our businesses is critical to establishing and maintaining our customers’ trust.”

Amazon’s partnership with the NSF will specifically target explainability, potential adverse biases and effects, mitigation strategies, validation of fairness, and considerations of inclusivity, with the goal of enabling “broadened acceptance” of AI systems and allowing the U.S. to “further capitalize” on the potential of AI technologies. The two organizations expect proposals, which they’re accepting from today until May 10, to result in new open source tools, publicly available data sets, and publications.

Amazon will provide partial funding for the program, with the NSF making award determinations independently and in accordance with its merit review process. The program is expected to continue in 2020 and 2021, with additional calls for letters of intent.

“We are excited to announce this new collaboration with Amazon to fund research focused on fairness in AI,” said Jim Kurose, the NSF’s assistant director for computer and information science and engineering. “This program will support research related to the development and implementation of trustworthy AI systems that incorporate transparency, fairness, and accountability into the design from the beginning.”

With today’s announcement, Amazon joins a growing number of corporations, academic institutions, and industry consortiums engaged in the study of ethical AI. Already, their collective work has produced algorithmic bias mitigation tools that promise to accelerate progress toward more impartial models.
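To make the idea of a bias mitigation tool concrete, here is a minimal sketch of one metric such tools commonly compute, disparate impact: the ratio of favorable-outcome rates between two groups. All names and data below are invented for illustration; real toolkits compute many metrics like this over actual model outputs.

```python
def positive_rate(outcomes):
    """Fraction of outcomes that are favorable (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of group A's favorable-outcome rate to group B's.

    A common rule of thumb (the "four-fifths rule") flags
    values below 0.8 as potentially biased.
    """
    return positive_rate(group_a) / positive_rate(group_b)

# Hypothetical model decisions (1 = favorable outcome).
decisions_group_a = [1, 0, 0, 1, 0, 0, 0, 0]  # 25% favorable
decisions_group_b = [1, 1, 0, 1, 1, 0, 1, 0]  # 62.5% favorable

ratio = disparate_impact(decisions_group_a, decisions_group_b)
print(f"Disparate impact: {ratio:.2f}")  # 0.25 / 0.625 = 0.40
print("Flagged as potentially biased:", ratio < 0.8)
```

A score this far below 0.8 is exactly the kind of signal these tools surface so that a data scientist can investigate before a model ships.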

In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on their race, gender, or age. Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias. Microsoft launched a solution of its own in May, and in September Google debuted the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework.

Not to be outdone, IBM last fall released AI Fairness 360, an open source toolkit for detecting and mitigating unwanted bias, alongside a cloud-based service that “continually provides [insights]” into how AI systems are making their decisions and recommends adjustments, such as algorithmic tweaks or counterbalancing data, that might lessen the impact of prejudice. And recent research from IBM’s Watson and Cloud Platforms group focused on mitigating bias in AI models, particularly with regard to facial recognition.

It’s worth noting that today’s news comes after researchers at the Massachusetts Institute of Technology published a study that found Rekognition — Amazon Web Services’ (AWS) object detection API — unable to reliably determine the gender of female and darker-skinned faces in certain scenarios. The study’s coauthors reported that in experiments conducted over the course of 2018, Rekognition’s facial analysis feature misidentified pictures of women as men 19 percent of the time and darker-skinned women as men 31 percent of the time.

Amazon disputed — and continues to dispute — those findings. It says that in internal tests of an updated version of Rekognition, it observed “no difference” in gender classification accuracy across all ethnicities, and that the paper in question failed to make clear the confidence threshold used in the experiments.
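The confidence threshold matters because facial analysis APIs typically return a confidence score alongside each prediction, and the caller chooses a cutoff below which predictions are discarded. The hypothetical sketch below shows how that choice can change a measured error rate; all numbers are invented and this does not model Rekognition's actual API or behavior.

```python
# Hypothetical (prediction, confidence, ground_truth) triples from
# a gender classifier. All data invented for illustration.
results = [
    ("male",   0.99, "male"),
    ("female", 0.95, "female"),
    ("male",   0.62, "female"),  # low-confidence error
    ("female", 0.97, "female"),
    ("male",   0.55, "female"),  # low-confidence error
    ("male",   0.98, "male"),
]

def error_rate(results, threshold):
    """Error rate among predictions at or above the threshold."""
    kept = [(pred, truth) for pred, conf, truth in results
            if conf >= threshold]
    if not kept:
        return 0.0
    errors = sum(1 for pred, truth in kept if pred != truth)
    return errors / len(kept)

# A lenient threshold keeps the low-confidence mistakes...
print(error_rate(results, 0.50))  # 2 errors / 6 kept
# ...while a stricter one discards them entirely.
print(error_rate(results, 0.95))  # 0 errors / 4 kept
```

Two evaluations of the same model can thus report very different error rates, which is why Amazon argues that an undisclosed threshold makes the study's numbers hard to interpret.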