Microsoft is developing a tool that can detect bias in artificial intelligence algorithms with the goal of helping businesses use AI without running the risk of discriminating against certain people.
Rich Caruana, a senior researcher on the bias-detection tool at Microsoft, described it as a “dashboard” that engineers can apply to trained AI models. “Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models,” he told MIT Technology Review.
Bias in algorithms is an issue increasingly coming to the fore. At the Re-Work Deep Learning Summit in Boston this week, Gabriele Fariello, a Harvard instructor in machine learning and chief information officer at the University of Rhode Island, said that there are “significant … problems” in the AI field’s treatment of ethics and bias today. “There are real decisions being made in health care, in the judicial system, and elsewhere that affect your life directly,” he said.
The list of examples of algorithmic bias run amok seems to grow by the year. Northpointe’s Compas software, which uses machine learning to predict whether a defendant will commit future crimes, was found to judge black defendants more harshly than white defendants. Research from Boston University and Microsoft shows that the data sets used to teach AI programs contain sexist semantic connections, for example considering the word “programmer” closer to the word “man” than “woman.” And a study conducted by MIT’s Media Lab shows that facial recognition algorithms are up to 12 percent more likely to misidentify dark-skinned males than light-skinned males.
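To make the word-embedding finding concrete, the "closeness" researchers measure is typically cosine similarity between word vectors. The sketch below uses tiny made-up 3-dimensional vectors purely to illustrate the measurement; real embeddings such as word2vec have hundreds of dimensions, and these numbers are not from the Boston University/Microsoft study:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" chosen only to illustrate the check; not real data.
embeddings = {
    "programmer": [0.9, 0.8, 0.1],
    "man":        [0.8, 0.9, 0.2],
    "woman":      [0.2, 0.3, 0.9],
}

sim_man = cosine_similarity(embeddings["programmer"], embeddings["man"])
sim_woman = cosine_similarity(embeddings["programmer"], embeddings["woman"])
# In this toy data, "programmer" sits much closer to "man" than to "woman",
# mirroring the kind of association the study reported in real embeddings.
print(sim_man > sim_woman)
```

Bias audits of real embedding models run this same comparison over many word pairs and aggregate the differences.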
“The most important thing companies can do right now is educate their workforce so that they’re aware of the myriad ways in which bias can arise and manifest itself and create tools to make models easier to understand and bias easier to detect,” Caruana said.
Microsoft isn’t the only one attempting to tamp down algorithmic bias. In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Recent studies from IBM’s Watson and Cloud Platforms group have also focused on mitigating bias in AI models, specifically as it relates to facial detection.
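The internals of these tools aren't public in detail, but the basic check they automate can be sketched as a demographic-parity comparison: compute the rate of favorable outcomes per group and flag large gaps. The data below and the 0.8 threshold (the "four-fifths rule" used in US employment-discrimination guidance) are illustrative assumptions, not Facebook's or Microsoft's actual method:

```python
def positive_rate(outcomes):
    """Fraction of decisions that were favorable (1 = favorable)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical binary decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 37.5% favorable

# Ratio of the disadvantaged group's rate to the advantaged group's rate;
# a value below 0.8 is a common heuristic flag for disparate impact.
ratio = positive_rate(group_b) / positive_rate(group_a)
if ratio < 0.8:
    print(f"possible disparate impact: ratio {ratio:.2f}")
```

Production systems extend this idea across many protected attributes and fairness metrics (equalized odds, calibration), but the core pattern is the same group-wise comparison.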