Artie, a startup developing a platform for AI-powered mobile games distributed through social media, today released a data set and tool for detecting demographic bias in voice apps. The Artie Bias Corpus (ABC), which consists of audio files along with their transcriptions, aims to diagnose and mitigate the impact of factors like age, gender, and accent in voice recognition systems.

Speech recognition has come a long way since IBM’s Shoebox machine and Worlds of Wonder’s Julie doll. But despite progress made possible by AI, voice recognition systems today are at best imperfect — and at worst discriminatory. In a study commissioned by the Washington Post, popular smart speakers made by Google and Amazon were 30% less likely to understand non-American accents than those of native-born users. More recently, the Algorithmic Justice League’s Voice Erasure project found that speech recognition systems from Apple, Amazon, Google, IBM, and Microsoft collectively achieve word error rates of 35% for African American voices versus 19% for white voices.

The Artie Bias Corpus is a curated subset of Mozilla’s Common Voice corpus representing three gender classes, eight age ranges (from 18 to 80), and 17 different accents in English. In addition to 2.4 hours of audio (1,712 individual clips) and transcriptions vetted by votes on the Common Voice web platform and native-speaker experts, it comprises self-identified, opt-in demographic data about speakers.
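
Because the corpus pairs every clip with a vetted reference transcript and the speaker’s demographic labels, per-group evaluation becomes mechanical: transcribe each clip, then aggregate word error rate (WER) by demographic field. The sketch below illustrates that pattern; it assumes a tab-separated manifest with path, sentence, and accent columns (Common Voice’s layout; Artie’s released tool may work differently) and uses the open source jiwer package for WER.

```python
import csv
from collections import defaultdict

import jiwer  # pip install jiwer


def wer_by_group(manifest_path, transcribe, group_field="accent"):
    """Aggregate WER per value of `group_field` in a Common Voice-style TSV.

    `transcribe` is any callable mapping an audio path to a hypothesis string.
    Column names here are assumptions based on Common Voice, not Artie's schema.
    """
    refs = defaultdict(list)  # reference transcripts, keyed by group
    hyps = defaultdict(list)  # model hypotheses, keyed by group
    with open(manifest_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            group = row.get(group_field) or "unknown"
            refs[group].append(row["sentence"])
            hyps[group].append(transcribe(row["path"]))
    # jiwer.wer accepts parallel lists, giving a corpus-level WER per group
    return {group: jiwer.wer(refs[group], hyps[group]) for group in refs}
```

A per-group WER table like the one this returns is the kind of evidence the studies above report: a gap between groups on matched material points to demographic bias rather than general inaccuracy.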

In a proof of concept, Artie researchers applied the Artie Bias Corpus to Mozilla’s open source DeepSpeech models, which were trained on at least one corpus with a known bias toward North American English. In another experiment, they evaluated gender bias in publicly available Google and Amazon U.S. English models.
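
For a DeepSpeech-style experiment, a transcription function such as the following could be plugged into the sketch above. This is an illustrative use of Mozilla DeepSpeech’s Python API (as of the 0.7 releases), assuming clips have already been converted to the 16 kHz, mono, 16-bit WAV format the released English models expect; the model file name is a placeholder.

```python
import wave

import numpy as np
from deepspeech import Model  # pip install deepspeech

# Placeholder path: download an official release model, e.g. from
# https://github.com/mozilla/DeepSpeech/releases
ds = Model("deepspeech-english.pbmm")


def transcribe(wav_path):
    """Decode one 16 kHz mono 16-bit WAV clip with DeepSpeech."""
    with wave.open(wav_path, "rb") as w:
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return ds.stt(audio)
```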

According to the researchers, DeepSpeech indeed showed a bias toward U.S. and British English accents, but no gender bias. Google’s U.S. English model, on the other hand, showed a “statistically significant” gender bias as of early December 2019, performing on average 6.4% worse on female speakers, while Amazon Transcribe’s U.S. English model did not exhibit a comparable gap.
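
The researchers do not say in the public materials which test underlies “statistically significant.” One plausible way to check such a claim is to score per-clip error rates for each gender group and compare the two samples with a nonparametric test, as in this hypothetical sketch:

```python
import jiwer  # pip install jiwer
from scipy.stats import mannwhitneyu


def gender_gap(rows, transcribe):
    """rows: iterable of (audio_path, reference_text, gender) tuples.

    Returns the Mann-Whitney U statistic and two-sided p-value comparing
    per-clip WER for female vs. male speakers. Illustrative only; Artie's
    actual methodology may differ.
    """
    per_clip = {"female": [], "male": []}
    for path, ref, gender in rows:
        if gender in per_clip:
            per_clip[gender].append(jiwer.wer(ref, transcribe(path)))
    return mannwhitneyu(per_clip["female"], per_clip["male"],
                        alternative="two-sided")
```

A nonparametric test is a sensible default here because per-clip WER distributions are bounded below and typically skewed.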

“Fairness is one of our core AI principles, and we’re committed to making progress in this area,” a Google spokesperson told VentureBeat. “We’ve been working on the challenge of accurately recognizing variations of speech for several years, and will continue to do so. In the last year we’ve developed tools and data sets to help identify and carve out bias from machine learning models, and we offer these as open source for the larger community.”

“As voice technology becomes more common, we discover how fragile it can be … In some cases, demographic bias can render a technology unusable for someone because of their demographic,” Josh Meyer, lead scientist at Artie and a research fellow at Mozilla, wrote in a blog post. “Even for well-resourced languages like English, state of the art speech recognizers cannot understand all native accents reliably, and they often understand men better than women … The solution is to face the problem, and work toward solutions.”
