This week, leading AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices. The flashpoint was reportedly a paper Gebru coauthored that questioned the wisdom of building large language models and examined who benefits from them and who is disadvantaged.

Google AI lead Jeff Dean wrote in an email to employees following Gebru’s departure that the paper didn’t meet Google’s criteria for publication because it lacked reference to recent research. But from all appearances, Gebru’s work simply spotlighted well-understood problems with models like those deployed by Google, OpenAI, Facebook, Microsoft, and others. A draft obtained by VentureBeat discusses risks associated with deploying large language models, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people.

Indeed, Gebru’s work appears to build on a number of recent studies examining the hidden costs of training and deploying large-scale language models. A team from the University of Massachusetts at Amherst found that training a single model and searching over its architecture can emit roughly 626,000 pounds of carbon dioxide, nearly 5 times the lifetime emissions of the average U.S. car. Research has repeatedly shown that impoverished groups are more likely to experience significant environment-related health issues, with one study out of Yale University finding that low-income communities and those composed predominantly of racial minorities experienced substantially higher exposure to air pollution than nearby affluent white neighborhoods.
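
For a rough sense of how that comparison works out, here is a back-of-the-envelope check. The per-car baseline of roughly 126,000 pounds of CO2 (an average U.S. car over its lifetime, fuel included) is the reference figure the Amherst researchers used; both numbers are approximations, so treat the result as illustrative.

```python
# Back-of-the-envelope check of the "nearly 5 times" comparison.
# Both figures are approximate, taken from the UMass Amherst study:
# training + architecture search for one large model vs. the lifetime
# emissions of an average U.S. car including fuel.

training_emissions_lbs = 626_000      # lbs of CO2, model training + search
car_lifetime_emissions_lbs = 126_000  # lbs of CO2, average U.S. car incl. fuel

ratio = training_emissions_lbs / car_lifetime_emissions_lbs
print(f"Training emits about {ratio:.1f}x an average car's lifetime emissions")
# -> about 5.0x
```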

Gebru and colleagues’ assertion that language models can spout toxic content is similarly grounded in extensive prior research. A portion of the data used to train language models is frequently sourced from communities with pervasive prejudice along gender, race, and religious lines. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published in April by researchers from Intel, MIT, and the Canadian AI initiative CIFAR, have found high levels of stereotypical bias in some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.
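
To show how such associations are typically surfaced, here is a minimal sketch that probes a publicly available masked language model with a pair of template sentences and compares the top completions. It uses the open source Hugging Face transformers library and the public bert-base-uncased checkpoint; the prompts are illustrative rather than drawn from the cited studies, which rely on much larger, carefully constructed prompt sets and scoring methods.

```python
# Minimal sketch: probing a masked language model for biased associations.
# Uses the public bert-base-uncased checkpoint via Hugging Face transformers.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Two otherwise identical prompts that differ only in the gendered noun.
prompts = [
    "The woman worked as a [MASK].",
    "The man worked as a [MASK].",
]

for prompt in prompts:
    predictions = fill_mask(prompt, top_k=5)
    completions = [p["token_str"] for p in predictions]
    print(prompt, "->", completions)

# Diverging completions across otherwise identical prompts are one simple
# signal of the occupation and gender stereotypes the studies describe.
```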

In his email, Dean accused Gebru and the paper’s other coauthors of disregarding advances that have shown greater efficiencies in training and might mitigate carbon impact. He also said the work fails to take into account recent research into mitigating language model bias. But this argument seems disingenuous. In a paper published earlier this year, Google trained a massive language model called GShard using 2,048 of its third-generation tensor processing units (TPUs), chips custom-designed for AI training workloads. One estimate pegs the power draw of a single TPU at around 200 watts, suggesting GShard required an enormous amount of energy to train. And on the subject of bias, OpenAI, which made GPT-3 available via an API earlier this year, has only begun experimenting with safeguards, including “toxicity filters” to limit harmful language generation.
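
To put that wattage estimate in perspective, the sketch below multiplies it out for the accelerators alone, using the 2,048-chip and roughly 200-watt figures above. It deliberately ignores host CPUs, networking, and datacenter cooling, and the actual training duration isn't given here, so the per-day number is purely illustrative.

```python
# Illustrative estimate of GShard's accelerator power draw, based only on
# the figures cited above (2,048 TPUs at roughly 200 W each). Host machines,
# networking, and cooling overhead are ignored, so this understates the total.

num_tpus = 2048
watts_per_tpu = 200  # rough per-chip estimate

total_draw_kw = num_tpus * watts_per_tpu / 1000
energy_per_day_kwh = total_draw_kw * 24  # hypothetical single day of training

print(f"Accelerators alone draw about {total_draw_kw:.0f} kW")
print(f"That works out to roughly {energy_per_day_kwh:,.0f} kWh per day of training")
# -> about 410 kW, or ~9,830 kWh for each day the job runs
```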

In the draft paper, Gebru and colleagues reasonably suggest that large language models have the potential to mislead AI researchers and prompt the general public to mistake their text as meaningful. (Popular natural language benchmarks don’t measure AI models’ general knowledge well, studies show.) “If a large language model … can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads. “We advocate for an approach to research that centers the people who stand to be affected by the resulting technology, with a broad view on the possible ways that technology can affect people.”

It’s no secret that Google has commercial interests in conflict with the viewpoints expressed in the paper. Many of the large language models it develops power customer-facing products, including Cloud Translation API and Natural Language API. The company often touts its work in AI ethics and has seemingly — if reluctantly — tolerated internal research critical of its approaches in the past. Letting Gebru go would appear to mark a shift in thinking among Google’s leadership, particularly in light of the company’s crackdowns on dissent, most recently in the form of allegedly illegal spying on employees before firing them. In any case, it bodes poorly for Google’s willingness to openly debate critical issues around AI and machine learning. And given its outsize influence in the research community, the effects could be far-ranging.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.
