Unconscious biases are pervasive in text and media. For example, female characters in stories are often portrayed as passive and powerless while male characters are portrayed as proactive and powerful. According to a McKinsey study of 120 movies across 10 markets, the ratio of male to female characters was 3:1 in 2016, the same as it has been since 1946.
Motivated by this, researchers at the Allen Institute for Artificial Intelligence and the University of Washington created PowerTransformer, a tool that aims to rewrite text to correct implicit and potentially undesirable bias in character portrayals. They claim that PowerTransformer is a major step toward mitigating the well-documented gender bias in movie scripts, as well as in other forms of media.
PowerTransformer is akin to GD-IQ, a tool that leverages AI developed at the University of Southern California Viterbi School of Engineering to analyze the text of a script, determine the number of male and female characters, and assess whether they're representative of the population at large. GD-IQ can also discern the number of characters who are people of color, LGBTQ, have disabilities, or belong to other groups typically underrepresented in Hollywood storytelling.
But PowerTransformer goes one step further and tackles the task of controllable text revision, or rephrasing text to a target style using machine learning. For example, it can automatically rewrite a sentence like "Mey daydreamed about being a doctor" as "Mey pursued her dream to be a doctor," which has the effect of giving the character Mey more authority and decisiveness.
The researchers note that controllable rewriting systems face key challenges. First, they need to be able to make edits beyond surface-level paraphrasing, as simple paraphrasing often doesn't adequately address overt bias (the choice of actions) and subtle bias (the framing of actions). Second, their debiasing revisions should be purposeful and precise and shouldn't make unnecessary changes to the underlying meaning of the text.
PowerTransformer overcomes these challenges by jointly learning to reconstruct partially masked story sentences while also learning to paraphrase from an external corpus of paraphrases. The model recovers masked-out agency-associated verbs in sentences and employs a vocab-boosting technique during generation to increase the likelihood that it uses words with a target level of agency (i.e., ability to act and make choices). For instance, "A friend asked me to watch her two-year-old child for a minute" would become "A friend needed me to watch her two-year-old child for a minute," lowering agency, while "Allie was failing science class" would become "Allie was taking science class," raising it.
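The vocab-boosting idea can be illustrated with a minimal sketch (a hypothetical toy example, not the actual PowerTransformer implementation): at each decoding step, the logits of tokens on a target-agency word list get an additive boost before the softmax, making those words more likely without forbidding any others.

```python
import math

def boosted_next_token_probs(logits, boost_ids, boost=2.0):
    """Add a constant boost to target-agency token logits, then softmax.

    Hypothetical illustration of vocab boosting: tokens whose indices
    appear in boost_ids become more probable; all others remain possible.
    """
    adjusted = [
        logit + boost if i in boost_ids else logit
        for i, logit in enumerate(logits)
    ]
    # Numerically stable softmax over the adjusted logits.
    m = max(adjusted)
    exps = [math.exp(x - m) for x in adjusted]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary: boosting index 2 ("pursued", higher agency) lets it
# overtake index 1 ("daydreamed", lower agency) at generation time.
vocab = ["<pad>", "daydreamed", "pursued", "wanted"]
logits = [0.0, 2.0, 1.5, 1.0]
probs = boosted_next_token_probs(logits, boost_ids={2}, boost=2.0)
print(vocab[probs.index(max(probs))])  # prints "pursued"
```

In a real decoder the same adjustment would be applied to the model's full output distribution at every step, with the boost strength controlling how aggressively generation is steered toward the target agency level.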
During experiments, the researchers investigated whether PowerTransformer could mitigate gender biases in the portrayals of 16,763 characters from 767 modern English movie scripts. Of those characters, 68% were inferred to be men and only 32% women; the researchers attempted to rebalance the agency levels of female characters to be on par with those of male characters.
The results show that PowerTransformer's revisions successfully increased instances of positive agency among female characters while decreasing their negative agency, or passiveness, according to the researchers. "Our findings on movie scripts show the promise of using controllable debiasing to successfully mitigate gender biases in portrayal of characters, which could be extended to other domains," they wrote. "Our findings highlight the potential of neural models as a tool for editing out social biases in text."