Facebook’s methods for detecting hate speech are being criticized today for the way they protect some groups and ignore others. According to a ProPublica report published today, leaked internal documents used to train the teams that review questionable content, with help from an AI-trained algorithm, instruct reviewers to protect white men but not black children or women drivers.
Content reviewers are trained to safeguard what Facebook refers to as protected categories. This protection extends to any race, gender, ethnicity, sexual orientation, gender identity, disease, or religious affiliation. Subgroups defined by any other identifier do not receive protection, however. So a reviewer may remove a post targeting white men, because being white (race) and being a man (gender) are both protected categories, but not a post targeting white children, since being a child (age) is not a protected category (that’s weird).
Unprotected categories, according to the documents, include age, appearance, country, social class, political ideology, and occupation.
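The rule described above amounts to a simple intersection test: a subset is protected only if every identifier defining it is a protected category, and a single unprotected modifier strips protection from the whole group. Here is a minimal sketch of that logic; the function name and category labels are illustrative assumptions drawn from the report, not Facebook’s actual code.

```python
# Hypothetical sketch of the subset rule described in the leaked documents.
# The category names come from the article; the function itself is an assumption.

PROTECTED = {"race", "gender", "ethnicity", "sexual orientation",
             "gender identity", "disease", "religious affiliation"}
UNPROTECTED = {"age", "appearance", "country", "social class",
               "political ideology", "occupation"}

def is_protected_group(attributes):
    """A group is protected only if every attribute defining it is a
    protected category; one unprotected modifier removes protection
    for the entire subset."""
    return all(attr in PROTECTED for attr in attributes)

print(is_protected_group({"race", "gender"}))        # "white men" -> True
print(is_protected_group({"race", "age"}))           # "black children" -> False
print(is_protected_group({"gender", "occupation"}))  # "women drivers" -> False
```

This makes the criticized asymmetry concrete: "white men" passes the test while "black children" and "women drivers" do not, because age and occupation are unprotected modifiers.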
Facebook defines hate speech as any post that directs attacks, such as cursing, slurs, or calls for violence or degradation, at a protected category. Migrants recently became a quasi-protected group following the Syrian refugee crisis.
Justice is defined as just behavior or treatment. If Facebook’s rules and hate speech-targeting AI were just, they would protect those most in need of protection and acknowledge inequality.
Equality is the idea that everyone has the same rights and opportunities, but most people understand that everyone does not, in fact, have the same rights and opportunities.
Mark Zuckerberg said as much in his recent Harvard University commencement speech about the challenges the world faces today.
In the speech he called a fight “against the forces of isolationism, authoritarianism, and nationalism” the “struggle of our time,” and acknowledged the challenges people face today from an imbalanced world, like income inequality and the burden of crushing student debt.
People don’t decide whether to discriminate against each other based on whether or not their target is in a protected category. I don’t think that’s ever happened before.
In a series of tweets in response to leaked information about Facebook’s rules surrounding hate speech, Y Combinator graduate and The Human Utility cofounder Tiffani Ashley Bell called for standards to be formed around AI and ethics. Such action is necessary, she argues, as AI begins to control not just hate speech detection but also things like autonomous vehicles and criminal sentencing.
The Terminator-Skynet scenario isn’t the kind of AI she fears most.
Worry about AI taking up trash human biases and *accelerating* the destruction of certain groups in society. THAT is the real threat.
— Tiffani Ashley Bell (@tiffani) June 28, 2017
Caroline Sinders is an online harassment researcher at the Wikimedia Foundation who is currently working with BuzzFeed Open Lab and EyeBeam to use machine learning to study online harassment. Recognizing hate speech requires a good deal of context. No platform out there gets this right today, she said, and Facebook does better than others, but she called the social network’s approach to hate speech “too basic.”
“I think it’s important to understand that not protecting female drivers is a hyper-specific example of upholding misogynist thought that can create toxicity and pain on your social network. Not protecting black children is the same,” Sinders told VentureBeat today in a phone interview. “They should be more protected as a subset than white men because of the less social currency and privilege they hold, because they’re more marginalized groups, because there are groups that already face more attack. So I think there’s something inherently wrong with this equation, with these subsets.”
Exactly what counts as hate speech, and which kinds of speech are illegal, varies widely around the world, so Sinders says hate speech shouldn’t be over-legislated, but platforms should be thoughtful about issues of power and inequality.
“I do think there needs to be policy inside of social networks that actively thinks about this, that is aware of all the ways in which privilege plays out inside of a space,” she said.
Deciding exactly how much weight or consideration to give a protected category sounds challenging, particularly given that Facebook is used monthly by roughly one in four people on Earth, but the lack of any attempt to do so is itself a form of institutional bias.
As the ProPublica piece points out, Facebook may have been used during the Arab Spring to topple authoritarian governments, but the leaked documents suggest that “at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.”
As it stands today, for some people, Facebook’s hate speech detection algorithm offers a veneer of protection, but that’s not the real thing.