In a blog post this evening, Facebook pulled back the curtain on a few of its efforts to curb hate speech and other content that runs afoul of its community guidelines. The post arrives roughly a month after the social network published its latest Community Standards Enforcement Report, in which it said its automated tools now proactively detect 96.8% of certain categories of prohibited content before humans spot it.
The Menlo Park tech giant says it’s taking additional steps to address virality and reduce the spread of messages that can “amplify” and “exacerbate” conflict. To this end, following Facebook-owned WhatsApp’s decision earlier this year to limit message forwarding globally, Facebook says it’s exploring a similar restriction on Messenger communications in Sri Lanka, which would prevent a message from being shared beyond a certain number of chat threads.
A Facebook spokesperson told VentureBeat that the current Messenger forwarding limit in Sri Lanka is five chats at a time and that while there’s “usually” a limit on total forwards, it’s a “high number” that’s “rarely” reached. In India, WhatsApp users can forward messages to a maximum of five people; in other markets, the limit is 20.
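The mechanics of such a limit can be sketched in a few lines. The snippet below is a purely illustrative model of the policy described above (a per-action cap of five chats plus a high total ceiling); the function names and the total-cap value are assumptions, not Facebook's implementation.

```python
# Illustrative sketch of a message-forwarding cap, modeled on the limits
# described above. FORWARD_LIMIT_PER_ACTION mirrors the reported five-chat
# limit; TOTAL_FORWARD_CAP is a hypothetical stand-in for the "high number"
# a spokesperson mentioned.

FORWARD_LIMIT_PER_ACTION = 5   # max chats a message can be forwarded to at once
TOTAL_FORWARD_CAP = 1000       # hypothetical overall ceiling, rarely reached

def try_forward(message, target_chats):
    """Allow the forward if it stays under both limits, else raise."""
    if len(target_chats) > FORWARD_LIMIT_PER_ACTION:
        raise ValueError(
            f"Can forward to at most {FORWARD_LIMIT_PER_ACTION} chats at a time"
        )
    if message["forward_count"] + len(target_chats) > TOTAL_FORWARD_CAP:
        raise ValueError("Message has reached its total forwarding cap")
    message["forward_count"] += len(target_chats)
    return target_chats

msg = {"text": "hello", "forward_count": 0}
print(try_forward(msg, ["chat1", "chat2"]))  # ['chat1', 'chat2']
```

A real client would enforce this in the UI (graying out extra recipients) as well as server-side, since a client-only check is trivially bypassed.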
Last February, WhatsApp took steps to tackle misinformation ahead of national elections in India, one of Facebook’s largest markets with over 200 million users. The app has been blamed for inciting local violence that cost dozens of lives and for contributing to ethnic violence, as well as spreading hateful and racist messages about prominent political figures.
In Myanmar, Facebook says it has begun reducing the distribution of content shared by users who have “demonstrated a pattern of posting content that violates [its] Community Standards.” If the policy proves successful in mitigating harm, Facebook says it might introduce similar restrictions in other countries.
The company says it’s increasingly using AI to detect abusive content, adding memes and graphics that violate its policies to a photo bank so they can be automatically deleted when they crop up in similar posts. It also says it’s identifying clusters of words, modeled as graphs, that might be used in hateful and offensive ways, and that it’s tracking how those clusters vary over time and geography to stay ahead of local trends.
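A photo bank like the one described is typically matched by perceptual hashing, so that slightly altered reposts of a banned image still hit. The toy sketch below uses a simple average-hash over a grayscale pixel grid; every name here is an illustrative assumption, and a production system would use far more robust hashing and matching than this.

```python
# Toy sketch of a "photo bank" of known-violating images matched by
# perceptual hash. Images are represented as grayscale pixel grids
# (lists of rows of 0-255 values) to keep the example self-contained.

def average_hash(pixels):
    """Hash an image: one bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

class PhotoBank:
    """Stores hashes of violating images and flags near-duplicate uploads."""
    def __init__(self, threshold=2):
        self.hashes = []
        self.threshold = threshold  # max bit difference still treated as a match

    def add_violation(self, pixels):
        self.hashes.append(average_hash(pixels))

    def is_violating(self, pixels):
        h = average_hash(pixels)
        return any(hamming(h, known) <= self.threshold for known in self.hashes)

banned = [[0, 0, 255, 255], [0, 0, 255, 255]]     # known-violating image
variant = [[0, 10, 250, 255], [5, 0, 255, 250]]   # slightly altered repost
bank = PhotoBank()
bank.add_violation(banned)
print(bank.is_violating(variant))  # True
```

The threshold trades precision for recall: a looser threshold catches more edited reposts but risks flagging unrelated images.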
Additionally, Facebook says it’s leveraging AI to recognize posts that might contain graphic violence and potentially violent or dehumanizing comments in order to limit their spread. In May, the company said AI now helps it identify 65% of the more than 4 million hate speech posts removed each quarter, up from 24% just over a year ago and 59% in Q4 2018.
“This is some of the most important work being done at Facebook, and we fully recognize the gravity of these challenges,” wrote director of product management Samidh Chakrabarti and director of strategic response Rosa Birch. “We know there’s more to do to better understand the role of social media in countries of conflict … By tackling hate speech and misinformation, investing in AI and changes to our products, and strengthening our partnerships, we can continue to make progress on these issues around the world.”