In a blog post this afternoon, Facebook detailed the progress it’s made in combating terrorists, violent extremist groups, and hate organizations on both its core platform and Instagram. The bulk of the recent progress was enabled by automated techniques, according to the Menlo Park tech giant, along with a 350-person counterterrorism team whose scope has expanded to prevent people who proclaim or engage in violence from inflicting real-world harm.
The disclosures come ahead of a congressional hearing on how tech companies, including Facebook, Google, and Twitter, moderate violent content in the communities they host. As The New York Times notes, lawmakers are expected to ask executives how they’re handling posts from extremist groups.
Facebook claims its machine learning algorithms have helped it detect “a wide range” of terrorist organizations based on their behavior alone, chiefly by matching content against copies of known violating material and by assessing posts to determine whether they’re likely to violate its policies. The company initially targeted global groups like ISIS and al-Qaeda, leading to the removal of more than 26 million pieces of content over the last two years (99% of which was proactively identified). But starting around mid-2018, it broadened the use of its AI and human moderation techniques to a wider range of dangerous organizations, resulting in the banning of 200 white supremacist organizations from Facebook and the removal of content praising or supporting those organizations.
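To make the “matching copies of known bad material” step concrete, here is a minimal Python sketch of hash-based media matching. It assumes an exact SHA-256 lookup against a hypothetical fingerprint database; Facebook hasn’t published its implementation, and production systems typically rely on perceptual hashing and downstream classifiers so that re-encoded or slightly altered copies still match.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint of a piece of media (SHA-256 hex digest)."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical database of fingerprints of previously removed material; real
# systems use perceptual hashes so near-duplicate copies also match.
KNOWN_BAD_HASHES = {fingerprint(b"previously removed propaganda video bytes")}

def matches_known_bad(content: bytes) -> bool:
    """True if an upload is a copy of known violating material."""
    return fingerprint(content) in KNOWN_BAD_HASHES

# Screening a new upload before it is published.
upload = b"previously removed propaganda video bytes"
if matches_known_bad(upload):
    print("Blocked: matches previously removed content")
else:
    print("No match: route to classifiers and/or human review")
```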
Facebook says it’ll work with government and law enforcement officials in the U.S. and U.K. to train its computer vision algorithms on footage from firearms training programs, with the goal of improving their sensitivity to real-world, first-person footage of violent events like the March mass shooting in Christchurch, New Zealand. (The Financial Times reports that Facebook will provide body cameras to the U.K.’s Metropolitan Police at no cost, and in exchange, it’ll have access to video footage shared with the U.K. Home Office.) The attack and its aftermath “strongly influenced” updates to its policies and their enforcement, Facebook says.
Separately, the company says it’s expanded a program that connects people who search Facebook for terms associated with white supremacy to resources focused on helping them leave hate groups, such as Life After Hate. In Australia and Indonesia, it’s partnered with Moonshot CVE to measure the impact of those efforts, and it’s begun directing searchers in those countries to EXIT Australia and ruangobrol.id, respectively.
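As a rough illustration of the redirect mechanism described above, the sketch below maps flagged search terms to a region-appropriate support resource. The term list, country codes, and resource pairing are assumptions made for illustration only; Facebook’s actual term lists and routing logic are not public.

```python
from typing import Optional

# Hypothetical mapping of countries to the support resources named in the article;
# the pairing of country codes to resources is an illustrative assumption.
REDIRECT_RESOURCES = {
    "US": "Life After Hate",
    "AU": "EXIT Australia",
    "ID": "ruangobrol.id",
}

# Placeholder term list; the real lists are curated and far more extensive.
FLAGGED_TERMS = {"white supremacy", "white power"}

def resource_for_search(query: str, country: str) -> Optional[str]:
    """Return a support resource to surface alongside search results if the
    query contains a flagged term, otherwise None."""
    normalized = query.lower()
    if any(term in normalized for term in FLAGGED_TERMS):
        return REDIRECT_RESOURCES.get(country)
    return None

print(resource_for_search("white supremacy groups near me", "AU"))  # -> EXIT Australia
```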
Lastly, Facebook says it’s developed a definition of “terrorist organizations” to guide its enforcement decisions, one that it says “more clearly” delineates attempts at violence, particularly those directed against civilians, made with the intent to coerce and intimidate. And it says it’s restructured its counterterrorism team to combat the rise in white supremacist violence and in terrorists who aren’t clearly tied to specific terrorist organizations.