In a blog post this afternoon, Facebook detailed the progress it’s made in combating terrorists, violent extremist groups, and hate organizations on both its core platform and Instagram. The bulk of these recent successes was enabled by automated techniques, according to the Menlo Park tech giant, as well as by a 350-person counterterrorism team whose scope has expanded to prevent those proclaiming or engaging in violence from inflicting real-world harm.
The disclosures come ahead of a congressional hearing on how tech companies, including Facebook, Google, and Twitter, moderate violent content in the communities they host. As The New York Times notes, lawmakers are expected to ask executives how they’re handling posts from extremist groups.
Facebook claims its machine learning algorithms helped it detect “a wide range” of terrorist organizations based on their behavior alone, chiefly by identifying content that matches copies of known bad material and by assessing posts to determine whether they’re likely to violate policies. The company initially targeted global groups like ISIS and al-Qaeda, leading to the removal of more than 26 million pieces of content in the last two years (99% of which was proactively identified). But starting around mid-2018, it broadened the use of its AI and human moderation techniques to “a wider range” of dangerous organizations, resulting in the banning of 200 white supremacist organizations from Facebook and the removal of content praising or supporting those organizations.
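Facebook hasn’t published the internals of that matching pipeline, but the general technique of checking uploads against fingerprints of previously removed media is well established. Below is a minimal sketch in Python; the exact-match SHA-256 store is an assumption made for brevity, whereas production systems rely on perceptual hashes (such as Facebook’s open-sourced PDQ) so that re-encoded or slightly altered copies still match.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint; a perceptual hash would be used in practice
    # so near-duplicates (recompressed or resized copies) still hit the list.
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist seeded with fingerprints of previously removed media.
known_bad = {fingerprint(b"<bytes of a previously removed image>")}

def matches_known_content(upload: bytes) -> bool:
    """Return True if the upload's fingerprint matches known violating material."""
    return fingerprint(upload) in known_bad

print(matches_known_content(b"<bytes of a previously removed image>"))  # True
print(matches_known_content(b"<bytes of a brand-new photo>"))           # False
```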
Facebook says it’ll work with government and law enforcement officials in the U.S. and U.K. to train its computer vision algorithms on footage from firearms training programs, with the goal of improving their sensitivity to real-world, first-person footage of violent events like the March mass shooting in Christchurch, New Zealand. (The Financial Times reports that Facebook will provide body cameras to the U.K.’s Metropolitan Police at no cost and in exchange will have access to video footage shared with the U.K. Home Office.) The attack and its aftermath “strongly influenced” updates to Facebook’s policies and their enforcement, the company says.
Separately, the company says it’s expanded a program that connects people who search for terms associated with white supremacy on Facebook to resources focused on helping them leave hate groups, like Life After Hate. In Australia and Indonesia, it’s partnered with Moonshot CVE to measure the impact of those efforts, and it’s begun directing searchers in those countries to EXIT Australia and ruangobrol.id, respectively.
Lastly, Facebook says it’s developed a definition of “terrorist organizations” to guide its enforcement decisions, one it says “more clearly” delineates attempts at violence, particularly those directed toward civilians, undertaken with the intent to coerce and intimidate. The company has also restructured its counterterrorism team to address the rise in white supremacist violence and in attackers without clear ties to specific terrorist organizations.
“We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts and we are committed to advancing our work and sharing our progress,” wrote Facebook. “We are committed to being transparent about our efforts to combat hate … To date, the data we’ve provided about our efforts to combat terrorism has addressed our efforts against Al Qaeda, ISIS, and their affiliates.”
Facebook CEO Mark Zuckerberg often asserts that AI, like the company’s recently open-sourced image and video algorithms, will substantially cut down on the amount of abuse perpetrated by the millions of ill-meaning users on its platforms. A concrete example in production is a “nearest neighbor” algorithm that’s 8.5 times faster at spotting illicit photos than the previous version. It complements a system that learns a deep graph embedding of all the nodes in Facebook’s Graph (the collection of data, stories, ads, and photos on the network) to find abusive accounts and pages that may be related to each other.
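The post doesn’t name the library behind that nearest-neighbor search, but Facebook’s open-source FAISS project is the standard tool for this kind of embedding lookup, so the sketch below uses it; the 128-dimensional random vectors are stand-ins for embeddings a trained image model would actually produce.

```python
# Embedding-based nearest-neighbor lookup with FAISS (pip install faiss-cpu).
# The random vectors are placeholders for real image embeddings.
import faiss
import numpy as np

dim = 128                          # embedding size (assumed)
rng = np.random.default_rng(0)

# Catalog of embeddings for previously identified illicit photos.
catalog = rng.standard_normal((10_000, dim)).astype("float32")
index = faiss.IndexFlatL2(dim)     # exact L2 search; IVF/HNSW variants trade accuracy for speed
index.add(catalog)

# Embedding of a newly uploaded photo: find its 5 closest known neighbors.
query = rng.standard_normal((1, dim)).astype("float32")
distances, neighbors = index.search(query, 5)
print(neighbors[0], distances[0])
```

An upload whose nearest catalog neighbor falls under some distance threshold can then be routed to removal or human review, which is far cheaper than running a full classifier over every upload.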
In Facebook’s Community Standards Enforcement Report published in May, the company reported that AI and machine learning helped cut down on abusive posts in six of the nine content categories. Concretely, Facebook said it proactively detected 96.8% of the content it took action on before a human spotted it (compared with 96.2% in Q4 2018), and for hate speech, it said it now identifies 65% of the more than four million hate speech posts removed from Facebook each quarter, up from 24% just over a year ago and 59% in Q4 2018.
Those and other algorithmic improvements contributed to a decrease in the overall amount of illicit content viewed on Facebook, according to the company. It estimated in the report that for every 10,000 times people viewed content on its network, only 11 to 14 views contained adult nudity and sexual activity, while 25 contained violence. With respect to terrorism, child nudity, and sexual exploitation, those numbers were far lower: Facebook said that in Q1 2019, for every 10,000 content views on the social network, fewer than three contained material that violated each of those policies.
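Those figures mix two distinct metrics: the proactive rate (the share of actioned content that Facebook’s systems flagged before any user reported it) and prevalence (how many of every 10,000 content views included violating material). A quick sketch with invented counts shows how each is derived:

```python
# Illustrative only: the counts below are made up to show how the two
# reported metrics are computed, not taken from Facebook's actual data.
actioned_total = 4_000_000          # pieces of content acted on in a quarter
flagged_before_report = 3_872_000   # subset caught by automated systems first
proactive_rate = flagged_before_report / actioned_total
print(f"Proactive rate: {proactive_rate:.1%}")                   # 96.8%

sampled_views = 1_000_000           # content views sampled for measurement
violating_views = 1_200             # sampled views containing violating content
prevalence_per_10k = violating_views / sampled_views * 10_000
print(f"Prevalence: {prevalence_per_10k:.0f} per 10,000 views")  # 12
```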