Facebook today published its latest Community Standards Enforcement Report, the first of which it released in April 2018. As in previous editions, the Menlo Park company tracked metrics across 10 policies in the second and third quarters of the year, focusing on the prevalence of prohibited content that made its way onto Facebook and the volume of that content it successfully removed.
For the first time, Facebook detailed how it’s taking action on suicide and self-injury content and provided prevalence metrics regarding regulated goods content — i.e., illicit sales of firearms and drugs. Additionally, it shared data on how it’s enforcing its policies on Instagram, specifically in the areas of child nudity, child sexual exploitation, regulated goods, suicide and self-injury, and terrorist propaganda.
“We’ll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate,” wrote Facebook VP of integrity Guy Rosen in a blog post.
On the subject of suicide and self-injury content, Facebook says it made technological improvements that helped it find and remove a higher volume of violating content. As a result, the network took down 2 million pieces of suicide and self-injury content in Q2 2019, of which 96.1% was detected proactively. In Q3, that number hit 2.5 million pieces, of which 97.3% was detected proactively. On Instagram, 835,000 pieces of content were removed in Q2 2019 (77.8% detected proactively) and about 845,000 in Q3 2019 (79.1% detected proactively).
Facebook claims that for every 10,000 views on Facebook or Instagram in Q3 2019, no more than four contained content that violated its suicide and self-injury or regulated goods policies.
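Facebook doesn't spell out its sampling methodology in the report, but the prevalence figure is effectively a rate of violating views per 10,000 sampled content views. A minimal sketch of that arithmetic, with illustrative function names:

```python
def prevalence_per_10k(violating_views: int, sampled_views: int) -> float:
    """Estimated violating views per 10,000 content views in a sample."""
    return violating_views / sampled_views * 10_000

# Illustrative only: 4 violating views observed in a 10,000-view sample
# corresponds to the "no more than four in 10,000" (0.04%) figure above.
print(prevalence_per_10k(4, 10_000))  # 4.0
```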
With respect to terrorist propaganda, Facebook expanded this edition of its community report to include actions taken against all terrorist organizations, not just al Qaeda, ISIS, and their affiliates. The rate at which it proactively detected and removed this content was 98.5% on Facebook and 92.2% on Instagram, a discrepancy it attributed to the “changing tactics” of known bad actors.
On a related note, thanks to improvements in its media-matching systems, Facebook says it removed over 4.5 million pieces of content related to the Christchurch terrorist attack in a six-month period (between March 15 and September). It identified 97% of these proactively, before any user reported them.
Separately, Facebook said improvements to its internal child nudity and exploitation database enabled it to better detect and remove instances of content shared both on Facebook and Instagram. In Q3 2019, it removed about 11.6 million pieces of content, up from Q1 2019 when it removed about 5.8 million. And over the past four quarters, it proactively detected over 99% of child nudity and exploitation content it removed. On Instagram in Q2 2019, it removed about 512,000 pieces of content, of which it detected 92.5% proactively. And in Q3, it removed 754,000 pieces of content from Instagram, of which it detected 94.6% proactively.
Facebook says that “continued investments” in its detection systems and “advancements” in its enforcement techniques allowed it to build on the progress to date where regulated goods are concerned. In Q3 2019, it removed roughly 4.4 million pieces of drug sale content, of which 97.6% was detected proactively — an increase from Q1 2019 when it removed about 841,000 pieces of drug sale content (84.4% of which it detected proactively). Also in Q3 2019, it removed about 2.3 million pieces of firearm sales content, of which it detected 93.8% proactively — an increase from Q1 2019 when it removed about 609,000 pieces of firearm sale content (of which 69.9% was detected proactively).
On Instagram in Q3 2019, Facebook says it removed about 1.5 million pieces of drug sale content, 95.3% of which it detected proactively. In the same quarter, it removed about 58,600 pieces of firearm sales content, of which 91.3% was detected proactively.
Facebook also said it made gains in hate speech content detection and removal, thanks in part to improved text and image matching techniques (which identify images and identical strings of text that have already been removed as hate speech) and machine-learning classifiers trained on thousands to millions of data samples. Starting in Q2 2019, it began removing some posts automatically when content was identical or near-identical to text or images previously removed by its review team or where it “very closely matched” policy-violating attacks. Facebook notes that it only did this in select instances, and that in all other cases, proactively detected content was sent to its review teams to make a final determination.
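Facebook hasn't published the internals of its matching systems, but the basic idea of checking new posts against a bank of previously removed content can be sketched as follows. The hashing step is a simplification (real systems use perceptual hashes for media and learned embeddings for near-duplicate text), and every name here is illustrative:

```python
import hashlib

# Fingerprints of text already removed as hate speech by human reviewers.
removed_fingerprints = set()

def fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash, so trivially altered copies collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def record_removal(text: str) -> None:
    """Add a reviewer-removed post to the match bank."""
    removed_fingerprints.add(fingerprint(text))

def matches_removed_content(text: str) -> bool:
    """True if a new post is identical (after normalization) to removed content."""
    return fingerprint(text) in removed_fingerprints
```

Exact-match fingerprinting only catches identical or trivially altered copies, which is why the matching systems are paired with the machine-learning classifiers described above.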
Facebook says that with the continued evolution of its detection systems, its proactive detection rate for hate speech has climbed to 80%, from 68% in its last report, coinciding with an increase in the volume of content it finds and removes across 40 languages, from 4.1 million pieces in Q1 2019 to 7 million in Q3 2019.
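The proactive detection rate Facebook reports is the share of actioned content it found before any user reported it. A quick sanity check of the hate speech figure, using illustrative numbers consistent with the reported 80% rate:

```python
def proactive_rate(found_before_report: int, total_actioned: int) -> float:
    """Percentage of actioned content found by Facebook before a user reported it."""
    return found_before_report / total_actioned * 100

# Illustrative: an 80% rate on 7 million actioned pieces implies roughly
# 5.6 million found proactively (the 5.6M figure is derived, not reported).
print(proactive_rate(5_600_000, 7_000_000))  # 80.0
```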
“Self-supervision [machine learning techniques are] useful for improving language models and enabling us to build classifiers that understand concepts across multiple languages at once,” wrote CTO Mike Schroepfer in a blog post. “Our XLM method … enables us to better understand concepts like hate speech across languages … [and we use] Whole Post Integrity Embeddings (WPIE), a pretrained universal representation of content for integrity problems [that] works by understanding content across modalities, violation types, and even time. Our latest version is trained on more violations, with greatly increased training data.”
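Facebook has not released WPIE or its production XLM models, but the idea Schroepfer describes (a shared representation so a single classifier can cover many languages and content types) can be sketched conceptually. The toy encoder below is purely a placeholder for a learned cross-lingual model; nothing here reflects Facebook's actual implementation:

```python
import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder for a cross-lingual encoder (an XLM-style model in practice):
    hashes character trigrams into one fixed-size vector space for any language."""
    vec = np.zeros(dim)
    for i in range(max(len(text) - 2, 0)):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def violation_score(text: str, weights: np.ndarray, bias: float = 0.0) -> float:
    """One linear head over the shared embedding space scores posts in any
    language; in production this would be a learned classifier, not a toy."""
    return float(toy_embed(text, weights.shape[0]) @ weights + bias)
```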
Yet another domain where Facebook’s AI is making a difference is fake and spammy accounts. At the company’s annual F8 developer conference in San Francisco, Schroepfer said that in the course of a single quarter, Facebook takes down over a billion spammy accounts, over 700 million fake accounts, and tens of millions of pieces of content containing nudity and violence. AI is a top source of reports across all of those categories, he said.
To this end, Facebook said it removed more than 3.2 billion fake accounts between April and September, compared to more than 1.5 billion during the same period last year.