The day after a New York Times investigation called into question how seriously Facebook dealt with Russian interference on its platform, the company released its biannual Community Standards Enforcement Report, detailing the progress it is making in taking down fake accounts, hate speech, spam, and other content that violates its policies.
Facebook said that it took down 1.5 billion fake accounts in the past six months, up from 1.3 billion in the six months prior. Of those, Facebook said 99.6 percent were proactively identified and taken down before users reported them.
Facebook said it still estimates that fake accounts represented about 3 to 4 percent of monthly active users in Q2 and Q3 2018, a figure that has held roughly steady for several quarters.
Facebook released its first Community Standards Enforcement report in May of this year, and with this new report is disclosing for the first time how much content it has "taken action on" for violating its standards against bullying and harassment, as well as child nudity and the sexual exploitation of children. In Q3 2018, Facebook said it "took action" on 2.1 million pieces of content that violated its bullying and harassment policies. Taking action means Facebook may have removed the piece of content or disabled the account altogether, though the company doesn't break down which actions it took on how much of that content.
On a conference call with reporters, Facebook executives were asked why the amount of bullying-related content Facebook took down last quarter was so low. Facebook's VP of product Guy Rosen responded that the company often depends on user reports to understand whether bullying or harassment content crosses the line. He gave the example of someone posting a photo of another person with the caption "you're crazy": it could be a joke among friends, or it could be a cutting remark.
In Q3, Facebook also removed 8.7 million pieces of content that violated its sexual exploitation and child nudity policies.
Another metric to note: Facebook said that it took down 5.4 million pieces of hate speech during Q2 and Q3 2018, about 52 percent of which Facebook flagged before users reported it. CEO Mark Zuckerberg has repeatedly said that it has been harder for Facebook to develop AI systems that detect hate speech than, say, nudity, so hate speech detection remains an area where the company needs to show better progress.
Additionally, Facebook said that it took action on 66 million pieces of content that violated its adult nudity and sexual activity policies over the past six months.
Zuckerberg and policy lead Monika Bickert held a conference call with reporters this afternoon to discuss the report, which they said Facebook will now release quarterly, like its financial reports.