As Facebook continues to grapple with spam, hate speech, and other undesirable content, the company is shedding more light on just how much content it is taking down or flagging each day.

Facebook today published its first-ever Community Standards Enforcement Report, detailing what kind of action it took on content displaying graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, and spam. Among the most noteworthy numbers: Facebook said that it took down 583 million fake accounts during Q1 2018, down from 694 million in Q4 2017. That doesn’t include what Facebook says are “millions” of fake accounts that the company catches before they can finish registering.

The report comes just a few weeks after Facebook published for the first time detailed internal guidelines for how it enforces content takedowns.

The numbers give users a better idea of the sheer volume of fake accounts Facebook is dealing with. The company has pledged in recent months to use facial recognition technology — which it also uses to suggest which Facebook friends to tag in photos — to catch fake accounts that might be using another person’s photo as their profile picture. But a recent report from the Washington Post found that Facebook’s facial recognition technology may be of limited use here, as the tool doesn’t yet scan a photo against all of the images posted by all 2.2 billion of the site’s users to search for impostors.

Facebook also gave a breakdown of how much other undesirable content it removed during Q1 2018, as well as how much of it was flagged by its systems or reported by users:

  • 21 million pieces of content depicting inappropriate adult nudity and sexual activity were taken down, 96 percent of which were first flagged by Facebook’s tools.
  • 3.5 million pieces of violent content were taken down or slapped with a warning label, 86 percent of which were flagged by Facebook’s tools.
  • 2.5 million pieces of hate speech were taken down, 38 percent of which were flagged by Facebook’s tools.

The numbers show that Facebook is still predominantly relying on other people to catch hate speech — which CEO Mark Zuckerberg has spoken about before, saying that it’s much harder to build an AI system that can determine what hate speech is than to build a system that can detect a nipple. Facebook defines hate speech as a “direct attack on people based on protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease.”

The problem is that, as Facebook’s VP of product management Guy Rosen wrote in the blog post announcing today’s report, AI systems are still years away from becoming effective enough to be relied upon to catch most bad content.

But hate speech is a problem for Facebook today, as the company’s struggle to stem the flow of fake news and content meant to encourage violence against Muslims in Myanmar has shown. And its failure to properly catch hate speech could push users off the platform before an AI solution is ready.

Facebook says it will continue to publish updated numbers every six months. The report released today spans October 2017 to March 2018, with a breakdown comparing how much content the company took action on in each category in Q4 2017 versus Q1 2018.