Though everyone is focusing on the “hate speech” violations that inspired Apple, Facebook, Spotify, and YouTube to block Infowars this week, Facebook still has a much bigger problem to fix: fake news. Despite CEO Mark Zuckerberg’s claim that “on Facebook, more than 99 percent of what people see is authentic,” it’s becoming increasingly obvious that the company is either inept at removing garbage from its platforms or simply uninterested in doing so, even when users specifically bring that garbage to its attention.

Exhibit A is an example from the past week. On July 30, just four days after a widely cited article noted that 96-year-old actress Betty White had “no plans to retire,” Facebook spread some tragic news: The beloved celebrity had just passed away. Except she hadn’t actually died — a fact that was easy to confirm that day, and would have also been verifiable the next day, assuming Facebook had waited that long to check.

This particular piece of fake news ran in an advertisement for a Facebook page called Anime Art, which was circulating the lie for reasons unknown. I reported the ad using Facebook’s Report tool, flagging it under the “false news story” category, and gave Facebook the opportunity to verify the information. From my perspective, it would have been OK if it took a few hours to verify the report. So long as it stopped publishing the false story fairly quickly, Facebook would have been doing its job.

Instead, within a suspiciously short period of time, Facebook said that it had reviewed the report and had a message for me: The false report of Betty White’s death “does not go against our Ad Policies,” but instead was being characterized as an ad “of the kind you may not want to see.” The message attempted to reassure me by robotically promising that “We will not show you this ad in future,” and encouraged me to “continue to ‘Report’ Ads” that I think are “misleading, offensive or inappropriate.”

You probably won’t be surprised to hear that it’s indeed a violation of Facebook’s Ad Policies to use an ad to falsely claim the death of a celebrity. Number 11 on Facebook’s list of Prohibited Content notes that “Ads must not contain shocking, sensational, disrespectful or excessively violent content.” And if that one’s on the edge, number 13 unambiguously says that “Ads … must not contain deceptive, false, or misleading content, including deceptive claims, offers, or methods.”

That’s two arguable violations of Facebook’s Ad Policies before you even start asking tougher questions, like: How could Facebook have manually approved a death-of-Betty-White ad, given that it claims to review all ads “to make sure they meet our Advertising Policies”? Or why did an anime page’s advertisement about a deceased actress pass Facebook’s Relevance test, where “All ad components … must be relevant and appropriate to the product or service being offered and the audience viewing the ad”?

Why would Facebook initially authorize, and then keep running, an ad that violates multiple Facebook Ad Policies? My short answer would be that it doesn’t care, at least not in the way that normal humans care about things like life and death or truth and fiction. Any company netting $10 billion to $15 billion a year can afford to hire and train human beings to police the ads it runs. If a single human screener had looked at either the Betty White ad or the fake news report, we wouldn’t be talking about this today.

If you think I’m being unfair to Facebook because it has an incredible amount of content to sift through, realize that the company has had over a decade to get this right. Instead, this situation has all the signs of the company’s old “move fast and break things” motto — it built a dangerous machine to monetize its content, and is applying only as much grease to the squeaking cogs as it needs to keep that machine moving.

I’m cynical about Facebook’s motives here because I’ve seen too much evidence that the company knowingly manipulates and pumps trash into users’ feeds to boost ad revenues — and this extends to Instagram, too. In the last week alone, I’ve personally witnessed two examples of how badly Instagram’s content filtering systems have been neglected.

Today, Instagram contacted me to share that it just took down an account that I reported for spam 17 weeks ago. “We’ve removed it from Instagram because it violated our Community Guidelines,” said the message. “Your feedback is important in helping us keep the community safe.” Note to spammers: Instagram let someone get away with spamming people for 17 weeks before taking action, so whatever you do, don’t target Instagram!

Lest you think Instagram always takes four full months to take action on a report, I can assure you that’s not the norm. If you report an image or account for pornographic content, you can be pretty sure someone — or something — at Instagram will take a look very quickly. This weekend, for instance, I reported an outright pornographic ad that was in clear violation of Instagram’s policies. Within eight hours, I received confirmation that Instagram would do nothing to remove the post — with thanks for making the site “a safe and welcoming place for everyone.”

All of my examples have one thing in common: A user provided Facebook with easy opportunities to enforce its stated policies, but it ignored those policies, most likely in pursuit of more advertising revenue or larger user statistics. Why else would a company repeatedly ignore user reports of clear policy violations?

Despite what Facebook might tell Congress or the public, these fake news, spam, and pornography issues aren’t old problems — they’re still happening right now. Its screening process doesn’t work, its ad “transparency” tools are mediocre, and despite its incredible size, it apparently still needs Apple to go first before it takes action against bad actors.

The only thing worse than not having a shield at all is having one that provides a false sense of security, and that’s why Facebook’s not-really-solutions are dangerous: They continue to expose users to bad content while offering the fiction of safety. It is now absolutely clear that Facebook isn’t capable of protecting users from violations of its own policies, and it might well be time for regulators to step in and demand changes to the way this company does business.
