Facebook CEO Mark Zuckerberg didn’t concede much during his two hearings in front of the U.S. Congress this week. The most frequent solution he cited to the various problems raised was using AI, but he admitted that was still five to 10 years away. I believe him. The BFF spam that you may have seen in your News Feed, not coincidentally also this week, is all the proof anyone needs that Facebook’s automatic systems are years away from dealing with any serious issues.
Side note: If you happened to believe this latest scam, please go read “Does Facebook’s Green ‘BFF’ Prove Your Account Is Secure?” on Snopes first. I’ll wait.
I check Facebook multiple times throughout the day. This particular scam isn’t new, but it took off this week precisely because Mark Zuckerberg’s hearings were all over the news. Variations of this scam appeared at the top of my News Feed more times than I can count, all from Pages that I do not follow. Friends were blindly commenting, and my News Feed was putting the posts at the top, because clearly this was more important than anything else on Facebook. So much for more content from my friends and family.
The spam usually says something along the lines of “Mark Zuckerberg, CEO of Facebook, invented the word BFF. To make sure your account is safe on Facebook, type BFF in a comment. If it appears green, your account is protected. If it does not appear in green, change your password immediately because it may be hacked.”
The crazy thing is this type of spam doesn’t require cutting-edge AI to address. Yes, these posts are sometimes shared as media (so the News Feed ranks them higher), which makes them harder to analyze en masse than if they were all simply text posts. They also use different sources — I can distinctly remember five different images of Zuckerberg, overlaid with or accompanied by the same text — so Facebook can’t simply blacklist one image across the site once it has been marked as spam. Oh, and they don’t link to anything, so you can’t block by URL either.
But they all have one very obvious thing in common: All the comments are just three letters. Sometimes BFF, sometimes Bff, and sometimes bff.
This is incredibly easy to weed out. And you don’t have to make your blacklisting rule specific to just this BFF spam. Anytime a post receives hundreds (let alone thousands) of comments saying the exact same thing, Facebook’s systems should flag it. And if that same exact comment is appearing over and over on a completely different post, flag that too.
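That rule can be sketched in a few lines of Python. To be clear, this is a minimal illustration of the heuristic described above, not Facebook’s actual pipeline: the function name `flag_spam_posts`, the thresholds, and the input shape (a mapping from post ID to its list of comment strings) are all assumptions for the sake of example.

```python
from collections import Counter

def flag_spam_posts(posts, min_comments=100, dominance=0.9):
    """Flag posts where a single comment string dominates the thread.

    `posts` maps a post ID to its list of comment strings. A post is
    flagged when it has at least `min_comments` comments and one
    normalized comment (case-insensitive, whitespace-stripped) accounts
    for at least `dominance` of them -- e.g. hundreds of
    "BFF" / "Bff" / "bff" replies. Thresholds are illustrative.
    """
    flagged = []
    for post_id, comments in posts.items():
        if len(comments) < min_comments:
            continue
        # Normalize so "BFF", "Bff", and "bff" count as the same comment.
        counts = Counter(c.strip().lower() for c in comments)
        top_comment, top_count = counts.most_common(1)[0]
        if top_count / len(comments) >= dominance:
            flagged.append((post_id, top_comment))
    return flagged
```

The same counting step would also support the second rule — noticing when an identical comment recurs across many unrelated posts — by aggregating the normalized comment counts over all posts instead of per post.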
Sure, Facebook would still need someone to manually go over the flagged posts in case there are any false positives. But this would still be significantly more effective than relying on Facebook’s users to report the posts first. I marked every post I saw this week as spam, but there is far worse and more nuanced content on Facebook that users should invest their own time into reporting.
Which brings me back to Zuckerberg’s hearings this week. It’s absolutely no surprise to me that he doesn’t expect AI to be able to weed out content more problematic than this for another five to 10 years. If Facebook’s systems can’t figure out basic spam now, they’re definitely years away from stopping fake news, stolen identities, and illegal activity.
“You commented on this post this week. We removed this post because it is false. Mark Zuckerberg did not invent the term BFF. Commenting on a Facebook post will not inform you whether your account is protected. To check your Facebook security settings, go here.”
This message should appear at the top of every affected user’s News Feed, accompanied by a screenshot of the scam in question. The same type of message could be shown to those who respond to the scams about Facebook suddenly becoming a paid service, which seem to show up every few months without fail.
Facebook could start with all the misleading nonsense that circulates on its social network about Facebook, and go from there.
ProBeat is a column in which Emil rants about whatever crosses him that week.