A recent study found that fake news spreads faster and further than real news on Twitter. This means false news reaches more people than the truth. In fact, as the study puts it, “the top 1 percent of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people.”
This was a careful, well-designed study using data from 2006 to 2017, but it has one flaw: it assumes all accounts on Twitter are real people. As a Twitter user, I can safely say the platform has a spam account problem. People buy engagement, inflating their follower counts with bot accounts created by the million to manufacture the illusion of influence. Many of these nonperson accounts are also programmed to engage "naturally," which means even users who never buy followers end up with fake ones. Until recently, Twitter seemed to do very little about this, and despite the latest crackdowns, bots still cause problems. With all of that said, perhaps the best way to slow the spread of fake news would be to revisit the way Twitter authenticates user accounts.
A closer look at the bot problem
I investigated the bot problem using Twitter Audit, a service that scans a random sample of up to 5,000 of a user's followers to identify spammers and bots. Twitter Audit looks at characteristics including the follower-to-following ratio, the share of a feed made up of retweets, whether the account has a profile photo, the account's age relative to the number of tweets it has sent, and a few other metrics to sniff out fake followers. The app will be wrong some of the time, but it's a decent barometer.
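Twitter Audit doesn't publish its exact scoring, but a toy heuristic built from the signals listed above might look something like the sketch below. Everything here is my own guesswork: the thresholds, the signal weights, and the `Account` fields are illustrative assumptions, not the service's real criteria.

```python
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    following: int
    tweets: int      # total tweets, including retweets
    retweets: int    # how many of those tweets are retweets
    has_photo: bool
    age_days: int

def bot_signals(a: Account) -> int:
    """Count crude spam signals (0-4). Thresholds are illustrative guesses."""
    signals = 0
    # Following far more accounts than follow back is a classic spam pattern.
    if a.following > 0 and a.followers / a.following < 0.1:
        signals += 1
    # A feed that is almost entirely retweets suggests automated amplification.
    if a.tweets > 0 and a.retweets / a.tweets > 0.9:
        signals += 1
    # No profile photo (the old default "egg" avatar).
    if not a.has_photo:
        signals += 1
    # Tweet volume wildly out of line with the account's age.
    tweets_per_day = a.tweets / max(a.age_days, 1)
    if tweets_per_day < 0.01 or tweets_per_day > 100:
        signals += 1
    return signals

# A ten-day-old account with no photo, 2,000 follows, and a feed of retweets.
likely_bot = Account(followers=3, following=2000, tweets=50,
                     retweets=49, has_photo=False, age_days=10)
print(bot_signals(likely_bot))  # 3 of 4 signals fire
```

The point of the sketch is just that no single signal is damning on its own; it's the combination that flags an account, which is also why any tool like this will misjudge some real users.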
These are examples from evaluating the accounts of Justin Bieber, Donald J. Trump, Kim Kardashian, and CNN. Trump had both the lowest percentage of fake followers (25 percent) and the lowest number (12,308,325), but his account still had its fair share of bot followers.
If you think these estimates appear high, they could be — but take a look at this screenshot of CNN’s six most recent followers at the time of writing. I probably could have come up with similar screenshots of the most recent followers for any of the top 1,000 accounts on Twitter.
In March 2017, a year before Twitter's crackdown on fake accounts, a study by the University of Southern California and Indiana University found that up to 15 percent of Twitter accounts were bots, not people. Yes, since that study, Twitter has pulled a significant number of accounts. But the four accounts above were measured after Twitter's anti-spam update, and their estimates sum to 32,738,074 spam followers. Overlap between follower lists means the unique count is probably somewhat lower, though bots that follow none of these four accounts would push it back up. Against 330 million active users, that figure works out to roughly 9.9 percent of Twitter being bots, assuming this sampling and analysis are correct.
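The back-of-the-envelope math here is easy to check:

```python
# Reproducing the rough percentage from the text.
spam_followers = 32_738_074   # summed Twitter Audit estimates across the four accounts
active_users = 330_000_000    # Twitter's reported active-user count

share = spam_followers / active_users
print(f"{share:.1%}")  # prints 9.9%
```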
Since these estimates could be high, I used another tool, one I'm less familiar with, called StatusPeople. The website says its tool takes a sample of up to 1,000 followers and assesses them "against a number of simple spam criteria." So it uses a smaller sample and is less explicit than Twitter Audit about how it defines a bot. One useful feature, though, is that it estimates inactive followers separately rather than lumping them in with spam, which partly explains the data discrepancy between Twitter Audit and StatusPeople.
While all of StatusPeople's fake-account numbers are lower than Twitter Audit's, each account still shows around 10 percent fake followers. I feel safe saying at least 10 percent of accounts on Twitter are spammers or bots of some kind.
Could expanded verification be the answer?
The problem with fake accounts is likely why fake news appears to spread faster on Twitter than it does on any other social media outlet. It’s also probably why Twitter CEO Jack Dorsey has started talking about expanding verification.
I’ve long said Twitter should not just open up verification to everyone, but encourage everyone to become verified. Doing so would help stop spam and sock puppet accounts.
I got my verification back when the public could request it. I had to jump through a few hoops, filling out a form explaining why I deserved verification. I had to provide articles published in large, credibility-imparting media outlets that mention me by name to prove I was "noteworthy." I also had to submit photos of my ID and other documents to verify that I was, indeed, myself.
The verification of identity should be Twitter’s main concern in combating spam. Asking users to explain why they are of public interest has gotten Twitter in trouble already. In fact, Twitter paused verification after verifying the known piece of crap Jason Kessler. I’m no fan of anything Kessler represents, but Twitter set itself up for problems by making verification into an exclusive club.
Twitter has a right to verify users as it sees fit; however, I think the company would be better off opening verification to everyone. If a verified account meant only that the person who owns the account is who they say they are, we could see a significant drop in fake accounts. As it stands, the verification program reads almost as an endorsement of the account. Twitter shouldn't have to appear to endorse Jason Kessler just to say, "Yeah, the account is really that asshat."
Sure, open verification would cause problems for real public figures. For example, Dwayne Johnson isn't the only person with that first and last name. I get it: the only other Mason Pelt I know of is a convicted felon in Florida, so I can see how a blue-checkmark account with the same first and last name, using the other person's photo, could be a problem and could cause media confusion.
Still, with some policing on the part of Twitter and a slight degree of responsibility on the part of news outlets, open verification could be an effective solution to stop the wildfire spread of fake news on this popular social media platform.