NortonLifeLock Research Group, the R&D division of antivirus vendor NortonLifeLock, today released a browser extension called BotSight that’s designed to detect potential Twitter bots in real time. The team behind it says BotSight is intended to highlight the prevalence of bots and disinformation campaigns within users’ feeds, as the spread of pandemic-related misinformation reaches a veritable fever pitch.
Recent analyses suggest that certain influential social media accounts are amplifying false cures and conspiracy theories. One French account with over a million followers shared an article implying COVID-19 was artificially created, while a video describing the coronavirus as a “man-made poison” racked up more than 3 million views on YouTube and over 10 million likes, shares, and comments on Facebook. At least a portion of this disinformation is attributable to bots, which amplify posts to validate trends or latch onto feeds to sow discord. And it’s these bots that BotSight aims to spotlight — NortonLifeLock Research Group says it found the percentage of bot-originated tweets was as high as 20% when viewing trending topics like “#covid19”.
BotSight, which is available as an extension for Chrome, Brave, Firefox, and soon Edge, annotates each Twitter handle with a bot probability score directly within the Twitter timeline, search, profile, follower, and individual tweet views. In addition to annotating profiles, the tool highlights any handles mentioned in tweets’ bodies, as well as in retweets, quoted tweets, follower lists, accounts users follow, and account descriptions.
Importantly, BotSight won’t interfere with — or replace — Twitter’s own anti-misinformation efforts, the team says. These include labels and warning messages on tweets with disputed or misleading information about COVID-19.
Powering BotSight is an AI model that detects Twitter bots with a high degree of accuracy, achieving an area under the curve (AUC), a common indicator of model quality, of 0.967 on research data sets. (A perfect AUC is 1.) In its predictions, it considers over 20 factors, including IP-based correlation (accounts that are closely linked geographically), temporal-based correlation (closely linked in time), signs of automation in usernames and handles (and other metadata), social subgraphs, content similarity, Twitter verification status, the rate at which the account is acquiring followers, and account description.
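For readers unfamiliar with the metric, AUC can be read as the probability that a classifier ranks a randomly chosen bot above a randomly chosen human. The sketch below shows that calculation on made-up labels and scores; none of the numbers come from BotSight's data.

```python
# Illustrative AUC computation on toy data (not NortonLifeLock's data).
# Label 1 = bot, label 0 = human; scores are a model's bot probabilities.

def auc(labels, scores):
    """Probability that a randomly chosen positive (bot) is scored
    higher than a randomly chosen negative (human), counting ties
    as half a win -- the standard rank-based definition of AUC."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # 11 of 12 bot/human pairs ranked correctly
```

On this toy data one bot (scored 0.4) ranks below one human (scored 0.5), so the AUC is 11/12; a model that never makes such an inversion scores a perfect 1.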
Bots generally exhibit a regularity in their posting habits that ordinary users don’t, according to NortonLifeLock, and they are typically short-lived. They also tend to have names containing many numbers and random characters, and they form cliques within which they post identical content.
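Those characteristics map naturally onto simple features. The heuristics below are purely illustrative, written in the spirit of the signals the article describes; they are hypothetical, not BotSight's actual feature set.

```python
import statistics

# Hypothetical features echoing the signals described in the article;
# these are NOT BotSight's real features, just a sketch of the idea.

def digit_fraction(handle: str) -> float:
    """Share of characters in a handle that are digits; bot handles
    often carry long auto-generated numeric suffixes."""
    return sum(c.isdigit() for c in handle) / len(handle)

def interval_spread(post_times: list) -> float:
    """Standard deviation of the gaps between consecutive post
    timestamps (seconds). Values near zero suggest clockwork,
    automated posting rather than human rhythms."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return statistics.stdev(gaps)

print(digit_fraction("jane_doe"))      # no digits -> 0.0
print(digit_fraction("user83921047"))  # 8 digits out of 12 chars
print(interval_spread([0, 600, 1200, 1800]))  # perfectly regular -> 0.0
```

A real detector would combine many such signals in a trained model rather than thresholding any one of them, since plenty of legitimate accounts have numeric handles or scheduled posts.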
With all this in mind, the BotSight team trained the model on a 4TB corpus of historical tweets. A review of the data set revealed that about 5% of accounts overall were bots, but that between 6% and 18% of accounts tweeting about the pandemic were bots depending on the time period sampled. A separate, random sample indicated about 4% to 8% bot activity by volume, showing that the bots were strategic about their behavior, favoring current events to maximize impact.
Ahead of BotSight’s debut, the team says it spent six months scrolling through Twitter with the tool to test, improve, and validate the model. To date, BotSight’s users have analyzed over 100,000 Twitter accounts.
“There is more awareness around disinformation than ever before, yet there is still little understanding of just how much disinformation there truly is,” wrote the BotSight team in a blog post. “[The] numbers differ depending on language, topic, and time of day. That’s precisely why seeing it right in your Twitter feed itself is so helpful.”
The BotSight team plans to launch a smartphone app in the near future, which will join the many other Twitter bot-identifying tools that have been released so far. Some of the most popular include the Indiana University Observatory on Social Media’s Botometer; SparkToro’s Fake Followers Audit tool; Botcheck.me; and Bot Sentinel.