
COVID-19 disinformation has exploded in recent weeks, with campaigns using a combination of bots and humans to sow fear and confusion at a time when verifiable information has become a matter of life or death.

According to a new report from Blackbird.AI, a wide range of actors are leveraging confusion around the coronavirus to dupe people into amplifying false and misleading information. With COVID-19’s almost unprecedented impact around the globe, virtually every type of player in the disinformation wars, from nations to private actors, is rushing into the breach.

“If it’s favorable for creating societal chaos, for sowing some sort of discord, then they all kind of jump on,” said Blackbird.AI CEO Wasim Khaled. “COVID-19 is the Olympics of disinformation. Every predator is in for this event.”

In the past few weeks, many of the leading online platforms have attempted to clamp down on the information warfare their services have enabled. To direct users toward helpful sites, many of them now place links to reputable scientific or government sources at the top of feeds or in search results.



And they’ve implemented other tactics in an attempt to turn the tide. Pinterest has been highlighting verified health advice, while Facebook gave unlimited free advertising to the World Health Organization. Meanwhile, Google has announced it will invest $6.5 million to fight misinformation.

Still, voice assistants like Alexa and Google Assistant are struggling to respond to questions about COVID-19. To address the onslaught of erroneous information online, the U.K. has established a disinformation rapid response team. Today, an EU official blasted players like Google, Facebook, and Amazon for continuing to make money from fake news and disinformation.

“We still see that the major platforms continue to monetize and incentivize disinformation and harmful content about the pandemic by hosting online ads,” the European Union’s justice chief Vera Jourova told Reuters. “This should be stopped. The financial disincentives from clickbait disinformation and profiteering scams also should be stopped.”

Founded in 2014, Blackbird.AI has developed a platform that uses artificial intelligence to sift through massive amounts of content to dissect disinformation events. It uses a combination of machine learning and human specialists to identify and categorize the types of information flowing across social media and news sites. In doing so, Blackbird.AI can separate information created by bots from human-generated content and track how it is being amplified.

Typically, the company works with corporations and brands to monitor changes to their reputation. But with the rise of the COVID-19 pandemic, the company has shifted to focus on a new threat. The goal is to raise companies’ and individuals’ awareness in the hopes that they can curb the virality of disinformation campaigns.

“Anyone who’s watching this spread is pretty familiar with the concept of flattening the curve,” Khaled said. “We’ve always used a similar concept. We’ve described disinformation as a contagion, with virality being the driver.”

Unfortunately, the spread of disinformation is still in the exponential part of the curve.

For its “COVID-19 Disinformation Report,” the company analyzed 49,755,722 tweets from 13,203,289 unique users on COVID-19 topics between February 27 and March 12. The number of tweets in this category soared as Italy implemented lockdowns and the Dow Jones plummeted. Of those tweets, the company found that 18,880,396 were “inorganic,” meaning the tweets were being manipulated in a manner not consistent with human behavior.

Measuring the ratio of inorganic content to total content lets the company generate its Blackbird Manipulation Index (BBMI). In this case, the BBMI of COVID-19 tweets is 37.95%, which places it just inside the medium level of manipulation.
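As described, the BBMI is simply the share of inorganic tweets in the analyzed sample, expressed as a percentage. A minimal sketch of that calculation (the function name is my own; the tweet counts are the ones cited in the report):

```python
# Hypothetical sketch of the BBMI ratio: the percentage of a tweet sample
# flagged as inorganic (manipulated in a manner not consistent with human
# behavior). Only the figures come from Blackbird.AI's report.

def manipulation_index(inorganic: int, total: int) -> float:
    """Return the percentage of inorganic tweets in a sample."""
    return 100.0 * inorganic / total

# COVID-19 sample: 18,880,396 inorganic tweets out of 49,755,722 total
bbmi = manipulation_index(18_880_396, 49_755_722)
print(f"{bbmi:.2f}%")  # ~37.95%, the figure cited in the report
```

The same ratio explains the campaign-level numbers later in the article, such as the 33.1% BBMI for the "Dem Panic" narrative (839,764 inorganic of 2,535,059 tweets).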


“We’re facing this kind of asymmetrical information warfare that’s being waged against not only the American public but across many societies in the world at a really incredible clip — at one of our most vulnerable moments in history,” Khaled said. “There is incredible fear and uncertainty around what is right and what is wrong. And today people feel if you do the wrong thing, you just might kill your grandfather. It’s a lot of pressure and so people are looking for information. That gives a huge opening to disinformation actors.”

That BBMI number varies widely within specific campaigns.

For instance, on February 28 President Trump held a rally in Charleston, South Carolina, where he claimed the concern around coronavirus was an attempt by Democrats to discredit him, calling it “their new hoax.” Following that speech, Blackbird.AI detected a spike in hashtags such as #hoax, #Democrats, #DemHoax, #FakeNews, #TrumpRallyCharleston, and #MAGA. A similar spike occurred after March 9, when Italy’s government placed the entire country under quarantine.

In both cases, the platform detected a coordinated campaign to discredit the Democratic Party, a narrative dubbed “Dem Panic.” Of 2,535,059 tweets, 839,764 were inorganic for a BBMI of 33.1%.

But within that campaign, certain hashtag subcategories showed even higher levels of manipulation: #QAnon (63.38% BBMI), #MAGA (57.00%), and #Pelosi (53.17%).

“The driving message: that the Democrats were overblowing the issue in order to hurt President Trump,” the report says. “The ‘Dem Panic’ narrative and related spin-offs also included the widespread mention of the ‘out of control’ homeless population and high number of immigrants in Democratic districts. Many of these messages unwittingly found their way into what would traditionally be considered credible media stories.”

In all these cases, the hashtags have synthetic origins but eventually spread far enough that real people picked them up and furthered their reach. The broad goal of such campaigns, said Khaled, is to delegitimize politicians, the media, medical experts, and scientists by spreading disinformation.

“While all the policymakers are still trying to decide what is the best course of action, these campaigns work very hard at undermining that type of advice,” he said. “The goal was, ‘How do we downplay the health risks of COVID-19 to the American public and to cast doubt on the warnings that are given by the government and public health agencies?'”

Other coronavirus disinformation campaigns include the conspiracy theory suggesting the U.S. had bioengineered the virus and introduced it into China.

“This content was seeded into public media in China,” Khaled said. “And, of course, it was immediately distributed by social media users who believed those narratives and amplified them. It’s happened around the world and in dozens of languages. There was not only the U.S. and China, but there was Iran blaming the U.S., the U.S. blaming China, all of these campaigns were out there.”

While Blackbird.AI doesn’t necessarily identify the originators of these campaigns, Khaled said they generally fall into three categories. The first is state-backed, typically Russia or China these days. The second is disinformation-as-a-service, where people can hire firms to buy disinformation service packages. The third is the “lone wolf that just wants to watch the world burn.”

“It all has the objective of creating a shifting in perceptions in the readers’ mind pushing them toward a behavior change or pushing them to spread the narrative further,” he said.

This doesn’t mean just retweeting fake news. Behavioral manipulation can also be used to move fake masks or drugs. And in some extreme circumstances, it has resulted in direct threats to life. Khaled noted that Dr. Anthony Fauci, the infectious disease specialist featured at presidential briefings, required extra security after death threats fueled by online conspiracy theorists. In addition, a train engineer attempted to attack a Navy ship docked in the Port of Los Angeles by derailing a train, because he believed another set of online conspiracy theories claiming the ship was part of a government takeover.

While Blackbird.AI is trying to help rein in the chaos, Khaled is not optimistic that the campaigns are going to be contained anytime soon.

“I’m 100% confident this is going to get much worse on the disinformation cycle,” he said. “Not only are we not seeing any indication that it’s slowing down, we’re seeing significant indication that it’s significantly ramping up. These disinformation actors, they’re going to take every possible advantage right now. People have to be aware. They have to understand that the things that they are going to see might have bad intent behind [them], they have to go to the CDC, they have to go to the WHO, they cannot take the stuff that they see at face value.”

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.