Despite Facebook’s efforts to combat fake news and propaganda, the social network has enabled 3.8 billion views of health misinformation in the past year, according to a new report from Avaaz. The global citizens’ movement, which monitors election freedom and disinformation, said websites containing such misinformation received almost four times as many views as sites run by certified health organizations like the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC).
Earlier this year, Facebook partnered with such organizations to drive users toward reliable information and weed out disinformation related to COVID-19. While these efforts may have had some impact, Avaaz found that overall misinformation related to health issues received 460 million views in April 2020, and 3.8 billion over the past year.
The data in the new study is the latest evidence that Facebook officials are failing to control rampant disinformation and propaganda on the platform, which has 2.7 billion users. Even as Facebook CEO Mark Zuckerberg has repeatedly vowed to crack down on such content, most recently by removing accounts related to the QAnon conspiracy theory, the platform’s vast reach and its algorithm, which is designed to fuel engagement with emotional content, leave it open to widespread exploitation.
“This is a kind of a pattern in Facebook,” said Avaaz researcher Luca Nicotra. “Kind of going in the right direction, but kind of falling short. I think what is interesting about this latest report is that we’re looking at what’s basically left on the platform after everything they’ve done.”
Fighting Facebook disinformation
In recent years, Avaaz has tried to convince Facebook to take more aggressive steps against disinformation. In reports on election disinformation, France’s Yellow Vest protests, and the Spanish elections, Avaaz uncovered abuses on the Facebook platform that the company addressed by removing accounts or changing policies.
After Avaaz revealed gaps in Facebook’s efforts to fight COVID-19 disinformation earlier this year, Facebook announced it would retroactively send alerts to any users who had interacted with content subsequently labeled misleading. Nicotra said that correction effort is promising but has remained too small and sporadic to be effective.
For instance, sometimes users are notified that they may have interacted with misinformation but not told what the content was or given correct information.
In addition to expanding this correction effort, Avaaz called on Facebook to “detox” its algorithm to downgrade posts by misinformation actors and lower their reach by 80%. Nicotra said he’s less interested in Facebook removing misinformation or the accounts that generate it because that allows other actors to claim censorship and weaponize the company’s actions to gain further attention.
Far more critical is the need to “detox” the algorithm to reduce the ability for such content to spread, he said.
“Stop giving these pages free promotion,” Nicotra said. “You know that your algorithm loves divisive content, and misinformation is in that category. Zuckerberg himself has said that we know the algorithm, if left without constraint, will push this content over and over, and this is what we’re seeing. There is an issue at the DNA of the platform, and they need to have the courage to tackle it.”
The new report drew on data compiled by NewsGuard, a news-rating company that identifies websites and publishers that create misleading content. Avaaz focused on five countries: the United States, the United Kingdom, France, Germany, and Italy. This means the numbers cited represent only a fraction of the impact such posts had globally.
Drilling down into health misinformation, Avaaz found that Facebook’s efforts were having minimal impact. For instance, only 16% of the health misinformation identified by Facebook had received a warning label; the other 84% carried no label and was still circulating widely.
“This investigation is one of the first to measure the extent to which Facebook’s efforts to combat vaccine and health misinformation on its platform have been successful, both before and during its biggest test yet: the coronavirus pandemic,” the report says. “It finds that even the most ambitious among Facebook’s strategies are falling short of what is needed to effectively protect society.”
Health misinformation still found on the platform included articles such as:
- 8.4 million views for a story claiming that a Bill Gates-backed polio vaccination program has left half a million children in India paralyzed.
- 4.5 million views for stories about phony cures.
- 2.4 million views for a story making false claims about the effectiveness of quarantines.
- 13.4 million views for a post linking 5G networks to health problems.
These stories are being driven by sites such as RealFarmacy.com, which has more than 1.1 million followers on its Facebook page and shares stories about quarantine protests and discredited coronavirus cures. Another account, GreenMedInfo, has 540,000 followers and has recently said it is under threat of being deleted.
Such pages are key sources for misinformation, according to Avaaz, accounting for 43% of estimated views. Avaaz identified 42 Facebook accounts that spread health misinformation and have 28 million total followers. In addition, these sites often interact to amplify their messages and make it harder for Facebook to track their content’s spread.
The fact that so many of these misinformation sites have been active for several years and are out in the open with Pages, versus being in closed Groups, makes the lack of action on Facebook’s part even more disturbing, Avaaz said in its report.
“The findings of this report indicate Facebook is still failing at preventing the amplification of misinformation and the actors spreading it,” the report says. “Specifically, the findings in this section of the report strongly suggest that Facebook’s current algorithmic ranking process is either potentially being weaponized by health misinformation actors coordinating at scale to reach millions of users, and/or that the algorithm remains biased toward the amplification of misinformation, as it was in 2018. The findings also suggest that Facebook’s moderation policies to counter this problem are still not being applied effectively enough.”