Following an investigation that revealed gaps in Facebook’s efforts to fight coronavirus misinformation, the social networking company announced it would retroactively send alerts to any users who had interacted with content subsequently labeled misleading.
A new report from global nonprofit Avaaz examined 100 Facebook posts about COVID-19 that had been labeled false by independent fact checkers. Despite Facebook’s efforts to crack down on false coronavirus information, those posts were shared 1.7 million times and viewed 117 million times, according to Avaaz.
Beyond the implications for the growing wave of coronavirus-related fake news, the findings are the latest indication that social media platforms have become powerful tools for spreading misinformation and disinformation. While companies such as Facebook have struggled to effectively combat such forces, one of the Avaaz report’s authors said the new correction feature could potentially represent a sea change in the company’s attitude toward fighting fake news.
“We’ve seen [Facebook], especially now on COVID-19, being very open to going further than before,” said Christoph Schott, a Berlin-based campaign director for Avaaz. “And what we see is the next clear step. If you can do this for COVID-19 disinformation, why don’t you do it if someone is shown a post saying something like ‘Vaccines are bad for your children and cause autism’? There’s no reason not to do it on other harmful misinformation in the same way.”
Avaaz leads campaigns surrounding issues like protecting democratic institutions and fighting corruption and climate change. In recent years, the group has emerged as one of the most effective social media critics, thanks to its studies uncovering fake news and disinformation campaigns. Those include the spread of fake news on Facebook related to France’s Yellow Vest protests and a propaganda campaign during the Spanish elections on Facebook and WhatsApp.
In this case, Avaaz researchers flagged coronavirus misinformation posts for Facebook, which subsequently removed 17 posts that had attracted an estimated 2.4 million views. These included posts claiming that “Black people are resistant to Coronavirus,” “Coronavirus is destroyed by Chlorine Dioxide,” “Vice President Pence urges those with Coronavirus to go to the Police,” “Hairdryers could be used for coronavirus prevention,” and “Virus can be cured by one bowl of freshly boiled garlic water.”
Like other social media platforms, Facebook has recently taken steps to identify coronavirus misinformation and direct users to reliable sources, such as the World Health Organization and the U.S. Centers for Disease Control and Prevention. That effort includes collaborating with governments to develop coronavirus resources on Facebook Messenger.
“The steps so far have been commendable,” Schott said. “[The team at Facebook] have taken it up a notch in terms of how they treat misinformation. But we’re still seeing massive gaps in that policy.”
For several years, Avaaz has been pressing social platforms to not just block or take down misleading information but to go back and alert people who have been exposed. A study commissioned by Avaaz and conducted by researchers at George Washington University and Ohio State University suggested that such corrections can reduce belief in misinformation by 50% to 61%.
Facebook has taken steps to label posts and ads that have been independently fact-checked. In a post on Facebook, CEO Mark Zuckerberg shared several data points about these efforts, including:
- Directed 2 billion people to its COVID-19 Information Center, where 350 million people clicked on articles.
- Expanded its fact-checking efforts to a dozen new countries. Facebook now has more than 60 fact-checking partners.
- Displayed warnings on 40 million COVID-19 posts that had been flagged by third-party fact checkers. The result: 95% of people did not click on the content.
“Through this crisis, one of my top priorities is making sure that you see accurate and authoritative information across all of our apps,” Zuckerberg wrote.
Still, the company had hesitated to go back and inform people who may have seen the posts.
That’s a problem, according to Avaaz, because there can still be big lags between something being posted and it being labeled as false or taken down. In the new report, Avaaz found that in some cases it took Facebook up to 22 days to issue warning labels for coronavirus misinformation. And 41% of the posts remained on the platform despite being flagged by fact-checking partners.
The new misinformation alerts will be sent to any user whose feed included the content, or who shared or otherwise interacted with it. The alerts won't specifically identify which misleading content was involved but will include links to sources of vetted information.
“We’re going to start showing messages in News Feed to people who have liked, reacted, or commented on harmful misinformation about COVID-19 that we have since removed,” wrote Guy Rosen, Facebook’s vice president of integrity. “These messages will connect people to COVID-19 myths debunked by the WHO, including ones we’ve removed from our platform for leading to imminent physical harm … People will start seeing these messages in the coming weeks.”
Schott said he believes Facebook needed to test the new feature to refine it, and he’s hopeful that as it gets implemented the company will offer more precise information to users. In its study, Avaaz called Facebook’s move “one of the most substantial actions against misinformation in its history.”
In addition to the alerts, Facebook is adding a “Get the Facts” section to its COVID-19 Information Center that will list “fact-checked articles from our partners that debunk misinformation about the coronavirus.”