In a new report on efforts to battle fake news and disinformation during the recent European Union elections, officials found that “Russian sources” were at the heart of ongoing efforts to sow division, suppress turnout, and influence voters.

The progress report from the European Commission also provided updates on work being done by platforms like Facebook, Google, and Twitter to combat such abuses. While urging these social media giants to continue doing more, the EU study offered some rare praise.

“Ahead of the elections, we saw evidence of coordinated inauthentic behaviour aimed at spreading divisive material on online platforms, including through the use of bots and fake accounts. So online platforms have a particular responsibility to tackle disinformation,” the report says. “With our active support, Facebook, Google, and Twitter have made some progress under the Code of Practice on disinformation.”

Last year, the EU instituted a range of new programs to fight digital disinformation, fake news, and other information security issues. These included new capabilities for various agencies to coordinate responses, a Rapid Alert System, and closer cooperation with member states. Several tech giants also signed a voluntary Code of Practice designed to increase the transparency of political communications while compelling them to be more aggressive in combating such disinformation campaigns.

In that respect, the authors of the report noted: “All platforms took actions in advance of the European elections by labelling political ads and making them publicly available via searchable ads libraries.”

The report card broke out some numbers:

  • Google took action against more than 130,000 EU-based accounts that were found to violate its ad policies against misrepresentation and another 27,000 that violated policies on original content.
  • Facebook identified more than 1.2 million incidents that violated ads and content policies.
  • Twitter rejected more than 6,000 ads that violated its unacceptable business practices ads policy and another 10,000 EU-targeted ads for violations of its quality ads policy.
  • Facebook disabled 2.2 billion fake accounts in the first quarter of 2019 and acted specifically against 1,574 non-EU-based and 168 EU-based pages, groups, and accounts engaged in inauthentic behaviour targeting EU member states.
  • Twitter challenged almost 77 million spam or fake accounts.
  • YouTube removed over 3.39 million channels for violation of its spam, misleading, and scams policy, and more than 8,600 channels for violation of its impersonation policy.

In spite of these coordinated efforts, the report says misinformation tactics continue to evolve rapidly — and many of the attacks seem to originate from Russia.

“The evidence collected revealed a continued and sustained disinformation activity by Russian sources aiming to suppress turnout and influence voter preferences,” the report says. “These covered a broad range of topics, ranging from challenging the [European] Union’s democratic legitimacy to exploiting divisive public debates on issues such as migration and sovereignty. This confirms that the disinformation campaigns deployed by state and non-state actors pose a hybrid threat to the EU.”

The EU stepped up its work to fight digital disinformation in the wake of the 2016 U.S. elections, which were later found to have been the target of Russian campaigns. In recent months, online platforms across Europe have been under assault by both Russian sources and far-right extremist groups.

Spanish voters on WhatsApp experienced disinformation and hateful memes at a rate that has outpaced similar malicious content on such platforms as YouTube, Twitter, and Instagram. And Facebook became a hive of fake news in France about the Yellow Vest movement.

The EU report also says more resources and coordination will be needed to accelerate the response.

“The tactics used by these actors are evolving as quickly as the measures adopted by states and online platforms,” the report says. “Instead of conducting large-scale operations on digital platforms, these actors, in particular those linked to Russian sources, now appear to be opting for smaller-scale, localized operations that are harder to detect and expose.”