In an attempt to beat back a rise in scams and other unwanted interactions during the coronavirus pandemic, Facebook today unveiled an AI-powered Messenger feature that surfaces tips to help younger users spot malicious actors. The guidelines — which outline steps for blocking or ignoring people on Messenger if that becomes necessary — are intended to educate users under the age of 18 about interacting with adults they don’t know.
The Messenger announcements follow the rollout of message-forwarding limits and a hub that spotlights pandemic resources — both attempts to limit the spread of misinformation. Despite Facebook’s renewed campaign against false coronavirus information, misleading content on the platform is still shared and viewed hundreds of millions of times, according to a report from global nonprofit Avaaz. More broadly, the U.S. Federal Trade Commission has documented over 20,000 instances of messages offering bogus testing kits, unproven treatments, or predatory loans.
Facebook’s efforts to supplement a shortfall of human moderators with AI haven’t consistently panned out. While the network is successfully using automated techniques to apply labels to content deemed untrustworthy by fact-checkers and to reject ads for disallowed items (like testing kits and medical face masks), in mid-March a bug caused Facebook’s anti-spam system to begin flagging and removing legitimate news content.
But according to Messenger privacy and safety director Jay Sullivan, the AI powering the safety tips feature — which rolled out on Android in March and will expand to iOS next week — looks at behavioral signals like an adult sending a large number of friend or message requests to users under the age of 18. This might take into account signals from user reports or previously reported content, and it’s designed to improve with time as it obtains more signals from accounts interacting with one another.
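Facebook hasn’t detailed the model, but a minimal sketch can make this kind of behavioral scoring concrete. The Python below is purely illustrative: the signal names (friend_requests_to_minors, prior_user_reports, and so on), the weights, and the threshold are all assumptions, and the real system is a learned classifier rather than a hand-tuned rule.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Hypothetical behavioral signals for an adult account.

    These field names are illustrative assumptions, not Facebook's schema.
    """
    friend_requests_to_minors: int        # requests sent to users under 18
    messages_to_unconnected_minors: int   # messages to minors with no prior connection
    prior_user_reports: int               # times this account was reported
    account_age_days: int


def should_show_safety_notice(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Toy risk score: each capped, normalized signal nudges the score up,
    and new accounts messaging many minors look riskiest. A production
    system would learn these weights from labeled data and retrain as new
    reports arrive -- which is how such a detector "improves with time."
    """
    score = 0.0
    score += min(s.friend_requests_to_minors, 50) / 50 * 0.4
    score += min(s.messages_to_unconnected_minors, 50) / 50 * 0.3
    score += min(s.prior_user_reports, 10) / 10 * 0.2
    score += 0.1 if s.account_age_days < 30 else 0.0
    return score >= threshold


# Example: a 12-day-old account with many requests to minors trips the notice.
print(should_show_safety_notice(AccountSignals(40, 25, 3, 12)))  # True
```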
In some ways, it’s intended to bridge the gap between Messenger and Messenger Kids, Facebook’s child-friendly alternative to the main Messenger app that allows parents or guardians to review the people their kids connect to. “Messenger already has special protections in place for minors that limit contact from adults they aren’t connected to, and we use machine learning to detect and disable the accounts of adults who are engaging in inappropriate interactions with children,” said Sullivan in a statement. “Our strategy to keep people safe on Messenger not only focuses on giving them the information and controls they need to prevent abuse from happening, but also on detecting it and responding quickly if it occurs.”
Because the system relies on metadata, behavioral patterns, and user reports rather than the content of individual chat messages, a spokesperson explained to VentureBeat, the AI works with end-to-end encryption schemes and will continue to function after Messenger becomes encrypted by default.
“We designed this safety feature to work with full encryption,” said Sullivan. “People should be able to communicate securely and privately with friends and loved ones without anyone listening to or monitoring their conversations. As Messenger becomes end-to-end encrypted by default, we will continue to build innovative features that deliver on safety while leading on privacy. These safety notices will help people avoid potentially harmful interactions and possible scams while empowering them with the information and controls needed to keep their chats private, safe, and secure.”
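To make the encryption claim concrete, here is a minimal sketch of what “metadata, behavioral patterns, and reports” can mean in code, assuming a hypothetical MessageEnvelope record: the feature extractor reads only the delivery envelope and never the ciphertext body, so it behaves identically whether or not chats are end-to-end encrypted.

```python
from dataclasses import dataclass


@dataclass
class MessageEnvelope:
    """Metadata visible to the server even under end-to-end encryption.

    Field names are hypothetical; the ciphertext body is deliberately
    excluded from feature extraction.
    """
    sender_id: str
    recipient_id: str
    sent_at: float              # unix timestamp
    recipient_is_minor: bool
    sender_reported: bool       # recipient filed a report on this sender
    ciphertext: bytes           # opaque to the server; never decrypted


def extract_features(envelopes: list[MessageEnvelope]) -> dict[str, int]:
    """Build detector inputs from metadata alone. Note that `ciphertext`
    is never read: the detector works the same way whether or not the
    message contents are encrypted end to end."""
    return {
        "msgs_to_minors": sum(e.recipient_is_minor for e in envelopes),
        "distinct_minor_recipients": len(
            {e.recipient_id for e in envelopes if e.recipient_is_minor}
        ),
        "report_count": sum(e.sender_reported for e in envelopes),
    }
```

The design choice this illustrates is that everything the classifier needs lives outside the encrypted payload, which is why encrypting Messenger by default wouldn’t break it.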
The newly expanded feature marks the latest move in Facebook’s initiative to combat fake news and misinformation around the pandemic. In early March, the company gave the World Health Organization (WHO) unlimited free ads to counter false coronavirus claims on its platform and pledged to remove conspiracy theories and profiteering marketing flagged by health organizations. More recently, Facebook began notifying users who like, react to, or comment on pandemic posts that are later removed by moderators and directing those users to information debunking virus myths.
As of April 16, Facebook said it had served independently fact-checked articles about the pandemic to 2 billion people and expanded its fact-checking efforts to a dozen new countries, bringing its total number of fact-checking partners to 60. The company also said it had displayed warnings on 40 million pandemic-related posts flagged by third-party fact-checkers, ostensibly dissuading 95% of people from clicking through to the content.
In conjunction with this effort, Facebook is facilitating a program that connects developer partners with health organizations and UN health agencies to use Messenger to scale their responses to the health crisis. And the Indian government and the U.K.’s National Health Service have teamed up with Facebook’s WhatsApp to launch dedicated coronavirus informational chatbots.