Ahead of the 2020 U.S. election and a House Energy and Commerce hearing on manipulated media, Facebook announced that it would strengthen its policy on misleading videos identified as deepfakes — media that takes a person in an existing image, audio recording, or video and replaces them with someone else’s likeness. While the ban won’t extend to parody, satire, or video that’s been edited solely to change the order of words, it will affect a swath of edited and synthesized content published for the purpose of hoodwinking viewers.
In a blog post confirming an earlier report from the Washington Post, Facebook global policy management vice president Monika Bickert said going forward Facebook will remove media that’s been modified “beyond adjustments for clarity or quality” in ways that “aren’t apparent to the average person.” Content generated by machine learning algorithms that merge, replace, or superimpose people will also be subject to deletion, she said.
Deepfake videos that aren’t removed might be subject to review by Facebook’s independent third-party fact-checkers, which now span over 50 partners globally in over 40 languages. Like other media rated false or partly false by a fact-checker, flagged deepfakes will have their distribution in the network’s News Feed “significantly” reduced. Additionally, those being run as ads will be rejected, and people who see, try to share, or have already shared a flagged deepfake will receive alerts warning that it is false.
“Consistent with our existing policies, audio, photos, or videos — whether a deepfake or not — will be removed from Facebook if they violate any of our other Community Standards, including those governing nudity, graphic violence, voter suppression, and hate speech,” said Bickert. “This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
Deepfakes are multiplying quickly. Amsterdam-based cybersecurity startup Deeptrace found 14,698 deepfake videos on the internet during its most recent tally in June and July, up from 7,964 last December — an 84% increase in just seven months. That’s troubling not only because deepfakes might be used to sway public opinion or implicate someone in a crime they didn’t commit, but because the technology has already generated pornographic material and swindled companies out of hundreds of millions of dollars.
In an effort to keep deepfakes from spreading, Facebook — along with Amazon Web Services (AWS); Microsoft; the Partnership on AI; and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and State University of New York at Albany — is spearheading the Deepfake Detection Challenge, which was announced in September. It launched globally at the NeurIPS 2019 conference in Vancouver last month, with the goal of catalyzing research to ensure the development of open source detection tools.
Facebook has dedicated more than $10 million to encouraging participation in the competition. For its part, AWS is contributing up to $1 million in service credits and offering to host entrants’ models if they choose, and Google’s Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.