Facebook, together with the Partnership on AI, Microsoft, and academics, is creating a deepfake dataset, benchmark, and public challenge with up to $10 million in grants and awards to spur innovation and make it easier to spot fake content.

The Deepfake Detection Challenge will be put together with support from academics at Cornell Tech, MIT, University of Oxford, UC Berkeley, University at Albany-SUNY, and University of Maryland, College Park. The challenge will also have a leaderboard to identify top deepfake detection systems. The deepfake dataset will be released during the NeurIPS conference, which takes place in December in Vancouver, Canada.

“This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” Facebook CTO Mike Schroepfer said in a blog post to announce the competition today. “It’s important to have data that is freely available for the community to use, with clearly consenting participants, and few restrictions on usage. That’s why Facebook is commissioning a realistic data set that will use paid actors, with the required consent obtained, to contribute to the challenge.”

Facebook is contributing $10 million to the challenge, and no Facebook user data will be included, Schroepfer said. Challenge governance will be overseen by the Partnership on AI's newly created Steering Committee on AI and Media Integrity.

Federal authorities from the FBI, Office of the Director of National Intelligence, and Department of Homeland Security met at Facebook Wednesday with leaders from Facebook, Google, Twitter, and Microsoft to discuss the 2020 election. A similar meeting took place ahead of the 2018 election.

Also on Wednesday, Facebook introduced the fastMRI challenge to spur reductions in the time it takes to perform an MRI scan. Challenge results will also be shared at the NeurIPS conference.

A New York University study released earlier this week concluded that people concerned with election meddling by foreign or domestic actors should turn their attention to WhatsApp and Instagram. During the 2016 presidential election, Russian hackers used social media platforms like Facebook and Twitter to sow discord among the electorate and get Donald Trump elected president.

In other efforts to create standards in the AI community, Facebook and a group of other organizations introduced the SuperGLUE benchmark for robust language models last month, and in June Facebook launched PyTorch Hub to support model reproducibility.