
The proliferation of deepfakes — AI-generated videos and pictures of events that never happened — has prompted academics and lawmakers to call for countermeasures, lest they degrade trust in democratic institutions and enable attacks by foreign adversaries. But researchers at the MIT Media Lab and the Center for Humans and Machines at the Max Planck Institute for Human Development suggest those fears might be overblown.

In a newly published paper (“Human detection of machine manipulated media”) on the preprint server arXiv.org, a team of scientists details an experiment designed to measure people’s ability to discern machine-manipulated media. They report that, when participants were tasked with guessing which of a pair of images had been edited by an AI model that removed objects, people generally learned to detect fake photos quickly when given feedback on their detection attempts. After only 10 pairs, most increased their rating accuracy by over 10 percentage points.

“Today, an AI model can produce photorealistic manipulations nearly instantaneously, which magnifies the potential scale of misinformation. This growing capability calls for understanding individuals’ abilities to differentiate between real and fake content,” wrote the coauthors. “Our study provides initial evidence that human ability to detect fake, machine-generated content may increase alongside the prevalence of such media online.”

The team embedded their object-removing AI model — which automatically detected objects such as boats in pictures of oceans and removed them, replacing them with pixels approximating the occluded background — in the Detect Fakes section of a website dubbed Deep Angel in August 2018. In tests, users were presented with two images and asked “Which image has something removed by Deep Angel?” One had an object removed by the AI model, while the other was an unaltered sample from the open-source 2014 MS-COCO dataset.
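The study’s actual system uses a neural inpainting model, but the remove-and-fill idea can be illustrated with a much cruder stand-in. The sketch below — hypothetical, not from the paper — fills a masked object region with the mean color of the surrounding unmasked pixels:

```python
import numpy as np

def remove_object(image, mask):
    """Crude stand-in for learned inpainting: fill the masked object
    region with the mean color of all unmasked pixels. A real system
    like Deep Angel would use a neural network to synthesize a
    plausible background instead.

    image: (H, W, 3) array; mask: (H, W) boolean array, True where
    the object to remove sits.
    """
    out = image.copy()
    # Mean color of everything outside the object.
    fill = image[~mask].mean(axis=0)
    # Paint the object region with that background estimate.
    out[mask] = fill
    return out

# Toy example: a white "object" on a black background.
img = np.zeros((4, 4, 3))
img[1:3, 1:3] = 255.0
obj_mask = np.zeros((4, 4), dtype=bool)
obj_mask[1:3, 1:3] = True
clean = remove_object(img, obj_mask)
```

The mean-fill leaves obvious artifacts on textured backgrounds, which is precisely the gap a learned inpainter closes — and what participants were implicitly learning to spot.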

From August 2018 to May 2019, the team says that over 240,000 guesses were submitted from more than 16,500 unique IP addresses, with an average identification accuracy of 86%. In the sample of participants who saw at least ten images — about 7,500 people — the mean correct classification percentage was 78% on the first image and 88% on the tenth image, and the majority of manipulated images were identified correctly more than 90% of the time.
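A learning curve like the one reported (78% on the first image, 88% on the tenth) can be computed from a log of guesses by averaging correctness at each trial position. This is a hypothetical sketch of such an analysis, not the authors’ code:

```python
from collections import defaultdict

def accuracy_by_trial(guesses):
    """Mean classification accuracy at each trial position.

    guesses: iterable of (participant_id, trial_index, correct)
    tuples, where correct is a bool. Returns {trial_index: accuracy}.
    """
    totals = defaultdict(lambda: [0, 0])  # trial -> [n_correct, n_total]
    for _, trial, correct in guesses:
        totals[trial][0] += int(correct)
        totals[trial][1] += 1
    return {t: c / n for t, (c, n) in sorted(totals.items())}

# Toy log: two participants, two trials each.
log = [("a", 1, True), ("b", 1, False),
       ("a", 2, True), ("b", 2, True)]
curve = accuracy_by_trial(log)  # accuracy rises from trial 1 to trial 2
```

A rising curve across trial positions is what the paper interprets as rapid learning from feedback.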

The researchers concede that their results’ generalizability is limited to pictures produced by their AI model, and that future research could expand the domains and models studied. (They leave investigating how detection proficiency is helped or hindered by direct feedback to a follow-up study.) But they say their results “suggest a need to reexamine the precautionary principle” that’s often applied to content-generating AI.

“Our results build on recent research that suggests human intuition can be a reliable source of information about adversarial perturbations to images and recent research that provides evidence that familiarising people with how fake news is produced may confer cognitive immunity to people when they are later exposed to misinformation,” they wrote. “Direct interaction with cutting edge technologies for content creation might enable more discerning media consumption across society.”
