In an age of pervasive deepfakes, how can anyone know if an image they’re viewing is an AI-generated fabrication? Jigsaw — the organization working under Google parent company Alphabet to tackle cyberbullying, censorship, disinformation, and other digital issues — is prototyping a tool called Assembler to address this concern. Jigsaw CEO Jared Cohen revealed in a blog post that the tool is being piloted with media organizations to help fact-checkers and journalists pinpoint and analyze manipulated media.
“Jigsaw’s work requires forecasting the most urgent threats facing the internet, and wherever we traveled these past years — from Macedonia to Eastern Ukraine to the Philippines to Kenya and the United States — we observed an evolution in how disinformation was being used to manipulate elections, wage war, and disrupt civil society,” wrote Cohen. “By disinformation, we mean more than fake news. Disinformation today entails sophisticated, targeted influence campaigns, often launched by governments, with the goal of influencing societal, economic, and military events around the world. But as the tactics of disinformation were evolving, so too were the technologies used to detect and ultimately stop disinformation.”
According to Cohen, Assembler was developed in partnership with Google Research and academic partners, including the University of Maryland, University Federico II of Naples, and the University of California, Berkeley. It offers both a tool to identify manipulated media and a collaboration platform where researchers can refine its detection methods.
Concretely, Assembler brings multiple image manipulation detectors into one tool, each designed to spot specific techniques, such as copy-paste or alterations to image brightness. One of these detectors distinguishes images of real people from images produced by Nvidia’s StyleGAN architecture (you’ll recall that StyleGAN, which was published last year, can generate lifelike images of people who never existed), while another — an ensemble model — analyzes images for multiple types of manipulation simultaneously. This second detector was trained using combined signals from each of the individual detectors, enabling it to identify image manipulation on average more accurately than any individual detector.
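The ensemble idea described above — fusing the signals of several single-technique detectors into one verdict — can be sketched roughly as follows. This is a minimal illustration, not Assembler's actual implementation: the detector names mirror the techniques the article mentions, but their outputs and the weighted-average fusion rule are stand-ins (Jigsaw's ensemble is a trained model).

```python
from typing import Callable, Dict

# Each detector maps an image to a manipulation likelihood in [0, 1].
# Names mirror techniques from the article; outputs are invented.
Detector = Callable[[bytes], float]

def ensemble_score(image: bytes,
                   detectors: Dict[str, Detector],
                   weights: Dict[str, float]) -> float:
    """Fuse per-technique signals into a single score.

    A weighted average stands in for the trained meta-model
    described in the article.
    """
    total = sum(weights.values())
    return sum(weights[name] * det(image)
               for name, det in detectors.items()) / total

# Toy detectors with fixed outputs, for demonstration only.
detectors: Dict[str, Detector] = {
    "copy_paste": lambda img: 0.9,  # flags duplicated regions
    "brightness": lambda img: 0.2,  # flags local brightness edits
    "stylegan":   lambda img: 0.1,  # flags GAN-generated faces
}
weights = {"copy_paste": 1.0, "brightness": 1.0, "stylegan": 1.0}

score = ensemble_score(b"...image bytes...", detectors, weights)
```

The appeal of this design is that each specialized detector only has to be good at spotting one manipulation type, while the fusion layer learns how much to trust each signal — which is why, per Cohen, the ensemble identifies manipulation more accurately on average than any individual detector.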
Assembler additionally incorporates what Jigsaw refers to as an “image auto-upgrading process,” powered by reverse image search provider TinEye. It takes original images and finds larger and/or better-quality versions of them, ensuring the detectors analyze the highest-quality version available.
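In spirit, the auto-upgrade step amounts to selecting the best copy of an image from the candidates a reverse image search returns. The sketch below is an assumption about how such a selection might look — the candidate structure and the "most pixels wins" rule are illustrative, not TinEye's or Assembler's actual logic.

```python
from typing import Dict, List

def pick_best_candidate(candidates: List[Dict]) -> Dict:
    """Return the candidate with the most pixels (width * height).

    Hypothetical selection rule: a real pipeline might also weigh
    compression quality or file format, not just resolution.
    """
    return max(candidates, key=lambda c: c["width"] * c["height"])

# Invented reverse-image-search results for illustration.
candidates = [
    {"url": "a.jpg", "width": 640,  "height": 480},
    {"url": "b.jpg", "width": 1920, "height": 1080},
    {"url": "c.jpg", "width": 800,  "height": 600},
]
best = pick_best_candidate(candidates)  # selects the 1920x1080 copy
```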
Among the news agencies currently using Assembler are Agence France-Presse, Animal Politico, Code for Africa, Les Décodeurs du Monde, and Rappler. Cohen says Jigsaw will continue to evaluate how the tool performs in real newsrooms while it develops a complementary tool — the Disinformation Data Visualizer — that maps coordinated disinformation campaigns around the world, along with the specific tactics used and the countries affected.
“These days, working in multimedia forensics is extremely stimulating. On [the] one hand, I perceive very clearly the social importance of this work: In the wrong hands, media manipulation tools can be very dangerous; they can be used to ruin the life and reputation of ordinary people, commit frauds, [and] modify the course of elections,” said Google AI visiting scholar Dr. Luisa Verdoliva. “On the other hand, the professional challenge is very exciting — new attacks based on artificial intelligence are conceived [each] day, and we must keep a very fast pace of innovation to face them.”
In November, Jigsaw released a corpus originating from a competition it launched in April that challenged entrants to build a model that recognizes toxicity and minimizes bias. The first release contained roughly 250,000 comments labeled for identities, where raters were asked to indicate references to gender, sexual orientation, religion, race, ethnicity, disability, and mental illness in a given comment. The latest version adds individual annotations from nearly 9,000 human raters — labels that machine learning models can use to learn what constitutes “toxicity.”
Data sets like this underpin Jigsaw’s products, like the comment-filtering Chrome extension it released in March and its Perspective API tool for web publishers. Beyond this work, the think tank conducts experiments that have at times proven controversial, such as when it hired a disinformation-for-hire service to attack a dummy website. Other projects underway include Outline, an open source tool that lets news organizations offer journalists safer access to the internet; an anti-distributed denial-of-service solution; and a methodology to dissuade potential ISIS recruits from joining that group.