Twitter is proposing a handful of new features designed to help its users spot “synthetic” or “manipulated” media, including deepfake videos.
The social networking giant last month announced plans to implement a new policy around media assets that have been altered to mislead the public. Today heralds Twitter’s first draft proposal, alongside a public consultation period, as it works to refine the rules and how they will be enforced.
“When you come to Twitter to see what’s happening in the world, we want you to have context about the content you’re seeing and engaging with,” said Twitter VP of trust and safety Del Harvey in a blog post. “Deliberate attempts to mislead or confuse people through manipulated media undermine the integrity of the conversation.”
With the growing sophistication of technology, it has become easier to manipulate media such as photos, audio clips, and videos to falsely represent individuals or groups. In particular, the rise of AI-powered “deepfake” videos has raised specific concerns that public figures such as politicians could be falsely depicted doing or saying things to advance a rival’s political agenda. The problem is not limited to Twitter, of course, but as a mainstream communications and news conduit, the platform is particularly susceptible to being targeted by nefarious actors.
Twitter has proposed a definition of “synthetic and manipulated media” as:
…any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.
The San Francisco-based company has published a draft outline of what it plans to do if it detects a tweet containing such manipulated content. Steps include automatically placing a message next to the offending tweet informing users that the content may not be legitimate or is in some way misleading. Additionally, it may warn people before they engage with the tweet (e.g., “like” or “retweet” it) that the content may not be authentic, and even link to reputable third-party sources to explain why the content is believed to be synthetic.
However, Twitter said it may only remove such content if it could “threaten someone’s physical safety or lead to other serious harm,” according to Harvey. In other words, Twitter will not remove manipulated content just because it perpetuates a falsity around a high-profile political figure.
It is worth stressing here that this is just a draft proposal and Twitter is actively soliciting feedback from the public. Indeed, a questionnaire running from today through November 27 specifically asks whether photos and video that have been deliberately altered to mislead people should “be removed, even if it puts the responsibility on Twitter to decide.”
So Twitter could still backpedal slightly on this if the feedback it garners strongly indicates that manipulated content should be removed, with Twitter itself serving as the de facto arbiter.
“We’ll review the input we’ve received, make adjustments, and begin the process of incorporating the policy into the Twitter Rules, as well as train our enforcement teams on how to handle this content,” Harvey said. “We will make another announcement at least 30 days before the policy goes into effect.”
The fight against fabricated and misleading content extends far beyond Photoshopped images or AI-powered deepfakes. Indeed, “fake news” has become an umbrella term to describe the deliberate spread of misinformation online, with the omnipresence of abuse and bullying feeding into the loss of user trust. Facebook sparked controversy last month when it announced that it would not include political advertising as part of its fact-checking program, and Twitter capitalized on the brouhaha soon after by revealing that it would ban political advertising altogether from November 22.
The underlying problem is the sheer scale at which these platforms operate, with Twitter claiming 145 million daily users and Facebook substantially more. Consequently, automation is playing a central role in these companies’ efforts to clean up their platforms. Earlier this year, Twitter acquired Fabula AI, a machine learning startup that helps spot fake news, while last year it bought another startup called Smyte to tackle hateful content more proactively.
This is an approach Twitter will likely use for its manipulated media strategy. As part of its public consultation, Twitter said it’s calling on other organizations to get in touch and collaborate on new tools that can automatically detect synthetic and manipulated media.