This post was written by Rajesh Ganesan, Vice President at ManageEngine.
New technologies are frequently met with unwarranted hysteria. However, if the FBI's recent private industry notification is any indication, AI-generated synthetic media may actually be cause for concern. The FBI believes deepfakes will be used by bad actors to further spear phishing and social engineering campaigns. And according to deepfake expert Nina Schick, AI-based synthetic media, meaning hyperrealistic images, videos, and audio files, is expected to become ubiquitous in the near future, which means we need to get better at spotting deepfakes.
The consumerization of deepfake technologies is already upon us, with applications such as FaceApp, FaceSwap, Avatarify, and Zao rising in popularity. This content is protected under the First Amendment until it is used to further illegal efforts, which, of course, we have already started to see.
According to a UCL report published in Crime Science, deepfakes pose the most serious artificial intelligence-based crime threat.
Your IP depends on spotting deepfakes
We’ve already seen effective deepfake attacks on politicians, civilians, and organizations. In March 2019, cybercriminals successfully conducted a deepfake audio attack, tricking the CEO of a U.K.-based energy firm into transferring $243,000 to a Hungarian supplier. Last year, a lawyer in Philadelphia was targeted by an audio-spoofing attack, and this year, Russian pranksters duped European politicians in an attack initially thought to be deepfake video. Even if the Russians did not use deepfake technology in their attacks, the subsequent news coverage speaks to how the existence of deepfakes is sowing distrust of media content across the board.
As synthetic media proliferates and grows more convincing, it will become increasingly difficult for us to know which content to trust. The long-term effect could be a general distrust of audio and video, which would be an inherent societal harm.
Deepfakes facilitate the “liar’s dividend”
As synthetic media populates the Internet, viewers may come to engage in "disbelief by default," becoming skeptical of all media. This would certainly benefit dishonest politicians, corporate leaders, and spreaders of disinformation. In an environment polluted by distrust and misinformation, those in the public eye can deflect damaging information about themselves by claiming the video or audio in question is fake; Robert Chesney and Danielle Citron have described this effect as the "liar's dividend." As a quick example, after Donald Trump learned about the existence of deepfake audio, he rescinded his previous admission and asserted that he might not have been on the 2005 Access Hollywood tape after all. Trump aside, a "disbelief by default" environment would be harmful to those on both sides of the aisle.
Deepfake detection efforts are ramping up
In addition to initiatives from Big Tech, namely Microsoft’s video authentication tool, Facebook’s deepfake detection challenge, and Adobe’s content authenticity initiative, we have seen particularly promising work out of academia.
In 2019, USC scholar Hao Li and others were able to identify deepfakes via correlations between head movements and facial expressions, researchers from Stanford and UC Berkeley subsequently focused on mouth shapes, and most recently, Intel and SUNY Binghamton scholars have attempted to identify the specific generative models behind fake videos. It's quite a game of cat and mouse, as the bad actors and altruistic detectors use generative adversarial networks (GANs) in an attempt to outwit one another. This past February, UC San Diego researchers showed just how hard it is to stay ahead of the bad actors, demonstrating that attackers who adapt their fake videos can still slip past deepfake detection systems.
Work is underway outside academia as well. DARPA's SemaFor program, along with companies such as Sensity, Truepic, Amber Video, Estonia-based Sentinel, and Tel Aviv-based Cyabra, all have detection initiatives in the works.
Additionally, blockchain technologies could help to identify media’s provenance. By creating a cryptographic hash from any given audio, video, or text source, and placing it on the blockchain, one could ensure that the media in question has not been altered.
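To make the idea concrete, here is a minimal sketch in Python (the file name and recorded fingerprint are hypothetical): a publisher hashes the original file and anchors that hash on a blockchain or other tamper-evident ledger, and anyone who later receives a copy recomputes the hash and compares it against the recorded value.

```python
import hashlib


def media_fingerprint(path: str) -> str:
    """Compute a SHA-256 fingerprint of a media file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_fingerprint(path: str, recorded_hash: str) -> bool:
    """Return True only if the local copy matches the fingerprint recorded at publication."""
    return media_fingerprint(path) == recorded_hash


# Hypothetical usage: compare a downloaded clip against the hash the publisher
# anchored on-chain. Any edit to the file, however small, changes the hash.
# matches_published_fingerprint("press_briefing.mp4", "<recorded_hash_hex>")
```

Note that anchoring a hash on a blockchain only proves that a particular file existed unaltered at a particular time; it does not, by itself, prove the content was genuine when it was recorded.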
Nevertheless, given that the FBI is already seeing bad actors use AI-generated synthetic media in spear phishing and social engineering efforts, it is vital that all employees remain vigilant and practice their own deepfake detection.
Spotting deepfakes 101
According to the FBI, deepfakes can often be identified by distortions around a subject's pupils and earlobes. It is also wise to look for jarring head and torso movements, as well as syncing issues between lip movements and the accompanying audio. Another common tell is a distorted, blurry, or otherwise indistinct background. Lastly, be on the lookout for synthetic profile photos on social media: generated faces tend to show consistent eye spacing and placement across a large group of otherwise different images.
As a caveat, the deepfake tells are constantly changing. When deepfake video first circulated, odd breathing patterns and unnatural blinking were the most common signs; however, the technology quickly improved, making these particular tells obsolete.
Aside from looking for tells and relying on third-party tools to verify the authenticity of media, there are certain basic steps that can help employees spot deepfakes. If an image or video appears dubious, one can check the metadata to see whether the creation time and creator ID make sense. One can learn a great deal from when, where, and on what device an image was created.
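As a rough illustration, the sketch below (assuming the Pillow imaging library and a hypothetical file name) reads the basic EXIF fields that record when an image was created, on what device, and with what software. Fields that contradict the story accompanying the image, or that are missing entirely, are worth a second look; keep in mind, though, that metadata can itself be stripped or forged.

```python
from PIL import Image, ExifTags  # pip install Pillow


def creation_metadata(path: str) -> dict:
    """Pull creation time, device, and software fields from an image's EXIF data."""
    exif = Image.open(path).getexif()
    labeled = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {
        "created": labeled.get("DateTime"),
        "camera_make": labeled.get("Make"),
        "camera_model": labeled.get("Model"),
        "software": labeled.get("Software"),
    }


# Hypothetical usage: a "smartphone snapshot" whose EXIF names an image editor
# and no camera model deserves closer scrutiny.
# print(creation_metadata("suspicious_profile_photo.jpg"))
```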
At this point in time, a healthy skepticism of media from unknown origins is warranted. It’s important to train employees on media literacy tactics, including watching out for unsolicited phone calls and requests that don’t sound quite right. Whether a request comes through an email or a call, employees should be sure to confirm the request through secondary channels — particularly if the request is for sensitive information. Also, employees who manage corporate social media accounts should always use two-factor authentication.
Companies that take a continuous learning approach to security risks such as deepfakes should teach all employees to maintain some skepticism toward any shared media content. If synthetic media proliferates as quickly as Nina Schick and other deepfake experts expect, maintaining that skepticism will be vital.
Anti-spam and anti-malware software can also alert employees to anomalous activity by filtering and checking every email that comes through. As with any technology, though, employees should still do a gut check as an added layer of protection.
Make deepfake awareness part of your cybersecurity plan
Deepfakes pose serious risks to society, including sowing mistrust of media in general, which carries its own devastating repercussions. From stock market manipulation and damage to business and personal reputations to disrupted elections and geopolitical conflict, the potential negative effects of deepfakes are vast.
That said, there are some potential positive uses as well. Synthetic voices can help people with amyotrophic lateral sclerosis (ALS): the Project Revoice initiative used deepfake technology to give Pat Quinn, co-founder of the ALS Ice Bucket Challenge, his voice back. The technology can also serve educational ends, as when David Beckham delivered an anti-malaria message in nine languages as part of Malaria No More's campaign against the deadly disease.
Nevertheless, it’s vital that employees make spotting deepfakes a part of their media literacy and Zero Trust mindset, as synthetic media will get more convincing and more prolific in the near future.
Rajesh Ganesan is Vice President at ManageEngine, the IT management division of Zoho Corporation. Rajesh has been with Zoho Corp. for over 20 years developing software products in various verticals including telecommunications, network management, and IT security. He has built many successful products at ManageEngine, currently focusing on delivering enterprise IT management solutions as SaaS.