In 2025, 1 in 20 identity verifications were fooled by generative AI deepfakes. According to a survey by Medius.com, 43% of professionals said they had fallen victim to a deepfake fraud attack. And the FBI recently published a press release stating that its internet fraud portal (IC3.gov) had received over 9,000 complaints explicitly related to AI fraud.

Deepfake scams are on the rise, and fraudsters are using AI-generated video, audio, and images to fool people. The problem? AI-generated media is becoming so convincing that it’s nearly indistinguishable from authentic content.

TruthScan, a deepfake detection software, aims to protect both consumers and enterprises from AI-related fraud attacks. TruthScan’s AI-fraud prevention suite includes audio, video, image, and text analysis tools that detect AI-generated content.

“When we started developing TruthScan, we asked decision makers how they were dealing with AI fraud,” says Christian Perry, CEO of TruthScan. “They told us that they basically had no idea what to do or where to even start.”

TruthScan’s strategy for stopping shadow AI attacks

The key to both improving and spotting generative AI models is data, and lots of it. Before Perry founded TruthScan, he’d launched a startup in 2023 called Undetectable AI. “For the last two years, we’ve been collecting data from AI models, detection algorithms, and real humans,” says Perry. “We took all of that, specifically focusing on data related to fooling detection algorithms, and plugged it all into TruthScan.”

One of the biggest challenges in deepfake detection is how quickly the generative models themselves get updated. Whenever a major AI image or video generation app releases an update, TruthScan’s software is typically updated within 24–48 hours with new data to enhance detection capabilities, according to the company.

TruthScan’s detection focuses particularly on photorealistic images, identity documents, and videos, analyzing every available signal in a given piece of content rather than relying on a single telltale artifact.

How generative AI fraud is targeting enterprises

Banks, crypto platforms, insurance companies, and even retail apps are all being targeted by deepfake attacks. For banking and crypto companies, know-your-customer (KYC) verifications have been a critical way to prevent identity fraud. But attackers can use deepfaked images and ID documents to attempt to fool these checks.

Insurance companies have online portals where customers can make insurance claims, but now, AI makes it easy for bad actors to submit phony claims. “We’re seeing that fraudsters are using AI to generate and submit fake evidence for insurance claims,” says Perry. “It only takes a few minutes.”

Even major food delivery apps are prone to generative fraud. If you place an order on a food delivery app and there’s something wrong with your order, you’re entitled to a refund, and fraudsters can now generate convincing photos of damaged or incorrect orders to claim refunds they aren’t owed.

AI scams affect everyone. Here’s how

Scammers aren’t just using AI to impersonate corporate officials; they’re also using deepfake tech to assume the identities of celebrities, loved ones, and public figures.

“People who are on these dating apps are being targeted by convincing catfishing schemes that use a combination of AI-generated videos, photos, and voices to trick victims into sending them money or personal information,” says Perry. “It only takes a few images of an individual, or a few seconds of their audio or video content to create a compelling deepfake of them.” Next time someone you’re talking to online sends you a photo of themselves, you may want to run it through an AI image detector.

TruthScan’s edge in real-time detection and integration

What sets TruthScan apart is its emphasis on real-time analysis and seamless integration into existing workflows. The platform offers API endpoints that enterprises can plug into their verification systems, allowing for instant checks during KYC processes, claim submissions, or even social interactions on platforms. For instance, banks can embed TruthScan's video analysis during remote onboarding sessions to flag deepfake attempts before they escalate.
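To make the integration pattern concrete, here is a minimal sketch of how a verification service might package an onboarding image for a deepfake check and gate the session on the result. Everything here is hypothetical: the endpoint URL, field names, check identifiers, and the 0.9 threshold are illustrative assumptions, not TruthScan's documented API.

```python
import base64

# Hypothetical endpoint; TruthScan's real API paths are not public in this article.
API_URL = "https://api.example-detector.com/v1/image/scan"  # placeholder


def build_scan_request(image_bytes: bytes, session_id: str) -> dict:
    """Package a KYC onboarding image as a JSON-serializable detection request.

    The field names ("session_id", "image_b64", "checks") are assumptions
    made for this sketch.
    """
    return {
        "session_id": session_id,
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
        "checks": ["photorealistic_deepfake", "document_tamper"],
    }


def should_block(response: dict, threshold: float = 0.9) -> bool:
    """Flag the session when the detector's AI-likelihood score crosses a threshold.

    Assumes the API returns an "ai_probability" score between 0 and 1.
    """
    return response.get("ai_probability", 0.0) >= threshold


if __name__ == "__main__":
    # A real integration would POST the payload to API_URL during onboarding
    # and block or escalate the session based on should_block(...).
    payload = build_scan_request(b"\x89PNG...", session_id="kyc-session-001")
    print(should_block({"ai_probability": 0.97}))  # a high score blocks the session
```

In practice, the blocking threshold would be tuned per risk tier: a bank might auto-reject only at very high scores and route mid-range scores to manual review.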

Perry highlights the tool's accuracy rates, which hover above 97% for photorealistic deepfakes, based on internal benchmarks against leading generative image and video models. "Our system not only detects anomalies; it also explains them," he adds. "Users get a breakdown of why content is flagged, whether it's unnatural pixel patterns, audio inconsistencies, or metadata mismatches, empowering teams to make informed decisions."

For consumers, TruthScan provides a user-friendly web and mobile app where individuals can upload suspicious media for free basic scans, with premium features for advanced forensics. This makes access to AI fraud prevention more widely available, helping users better recognize and avoid scams on dating sites or social media.

The road ahead: Scaling defenses against evolving threats

As generative AI continues to advance, TruthScan is gearing up for the next wave. The company plans to expand its dataset through partnerships with cybersecurity firms and academic institutions, aiming to incorporate quantum-resistant algorithms by 2027 to counter future AI threats.

Perry is candid about the challenge ahead. "The arms race between generators and detectors is intensifying," he notes. "But with proactive data strategies, we're keeping one step ahead."

In an era when trust in digital content is eroding, companies like TruthScan aim to fight back against deception. Fighting AI fraud requires AI that's smarter, faster, and relentlessly updated. For businesses and individuals alike, adopting modern defenses against modern threats isn't merely prudent; it's now a necessity.

The information in this article is intended for general informational purposes only and does not constitute legal, financial, or professional advice. Readers should consult with a qualified professional for guidance tailored to their specific needs.


VentureBeat newsroom and editorial staff were not involved in the creation of this content.