Presented by Celebrus

The 2023 Imperva Bad Bot Report found that bad bots make up a whopping 30% of all website traffic, with evasive bad bots accounting for 66.6% of all bad bot traffic. Meanwhile, automated attacks targeting APIs are on the rise. To no one’s shock, generative AI has been a significant driver of more sophisticated attacks – fraudsters are using ChatGPT to write code for their bots. And those bots are hoovering up personal and business data, stealing accounts and tallying up billions of dollars in damages for targeted companies across industries.

“Today bot networks are run by professional, organized criminal organizations like the Genesis Marketplace,” says Ant Phillips, CTO of Celebrus. “The technology is equally professional, and one of the major consequences is that older security techniques and technology, the rule-based solutions that can identify bots based on predictable behavior, are no longer effective. Machine learning and AI are the only way to step up and defeat fraudsters.”
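To make the contrast concrete, here is a minimal sketch – not Celebrus code, with invented feature names and thresholds – of the kind of rule-based check Phillips describes, the sort that evasive bots now routinely defeat by pacing their requests and simulating pointer activity:

```python
from dataclasses import dataclass

@dataclass
class Session:
    pages_visited: int
    duration_s: float
    mouse_moves: int

def legacy_rule_check(session: Session) -> bool:
    """Old-style rule: flag sessions that browse at superhuman speed or
    show no pointer activity at all. A modern evasive bot simply slows
    down and fakes mouse movement, sailing under both thresholds."""
    pages_per_min = session.pages_visited / max(session.duration_s / 60, 1e-9)
    return pages_per_min > 30 or session.mouse_moves == 0
```

A crude scraper tripping both rules gets caught; a paced, human-mimicking bot does not – which is exactly why static rules give way to machine learning on richer behavioral data.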

A bot is a software application that automates and speeds up repetitive tasks such as opening a page or clicking a link – and with generative AI behind their code, bots can now spoof user and device IDs and mimic human behavior to pass as regular users. They can be used for everything from credential stuffing, account enumeration, content scraping, form spam and unauthorized use of credit cards to impersonating users, emulating devices and launching destructive, full-scale distributed denial-of-service attacks to take down websites.
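As a toy illustration of how low the barrier is – this is not Celebrus code, and the URL is a placeholder – a few lines of Python’s standard library are enough to automate page fetches while presenting a spoofed browser identity:

```python
from urllib import request

# Spoofed header mimicking a mainstream browser; real evasive bots also
# rotate IP addresses, device fingerprints and request timing to blend in.
SPOOFED_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
}

def build_page_request(url: str) -> request.Request:
    """Prepare an automated page fetch that presents itself as a browser."""
    return request.Request(url, headers=SPOOFED_HEADERS)

# request.urlopen(build_page_request("https://example.com")) would then
# open the page at machine speed, over and over, with no human involved.
```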

On a larger scale are botnets: networks of malware-infected computers and devices under central control, able to reach millions of targets and exploit their security gaps and soft spots.

“Any individual bot attack, whether it’s account takeovers or pay-per-click fraud, is a serious issue in its own right,” Phillips says. “But when you look at the full scale of attacks, we’re talking about billions of dollars’ worth of crime here.”

Setting defenses against next-generation bots

There is, unfortunately, no magic bullet for identifying and eliminating all bots, because there’s such a broad array of them, wielded by a wide spectrum of criminals with a vast range of targets and goals. It takes painstaking data science work to successfully identify every kind of bot.

“It requires a lot of data exploration,” Phillips says. “Detailed data is necessary to understand what these bots are doing and isolate them from regular human traffic. That’s why a clean, updated and comprehensive data set is crucial — not just for identifying the dangers we know about today, but also for the emerging threats.”

New threats scale up fast, and criminals are quick to take advantage of the window of opportunity before they’re found out and defenses are put in place. Companies have to act quickly – they can’t be scrambling for the data they need to identify a new threat; they need to have that rich data set on hand.

“Fundamentally, getting rich enough data to identify the differences between valid users and bots, that’s the core competency that’s needed to be able to drive some of this bad traffic away,” he says.

Using AI, machine learning and comprehensive identity profiles

Developing a data defense strategy also has a legal dimension. It requires granular and accurate data, such as biometric data, which must be kept secure from both external and internal misuse (i.e., the data cannot be borrowed by the marketing department). That means staying aware of the regulatory landscape. In the U.K., for instance, payment services regulations now require companies to refund victims of authorized push payment (APP) fraud within five business days. Similar regulations are spreading across the world, growing increasingly commonplace (and often quite complex).

To combat this, Phillips says, “we believe very strongly in having first-party defenses. That, in itself, solves the potential problems around data privacy and strict third-party sharing issues.”

The core of Celebrus’ first-party technology is its identity graph, which uses first-party data to build profiles that identify a user and how they interact with a brand digitally, all in real time. Understanding the entire customer journey – from when they log in to what they browse across each digital touchpoint – helps build that biometric profile.

“Real-time, first-party data across the whole customer journey, all the things that customer does helps us understand who they are, so that we can then be very effective in detecting when a bot starts to diverge from that behavior,” he says.

When a user arrives on a website, a biometric profile can determine whether they’re using the site in the same ways they have historically, coming from the same part of the world and sending signals similar to previous visits. The profile continuously builds evidence for every visitor in every session, whether on web or mobile, and that evidence can be scored.
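The scoring idea can be sketched as follows – a deliberately simplified stand-in with invented feature names, not Celebrus’ actual model – measuring how far a session drifts from a visitor’s historical behavioral profile:

```python
def anomaly_score(profile_mean: dict, profile_std: dict, session: dict) -> float:
    """Mean absolute z-score of a session's features against the visitor's
    historical profile. A higher score means behavior diverging from what
    this visitor has done before, the divergence a bot takeover produces."""
    z_scores = []
    for feature, value in session.items():
        std = max(profile_std.get(feature, 1.0), 1e-9)  # avoid divide-by-zero
        z_scores.append(abs(value - profile_mean.get(feature, 0.0)) / std)
    return sum(z_scores) / len(z_scores)
```

A session matching the profile scores near zero; a scripted session racing through pages at odd hours scores far higher and can be challenged or blocked.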

Developing a bot defense plan

Putting safeguards in place is very much a crawl, walk, run process. It should start with the data — measuring traffic and understanding where the losses are.

“Many companies we work with are surprised when we show them that 20 percent or more of their site traffic is bots,” Phillips says. “That has all sorts of consequences for marketers in terms of how many people they think are actually coming to their site.” That includes traffic generated by pay-per-click fraud, where advertisers are charged for ad impressions generated by bots rather than real people.

The next step is to prioritize defenses – it’s not possible to remove all fraudulent traffic, but it is possible to deflect the attacks that have significant economic impact. And the solution needs to run continuously, in real time. Fraudsters know the traffic patterns each industry – and even individual companies – experiences, and time their attacks to good effect.

Fraud detection and prevention also needs to keep the customer experience fluid and frictionless; otherwise it will drive real customers away. It should require limited interventions while minimizing false positives. And a feedback loop is vital to ensure continuous improvement as fraudsters change tactics and look for ways around defenses.

And finally, a solution should provide good data metrics and KPIs, with robust reporting. That way, you keep unhappy surprises to a minimum, and ensure that your defenses remain effective and continue to improve.

“From my perspective, one single thing above all others, it’s about partnering,” Phillips says. “Not every business has those kinds of advanced data engineering and data science skills and technology to be able to solve this problem. It’s important to partner with someone who has not only the technology, but a deep understanding of the overall landscape.”

Dig Deeper: Learn more about combating bad bots and the technology that powers data-based fraud detection.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact