Multiple reports say that mobile ad fraud could be costing advertisers and publishers billions of dollars a year. That’s a big tax on what will be a $75 billion business in the U.S. alone in 2018 (according to market researcher eMarketer).
That’s why Adjust, a mobile measurement and analytics company, has started an effort across the whole mobile ad industry to fight back. A year ago, the company helped start the Coalition Against Ad Fraud, which is bringing together different players in the mobile ad ecosystem to stop ad fraud from multiple directions.
The group has pledged to tackle mobile ad fraud head-on by working together to develop definitions of fraud, come up with ways to measure it, and talk about solutions that take the incentive out of fraud. The group has now finished its initial document on fraud definitions, and it is trying to spread awareness in the industry.
Adjust also announced the coalition will open up to advertisers to accelerate the anti-fraud efforts. The CAAF initiative is part of a concerted effort by industry leaders like Liftoff, IronSource, and Jampp. The standardization document provides all industry players — including advertisers, supply-side networks and third-party vendors globally — with a common, agreed-upon nomenclature and a rounded technical overview of mobile performance ad fraud so they are better equipped to deal with the issue at large.
Mobile ad fraud can take a number of different forms, from faked impressions and click spam to faked installs. And as the industry develops defenses against current fraud techniques, fraudsters adapt their methods to stay effective.
I sat down with Andreas Naumann, fraud expert at Berlin-based Adjust, for lunch recently in San Francisco.
Here’s an edited transcript of our interview.
Andreas Naumann: We started the coalition a year ago, the Coalition Against Ad Fraud. The idea was that we wanted to get together with the supply-side networks to define standards of what is fraud and what isn’t. What’s the methodology of detection? What’s the methodology of mitigation? That way, our clients can be sure that we have standards that cover all eventualities in their talks with networks, and also, we can spread that out into the industry for other people to learn and follow. That document is now finished. We’re going to release it on the 18th.
VentureBeat: Were there some major conclusions from that?
Naumann: Not conclusions exactly, but standardization. Nomenclature and definitions so it’s clear what we consider to be fraud, how we find it, and how we deal with it. We have full transparency into what’s going on. There’s so much misinformation out there, so many players who call things different names and say this or that is fraud.
We have the same problems as anyone else. When we started with all of this in 2016, it took about three months before all of our competition claimed they were doing fraud prevention, even though they weren’t. They were doing detection, and then clients had to go and get their money back.
Click injection, for instance, was something we transparently brought into the market. “Here’s this exploit. We don’t have a solution yet, but this is the problem.” Defining it takes a month or two, and then our competition comes out with something that claims there’s this new thing, calls it something different, and what Adjust is doing is actually the old stuff. They’re just making up new names and definitions for no reason other than to claim that they’ve solved a new problem.
VentureBeat: Who is in the space that you’d consider competition?
Naumann: Direct competition for us would be AppsFlyer, Kochava, Singular to some extent. We’ll see what names they come up with. We just want to make sure that, first off, for our stuff, clients are in the know and can educate themselves. They can bring that into the broader market and share it with different people.
VentureBeat: You guys were putting a number on it in your reports, that it’s a billion-dollar problem?
Naumann: That’s not something I’m so interested in. Except for getting people to click on something to read about it, that doesn’t have any value, really. It’s a large problem, but it’s hard to name how large it is, because everybody only sees what they see. If you have a broad definition of fraud, then maybe it’s a $12 billion problem? But that makes me smile, usually, because as long as you don’t define what’s part of the problem and what isn’t, those numbers are pretty useless.
It gets worse when people start trying to say things like, “Well, fraud is bigger in China than it is in the U.S. by this much.” That doesn’t make any sense. Fraudsters don’t care where a campaign runs. They care about how much money they can make and how little attention you pay to what they’re doing. That’s it. There’s no cause-and-effect relationship between the country you run your campaign in and how much fraud you can expect.
That’s an unpopular opinion, because it doesn’t make the whole thing easier to understand. But we try to be as candid and transparent about what we’re doing as possible. Usually, when I talk about numbers, I make a big effort to define what those numbers are describing.
We, of course, can only talk about what we’re seeing from our clients’ perspective, but fraudsters know our clients are protected. That’s why they don’t get targeted as much as someone who isn’t as well-educated, who may have older standards as far as KPIs go. They may still think about quantity only, acquiring as many users as possible for the lowest price. They’ll see much bigger rates of fraud.
We do a lot of education for the market, which is the background of why we started the coalition in the first place. We want to raise the topic, to educate people in what they need to look out for. Simple steps they can take in order to see if they’re facing the same issues as other players, or if they have to come up with countermeasures of their own.
What’s inherently different is, when we report on, “OK, this is how much fraud we rejected this year,” that’s a much lower number, usually, than if somebody says, “This is how much fraud we detected this year.” If we have actual prevention running — let’s say a client has been running without prevention for a year, and then on January 1 they turn on our fraud prevention suite. Say they have a lot of sources that do click injections. As soon as they turn their filters on, we start rejecting all traffic that shows click spam or click injection, which means from that moment on, no attributions happen to those sources. They’re not getting paid, and they’ll find something else to do.
In the beginning we’ll see a high fraud rate, which then tumbles right away and stays low. After that, you’ll only see spikes from time to time when someone tries to work fraud in on new campaigns. It makes for an inherently different number than when you detect for the whole month, and then afterward you tell the client, “Hey, you need to get your money back.” That rate will always be higher than when you actually mitigate the problem during attribution. That makes our numbers somewhat incompatible with what the rest of the industry is doing. Nobody wants to do the prevention part.
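The prevention-versus-detection distinction Naumann describes can be made concrete with a toy sketch. Everything below is illustrative and hypothetical — the data, the single-signal heuristic, and the threshold are invented for the example and are not Adjust’s actual logic, which relies on many more signals. The point is only the reporting difference: a detection workflow attributes everything and flags fraud afterward, while a prevention filter rejects flagged installs at attribution time, so fraudulent sources never enter the paid totals.

```python
# Hypothetical sketch of prevention vs. detection reporting.
# All installs, field names, and thresholds are illustrative only.

def is_click_injection(install):
    # Toy heuristic: a click recorded implausibly close to the install.
    # Real systems combine many signals; this is just for the example.
    return install["install_time"] - install["click_time"] < 2  # seconds

installs = [
    {"source": "network_a", "click_time": 0,   "install_time": 120},  # legit
    {"source": "network_b", "click_time": 100, "install_time": 101},  # injected
    {"source": "network_a", "click_time": 5,   "install_time": 90},   # legit
    {"source": "network_b", "click_time": 50,  "install_time": 51},   # injected
]

# Detection: attribute everything, flag fraud after the fact. The client
# then has to claw the money back, and the reported fraud rate stays high.
detected = [i for i in installs if is_click_injection(i)]
detection_rate = len(detected) / len(installs)

# Prevention: reject flagged installs during attribution, so the
# fraudulent sources are never paid and drop out of the attributed total.
attributed = [i for i in installs if not is_click_injection(i)]

print(f"detection rate: {detection_rate:.0%}")    # 50%
print(f"attributed installs: {len(attributed)}")  # 2
```

Under this sketch, the detection workflow reports a 50 percent fraud rate on the same traffic from which the prevention workflow simply attributes two clean installs — which is why, as Naumann notes, rejection-based numbers look lower than detection-based ones.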