Your users are vulnerable to the insidious types of fraud that infest every online game out there, including account takeovers, game hacks, credential ripoffs, bots, and more. Learn how fraud-fighting tools powered by artificial intelligence are super-charging the fight against scammers when you join this VB Live event!
Don’t miss out!
We’ve come a long way since the Super Nintendo days, says Jeff Sakasegawa, trust and safety architect at Sift Science. With gaming platforms proliferating across the online world, there are millions of potential interactions, not just in the context of a game itself, but in things like power-ups and skins, mods and downloadable content, not to mention the connections created in the online communities that have sprung up around these games.
“You’d hope a lot of that would be positive, but unfortunately – we’re all familiar with the internet – malicious behavior is just an ever-increasing problem,” he says. “It’s one of the great challenges of being in this space. Fraud prevention is what helps keep those engagements and experiences as positive as possible.”
As people spend more time online and engage more with their games, the number of interactions they must trust developers to protect increases. It’s not only the money involved, but a virtual representation of self, and gamers are genuinely sensitive to making sure it accurately represents them, which means investing a tremendous amount of time in their play, their characters, and their items. In-game items become worth as much as cold, hard cash, or more: gamers haven’t just sunk money into these virtual trophies, but time and energy too.
“It all comes back to the idea that you’ve built this great platform, this successful game, this place where a lot of people are going and spending their time, and you generate this value,” Sakasegawa says. “Unfortunately, externally people see that value as something to exploit.”
That abuse on your platform can take many forms, he notes, whether it’s traditional payment fraud, where credit cards are swiped, account takeover, posting malicious content, or trolling. But because fraud is increasingly sophisticated and detection increasingly requires needle-in-a-haystack levels of focus, developers sometimes don’t believe they have a real problem, or don’t realize exactly how widespread the issue is.
“Not because they’re willfully trying to turn a blind eye to it – they may just not have the sophistication or the rigor or the data scientists to identify these things to begin with,” he says. “The first step is applying measurement and reporting to give you an understanding of what the real implications are.”
That means understanding, from a monetary standpoint and across issues like user churn, what is going on under the hood. That level of insight and sophistication requires machine learning and AI, and those solutions and tools are now at hand for any company as the technology becomes democratized, allowing every developer to audit for fraud and get in front of it instead of frantically chasing behind.
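As a purely illustrative sketch of the “measure first” mindset Sakasegawa describes, many teams start with simple, auditable signals before graduating to machine-learning models. Every signal name, weight, and threshold below is hypothetical, and none of it reflects Sift Science’s actual product:

```python
# Hypothetical rule-based risk scoring for account activity. Signal names,
# weights, and the review threshold are all invented for illustration.

def risk_score(event):
    """Combine a few hypothetical abuse signals into a score in [0, 1]."""
    score = 0.0
    if event.get("failed_logins_last_hour", 0) > 5:
        score += 0.4  # burst of failed logins: possible credential stuffing
    if event.get("new_device", False):
        score += 0.2  # login from a device never seen on this account
    if event.get("country_changed", False):
        score += 0.2  # IP geolocation differs from the account's history
    if event.get("rapid_item_transfers", 0) > 10:
        score += 0.3  # in-game items being drained to another account
    return min(score, 1.0)

def flag_for_review(event, threshold=0.6):
    """Queue the event for manual review if its combined score is high."""
    return risk_score(event) >= threshold
```

Even a toy scorer like this gives a team its first measurement and reporting baseline: how many accounts get flagged per day, and which signals fire most often, before ML models replace the hand-tuned weights.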
“You can say, hey, we know we have a problem, but we actually found it, and here’s what we’re doing to remediate that and do better in the future,” Sakasegawa says. That’s preferable to just getting burned by these events.
“Just think about what your customers would expect from you,” he adds. “They would probably hope that you have some kind of preventative measure in place, as opposed to a bad event happens with their data, with their payment information, and your response is essentially a shrug. That’s a terrible look.”
Fraud needs to be combated holistically: listening to what your customers are telling you, which can be a warning of a bad wave coming at you; being honest with yourself about exactly what effort you’re putting into combating scams; and getting buy-in from the C-suite on putting preventative measures in place.
And as threats become increasingly interrelated – an account takeover leads to widespread trolling leads to credit card fraud – companies need to do their best to understand the total user journey.
“You can do well by focusing on a specific avenue of abuse, but I would posit that as these things co-mingle more, you just have to be better in preventing a lot of this from happening concurrently, unfortunately,” he says. “The better you can become at identifying this stuff, the better a space you’ll be in as far as preventing this activity, regardless of the avenue in which a fraudster or abuser comes to your platform. You have to batten down the hatches and just be better on all fronts.”
To learn more about what hatches to batten down, how internal evangelizing can combat larger threats, and how to boost user loyalty as you shore up account vulnerabilities, don’t miss this VB Live event!
In this webinar, you’ll learn:
- How the gaming industry can secure gamer data and build trust
- How account takeover, fake licensing, spam, and scams pose a particular challenge to gamers and gaming platforms
- What policies your company should have in place around data breach ransom
- How to combat trolling
Speakers:
- Jeff Sakasegawa, Trust and Safety Architect, Sift Science
- Dean Takahashi, Lead Writer, GamesBeat
- Scott Adams, CEO, FraudPvP.com, Former Director of Fraud & Risk, Riot Games
- Rachael Brownell, Moderator, VentureBeat
Sponsored by Sift Science