Cybercrime is on the rise, and organizations across a wide variety of industries — from financial institutions to insurance, health care providers, and large e-retailers — are rightfully worried. In the first half of 2017 alone, over 2 billion records were compromised. After stealing PII (personally identifiable information) from these hacks, fraudsters can gain access to customer accounts, create synthetic identities, and even craft phony business profiles to commit various forms of fraud. Naturally, companies are frantically looking to beef up their security teams. But there’s a problem.
A large skills gap is causing hiring difficulties in the cybersecurity industry, so much so that the Information Systems Audit and Control Association (ISACA) found that fewer than one in four candidates who apply for cybersecurity jobs are qualified. ISACA predicts that this lack of qualified applicants will lead to a global shortage of 2 million cybersecurity professionals by 2019.
In response, many companies are turning to artificial intelligence to pick up the slack. This raises a very important and expensive question: Are robocops ready for the job?
Training and supervision are paramount
One of AI’s apparent benefits is providing authentication without the need for human involvement. AI can monitor implicit data points (e.g., a user’s environment or geolocation), device characteristics (metadata of the call), biometrics (heartbeat), and behavior (typing speed and style) to validate someone’s identity faster and more effectively than the human eye.
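As a loose illustration of how such signals might be combined, the sketch below folds a few of them into a single risk score. The feature names, weights, and values are hypothetical and not drawn from any vendor’s actual model.

```python
# Hypothetical sketch: combining implicit authentication signals into one
# risk score. Feature names, weights, and values are illustrative only.

SIGNAL_WEIGHTS = {
    "geo_mismatch": 0.35,       # caller location differs from usual location
    "device_mismatch": 0.25,    # call metadata doesn't match known devices
    "biometric_anomaly": 0.25,  # e.g., voice or heartbeat pattern deviates
    "behavior_anomaly": 0.15,   # typing speed/style deviates from profile
}

def risk_score(signals: dict) -> float:
    """Weighted sum of anomaly scores, each expected in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

# Example: a session that looks normal except for an unusual location.
print(risk_score({"geo_mismatch": 0.9, "behavior_anomaly": 0.2}))  # 0.345
```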
Companies are already seeing great results from AI, as illustrated by FICO’s newest Falcon consortium models, which have improved card-not-present (CNP) fraud detection by 30 percent without increasing the false positive rate.
While AI’s ability to authenticate may outstrip a human’s, cybercrime is too intricate a problem to solve without strategic direction from a human to alleviate the cold-start problem. Given the complexity of a cybersecurity environment and the lack of a proper foundation from which to start, unsupervised cyber-sleuthing from robocops gets us nowhere. Identifying patterns in big data is an impressive feat for AI, but those analyses by themselves are ill-equipped to fight the war on fraud and eliminate inefficient customer experiences (CX).
At the same time, supervised machine learning techniques depend on human-supplied test cases to help train algorithms. As an analogy, instead of reinventing the wheel, a supervised algorithm is just figuring out the best tire circumference for given car models and weather conditions. While its role is limited in certain regards, supervised learning can extract patterns from big data and provide actionable intelligence.
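In practice, that training step can be as simple as fitting a standard classifier to transactions an analyst has already labeled. The sketch below is a minimal, hypothetical example; the features, data, and use of scikit-learn are purely for illustration.

```python
# Minimal sketch of supervised fraud training on human-labeled cases.
# Features and data are made up; scikit-learn is used only for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [amount_usd, hour_of_day, geo_mismatch, device_mismatch]
X = np.array([
    [25.0,   14, 0, 0],
    [5000.0,  3, 1, 1],
    [120.0,  10, 0, 1],
    [4300.0,  2, 1, 0],
])
y = np.array([0, 1, 0, 1])  # analyst-supplied labels: 1 = confirmed fraud

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new transaction; the output is a fraud probability, not a verdict.
print(model.predict_proba([[3500.0, 4, 1, 1]])[0][1])
```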
AI and machine learning can analyze massive quantities of data and identify patterns within that data that humans could never distill. But human direction is still needed to lay the foundation and set AI off on the right foot in its pursuit of both fraud and better customer service.
Readying AI for first contact
When artificial intelligence comes across a new dataset instance that doesn’t fit its induction-based models, a human may be needed to resolve the situation and train the algorithm on how to react in the future.
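One way to picture that handoff is an escalation step for instances the model has effectively never seen, with the analyst’s resolution folded back into the training data. The sketch below assumes a hypothetical `review_queue.ask_analyst` interface and uses an isolation forest as a stand-in novelty check.

```python
# Sketch of a human-in-the-loop step: events that look unlike anything the
# model was trained on get escalated to an analyst, and the analyst's
# decision is logged for future retraining. Names are hypothetical.
from sklearn.ensemble import IsolationForest

def handle_event(event_features, detector: IsolationForest,
                 review_queue, training_log):
    """Escalate novel-looking events; otherwise let the model decide."""
    if detector.predict([event_features])[0] == -1:  # -1 = outlier/unfamiliar
        label = review_queue.ask_analyst(event_features)  # hypothetical API
        training_log.append((event_features, label))      # retrain later
        return label
    return "auto_decision"
```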
To better understand this interplay, consider a military metaphor. In war, the saying goes that “no plan survives first contact.” Of course, you’re probably going to make contact, so does that mean you should give up before the battle begins? No, but you should follow the idea of determining commander’s intent — the “why” behind the details of your plan and its execution. That way, even if your plan falls apart, you can still accomplish your mission.
Similarly, in authentication, you have enemies (fraudsters) who are actively trying to best your protections. They hit you high, you put your hands up to block, and they find a new gap in the gut of your omni-channel defenses. This is in contrast to many common applications of machine learning. For example, meteorologists’ machine learning algorithms have substantially improved prediction accuracy over the past several years. Hurricanes, however, aren’t actively trying to fool meteorologists’ models — they’re acting naturally, albeit perhaps more intensely these days, thanks to climate change.
Authentication AI needs to be able to adapt to fraudsters’ new methods. And without an understanding of the cybersecurity commander’s intent, AI will not be able to adapt appropriately. Hence, a human element is needed to constantly guide and refine these powerful algorithms.
But what about GANs, you say? Generative adversarial networks are a relatively new concept in machine learning. Essentially, a GAN pits two machine learning algorithms against each other: Algorithm A does a job, and Algorithm B actively tries to poke holes in the way A does it.
For example, take a GAN image processing setup in which A is trying to identify whether a given image contains a bird. As A sees more and more pictures, it improves its ability to differentiate bird-filled pictures from bird-less pictures. Meanwhile, B is working to create pictures in which A incorrectly identifies whether there is a bird or not. Applied to authentication, A represents the AI authentication and B represents white-hat hackers trying to poke holes in your system. When effectively implemented, GANs have been shown to outperform traditional techniques at improving model performance and can help authentication AI actively prevent future criminal cyber activity.
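For readers curious what that A-versus-B loop looks like in code, here is a compressed, toy sketch applied to synthetic transaction feature vectors rather than bird images. The network sizes, data, and hyperparameters are arbitrary, not a production recipe.

```python
# Toy GAN sketch: D (the "A" role) learns to separate real transaction
# feature vectors from fakes, while G (the "B" role) learns to produce
# fakes that D misclassifies. Dimensions and settings are arbitrary.
import torch
import torch.nn as nn

FEATURES, NOISE = 8, 4
G = nn.Sequential(nn.Linear(NOISE, 16), nn.ReLU(), nn.Linear(16, FEATURES))
D = nn.Sequential(nn.Linear(FEATURES, 16), nn.ReLU(),
                  nn.Linear(16, 1), nn.Sigmoid())
loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

real = torch.randn(64, FEATURES)  # stand-in for real transaction features

for step in range(200):
    # Train D: push real examples toward 1, generated fakes toward 0.
    fake = G(torch.randn(64, NOISE)).detach()
    d_loss = (loss(D(real), torch.ones(64, 1)) +
              loss(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train G: make D label its output as "real".
    fake = G(torch.randn(64, NOISE))
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```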
However, even with GANs, an algorithm cannot understand the cybersecurity commander’s intent. And that’s where the overseeing human element once again comes into play.
Preventing false positives
On the back-end of authentication, certain fringe cases will never fit even the best algorithms because algorithms are based solely on inductive decision-making and past experience. For exceptions to those inductive rules, we need a human eye. Otherwise, seemingly innocent customer interactions can go very badly. Imagine your company addressing a customer differently because of their gender or skin color, for example.
Machine learning algorithms are analyzing massive amounts of data and doing it well. But in the end, the conclusions drawn are probabilistic, and there will always be exceptions to rules. Just as we cannot identify fraudsters 100 percent of the time, even after drawing out endless contingency plans, some customers who check all the boxes for fraud may actually be real customers in extenuating circumstances.
Take this example: A customer, Jose, who frequently calls from Houston is using a VoIP connection from Mexico. He’s nervous and fidgety on the phone, and your biometric behavioral sensors pick up on it. Additionally, he’s trying to activate a $5,000 wire transfer from his account. Most machine learning algorithms — even if supervised — would flag this as fraud. However, Jose explains that he went to Mexico to live with his family after his house was flooded by Hurricane Harvey. He needs the money for hospital bills for his grandmother in Mexico, who never told the family how bad her health had become.
What do you do? It’s a tricky situation because if you reject the request, you may be contributing to a PR nightmare, and worse, indirectly harming your customer’s grandmother. But fraudsters often take advantage of disasters. For these situations, the algorithm cannot give a hard answer.
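In decisioning terms, that usually means reserving a middle band of fraud probabilities for human judgment rather than forcing a binary verdict. A minimal sketch, with hypothetical thresholds:

```python
# Sketch of three-way decisioning: the middle band is deliberately left
# to a human agent. Thresholds are hypothetical.
def decide(fraud_probability: float) -> str:
    if fraud_probability < 0.30:
        return "approve"
    if fraud_probability > 0.90:
        return "decline"
    return "escalate_to_agent"  # cases like Jose's land here

print(decide(0.72))  # -> "escalate_to_agent"
```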
While it’s possible that in the near future cybersecurity forces will consist mostly of bots, today humans remain critical in the fight against fraud and the pursuit of great customer experiences. Only we can recognize the “why” behind cybersecurity, define key metrics to monitor faults in our algorithms, and make game-time decisions on fringe-case false positives that don’t fit our AI models.
Ian Roncoroni is CEO of Next Caller, a Y Combinator-backed provider of authentication and fraud detection technology.