Resistant AI has raised $2.75 million in venture capital to develop an artificial intelligence system that protects algorithms from automated attacks.
Index Ventures and Credo Ventures led the investment, which included participation by Seedcamp, UiPath CEO Daniel Dines, and Avast CTO Michal Pechoucek. Based in Prague, Resistant AI focuses on the growing problem of hackers harnessing AI to manipulate machine learning systems.
Experts have long predicted that AI would fuel a cybersecurity arms race between attackers and their targets.
“Companies are just now learning how to deploy AI,” said Resistant AI cofounder and CEO Martin Rehak. “And on the other side, we see criminals and fraudsters learning how to use those processes for their benefit and how to steal money at scale. Our job is to protect the AI and machine learning models.”
Resistant AI’s team includes a core group that worked at Cognitive Security, which was acquired by Cisco Systems in 2013. That team originally began working on AI for security back in 2006, Rehak said, at a moment when such technology seemed far over the horizon.
“The first five years, when I told anyone what we were doing, they told me I was crazy,” he said.
The AI-related work became increasingly central while they were at Cisco. But the group finally struck out on its own to focus on the issue of AI being used to attack AI — or, as Rehak explains, AI being used to attack various automated decision-making systems.
Experts have grown increasingly worried about the rise of adversarial attacks, in which an outsider feeds carefully crafted inputs to a machine learning model in order to disrupt or manipulate its output.
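The mechanics can be illustrated with a toy example. The sketch below is hypothetical and not Resistant AI's system: it uses a simple linear scorer as a stand-in for an approval model, and shows how an attacker who has inferred the model's weights can nudge a rejected input along the weight direction until it is approved.

```python
import numpy as np

# Hypothetical linear approval model: approve when the risk score is below zero.
w = np.array([0.8, -0.5, 1.2])   # weights an attacker might infer by probing
b = -0.4

def approve(x):
    return float(np.dot(w, x) + b) < 0.0

x = np.array([0.6, 0.1, 0.5])    # a fraudulent application, initially rejected
assert not approve(x)

# Adversarial nudge: a small step against the weight vector lowers the
# risk score without changing the application much in absolute terms.
x_adv = x - 0.5 * w / np.linalg.norm(w)
assert approve(x_adv)            # the perturbed application now slips through
```

Real attacks target far larger models, but the principle is the same: small, targeted input changes can flip an automated decision.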
When Resistant AI launched in 2019, it decided to focus first on financial companies, which had begun turning to automated systems to approve applications for various products.
Fraud attempts can occur in several ways. In one basic scenario, people submit utility bills or bank statements with the names changed to fool algorithm-driven verification systems into opening accounts or approving financing. Resistant's AI intervenes by detecting visual anomalies or suspicious data and stopping the document before it enters the approval system.
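One cheap consistency check of the kind such a system might run (illustrative only, not a description of Resistant AI's actual method): verify that the line items on a parsed bank statement reconcile with its stated closing balance. Doctored totals often fail this arithmetic.

```python
# Hypothetical parsed bank statement; field names are invented for illustration.
statement = {
    "opening_balance": 1000.00,
    "transactions": [-120.50, 300.00, -45.25],
    "closing_balance": 2134.25,   # altered by the fraudster to inflate funds
}

# The transactions imply a closing balance of 1134.25, not 2134.25.
expected = statement["opening_balance"] + sum(statement["transactions"])
is_consistent = abs(expected - statement["closing_balance"]) < 0.01
print(is_consistent)  # False: flag the document for review
```

Production systems combine many such signals, from pixel-level artifacts to metadata, but each is a check that a forged document is likely to fail.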
Resistant’s service can also review the decisions being made by a financial system, consider all the inputs, and look for correlations or inconsistencies within large batches. For example, a single request for approval might seem benign, but within a group of 100,000 requests, it may share abnormalities with several other requests.
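The batch-level idea can be sketched as follows (the field names and threshold are invented for illustration): collapse each request to the fields fraudsters tend to reuse, then flag any profile that recurs suspiciously often across the batch.

```python
from collections import Counter

# Hypothetical loan applications: each is a dict of fields the scorer sees.
batch = [
    {"income": 52000, "employer": "Acme", "phone": "555-0101"},
    {"income": 52000, "employer": "Acme", "phone": "555-0102"},
    {"income": 52000, "employer": "Acme", "phone": "555-0103"},
    {"income": 83000, "employer": "Initech", "phone": "555-0200"},
]

def profile(app):
    # Keep the fields an attacker tends to reuse; drop the ones
    # they vary between identities (here, the phone number).
    return (app["income"], app["employer"])

counts = Counter(profile(a) for a in batch)
suspicious = [a for a in batch if counts[profile(a)] > 2]
print(len(suspicious))  # 3: near-identical applications flagged as a cluster
```

Any single application above looks plausible in isolation; only the batch view reveals the repeated profile.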
“That way, we can see that someone under different identities is actually fingerprinting the system and trying to find the vulnerability,” Rehak said.
By “fingerprinting,” Rehak means someone is submitting a range of documents and information to try to understand how a company’s algorithms and machine learning function.
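Fingerprinting in its simplest form is a boundary search. The toy sketch below (hypothetical, not drawn from Resistant AI) shows an attacker binary-searching a black-box approval threshold with repeated probe applications; the resulting cluster of submissions near the decision boundary is exactly the kind of pattern batch analysis can catch.

```python
# Hypothetical black-box scorer the attacker can only query, not inspect.
def bank_approves(income):
    return income >= 47500   # threshold unknown to the attacker

# Binary-search the approval threshold with repeated probe applications.
lo, hi = 0, 200_000          # lo is known rejected, hi is known approved
while hi - lo > 1:
    mid = (lo + hi) // 2
    if bank_approves(mid):
        hi = mid
    else:
        lo = mid

print(hi)  # 47500: the threshold recovered in ~18 probes
```

A real model has many more parameters than one threshold, but the same probe-and-observe loop, automated with AI, is how attackers map out a decision system.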
The goal of such an attack can be twofold. First, the hacker may be trying to figure out the parameters of the algorithms in order to commit fraud. However, they may also be trying to use the attack to learn about the algorithm in order to copy it. They might then sell the information to other people who want to commit fraud or possibly even to competitors of the company being attacked.
In both cases, the hackers are increasingly using AI to automate and adapt their own methodology for probing these machine learning systems, Rehak said.
Going forward, Resistant plans to use the money to expand its staff of 20 people and extend its sales operations in Western Europe.