AI enables organizations to automate tasks, extract information, and create media nearly indistinguishable from the real thing. But like any technology, AI isn’t always leveraged for good. In particular, cyberattackers can use AI to enhance their attacks and expand their campaigns.
A recent survey published by researchers at Microsoft, Purdue, and Ben-Gurion University, among others, explores the threat this “offensive AI” poses to organizations. It identifies the capabilities adversaries can use to bolster their attacks, ranks each by severity, and offers insight into the adversaries likely to wield them.
The survey, which drew on both existing research on offensive AI and responses from organizations including IBM, Airbus, and Huawei, identifies three primary motivations for an adversary to use AI: coverage, speed, and success. AI enables attackers to “poison” machine learning models by corrupting their training data and to steal credentials through side-channel analysis. Adversaries can also weaponize AI tools originally built for defense, such as those for vulnerability detection, penetration testing, and credential leakage detection.
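To make the data-poisoning idea concrete, here is a minimal label-flipping sketch: an attacker who can corrupt part of a training set flips a fraction of the labels, degrading the resulting model. This is a hypothetical illustration, not a technique from the survey; the synthetic dataset and the 30% flip rate are arbitrary assumptions.

```python
# Illustrative label-flipping data-poisoning sketch (hypothetical setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for a victim's data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips the labels of 30% of the training set.
n_poison = int(0.3 * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# Train one model on clean labels and one on poisoned labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```

In practice, poisoning attacks are subtler (targeted flips, backdoor triggers, or perturbed features rather than random label noise), but the mechanism is the same: corrupting training data shifts what the model learns.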
Top AI cyber threats
Organizations told the researchers that they consider exploit development, social engineering, and information gathering to be the most threatening offensive AI technologies. They’re particularly concerned about AI used for impersonation, like deepfakes to perpetrate phishing attacks and reverse-engineering that might allow an attacker to “steal” proprietary algorithms. Moreover, they worry that, because of AI’s ability to automate processes, adversaries may shift from having a few slow covert campaigns to having many fast-paced campaigns to overwhelm defenders and increase their chances of success.
But the fears aren’t spurring investments in defenses. According to a survey of enterprises conducted by data authentication startup Attestiv, fewer than 30% of respondents said they’ve taken steps to mitigate the fallout of a deepfake attack. The fight against deepfakes is likely to remain challenging as generation techniques continue to improve, in spite of innovations like the Deepfake Detection Challenge and Microsoft’s Video Authenticator.
Indeed, the researchers anticipate that phishing campaigns will become more rampant as bots gain the ability to make convincing deepfake phishing calls. They also say that there’s likely to be an increase of offensive AI in the areas of data collection, model development, training, and evaluation over the next few years.
To head off the threats, the researchers recommend that organizations focus on developing post-processing tools that can protect software from analysis after development. They also suggest integrating security testing, protection, and monitoring of models with MLOps so that organizations can more easily maintain secure systems. MLOps, a compound of “machine learning” and “information technology operations,” is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms.
“With AI’s rapid pace of development and open accessibility, we expect to see a noticeable shift in attack strategies on organizations,” the researchers wrote. “AI will enable adversaries to target more organizations in parallel and more frequently. As a result, instead of being covert, adversaries may choose to overwhelm the defender’s response teams with thousands of attempts for the chance of one success … [Indeed, as] adversaries begin to use AI-enabled bots, defenders will be forced to automate their defenses with bots as well. Keeping humans in the loop to control and determine high-level strategies is a practical and ethical requirement. However, further discussion and research is necessary to form safe and agreeable policies.”