Weaponized AI is proving to be a potent catalyst for new, more complex cybersecurity threats, and it will reshape the threat landscape for years to come. From rogue attackers to sophisticated advanced persistent threat (APT) and nation-state attack teams, weaponizing large language models (LLMs) is the new tradecraft of choice. Adversarial AI is the attack threat few security teams see coming.
Threats are becoming faster, more nuanced and more lethal as attackers sharpen their tradecraft in a bid to win the AI war. Forrester's Top Cybersecurity Threats In 2024 report reflects how security teams will find it increasingly challenging to keep the balance of power in check against weaponized AI attacks.
With the goal of democratizing weaponized AI and profiting from attackers' demand for the latest technologies, attack groups that include APTs and nation-states are selling ransomware-as-a-service, FraudGPT starter services and IoT attack services, along with knowledge of how to fine-tune malware-free attacks that the current generation of cybersecurity systems can't detect. CrowdStrike's 2024 Global Threat Report found that malware-free attacks increased from 71% of all attacks in 2022 to 75% in 2023.
Forrester sees a more complex, lethal cyberthreat landscape coming
Enterprises are under siege. Years of survey data quantify that. Forrester's latest survey of security and risk (S&R) pros found that nearly eight out of 10 (78%) estimated their organization's sensitive data was potentially breached or compromised at least once in the past 12 months. The report's authors write that "a successful cyber breach acts as a beacon for future attacks." Comparing security survey data from 2022 to 2023 shows a 13% jump in S&R pros who estimated their organizations experienced six to 10 breaches in the prior 12-month period.
Forty-eight percent of survey respondents had experienced a breach or related cyber incident that cost more than $1 million. The largest share of these breaches (27%) cost between $2 million and $5 million to remediate, with 3% costing $10 million or more.

The average estimated cost of a breach is $2,183,333. Three out of every 10 S&R pros interviewed have been hit with between $2 million and more than $10 million in total breach costs. Source: Forrester.
Forrester's top 5 security threats for 2024
Narrative attacks leveraging disinformation, the growing manipulation risks of deepfakes, prompt-based attacks on gen AI systems, exploits of the AI software supply chain and the threat of nation-state espionage are the top five security threats Forrester warns of this year. Each is briefly explained below.
Narrative attacks
These attack strategies seek to discredit or distort the true meaning of a narrative by manipulating the message and undermining its credibility and trust. Forrester's report team writes: "Technology is now making it easier and faster for sophisticated misinformation and disinformation campaigns to spread. Narrative attacks intent on shaping public opinion and influencing behavior will be especially popular in 2024 as 64 countries will stage elections."
It's a favorite strategy nation-states use in attempts to derail elections and sow division among a nation's citizens. Forrester points to the example of Russian nation-state attackers using narrative attacks to try to inflame political dissent around the Mexico-U.S. border debates. The recently published 2024 Annual Threat Assessment of the U.S. Intelligence Community explains in detail how nation-state attackers use narrative attacks and other techniques to disrupt U.S. foreign policy and elections.
Deepfakes
Deepfakes are one of the fastest-growing threats, propelled by quick access to inexpensive computing power, generative AI algorithms (including generative adversarial networks, or GANs, and autoencoders) and the surging popularity of mobile apps designed to transform a person's image. The goal is to create convincing audio and visual likenesses of humans. Deepfakes are often used to create synthetic identities that enable fraud, ransomware execution, and data and intellectual property (IP) loss. Forrester points to instances where deepfakes have been used for stock-price manipulation, reputation and brand damage, degraded employee and customer experiences and amplification of misinformation. Identifying and stopping deepfake threats requires algorithms that can detect audio and image manipulation.
Forrester recommends that IT and security teams control the source of media by using authenticator apps and wrap facial and voice biometrics in additional verification and protection layers, including behavioral biometrics, device ID fingerprinting/reputation, bot management and detection, digital fraud management, and passwordless authentication.
CISOs need passwordless authentication systems that are intuitively designed so they don't frustrate users while ensuring adaptive authentication on any device. Leading vendors providing passwordless authentication solutions include Microsoft Authenticator, Okta, Duo Security, Auth0, Yubico and Ivanti's zero sign-on (ZSO). Of these, Ivanti's approach is the most innovative, combining passwordless authentication with zero trust to enable adaptive authentication, including multi-factor authentication (MFA), based on risk. Ivanti ZSO is a component of the Ivanti Access platform. It supports the FIDO2, SAML and WS-Fed authentication protocols for desktop/laptop and mobile login, along with single sign-on (SSO) via certificates.
AI responses
Defending against prompt engineering, prompt injection and sensitive data being exfiltrated through repetitive prompt attacks is a high priority on many CISOs' threat lists. As more enterprises introduce gen AI-based apps, the risk gets worse. CISOs are reluctant to ban the use of Microsoft Copilot, Salesforce Einstein GPT, Claude from Anthropic or Perplexity given the productivity gains they deliver.
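The exposure these teams worry about can be made concrete with a short sketch. The following is an illustrative, regex-based redaction filter that scrubs likely PII from a prompt before it leaves the enterprise boundary for an external gen AI service; the patterns and placeholder labels are assumptions for demonstration, not any vendor's actual detectors, which use far richer techniques such as named-entity recognition and context scoring.

```python
import re

# Illustrative patterns only; real DLP products use much richer detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before the prompt
    is submitted to a gen AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

A filter like this would sit in a proxy or browser-isolation layer between users and gen AI sites, so sensitive strings never reach the model provider.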
Forrester notes that new technologies have emerged to perform content analysis and filtering, including PrivateAI, Prompt Security, ProtectAI and data leakage prevention (DLP) vendors attempting to rebrand or pivot to offer controls in this space. In addition, Cisco, Ericom Generative AI Data Loss Prevention from Cradlepoint, Menlo Security, Nightfall AI, Wiz and Zscaler are among the most notable new systems on the market that aim to help security leaders solve this challenge.
Of these, Cradlepoint's approach is among the more innovative. It is differentiated by its clientless design, in which user interactions with gen AI sites are executed in a virtual browser inside the Ericom Cloud Platform. Cradlepoint says this design allows data loss protection and access policy controls to be applied in its cloud platform. Routing all traffic through its proprietary cloud platform prevents personally identifiable information (PII) or other sensitive data from being submitted to gen AI sites like ChatGPT.
"Generative AI websites provide unparalleled productivity enhancements, but organizations must be proactive in addressing the associated risks," said Gerry Grealish, VP of marketing for the Ericom cybersecurity unit of Cradlepoint. "Our gen AI isolation solution empowers businesses to attain the perfect balance, harnessing the potential of gen AI while safeguarding against data loss, malware threats and legal and compliance challenges."
The AI software supply chain
Exploiting AI software supply chains to embed malicious executable programs in source code is proving to be an especially challenging threat to stop. Attackers gravitate toward software supply chains because a single compromise can cascade across thousands of downstream victims, multiplying both the chaos and the ransom leverage. Nation-state attackers, cybercrime syndicates and advanced persistent threat (APT) groups routinely go after software supply chains because they've historically been the least-defended area of any software company or business.
Notably, 91% of enterprises have fallen victim to software supply chain incidents in a single year, underscoring the need for better safeguards for continuous integration/continuous deployment (CI/CD) pipelines.
GitHub and Hugging Face host many of the most popular open-source models and frameworks, and in an AI software supply chain intrusion attempt, attackers look to exploit those models and frameworks along with the pipelines that deliver them. Forrester cites the breach of OpenAI's ChatGPT, which resulted from a vulnerability in the Redis open-source library, as an example. The LLM is just one component within a highly intricate ecosystem.
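One practical safeguard for CI/CD pipelines that pull models or packages from public hubs is to pin and verify artifact digests before anything is loaded, in the spirit of pip's hash-checking mode. The sketch below is a minimal, hypothetical illustration of that idea; the function name and workflow are assumptions, not any vendor's implementation, and in practice the pinned digests would come from a signed lockfile or software bill of materials (SBOM).

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned digest,
    so a tampered model or dependency is rejected before it is loaded."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_hex
```

A CI job would call this check on every downloaded model weight or package archive and fail the build on a mismatch, turning a silent supply chain substitution into a visible pipeline error.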
Nation-state espionage
Spy satellites and advanced technologies are a core part of nation-state attackers' arsenals. The Council on Foreign Relations found that espionage was the goal of 78% of nation-state cyberattacks in 2022, growing to 82% in 2023.
Satellites are becoming an increasingly strategic threat surface to protect. In April of 2023, the Cyberspace Solarium Commission urged the White House to name space as critical infrastructure after Russia attacked Viasat during its initial invasion of Ukraine. Later in 2023, German researchers highlighted the vast vulnerabilities found in satellite technologies. Given this and the shifting geopolitical landscape, satellite supply chains are facing more scrutiny.
Building more cyber-resilient satellites begins with a strong network capable of securing the supply chain at scale, including ground and spacecraft segments. At the end of 2022, a total of 6,718 active satellites orbited the planet, with another 58,000 expected to be launched by 2030. The US Defense Intelligence Agency wrote in its 2022 Challenges to Security in Space report that: “Space is being increasingly militarized. Some nations have developed, tested and deployed various satellites and some counter-space weapons. China and Russia are developing new space systems to improve their military effectiveness and reduce any reliance on US space systems.”
