
Fears surrounding AI and cybersecurity reflect very real risks. AI-powered malware isn’t a threat we need to worry about right now, but attackers have become adept at manipulating AI systems to their own advantage, essentially turning them against users. Widespread manipulation of the algorithms used on social media is already causing problems in many parts of the world. And as sophisticated AI tools become freely available, it would be naive not to expect adversaries to take advantage of the technology.

But for now, we suspect that threat actors are using AI in rather indirect ways, such as analyzing data or producing fake content with off-the-shelf tools. So although there are clear reasons for concern, AI is arguably more of a help to cybersecurity defenders than a threat, for the time being.

AI’s limitations

As AI and machine learning are complex, and often loosely defined, a lot of the fear comes from misunderstanding what the technology is and what it can do. For example, we’re decades away from seeing anything like artificial general intelligence (AGI) — a machine or system that can learn to do any task a human can — let alone a sentient AI. And although the field has never advanced as quickly as it has recently, the first plans to build an artificial human brain date back to the 1950s, and that goal remains distant.

Today, intelligent systems have specific and narrow applications. These are everywhere around us — you see them when you drive into a car park and your license plate is read automatically, and you hear them when you speak to Siri or Alexa and they’re able to understand what you’re saying.



The most common example of this kind of narrow application of machine learning is Google search. You don’t even need to type more than a couple of letters before — as if by magic — Google seems to intuit what you’re looking for. But while that kind of intelligent algorithm is excellent at what it does, it can only do that one thing — a search system won’t know how to drive a car.

In narrow applications, computers already outperform humans by orders of magnitude in speed and scale. And while people-versus-machine comparisons carry a certain amount of drama, interactions between the two are actually business as usual in many domains, including cybersecurity. Cybersecurity products and services have used AI components since at least 2005. Every single day, in homes and workplaces across the globe, cyberdefense systems (including spam filters, antivirus engines, heuristic intrusion detection mechanisms, endpoint detection and response solutions, and more) cross swords with countless human adversaries. And these AI-based defenses win more fights than they lose.
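The spam filters mentioned above are one of the oldest AI components in this stack. As a minimal sketch only (the corpus, word list, and function names here are invented for illustration; production filters train on millions of messages with far richer features), a naive Bayes spam scorer works roughly like this:

```python
import math
from collections import Counter

# Tiny hand-labelled corpus; purely illustrative.
spam = ["win cash now", "free prize claim now", "cash prize win"]
ham = ["meeting at noon", "project status update", "lunch at noon"]

def train(docs):
    """Count word frequencies across a set of labelled messages."""
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def spam_log_odds(message):
    """Log-odds that `message` is spam, with Laplace (+1) smoothing
    so unseen words don't zero out the probability."""
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_log_odds("claim free cash") > 0)   # True: leans spam
print(spam_log_odds("meeting at noon") > 0)   # False: leans ham
```

A positive score means the message's words were seen more often in spam than in legitimate mail; real filters layer many such signals together.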

AI’s use in actual attacks, on the other hand, is largely indirect. There’s no AI-powered malware in the wild. AI could certainly be used to run attacks that learn and morph, but any such examples currently reside within academic research or science fiction. Attackers are definitely trying to abuse the AI systems used by defenders, but they are not yet creating their own.

So the cybersecurity fight is about people protecting people from other people. And in spite of the popular AI-as-adversary narrative, AI is a natural ally to the cybersecurity industry and will likely continue to be so in the near future.

Machines are a natural complement to our strengths

Some of AI’s biggest successes, at least in the security field, involve handling tasks that humans find difficult. Data analysis is a prime example of an application where machine intelligence has become invaluable.

A normal laptop can produce well over 1 million “events” in a single day. Asking a person or team of people to sort through those events to find the small handful of anomalies that could indicate an attack is impractical in most cases. But humans can effectively solve this problem by training AI models to flag anomalies so analysts can investigate them.
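To make the division of labor concrete, here is a minimal sketch of statistical anomaly flagging, assuming hypothetical per-process daily event counts (all names and numbers below are made up; real detection models use far more features than a simple z-score):

```python
import statistics

def zscore_flags(history, today, threshold=3.0):
    """Return the names of metrics whose count today sits more than
    `threshold` standard deviations above their historical mean."""
    flags = []
    for name, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts)
        if stdev and (today.get(name, 0) - mean) / stdev > threshold:
            flags.append(name)
    return flags

# Hypothetical event counts per process over the past week.
history = {
    "chrome.exe":     [310, 295, 330, 301, 322, 315, 308],
    "svchost.exe":    [120, 118, 125, 121, 119, 123, 122],
    "powershell.exe": [4, 6, 5, 3, 5, 4, 6],
}
today = {"chrome.exe": 318, "svchost.exe": 124, "powershell.exe": 96}
print(zscore_flags(history, today))  # → ['powershell.exe']
```

The machine compresses a million events into a short list of outliers; the human analyst then decides whether the spike in PowerShell activity is an attack or a misconfigured script.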

Cybersecurity professionals have applied this hand-in-glove approach to working with AI for well over a decade. It has proven effective in tasks such as sample analysis, URL categorization, malware classification, and breach detection. These are areas where the industry has successfully capitalized on the unique strengths of AI and machine learning to stop countless potential security incidents. And human-AI collaboration will become even more widespread and important in the future.
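URL categorization, for instance, reduces to mapping a URL onto a label that a policy engine can act on. The sketch below uses toy hand-written rules purely for illustration (the categories, patterns, and function name are invented); commercial categorizers instead use trained classifiers over hosting data, page content, and reputation feeds:

```python
import re
from urllib.parse import urlparse

# Toy category rules; illustrative only, not a real detection ruleset.
RULES = {
    "phishing": re.compile(r"(login|verify|secure|account)[-.]"),
    "gambling": re.compile(r"(casino|poker|slots)"),
}

def categorize(url):
    """Return the first matching category for a URL, else 'uncategorized'."""
    parsed = urlparse(url)
    target = parsed.netloc + parsed.path
    for category, pattern in RULES.items():
        if pattern.search(target):
            return category
    return "uncategorized"

print(categorize("http://secure-paypal.example.com/login"))  # phishing
print(categorize("https://example.org/about"))               # uncategorized
```

Even this crude version shows the shape of the task: a model (here, regexes standing in for one) does the high-volume labelling, and humans curate the categories and review the edge cases.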

The work to understand, appreciate, and nurture machine intelligence as entirely different from human intellect is a largely untapped frontier in AI research. And teaching human cybersecurity professionals to embrace machine intelligence as a means of augmenting their own capabilities will give the cybersecurity industry a clear vision for how to utilize AI effectively.

In the near future, social, economic, and political considerations will play an increasingly important role in shaping AI’s net impact on security. Collaborations between people and AI have already yielded substantial benefits for cybersecurity, and will likely continue to do so. With massive investments in AI and the limited number of people with the skills to drive them, there’s very little motivation for talented AI professionals to turn to crime to earn money. Right now, they can make a very comfortable living without breaking laws.

Historically, defenders have benefited more from AI than attackers, and there are many forces pulling the balance of power in that direction. But it’s important to keep in mind that our adversaries and allies are the people that work with AI. We are the ghosts in the machines. And acknowledging that is vital for our continued success.

Read More: VentureBeat's Special Issue on AI and Security