Cybersecurity firm McAfee was born before the current artificial intelligence craze. The company recently spun out of Intel at a $4.2 billion valuation, and it has become a giant among tech security firms. But many rival AI startups in cybersecurity (like Deep Instinct, which raised $32 million yesterday) are applying recent advances in deep learning to the task of keeping companies secure.

Steve Grobman, chief technology officer at McAfee, believes that AI alone isn’t going to stop cybercrime, however. That’s in part because the attackers are human, and humans are better at finding outside-the-box ways to penetrate security defenses, even when AI is being used to bolster those defenses. Those attackers can also employ AI in offensive attacks.

Grobman believes that including human curation — someone who can take the results of AI analysis and think more strategically about how to spot cyber criminals — is a necessary part of the equation.

“We strongly believe that the future will not see human beings eclipsed by machines,” he said in a recent blog post. “As long as we have a shortage of human talent in the critical field of cybersecurity, we must rely on technologies such as machine learning to amplify the capabilities of the humans we have.”

The machines are coming, which could be a good thing for security technologists and cybercriminals alike, escalating the years-old cat-and-mouse game in computer security. I interviewed Grobman recently, and the topics we discussed are sure to arise at the Black Hat and Defcon security conferences coming up in Las Vegas.

Here’s an edited transcript of our interview.

Above: Cybersecurity is getting harder. (Image Credit: McAfee)

VentureBeat: Your topic is a general one, but it seems interesting. Was there any particular impetus for bringing up this notion of teaming humans and machines?

Steve Grobman: It’s one of our observations that a lot of people in the industry are positioning some of the newer technologies, like AI, as replacing humans. But one of the things we see that’s unique in cybersecurity is, given that there’s a human on the other side as the attacker, strictly using technology is not going to be as effective as using technology along with human intellect.

One thing we’re putting a lot of focus into is looking at how we take advantage of the best capabilities of what technology has to bring, along with things human beings are uniquely qualified to contribute, primarily things related to gaming out the adversary and understanding things they’ve never seen before. We’re putting all of this together into a model that enables the human to scale quite a bit more drastically than simply doing things with a lot of manual effort.

VB: Glancing through the report you sponsored this May — you just mentioned that cybersecurity is unique in a way. It’s usually a human trying to attack you.

Grobman: If you think about other areas that are taking advantage of machine learning or AI, very often they just improve over time. A great example is weather forecasting. As we build better predictive models for hurricane forecasting, they’re going to continue to get better over time. With cybersecurity, as our models become effective at detecting threats, bad actors will look for ways to confuse the models. It’s a field we call adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and try either to confuse them (what we call poisoning the models, or machine learning poisoning) or to use a wide range of evasion techniques, essentially looking for ways they can circumvent the models.

There are many ways of doing this. One technique we’ve looked at is forcing the defender to recalibrate the model by flooding it with false positives. It’s analogous to having a motion sensor over your garage hooked up to your alarm system. Say every day I rode past your garage on a bicycle at 11 p.m., intentionally setting off the sensor. After about a month of the alarm going off regularly, you’d get frustrated and make it less sensitive, or just turn it off altogether. Then that gives me the opportunity to break in.

It’s the same in cybersecurity. If models are tuned in such a way that a bad actor can create samples or behavior that look like malicious intent, but are actually benign, then after the defender deals with enough false positives, they’ll have to recalibrate the model. They can’t continuously deal with the cost of false positives. Those sorts of techniques are what we’re investigating to try to understand what the next wave of attacks will be, as these new forms of defense grow in volume and acceptance.
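To make the bicycle-and-garage analogy concrete, here is a minimal, hypothetical sketch (not McAfee code) of how flooding a detector with decoy events that score like attacks forces the defender to raise the alert threshold, which in turn lets real attacks through. The score distributions, the 1 percent alert budget, and the threshold values are all invented for illustration.

```python
import random

random.seed(7)

def detector_scores(n, mean, spread):
    """Draw n detector scores from a simple Gaussian; higher means 'more suspicious'."""
    return [random.gauss(mean, spread) for _ in range(n)]

# Invented score distributions: ordinary benign traffic scores low, real attacks
# score high, and the attacker's decoys are benign events crafted to score like attacks.
benign  = detector_scores(10_000, mean=0.20, spread=0.10)
attacks = detector_scores(50,     mean=0.85, spread=0.10)
decoys  = detector_scores(2_000,  mean=0.75, spread=0.08)   # the false-positive flood

def alert_rate(scores, threshold):
    return sum(s >= threshold for s in scores) / len(scores)

threshold = 0.55  # tuned so ordinary benign traffic rarely alerts
print("before flood: benign alert rate", round(alert_rate(benign, threshold), 4),
      "| attack detection", round(alert_rate(attacks, threshold), 2))

# The decoys bury the operations team in alerts, so the defender recalibrates upward
# until total false alarms drop back under an (invented) 1% budget.
flooded = benign + decoys
while alert_rate(flooded, threshold) > 0.01:
    threshold += 0.01

print("after flood:  threshold raised to", round(threshold, 2),
      "| attack detection", round(alert_rate(attacks, threshold), 2))
```

In this toy run, silencing the decoys pushes the threshold high enough that a meaningful share of the simulated real attacks now scores below it, which is the trade-off Grobman describes.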

VB: What are some things that are predictable here, as far as how this cat-and-mouse game proceeds?

Grobman: One thing that’s predictable — we’ve seen this happen many times before — whenever there’s a radical new cybersecurity defense technology, it works well at first, but then as soon as it gains acceptance, the incentive for adversaries to evade it grows. A classic example is with detonation sandboxes, which were a very popular and well-hyped technology just a few years ago. At first there wasn’t enough volume to have bad actors work to evade them, but as soon as they grew in popularity and were widely deployed, attackers started creating their malware to, as we call it, “fingerprint” the environment they’re running in. Essentially, if they were running in one of these detonation sandbox appliances, they would have different behavior than if they were running on the victim’s machine. That drove this whole class of attacks aimed at reducing the effectiveness of this technology.
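To illustrate what that “fingerprinting” amounts to, here is a small, hypothetical sketch of the kinds of environment checks analysts report seeing in evasive samples. The specific thresholds, vendor strings, and artifact paths are invented for illustration; a real sample would combine many more signals, and a real sandbox would try to mask them.

```python
import os
import platform

def looks_like_analysis_sandbox():
    """Crude environment checks of the kind evasive samples use to 'fingerprint'
    where they are running. Thresholds and artifact names are illustrative only."""
    indicators = []

    # Analysis VMs are often provisioned with minimal hardware.
    if (os.cpu_count() or 0) < 2:
        indicators.append("single CPU core")

    # Common virtualization vendors sometimes show up in platform strings.
    machine_info = " ".join(platform.uname()).lower()
    for vendor in ("vmware", "virtualbox", "qemu", "xen"):
        if vendor in machine_info:
            indicators.append(f"virtualization hint: {vendor}")

    # Hypothetical artifact paths a sandbox agent might leave behind.
    for artifact in (r"C:\analysis\agent.exe", "/opt/sandbox/agent"):
        if os.path.exists(artifact):
            indicators.append(f"sandbox artifact: {artifact}")

    return indicators

if __name__ == "__main__":
    hits = looks_like_analysis_sandbox()
    # An evasive sample would behave benignly when indicators are found,
    # and only show its real behavior on what looks like a victim machine.
    print("sandbox indicators:", hits or "none")
```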

We see the same thing happening with machine learning and AI. As the field gets more and more acceptance in the defensive part of the cybersecurity landscape, it will create incentives for bad actors to figure out how to evade the new technologies.

Above: Malicious hackers are using AI too. (Image Credit: McAfee)

VB: The onset of machine learning and AI has created a lot of new cybersecurity startups. They’re saying they can be more effective at security because they’re using this new technology, and the older companies like McAfee aren’t prepared.

Grobman: That’s one of the misconceptions. McAfee thinks AI and machine learning are extremely powerful. We’re using them across our product lines. If you look at our detection engines, at our attack reconstruction technology, these are all using some of the most advanced machine learning and AI capabilities available in the industry.

The difference between what we’re doing and what some of these other startups are doing is, we’re looking at these models for long-term success. We’re not only looking at their effectiveness. We’re also looking at their resilience to attack. We’re working to choose models that are not only effective, but also resilient to evasion or other countermeasures that will come into play in this field. It’s important that our customers understand that this is a very powerful technology, but understanding the nuance of how to use it for a long-term successful approach is different from simply using what’s effective when the technology is initially introduced.

VB: What is the structure you foresee with humans in the loop here? If you have the AI as a line of defense, do you think of the human as someone sitting at a control panel and watching for things that get past the machine?

Grobman: I’d put it this way. It’s going to be an iterative process, where machine technology is excellent at gathering and analyzing large quantities of complex data, but then having those results presented to a human operator to interpret and help guide the next set of analysis is going to be critical. It’s important that there are options for an operator to direct an investigation and really find what the underlying root cause is, what the scale of an attack is, and whether it is a new type of attack that an algorithm might not have seen before and was intentionally designed to not be recognized. Putting all of that together is going to be critical for the end success.

VB: A lot of people are making predictions about how many jobs AI may eliminate. If you apply that to your own field, do you think it has an impact, or not?

Grobman: One of the biggest challenges we have in the cybersecurity industry is a talent shortage, where there aren’t enough cybersecurity professionals to staff security operations and incident response positions. The key is using automation and AI so that the security personnel who are available can be effective at their jobs. Very few people I’ve talked to are concerned there won’t be enough jobs for human security professionals because they’ve been replaced by technology. We’re still very far on the other side of that equation. Even with the best technology available, we still have a shortfall of critical talent in the cybersecurity defense space.

Above: DedSec is the hacker group in Watch Dogs 2. (Image Credit: Dean Takahashi)

VB: Where in the process do humans make the most sense? In what roles?

Grobman: There are two places where humans play critical roles. One is in being able to use strategic intellect. When an analysis is complete and presents its results to the human, the human can ascertain the context of those results. By way of a simple example, if you see an attack coming from Russia, that might be something to be concerned about, but if the human is able to notice that an employee is on a business trip in Russia, that all of a sudden gives the technical data a very different meaning. Understanding complex context is something that humans are very good at.

The other place a human is required is when an action is going to be taken that has severe consequences: a decision to remove someone from a network, to shut down a major application in a way that disrupts business, or to isolate a manufacturing facility. Those decisions should be made by a human who is presented with a technical assessment of what the machine believes the situation is, uses their strategic intellect to assess whether that makes sense, and then determines whether the response is appropriate to mitigate the risk being presented. Those are things it’s critical for humans to be in the loop on.
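As a concrete illustration of that division of labor, here is a minimal, hypothetical triage sketch in which the model’s recommendation is carried out automatically only for low-impact actions, anything disruptive is queued for an analyst, and the geolocation-versus-travel context from the Russia example is applied before any action is taken. The action names, score threshold, and travel records are invented for illustration, not drawn from any McAfee product.

```python
from dataclasses import dataclass

# Hypothetical response actions, ordered by how disruptive they are to the business.
LOW_IMPACT  = {"quarantine_file", "block_hash"}
HIGH_IMPACT = {"disconnect_host", "disable_account", "isolate_plant_network"}

# Invented travel records used to add human-style context to geolocation alerts.
EMPLOYEE_TRAVEL = {"jsmith": "RU"}

@dataclass
class Alert:
    user: str
    source_country: str
    model_score: float          # detector's confidence this is malicious (0..1)
    recommended_action: str

def triage(alert: Alert) -> str:
    """Decide whether to auto-respond, escalate to an analyst, or stand down."""
    # Context a human would naturally apply: the 'attack from Russia' looks
    # different if the employee is on a business trip there.
    if EMPLOYEE_TRAVEL.get(alert.user) == alert.source_country:
        return "escalate: login geography matches known travel, review before acting"

    if alert.model_score >= 0.9 and alert.recommended_action in LOW_IMPACT:
        return f"auto: {alert.recommended_action}"

    if alert.recommended_action in HIGH_IMPACT:
        return f"escalate: {alert.recommended_action} needs analyst approval"

    return "monitor: confidence too low for automated action"

print(triage(Alert("jsmith", "RU", 0.95, "disable_account")))
print(triage(Alert("akumar", "RU", 0.97, "block_hash")))
print(triage(Alert("akumar", "RU", 0.97, "isolate_plant_network")))
```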

VB: Your CEO talked a lot about the ransomware problem before it really exploded. I’m curious what you think about that, and whether that comes into this conversation as well, whether AI can really help.

Grobman: It’s a great example, because I think where AI is really effective is detecting ransomware that appears to be similar to other types of attacks. It can be used to defuse the impact of a ransomware event, or defuse the ability of ransomware to be executed multiple times.

Where human intellect helps is when ransomware is used to disguise ulterior motives. In the ransomware cases we saw over the last couple of weeks, both WannaCry and the Petya attack, it was very unclear whether the objective of the attacker was actually to make large sums of money. By way of example, in the Petya attack, the payment mechanism for the ransom was very fragile. It used a standard email address in order to communicate with the ransomware author as part of the key access capability. Once that email account was disabled, it removed the ability for individuals to pay the ransom.

A human intellect can ask questions like, “Why would somebody develop such a sophisticated infection mechanism that took advantage of both technical exploitation of vulnerabilities and a credential-stealing approach to impact non-vulnerable machines, yet rely on such a rudimentary and fragile payment mechanism?” There are a few different reasons that might have happened, but that’s the type of question that’s very difficult for AI to come up with and answer. You need to understand incentives, what drives different parts of the ecosystem, what a criminal or state actor might be trying to do. These are good examples where we can use machine learning and AI to do better detection of ransomware, but we still need to rely on humans to understand some of the nuance in what is occurring.

Above: AI could help defend against cyber attacks with human curation. (Image Credit: Shutterstock)

VB: As far as things like tampering with the election, do you see a role for AI in that kind of situation? Is that similar to other kinds of protection, or unique in some ways?

Grobman: Machine learning and AI are very good at some things. One of them is, they’re very good classifiers. They can look at data and classify it into buckets that have been previously defined. Classifiers work very well when attacks or behavior are similar to the classification patterns they were trained on. If there’s a state-sponsored attack, like an election attack, that has similar characteristics to things that have been seen before, it will be well-served by AI and machine learning.

Where I think we need to be careful is with a sophisticated state actor crafting a unique attack that has never been seen before, is not similar to attacks that have been seen before, and is intentionally designed to evade the in-market machine learning models available at the time the nation-state is developing such a capability. That scenario is exactly the type of scenario where you’ll need to use a human-machine teaming type of approach in order to see something that is radically different from what’s been seen before.
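A toy example of the failure mode Grobman is describing: a classifier trained on known families will confidently place a deliberately novel attack in the nearest existing bucket, often the benign one, because nothing in its training looks like the new sample. The feature names, values, and family labels below are invented for illustration; this is a sketch of the limitation, not of any shipping detection model.

```python
# Toy feature vectors: (network_beaconing, file_encryption_rate, credential_access)
# Values and family names are invented for illustration.
TRAINING = {
    "benign":     [(0.1, 0.0, 0.1), (0.2, 0.1, 0.0), (0.1, 0.1, 0.1)],
    "ransomware": [(0.2, 0.9, 0.3), (0.3, 0.8, 0.4)],
    "botnet":     [(0.9, 0.1, 0.2), (0.8, 0.2, 0.3)],
}

def centroid(samples):
    """Average each feature column across a family's training samples."""
    return tuple(sum(col) / len(samples) for col in zip(*samples))

CENTROIDS = {label: centroid(samples) for label, samples in TRAINING.items()}

def classify(features):
    """Nearest-centroid classifier: assign the sample to the closest known bucket."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(CENTROIDS, key=lambda label: dist(features, CENTROIDS[label]))

# A known-looking ransomware sample is classified correctly...
print(classify((0.25, 0.85, 0.35)))   # -> ransomware

# ...but a novel attack crafted to sit far from every malicious centroid
# (slow exfiltration, no encryption, little beaconing) lands in the benign bucket.
print(classify((0.15, 0.05, 0.20)))   # -> benign, yet it could be the new attack
```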

VB: Are there some areas where you’re worried that AI developers should not go? There was the announcement this morning that a bunch of nonprofits are looking at the ethical uses of AI and encouraging this sort of more pro-human use of AI, compared to some other types of AI that could be dangerous to develop.

Grobman: At least as far as cybersecurity is concerned, I’m less worried about AI going rogue. The bigger worry for me is the value that AI brings to the attacker. Attackers are using AI as well. If you look at what AI is good for, it’s very good for classification, as we’ve discussed. An attacker can use AI to classify victims into “easy to breach,” “medium to breach,” “hard to breach,” or “easy to breach with a high probability of payout,” which enables the attacker to be much more efficient in selecting victims.

AI also can be used to automate things that used to require humans. For example, attackers used to have to choose between spearphishing, where they could craft a spearphishing email to a specific target and have a high return on acquiring the victim, or doing a mass mailing of a non-specific phishing email. They had to choose between a low effectiveness rate that they could send en masse, or a high effectiveness rate that required human tuning in order to target specific victims. What AI does for them is automate phishing, so they essentially have the effectiveness of spearphishing at the scale of traditional phishing.

Above: Ransomware is a rising problem. (Image Credit: SWEviL/Shutterstock)

VB: In this scenario, do you see the good AI eventually outwitting the bad AI?

Grobman: One of the challenges we always have in cybersecurity is that the attacker has some inherent advantages. The attacker is always able to look at the existing defensive capabilities deployed in the industry. In some ways that will give the upper hand to the attacker. That said, AI does open many new avenues for the defender community, the cybersecurity technology community, to look at defending environments in new ways.

It will help both parties, I think, but one of the key things to recognize is that, regardless of what people desire, bad actors will always use the best technology available to them. The fact that we don’t like attackers using AI for evil purposes doesn’t mean they won’t do it. They’ll always take advantage of the most effective technologies they can find to add to their arsenal.

VB: There are some reports of the NSA developing technology to aggressively go after their targets in the name of national security. Do you worry about that kind of use of cybersecurity technology, that could then be somehow appropriated by bad actors?

Grobman: As with any technology, there are always risks, but it’s important to recognize that cyber-offensive capabilities, used responsibly, can be the most precise weapon a nation has available. Being able to program an attack to only take out a target without having any collateral damage to civilians or non-targeted infrastructure is something that is much easier to do with a cyber-capability than traditional offensive weaponry. Although any nation that’s engaged in offensive cyber-capabilities needs to use the utmost caution and care and understand the implications of their technology, I do think that a responsible nation-state using offensive cyber-capabilities can have an effective, precision weapon in their arsenal. I don’t think AI is different from anything else that would fall into that category.

Above: Ransomware was first detected in 1989. (Image Credit: Intel Security)

VB: Looking at your alarm bells right now, what are you worried about, and what are you less worried about?

Grobman: As far as AI goes, it introduces a new technical landscape to both the attacker and the defender. It’s a highly complex landscape, and one where it’s difficult to understand all the nuance. As we look at technologies developed by defenders, we need to recognize what they do, but also recognize what their limitations are. Part of what I worry about is organizations or key individuals not comprehending some of the nuanced elements of AI-based solutions and believing they do something that they don’t, or not understanding the opportunity they have by embracing them.

There’s a need for lots of education. I do worry that it will greatly amplify the effectiveness of bad actors. It will enable bad actors to create attacks and attack scenarios that will be much more effective than when they had to tune everything manually.

VB: As an aside, I’m curious what you think about all the optimism around Ethereum and blockchain and coming up with secure cryptocurrency.

Grobman: I think it’s interesting. I don’t know that I’d call it a panacea. Blockchain is an interesting technology. It solves some unique problems. Specifically, it allows an immutable ledger when none of the parties trust each other. That’s important in certain problems. But it’s definitely not going to be the end-all be-all for everything in our industry. We have many different problems that require many kinds of solutions.
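For readers unfamiliar with the property Grobman is pointing at, here is a minimal sketch of the idea behind an immutable ledger: each entry commits to the hash of the previous one, so changing an old record invalidates every later link. This shows only the hash-chaining data structure; it says nothing about consensus among mutually untrusting parties, mining, or any particular cryptocurrency, which is where the harder problems live.

```python
import hashlib
import json

def entry_hash(entry):
    """Hash an entry's contents together with the previous entry's hash."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(ledger, record):
    """Add a record, chaining it to the hash of the last entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"record": record, "prev_hash": prev_hash}
    entry["hash"] = entry_hash({"record": record, "prev_hash": prev_hash})
    ledger.append(entry)

def verify(ledger):
    """Recompute every link; tampering with an earlier record breaks the chain."""
    prev_hash = "0" * 64
    for entry in ledger:
        expected = entry_hash({"record": entry["record"], "prev_hash": prev_hash})
        if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append(ledger, {"from": "alice", "to": "bob", "amount": 5})
append(ledger, {"from": "bob", "to": "carol", "amount": 2})
print(verify(ledger))                 # True

ledger[0]["record"]["amount"] = 500   # tamper with history
print(verify(ledger))                 # False: the chain no longer checks out
```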
