Above: DedSec is the hacker group in Watch Dogs 2.

Image Credit: Dean Takahashi

VB: Where in the process do humans make the most sense? In what roles?

Grobman: There are two places where humans play critical roles. One is in applying strategic intellect. When an analysis completes and presents its results, the human can ascertain the context of those results. Just by way of a simple example, if you see an attack coming from Russia, that might be something to be concerned about, but if the human notices that an employee is on a business trip in Russia, that technical data suddenly takes on a very different meaning. Understanding complex context is something humans are very good at.
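To make that context check concrete, here is a minimal sketch, not McAfee's implementation, of how an alert might be enriched with a hypothetical travel record before a human sees it. The employee IDs, country codes, and the TRAVEL_RECORDS lookup are all illustrative assumptions.

```python
# Hypothetical travel records: employee ID -> country of current business trip.
TRAVEL_RECORDS = {"emp-1042": "RU"}

def assess_alert(alert: dict) -> str:
    """Return a triage label for a geo-based login alert (illustrative only)."""
    employee = alert["employee_id"]
    source_country = alert["source_country"]
    # The same technical data means something very different if the employee
    # is known to be traveling in the country the activity came from.
    if TRAVEL_RECORDS.get(employee) == source_country:
        return "low: login consistent with approved business travel"
    return "high: unexpected foreign login, escalate to analyst"

print(assess_alert({"employee_id": "emp-1042", "source_country": "RU"}))
print(assess_alert({"employee_id": "emp-2001", "source_country": "RU"}))
```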

The other place a human is required is when an action with severe consequences is about to be taken: removing someone from a network, shutting down a major application in a way that disrupts business, isolating a manufacturing facility. Those decisions should be made by a human who is presented with the machine's technical assessment of the situation, uses their strategic intellect to judge whether that assessment makes sense, and then determines whether the response is appropriate to mitigate the risk. Those are decisions where it's critical to keep humans in the loop.
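One simple way to picture that gate is an automation layer that only pauses for a person when the action is high-consequence. The sketch below is an assumption about how such a workflow could look, with illustrative action names and an analyst approval callback; it is not a description of any vendor's product.

```python
# Actions considered severe enough to require a human decision (illustrative).
HIGH_CONSEQUENCE = {"isolate_facility", "shut_down_application", "remove_user_from_network"}

def execute_response(action: str, machine_assessment: str, approve) -> str:
    """Run low-impact actions automatically; route severe ones to an analyst."""
    if action in HIGH_CONSEQUENCE:
        # Present the machine's assessment and wait for the human's judgment.
        if approve(action, machine_assessment):
            return f"executed {action} after analyst approval"
        return f"{action} rejected by analyst"
    return f"executed {action} automatically"

# Example approval callback that an analyst console might supply.
analyst_approves = lambda action, assessment: True
print(execute_response("quarantine_file", "known ransomware hash", analyst_approves))
print(execute_response("isolate_facility", "possible lateral movement", analyst_approves))
```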

VB: Your CEO talked a lot about the ransomware problem before it really exploded. I’m curious what you think about that, and whether that comes into this conversation as well, whether AI can really help.

Grobman: It’s a great example, because I think where AI is really effective is detecting ransomware that appears to be similar to other types of attacks. It can be used to defuse the impact of a ransomware event, or defuse the ability of ransomware to be executed multiple times.

Where human intellect helps is when ransomware is used to disguise ulterior motives. In the ransomware cases we saw over the last couple of weeks, both WannaCry and the Petya attack, it was very unclear whether the attacker's objective was actually to make large sums of money. By way of example, in the Petya attack, the payment mechanism for the ransom was very fragile. It used a standard email address to communicate with the ransomware author as part of the key access capability. Once that email account was disabled, victims lost the ability to pay the ransom.

A human intellect can ask questions like, “Why would somebody develop such a sophisticated infection mechanism that took advantage of both technical exploitation of vulnerabilities and a credential-stealing approach to impact non-vulnerable machines, yet rely on such a rudimentary and fragile payment mechanism?” There are a few different reasons that might have happened, but that’s the type of question that’s very difficult for AI to come up with and answer. You need to understand incentives, what drives different parts of the ecosystem, what a criminal or state actor might be trying to do. These are good examples where we can use machine learning and AI to do better detection of ransomware, but we still need to rely on humans to understand some of the nuance in what is occurring.

Above: AI could help defend against cyber attacks with human curation.

Image Credit: Shutterstock

VB: As far as things like tampering with the election, do you see a role for AI in that kind of situation? Is that similar to other kinds of protection, or unique in some ways?

Grobman: Machine learning and AI are very good at some things. One of them is, they’re very good classifiers. They can look at data and classify it into buckets that have been previously defined. Classifiers work very well when attacks or behavior are similar to the patterns they were trained on. If there’s a state-sponsored attack, like an election attack, that has similar characteristics to things that have been seen before, those will be well-served by AI and machine learning.
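As a minimal illustration of that classification idea (with toy feature vectors and labels, not real telemetry or any particular vendor's model), a standard scikit-learn classifier buckets new events into the categories it was trained on:

```python
# Toy sketch: classify events into previously defined buckets.
from sklearn.ensemble import RandomForestClassifier

# Illustrative features per event: connection count, bytes out, failed logins.
X_train = [[5, 200, 0], [300, 90000, 1], [7, 150, 0], [250, 85000, 2]]
y_train = ["benign", "known_attack_pattern", "benign", "known_attack_pattern"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Works well when new activity resembles what the model was trained on...
print(clf.predict([[280, 88000, 1]]))  # likely "known_attack_pattern"
# ...but a genuinely novel attack may look nothing like either bucket.
print(clf.predict([[6, 180, 0]]))      # likely "benign"
```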

Where I think we need to be careful is with a sophisticated state actor crafting a unique attack that has never been seen before, is not similar to attacks that have been seen before, and is intentionally designed to evade the in-market machine learning models available at the time the nation-state develops that capability. That is exactly the type of scenario where you need a human-machine teaming approach in order to see something that is radically different from what has come before.

VB: Are there some areas where you’re worried that AI developers should not go? There was the announcement this morning that a bunch of nonprofits are looking at the ethical uses of AI and encouraging this sort of more pro-human use of AI, compared to some other types of AI that could be dangerous to develop.

Grobman: At least as far as cybersecurity is concerned, I’m less worried about AI going rogue. The bigger worry for me is the value that AI brings to the attacker. Attackers are using AI as well. If you look at what AI is good for, it’s very good for classification, as we’ve discussed. If an attacker uses AI to classify victims into "easy to breach," "medium to breach," "hard to breach," or "easy to breach with a high probability of payout," that makes them much more efficient in selecting their victims.

AI also can be used to automate things that used to require humans. For example, attackers used to have to choose between spearphishing, where they could craft a spearphished email to a specific target and have a high return on acquiring the victim, or doing a mass mailing of a non-specific phishing email. They had to choose between a low effectiveness rate that they could send en masse, or a high effectiveness rate that required human tuning to target specific victims. What AI does for them is automate phishing, so they essentially get the effectiveness of spearphishing at the scale of traditional phishing.

Above: Ransomware is a rising problem.

Image Credit: SWEviL/Shutterstock

VB: In this scenario, do you see the good AI eventually outwitting the bad AI?

Grobman: One of the challenges we always have in cybersecurity is that the attacker has some inherent advantages. The attacker is always able to look at the existing defensive capabilities deployed in the industry. In some ways that will give the upper hand to the attacker. That said, AI does open many new avenues for the defender community, the cybersecurity technology community, to look at defending environments in new ways.

It will help both parties, I think, but one of the key things to recognize is that, regardless of what people desire, bad actors will always use the best technology available to them. The fact that we don’t like attackers using AI for malicious purposes doesn’t mean they won’t do it. They’ll always take advantage of the most effective technologies they can find to add to their arsenal.