McAfee CTO Steve Grobman has six cybersecurity warnings for all of us as U.S. election day approaches. This is in keeping with what Grobman has consistently done over the years. He looks at the million little cyber threats that McAfee sees every day and tries to extract a big-picture warning for the rest of us, whether it’s about AI’s effect on cyberattacks or the dangers of deep fakes.

He has studied the impact of cyberattacks during the 2016 election, and he is once again concerned about how the American electorate could be swayed by false information. I talked with Grobman this week about his concerns.

He pointed to the Hunter Biden controversy as a good example. Grobman said we should be wary of the “hack and leak” disinformation campaign. Some information about candidate Joe Biden’s son is legitimate. But he warns that “fabricated information can be intertwined with legitimate information that has been stolen.”

He added, “Because the legitimate information can be independently validated, it gives a false sense of authenticity to the fabricated information.” Be prepared for that disinformation to only grow in the coming days. Grobman wants us all to vote, but he wants us to do it wisely and with reliable sources of information.

Here’s an edited transcript of our interview.


Above: Steve Grobman: I didn’t say that. Grobman did a demo of deep fakes at RSA in 2019.

Image Credit: RSA

VentureBeat: You had some thoughts about election-related matters today.

Steve Grobman: We’re moving into the home stretch. While we can’t predict exactly what the outcome is going to be over the next week, there are definitely a number of things we think people should be on heightened alert for from a cyber perspective, in order to maximize the ability to have a free and fair election. I’m happy to talk through some of the scenarios we’re looking out for, and that we encourage both the media and voters to watch for as well.

VentureBeat: You had six examples?

Grobman: We’ve broken it down to six key areas that are based on things we’ve seen and things that we think are high-probability events, or at least plausible scenarios that we need to be on the watch for.

The first one is what we’re calling hack and leak. It’s the need to be on the lookout for leaked data and not trusting leaked data. One of the problems with political information that comes to light from a data breach or a leak is, fabricated information can be intertwined with legitimate information that’s been stolen. Because the legitimate information can be independently validated, it gives a false sense of authenticity to the fabricated information.

In 2016 the Podesta emails were one type of leak, where some of that information could be validated, but there were also a number of things that were unclear as to whether they were fabricated. In this election, we’re seeing other types of leaked information, or information that’s coming from questionable sources, such as the Hunter Biden laptop. It’s important that voters distrust any information coming from a leak unless all of it can be independently validated. That’s the first scenario that we wanted to call out.

The second one is related to ransomware. We’ve seen ransomware become a major problem for consumers and organizations over the last few years, and it’s now hitting businesses hard. There are many types of ransomware, including not only holding data hostage but also holding systems hostage, and even extorting businesses with the threat of releasing intellectual property or demanding payment to re-enable critical business systems.

One of the concerns we have is that because ransomware is so common and typically attributed to criminals, it would be a reasonable way for a nation-state actor to disrupt the elections, with false attribution pointing more toward a cybercrime motivation than an election manipulation or disruption scenario. We need to look out for both state-sponsored ransomware campaigns and what I would call state-encouraged ransomware campaigns, where a nation-state might look the other way when criminal organizations within the country are willing to execute these attacks against election infrastructure.

VentureBeat: On your first scenario, with the Hunter Biden material, the theoretical issue is that some facts were verifiable. It was his laptop, and there were emails on it. But the specific emails pointing to his father could be faked and mixed in with other correct information. Is that the kind of scenario you’re warning against?

Grobman: Right. The warning — the way I’d say it more directly is, it’s important not to let verified information in a leak lend credibility to unverified information. It’s very easy and a common tactic for disinformation to use true, verifiable information to raise the credibility of false or disinformation. In the scenario you just laid out, it would be very reasonable for an adversary that wanted to create a narrative that was completely fabricated to intertwine that information along with content that could be verified. What people might not realize is, the logic of, “Oh, well, in one part of the story the facts check out, therefore the whole thing must be true,” that’s a very dangerous way of looking at information.

It’s critical that — I’d give three takeaways. One is, voters need to be skeptical of information that comes out of a leak. The press needs to be very careful in how they treat information that comes out of a leak, and not assume it’s legitimate unless it’s completely verified independently. And third, politicians should not point to leaked information as part of their political messaging, because the information ultimately can’t be verified. It’s a dangerous path to walk down if politicians start pointing to information that is very easily fabricated.

Above: Deep fakes are pretty easy to create.

Image Credit: McAfee

VentureBeat: On ransomware, is there a scenario out there in the wild already that relates to the election?

Grobman: We have seen state and local IT infrastructure impacted by ransomware attacks very recently. What’s a lot more difficult is to do direct attribution to a particular nation-state that might be using this tactic to disrupt the election. One of the challenges here is, whether it’s a nation-state, or criminal groups that are linked to a nation-state, or just cybercriminals, the evidence may look very similar. That’s the danger. We’re seeing that ransomware is impacting state and local organizations.

In the third scenario, one of the differences between 2016 and 2020 is the sophistication of AI technology and its ability to create large volumes of compelling fake video, what we call deep fakes. We need to recognize that just as voters are skeptical of photographs being subject to manipulation, video can now be manipulated such that there can be a video of a candidate saying or doing anything. The barrier to entry for building these videos has come way down since the last election cycle.

We need to be very careful in the way we treat video. Before spreading viral videos, they need to be verified, not only by looking at them, but by tracing them back to their source. It’s important that if there is video content related to a candidate’s words or actions, it can be validated by a reputable news or media outlet, and not solely sourced off of social media.

One of the things McAfee is doing in this area is we’ve opened a deep fake forensics lab that is available to media sources, such that if a video comes in, before they run a story based on it, we can provide analysis as to whether we see markers or indications that it’s been fabricated or faked.

VentureBeat: Are you able to quickly identify deep fakes? Is that something you can keep up with?

Grobman: I’d put it this way. We’re pretty good at detecting deep fakes that are created with the common tools that are publicly available. With that said, if a well-funded nation-state actor created a video using new algorithms, new techniques, that would be significantly more difficult for us to detect.

The other two points I’d like to make on our ability to do analysis: we’re able to detect deep fakes, but in scenarios where we don’t detect something as fake, that doesn’t imply that it’s legitimate or authentic. If we detect that it’s fake, it’s almost certainly fake. If we don’t detect that it’s fake, that either means it’s authentic or it’s using new techniques that our deep fake detection capability is not yet able to recognize.

The other point I’d try to stress is, it is a cat and mouse game. There are going to be better deep fake creation techniques, and we’ll have better deep fake detection techniques. We can also use a wide range of deep fake detection techniques that look at different approaches. For example, we can look at markers for the altered video itself. Some of the algorithms are looking for inconsistencies in the video. But then there are other, more advanced solutions that track the mannerisms or gestures of certain candidates, so we can look for inconsistencies of — would this candidate have made these arm motions? Are they typical? The algorithms can track and create clusterings for the other videos on file for a candidate, and then determine whether the submitted video is an outlier.
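The outlier idea Grobman describes, comparing a submitted video's mannerism features against a candidate's catalog of authentic footage, can be illustrated with a toy z-score check. The feature values, threshold, and function name below are invented for illustration; this is a sketch of the statistical idea, not McAfee's actual detector.

```python
from statistics import mean, stdev

def is_outlier(history: list[float], candidate_value: float,
               threshold: float = 3.0) -> bool:
    """Flag a submitted video whose extracted feature (e.g. gesture
    rate per minute) sits far outside the candidate's historical
    distribution. Toy illustration of the outlier idea only."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return candidate_value != mu
    z = abs(candidate_value - mu) / sigma
    return z > threshold

# Gesture-rate features from authentic videos (invented numbers).
on_file = [12.1, 11.8, 12.6, 12.3, 11.9, 12.4]
print(is_outlier(on_file, 12.2))  # consistent with past footage
print(is_outlier(on_file, 25.0))  # far outside: worth forensic review
```

A production system would cluster many features at once, but the principle is the same: authentic footage defines a distribution, and a fabricated video risks landing outside it.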

Another thing we suggest to the media is if somebody submits a video that occurred in a public setting, to try to verify through multiple unique sources. If a candidate said something at a rally, get video from multiple cell phones. It’s going to be much harder to fabricate a video from multiple angles and get all of the physics exactly right when you have multiple cameras shooting the same event simultaneously. Putting all of these things together will help us authenticate whether or not we should trust video related to the campaign.

The next one we talk about is related to disinformation. About a week ago, the FBI reported voter intimidation campaigns in which nation-states, per the FBI’s attribution, are attempting either to change the way a voter votes or to discredit the election process.

We’ve also seen that the websites hosting information about the election, run by local and state governments, are often lacking some of the most basic cyber-hygiene capabilities we’d expect. For example, we ran a report that showed the vast majority of local election websites are not using .gov domain addresses, which means it’s very difficult to tell whether you’re going to a legitimate local election site or a fake one. A fake site could do very simple things to suppress votes, such as changing the time the polls are open, changing the polling locations, changing information on eligibility requirements for voting, or changing information on the candidates. If you’re a typical voter, there’s no way to tell which of two similar-looking sites is the “correct” one, when one is giving fake information and the other real information.

The other hygiene element we saw severely lacking: about half the sites are not using HTTPS. HTTPS does two things. It encrypts data, so that any personal information going from a voter to the site is protected. And it ensures the integrity of the data coming back from the site, so that data cannot be tampered with in transit. There are a number of attacks where you can impersonate a site and change the information with some of these integrity attacks. That’s much easier if a site is not using HTTPS.
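The two hygiene gaps called out above, non-.gov domains and missing HTTPS, can be sketched as a quick URL check using only the Python standard library. The function name and URLs here are invented for the example and are not McAfee tooling; real verification would also require checking the link came from a trusted Secretary of State page.

```python
from urllib.parse import urlparse

def basic_hygiene_check(url: str) -> list[str]:
    """Flag the two basic-hygiene gaps described in the interview:
    missing HTTPS and non-.gov domains. Illustrative only."""
    warnings = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        warnings.append("not served over HTTPS: traffic can be read or altered")
    hostname = parsed.hostname or ""
    if not hostname.endswith(".gov"):
        warnings.append("not a .gov domain: harder to confirm it is official")
    return warnings

# A hypothetical county site on a commercial domain over plain HTTP
# trips both warnings; an HTTPS .gov address trips neither.
print(basic_hygiene_check("http://example-county-elections.com/polling-places"))
print(basic_hygiene_check("https://example.county.gov"))
```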

Above: Ransomware was first detected in 1989.

Image Credit: Intel Security

VentureBeat: That sounds like a tough one to get around, especially if you’re just Google-searching for things.

Grobman: That’s exactly the point. Instead of Googling, we recommend voters start from a trusted Secretary of State’s website. There’s typically going to be a list of all the local websites on the Secretary of State’s site. If you’re a resident of Texas, start at the Texas Secretary of State’s site and find your county. There will be a link from the Secretary of State’s website to your county’s site. That’s the link you should follow.

Voters also need to be very skeptical of email. Election boards are not typically going to email you with logistics information on where, when, and how to vote. If you get an email that says, “Reminder, tomorrow is election day. This year, due to COVID-19 we’ve moved the polling location 55 miles away,” stop before you drive 55 miles out into the country to vote. It’s likely a fake email. Those are the types of things voters need to be aware of as we get closer to November 3.

The fifth one is something we’ve talked a lot about in the past: denial of service attacks on things like critical infrastructure. We need, as a nation, to be ready for a critical infrastructure attack that could target specific areas of the country in order to tilt the vote. Think of a critical infrastructure attack in a rural area to suppress Republican votes, or one in urban areas to suppress Democratic votes. In a state where the vote is going to be very close, and given that the Electoral College awards all of a state’s electoral votes winner-take-all (except in Maine and Nebraska), disrupting portions of a state could give voters a reason to stay home, whether because they need to wait for the heat to come back on or because traffic jams form when lights go out. Those are the types of things we need to be aware of.

The good news is, federal agencies like DHS are very much on alert looking for these types of attacks. We will hopefully be able to respond very quickly if anything like this does occur. But really, all federal, state, and local authorities need to be on their A game for the next week.

And finally, we want to remind people that attribution is difficult. When and if we see cyber activity during the election cycle, we shouldn’t jump to conclusions as to who is behind it. That’s something that needs to be left to trusted federal agencies. One of the things that’s unique about cyber is that, because your evidence is digital, it’s easy to fabricate fake evidence pointing to some entity other than the one that executed the attack. We call this a false flag.

If country A wanted to make it look like country B was manipulating the election, it could go back historically, analyze the way country B had executed attacks in the past, and set up a scenario with some of the markers that have been used before. That is very possible. We’ve seen elements of this called out recently by the FBI in the indictments of some of the Russian actors that came out a few weeks ago, where some of those attacks were meant to look like the work of China or North Korea. Given that we’re in an election cycle where different countries are inferred to be supporting different candidates, we need to be careful with attribution, generally using a combination of digital forensic evidence and information that is only available to law enforcement and the U.S. intelligence community through investigation of things that are not generally in the public domain.

VentureBeat: There is the problem that the president of the United States [or] his advisors are sometimes the source of the disinformation. I’m not so sure exactly how people check up on that, other than listening to reputable news sources.

Grobman: Relying on the media to fact-check all information and ensure that we can trace evidence back to a verifiable underlying source is incredibly important. Operating on conjecture, innuendo, or other information that is not verifiable is something the media and voters should be very careful of. It’s important that we have a free and fair media that’s able to fact-check and dig into the data. That’s very important to supporting U.S. democracy.

VentureBeat: When you think of more low-tech and simple disinformation campaigns and you compare it to things that are a lot more sophisticated, with the technology available now, what do you think about that? Do you think that those are still worth worrying about?

Grobman: They’re worth worrying about. But what I will say is, we see with cyberattacks, generally, a cyber-adversary will use the simplest approach to achieve their goals. If you can steal somebody’s data with a very simple attack, like a spearphishing attack, you won’t go to the trouble of engineering a high-tech solution. Additionally, for some of these more elaborate attacks, where a nation-state might need to use vulnerabilities that only they are aware of, once you exploit a vulnerability you’ve burned it. You can’t use it in the future. Unless an adversary feels that they’re unable to meet their objective using the simpler approaches, there are incentives to keep in your back pocket the more sophisticated and elaborate techniques.

With that said, it’s certainly plausible that an adversary might see the stakes for this election cycle as being high enough that they’re willing to pull out some of their more powerful capabilities and use them. Unfortunately we don’t have any deterministic predictors of which of those scenarios will play out until after it happens.

Above: A deep fake of Tesla CEO Elon Musk.

Image Credit: McAfee

VentureBeat: You’re saying this right before the election. Have you detected a lot more activity in recent days that makes it necessary to speak up?

Grobman: McAfee has been focused on election security for more than two years. We started calling out concerns back in the 2018 midterm elections. We’ve been focused on educating the general public on what to look out for and how to think about election security. We’re moving into the final week of the election, and clearly, if adversaries wanted to create scenarios of disruption, this would be one of the higher-probability weeks for that to occur. One of the key reasons we’re talking about it right now is just to make sure that voters understand what to look for, and that all of our state, local, and federal officials are preparing as strongly as they can for every possible scenario.