As 2014 wraps, it’s safe to say that we have had some of the most publicized, devastating data breaches in years, including massive hacks on Target, JPMorgan Chase, and Sony.
But as more details emerge about how those breaches happened, it’s becoming clear that better security software and larger IT security teams may not be the most cost-effective answer. The way those attacks happened underscores a need to get back to security basics.
These attacks were not made possible by supercomputer-powered, password-cracking, firewall-busting systems. No, the victims were done in by a lack of focus on security fundamentals: changing passwords, limiting and monitoring contractor access, not letting access to one area become a key to a much more sensitive area, and so on.
As one board member at multiple security companies phrased it: “We’re talking ‘Washing your hands before you eat’ kind of stuff for IT security.”
Conventional IT wisdom calls for more security software and more staff to monitor that software, decisions that are often based on recommendations from a wide range of — you guessed it — vendors of security services.
Let’s drill into these three breaches and see what actually allowed the attackers in. Target is a good place to start, because a lawsuit reveals many of the details.
According to documents recently filed in the federal lawsuit against Target for the data breach, there were more than a dozen security issues that allowed the thieves to do their naughtiness:
- Target didn’t take written warnings from Visa seriously. (Those warnings dealt with firewall configurations, network segregation and encryption improvements.) “Target failed to implement these measures,” the plaintiffs wrote;
- The attackers got into the network using the credentials of a Target contractor, a refrigeration company, which needed access for electronic billing, contract submission, and project management purposes. But the thieves learned about the company far too easily, by searching on Google. The plaintiffs argued that Target needed to “limit the amount of publicly available vendor information.”
- The security problem grew because of apparently weak security systems at that refrigeration contractor. “Target could and should have required adequate monitoring and anti-malware software for any vendors with access to Target’s computer systems,” according to the lawsuit. Take it up a notch and the argument is that Target should have insisted that all contractors have security comparable to Target’s. The “weak link” strategy is a favorite of cyberthieves.
- Target IT staff warned their superiors about security problems, and those warnings went unheeded. Note: Companies are not obligated, nor is it necessarily prudent, to act on every proposal that is sent up in a memo. Still, these warnings pointed at real weaknesses.
- Segmentation. According to the suit, the contractor credentials gave the thieves “access to the billing, contract submission, and project management portions of Target’s computer network only and presumably nothing else. Target’s computer network, however, was not properly segmented to ensure that its most sensitive parts were walled off from the other parts of the network.” That’s a basic security error. As a result, the attackers were able to easily move from billing into the payments and CRM areas.
- Target did not use two-factor authentication for contractors. Two-factor authentication is no panacea, but it can increase security if the second factor is hard to discover or to guess — which would have closed this particular vulnerability.
- The attackers used exfiltration malware, which essentially holds the stolen data for several days and then ships it to a system controlled by the thieves. “FireEye, Target’s new security software provider, detected that the hackers were uploading the malware and alerted Target’s security team about the suspicious activity. Target’s security team took no action,” the lawsuit alleges.
- Target failed to remove unused default accounts, which the thieves also leveraged.
- Target should have “required vendors to more closely monitor the integrity of their critical system files,” according to the suit.
- Other software, specifically Symantec Endpoint Protection, also flagged suspicious behavior in the system, yet Target took no action, the filings said. Note: Depending on the particulars, this allegation may be unfair without more context. Companies the size of Target see a huge number of alerts from various security systems every day and may not be able to chase down every one. The real question is whether the nature of these particular alerts justified immediate action.
- “Target could have and should have erected strong firewalls between Target’s internal systems and the outside Internet to help disrupt the hackers’ ability to command and control the company’s computer network as easily as it did,” the lawsuit claims.
- Target should have blocked domains and IP addresses from areas known for cyberthief attacks, such as Russia, while whitelisting approved servers, the lawsuit claims.
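The lawsuit’s last point, blocking traffic from known cyberthief regions while whitelisting approved servers, is a policy simple enough to express in a few lines. Here is a minimal sketch in Python using the standard library’s `ipaddress` module; the approved networks shown are placeholder documentation ranges, not any company’s real policy, and a real deployment would enforce this at the firewall or egress proxy rather than in application code:

```python
import ipaddress

# Hypothetical allowlist: egress is permitted only to explicitly
# approved server networks (these are RFC 5737 documentation ranges).
APPROVED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def egress_allowed(dest_ip: str) -> bool:
    """Whitelist check: allow a connection only if the destination
    address falls inside an approved network. Everything else,
    including traffic to high-risk regions, is denied by default."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in APPROVED_NETWORKS)
```

The deny-by-default shape matters: rather than enumerating bad regions, everything not explicitly approved is blocked, which would also have disrupted the attackers’ command-and-control traffic.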
The consistent thread in the Target lawsuit is that stricter adherence to basic, relatively low-cost security mechanisms (limiting access from less-sensitive areas of the network to more sensitive ones, making sure contractors take security seriously, aggressively shutting down unneeded accounts, and so on) could have prevented this attack from doing major damage.
A pattern of failures
But Target is just a retailer. What about a major financial player like JPMorgan Chase? Surely that attack wouldn’t have been thwarted by adhering to decade-old best practices, right?
According to the latest details from Chase, yes, it would have. “The computer breach at JPMorgan Chase this summer — the largest intrusion of an American bank to date — might have been thwarted if the bank had installed a simple security fix to an overlooked server in its vast network,” noted the New York Times. The problem was, again, the lack of two-factor authentication. “JPMorgan’s security team had apparently neglected to upgrade one of its network servers with the dual password scheme,” the Times wrote.
Then there’s Sony, which used the same security vendor, FireEye, that Target did. The thieves’ key access method was tricking a senior administrator into giving them network access. From there, the data flowed quickly. But why didn’t alarms scream when one account downloaded that many files? According to news reports, the hackers grabbed, stored, and shipped offsite more than a dozen terabytes of data. A system that flags downloads exceeding a preset threshold and alerts both the employee (“Are you really doing this?”) and higher-ups (“Is this person supposed to be doing this?”) is not difficult to deploy.
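A threshold alarm of the kind described above can be sketched in a few lines of Python. The 100 GB daily limit and the choice to alert only once per account are assumptions for illustration; real limits would depend on the role of the account:

```python
from collections import defaultdict

# Hypothetical policy: flag any account that moves more than
# 100 GB in a monitoring window. A dozen-plus terabytes, as in
# the Sony breach, would trip this alarm very early.
ALERT_THRESHOLD_BYTES = 100 * 1024 ** 3

class DownloadMonitor:
    def __init__(self, threshold: int = ALERT_THRESHOLD_BYTES):
        self.threshold = threshold
        self.totals = defaultdict(int)   # bytes downloaded per account
        self.alerted = set()             # accounts already flagged

    def record(self, account: str, nbytes: int) -> bool:
        """Tally a download; return True the first time an account
        crosses the threshold, so callers can notify the employee
        and their superiors."""
        self.totals[account] += nbytes
        if self.totals[account] > self.threshold and account not in self.alerted:
            self.alerted.add(account)
            return True
        return False
```

Nothing here requires exotic software; it is the kind of basic volume accounting that file servers and proxies can already report.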
The Wall Street Journal shed a little light on one Sony security hole: Sony used 42 firewalls, but when the company switched who oversaw those servers, according to a September 2014 audit report, “it appeared that monitoring of one firewall and 148 pieces of computer gear was lost in the shuffle.”
A very detailed breakdown of the Sony attack from a firm called Risk-Based Security made a wise observation: “All analysis to date suggests the malware was not unique to Sony, and may have been used several times before. Trying to suggest that malware that evades ‘industry-standard antivirus software’ is ‘unprecedented’ is ridiculous.”
This is a very old problem with anti-virus detection: with few exceptions, it can be fooled. When a victim reports the new virus, the anti-virus firm updates its signature databases. And when the thieves slightly alter the virus, it becomes undetectable all over again.
The best defense is not necessarily to better scan for viruses — although that wouldn’t hurt. It’s best to reinforce training to never click on unexpected links, and to flag any unusual activity, such as excessive downloads. And if this sounds familiar, it’s been the refrain of security trainers for the last decade and, for some of the more basic elements, multiple decades.
Part of the problem: Human nature
Many, if not all, of these security issues stem from either human nature or the way most enterprises handle IT today.
When projects close, especially trials of very preliminary services, it’s natural for the teams to want to move on. This is triply true when the next project is invariably already two weeks behind schedule and, by the way, revenue-generating.
More human nature: it’s scary to shut down an unrecognized account or to deactivate a password that doesn’t link to a known need. Is the mysterious account or file critical to an active CIO project that the IT worker doesn’t know about? If you delete it and something important crashes, will the CIO praise you for trying to maintain security or blame you for the disaster? (OK, that is the quintessential rhetorical question needing no answer.)
In a perfect world, every programmer and designer would dogmatically document every single file, account, and password, along with an explanation of why it was created. Then someone could simply match every file and password to those documents and feel safe deleting anything that wasn’t properly documented. Alas, that is not the world enterprises live in.
Without such comprehensive documentation, investigating orphaned passwords or files feels like proving a negative. How many people and databases have to say “I have no idea what that is for” before something is safe to delete?
Outside contractors, entirely undocumented projects (hello, cloud operations), and acquisitions make abandoned files, passwords, and partitions even more daunting to manage.
Who has the time and resources to continually chase every potentially dormant account?
Indeed, all of the security issues flagged in this story could have been dealt with via traditional security best practices. But it’s very hard for the typical enterprise to pay someone to chase these things down.
But if companies don’t chase them down, there are those who will: the cyberthief dens of Eastern Europe, China, and North Korea, and the bedrooms of bored 14-year-old hackers in training.