Some of the world’s best-known brands have invested millions of dollars in information security. So have their adversaries. But malicious actors are counting on the fact that your defenses are operated mostly by humans and tend to be the same across the board.
When you moved into your neighborhood, did you change your locks, or do you have the exact same ones as all your neighbors? Think about what could happen if a thief compromised just one of those shared locks. For some reason, the world of information security has a same-lock mentality, and some “customers” are malicious actors working hard to harm the rest. Given this situation, we should not be surprised that even with the massive amount of money being spent, defenses still fail.
If cyber defenders are ever going to have a chance at winning, we must begin to level this playing field. Vendors distribute identical copies of their security products to customers because it’s easier for them, not because it’s better for their customers.
How many variants of a signature is an antivirus company supposed to produce for each malware sample it analyzes? Do all host-based artificial intelligence (AI) defenses learn in the environment where they are deployed? In the past, tailoring these approaches for each enterprise was not feasible. Luckily, new techniques are emerging within cybersecurity that produce unique detection behaviors for each customer — behaviors that can help level the playing field and maybe even help win the game.
These emerging techniques broadly fall into the area of AI and machine learning. At the heart of any AI system is the ability to learn. Some AI solutions learn from their local environment, while others learn strictly from a global context. Those that will win out are solutions that build some or all of their threat detection capability using data that only exists in a customer’s network environment and produce a type of moving defense unique to that environment. These include:
- A defense that is substantively different from enterprise to enterprise.
- A defense that evolves over time as it adapts to changes in its environment.
- And, most importantly, a defense that no attacker can completely scope out beforehand and know for certain they can defeat.
Similar to how hashing and salting a password helps protect it from compromise, deploying cybersecurity solutions that use the network environment to differentiate themselves from all other copies helps protect the enterprise.
Establishing a moving defense using AI
AI systems use many thousands of features to discern if content traversing a network is malicious or if user or system behaviors are anomalous. Each feature alone provides only a small piece of evidence needed to make a final determination or classification.
Only in intricate and complex combinations are these features useful. Machine learning algorithms figure out how to combine them to produce accurate insights and predictions using a dedicated training set or training period.
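The idea that no single feature decides the outcome, but a learned combination does, can be sketched in a few lines of Python. Everything here is an illustrative assumption — the feature names, the synthetic distributions, and the tiny perceptron learner are stand-ins, not any vendor's actual model:

```python
import random

random.seed(0)

# Toy "network content" samples. Each feature alone only weakly
# separates malicious (label 1) from benign (label 0) traffic.
def make_sample(malicious):
    entropy = random.gauss(6.5 if malicious else 5.5, 1.0)   # payload entropy
    url_len = random.gauss(80.0 if malicious else 60.0, 20.0)  # URL length
    execs = random.gauss(3.0 if malicious else 1.0, 1.5)     # embedded executables
    return [entropy, url_len, execs], 1 if malicious else 0

data = [make_sample(i % 2 == 0) for i in range(400)]

# A minimal perceptron learns how to weight the weak features together.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(50):
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = y - pred
        w = [wi + 0.01 * err * xi for wi, xi in zip(w, x)]
        b += 0.01 * err

correct = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in data
)
print(f"training accuracy: {correct / len(data):.2f}")
```

The combined classifier does far better than chance, even though each individual feature overlaps heavily between the two classes.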
Depending on each AI system’s approach, training data can originate from the local environment, a global context, or a hybrid of the two. However, unlike traditional approaches, the resulting models are never based on simple rules or patterns that are easily understood. The natural opacity of these models and their dynamic construction provide the building blocks for an effective moving defense.
You can alter the AI models by adjusting the training set or the training period. Whether additional training data is added alongside older data or replaces it doesn’t matter; either way, the resulting models change.
New models emerge with different ways of weighting existing features, and possibly with entirely new features. With AI and machine learning, the cost of building tailored detection solutions is negligible. There must, however, be a vision on the part of the solution provider to enable this approach. Some security providers using machine learning and AI still deploy their models in a traditional manner and don’t leverage local data to tailor their solutions.
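A minimal sketch of the tailoring idea: train the same model architecture on two different enterprises' local data and the learned parameters come out different, giving each environment its own "lock." The enterprise baselines, feature distributions, and trainer below are all hypothetical:

```python
import random

def train_weights(samples, epochs=30, lr=0.01):
    """Tiny perceptron trainer; returns the learned weights and bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def local_traffic(seed, benign_mean, malicious_mean, n=200):
    """Synthetic 'local environment' data; distributions are illustrative."""
    rng = random.Random(seed)
    samples = []
    for i in range(n):
        mal = i % 2 == 0
        mean = malicious_mean if mal else benign_mean
        samples.append(
            ([rng.gauss(mean[0], 1.0), rng.gauss(mean[1], 1.0)], int(mal))
        )
    return samples

# Two enterprises with different local baselines produce different models,
# even though the architecture and training procedure are identical.
w_a, _ = train_weights(local_traffic(1, benign_mean=(2, 5), malicious_mean=(4, 7)))
w_b, _ = train_weights(local_traffic(2, benign_mean=(5, 2), malicious_mean=(7, 4)))
print("enterprise A weights:", w_a)
print("enterprise B weights:", w_b)
```

An attacker who reverse-engineers enterprise A's model has learned little about enterprise B's, which is the point of a moving defense.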
Of course, there are challenges with moving defenses, and not just those faced by the malicious actors that will continue to try to defeat them. The most significant challenge is ensuring parity among the tailored solutions. Nobody wants the “second-best” detection model. Care must be taken to verify that any technical implementation produces a statistically equivalent model with detection accuracy and error rates nearly identical across all tailored variants.
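One way to operationalize that parity check is to compare detection metrics across the tailored variants and flag any metric whose spread exceeds a tolerance. The variant names, numbers, and tolerance below are illustrative, not measurements from a real deployment:

```python
# Hypothetical holdout-set results for three tailored variants of the
# same detection model (all numbers are made up for illustration).
variant_stats = {
    "enterprise_a": {"accuracy": 0.962, "fp_rate": 0.021, "fn_rate": 0.017},
    "enterprise_b": {"accuracy": 0.958, "fp_rate": 0.025, "fn_rate": 0.017},
    "enterprise_c": {"accuracy": 0.965, "fp_rate": 0.019, "fn_rate": 0.016},
}

def parity_check(stats, metrics=("accuracy", "fp_rate", "fn_rate"), tol=0.02):
    """Return the metrics whose spread across variants exceeds the tolerance."""
    failures = []
    for m in metrics:
        values = [s[m] for s in stats.values()]
        if max(values) - min(values) > tol:
            failures.append(m)
    return failures

print(parity_check(variant_stats))  # [] means all variants are in parity
```

A variant whose accuracy drifted well below its peers would show up in the returned list and could be retrained before anyone is stuck with the "second-best" model.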
It’s hard to find a security concept simpler than a moving defense. “Change your locks” is among the most well-established pieces of security advice. In cybersecurity, however, some locks are just easier to change than others.
Scott Miserendino is the chief data scientist at BluVector.