Cybersecurity is a game where speed kills. Defenders need to act fast if they want to keep up with sophisticated modern threat actors, which is difficult when attempting to secure data as it moves between on-premises and cloud environments. However, Microsoft believes this is a challenge that can be addressed by turning to GPT-4.
Today, Microsoft announced the release of Microsoft Security Copilot, a generative AI solution based on GPT-4 and its own proprietary security models. The tool can process up to 65 trillion threat signals taken from security tools like Microsoft Sentinel, and create a natural-text summary of potentially malicious activity — such as an account compromise — so that a human user can follow up.
“Security Copilot can augment security professionals with machine speed and scale, so human ingenuity is deployed where it matters most,” said Vasu Jakkal, Microsoft corporate VP for security, compliance, identity and management, in the blog post announcing the new tool.
At a high level, this latest release highlights the fact that generative AI has a valuable defensive use case; not just in collecting disparate threat signals throughout an organization’s network and converting them into a written summary, but also providing users with step-by-step incident remediation instructions.
Using GPT-4 to make security teams move at the speed of AI
Ever since the release of ChatGPT in November 2022, the defensive use cases for generative AI have been rapidly growing in the enterprise security market.
For example, open source security provider Armo released a ChatGPT integration designed for building custom security controls for Kubernetes clusters in natural language.
Likewise, cloud security vendor Orca Security released its own ChatGPT extension, which could process security alerts generated by the solution and provide users with step-by-step remediation instructions to manage data breaches.
The new release of Microsoft Security Copilot illustrates that adoption of generative AI is accelerating in enterprise security, with larger vendors looking to help organizations realize the vision of an automated SOC, which is essential for keeping up with the level of current cyber threats.
“The number of attacks keeps going up,” said Chang Kawaguchi, Microsoft VP and AI security architect. “Defenders are spread thin across many tools and many technologies. We think Security Copilot has the opportunity to change the way they work and make them much more effective.”
Contextualized signals, analyst support
With the average breach lifecycle lasting 287 days and with security teams spending 212 days to detect breaches and 75 days to contain them, it’s clear that manual, human-centric approaches to threat investigation are slow and ineffective.
Security Copilot’s answer is to not only contextualize threat signals, but also to support analysts with prompt books, provided by Microsoft or by the organization itself, that offer guidance on how to remediate a security incident quickly.
For instance, if Security Copilot detects malware on an endpoint, it can surface a malware impact analysis prompt book for the user, which details the scale of the breach and provides guidance on how to contain the incident.
The generative AI in cybersecurity market
It’s no secret that the global generative AI market is in a state of growth, with OpenAI, Google, Nvidia and Microsoft all vying for dominance in a market that researchers estimate will reach a value of $126.5 billion by 2031.
However, at this stage in the market’s growth, the role of generative AI in cybersecurity has yet to be clearly defined.
While providers like Orca Security, which currently holds a valuation of $1.8 billion, have demonstrated potential use cases for GPT-3 in processing cloud security alerts and generating remediation guidance to reduce the mean time to resolution (MTTR) of security incidents, the concept of an autonomous cybersecurity copilot is still taking shape.
Microsoft’s decision to go all-in with its own generative AI security solution not only has the potential to accelerate the adoption of tools like GPT-4 in a defensive context, but to define the potential defense use cases that other organizations can look to and apply in their own environments.
“What differentiates us, besides the Microsoft models themselves, is the skills and the integrations with all the rest of the security products our customers use; and to be honest, we think that there’s a massive first-mover advantage here in starting the learning process and working with customers to improve and empower their teams,” said Kawaguchi.
That being said, while the defensive use cases of generative AI appear promising, there’s still a long way to go before it becomes clear whether tools like GPT-4 are a net-positive or negative for the threat landscape.