How AI Enhances Anomaly Detection and Eases Alert Fatigue
Alert-weary security pros are looking to GenAI’s contextual capabilities to identify the most serious anomalies and get to the “big aha moments” of actionable threat intel.
It’s commonly known in cybersecurity circles as “alert fatigue” – brain overload from a barrage of anomaly alerts generated by detection tools – and it can be the reason that serious threats get missed, as security teams chase false positives or simply tune out the “noise.”
But lately, security pros are hoping there might soon be a prescription for relieving alert fatigue: GenAI. With its ability to add valuable context, it can screen out the noise to ID and prioritize the most pressing threats, and then provide potential mitigation strategies – plus (potentially) a whole lot more, if the technology evolves as some experts expect.
Monitoring anomalies (cyber parlance for anything unusual or unexpected within a system, such as an application or network) is complex and exhausting, explains Scott Crawford, information security research head at 451 Research. The high alert rate stems from the complexity of modern systems, the difficulty of gathering all the relevant context about the environment and the nature of the attack, and the challenge of tuning security tools to detect anomalies accurately.
Historically, anomaly detection engines work well on very specific, well-baselined data sets, explains Wim Remes, operations manager at security services firm Spotit. In Remes’ view, useful anomaly detection must provide an answer to two specific questions: Is something being observed that is abnormal to the environment, and what is the anomaly most likely related to?
The ability to answer that second question is what distinguishes traditional, limited-in-scope anomaly detection from next-gen iterations fueled by AI.
If the anomaly detection system can only deliver on the first question, it primarily serves as a very coarse detection engine with a high rate of false positives. “If AI-based anomaly detection is here to stay, I believe that it should evolve into understanding broader data sets with higher accuracy,” says Remes.
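To illustrate why answering only the first question yields a coarse detector, here is a minimal baseline check in Python. The function name and sample numbers are hypothetical: a metric is flagged the moment it strays from its statistical baseline, with no notion of what the deviation might be related to.

```python
from statistics import mean, stdev

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a metric as anomalous if it deviates more than
    `threshold` standard deviations from the baseline window."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Baseline: typical login counts per hour for one host (made-up data).
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_anomalous(120, baseline))  # a spike far outside the baseline
print(is_anomalous(13, baseline))   # within normal variation
```

A detector like this answers "is this abnormal?" but nothing more, which is exactly the false-positive problem Remes describes: every unusual spike fires an alert, benign or not.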
The future of anomaly detection with AI
A detected anomaly can indicate something is awry, such as malware or active attackers.
Historically, intrusion detection systems and anti-malware software have used signatures to detect malware and exploits being executed. Over time, machine learning (ML) and behavioral analysis capabilities have been added to detect artifacts yet to be identified. And now there’s the prospect that GenAI will improve upon previous AI/ML capabilities. With its advanced pattern recognition and data-generation power, GenAI has the potential to build increasingly accurate environmental and behavioral models that can identify subtle deviations from normal behavior, simulate potential attack scenarios, and adapt in real time to evolving threats.
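To make that evolution concrete, the sketch below contrasts the two older approaches side by side. All names and event labels are invented for illustration; the one hash listed is simply the SHA-256 of an empty payload, standing in for a real indicator of compromise.

```python
import hashlib

# Known-bad hashes (stand-in value: SHA-256 of an empty payload).
SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def signature_match(payload: bytes) -> bool:
    """Classic detection: exact match against catalogued hashes.
    Misses anything not yet identified and catalogued."""
    return hashlib.sha256(payload).hexdigest() in SIGNATURES

def behavioral_score(events, normal_profile):
    """ML-style detection: score a sequence of events against a learned
    profile of normal behavior, catching artifacts with no signature."""
    return sum(1 for e in events if e not in normal_profile) / max(len(events), 1)

print(signature_match(b""))  # matches the catalogued hash
print(behavioral_score(["login", "read", "exfil"], {"login", "read"}))
```

The GenAI layer the article describes would sit on top of both: not just scoring deviations, but explaining what they likely relate to.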
[Read also: Ultimate guide to AI cybersecurity – benefits, risks, and rewards]
“The conversation today is driven by the breakthroughs in the use of generative AI,” says David Gruber, principal cybersecurity analyst at the market research firm Enterprise Strategy Group. “The traditional AI machine learning models that have been in use have done an excellent job in stitching signals together that equate to potential patterns of suspicious behavior,” Gruber says. “Generative AI is helping add this new layer on top of traditional detection models for machine learning and AI, but it’s helping the analyst automate and correlate a lot of the things that they had to do somewhat manually and use their own knowledge base to do,” he adds.
GenAI, however, goes beyond traditional ML in threat detection.
How AI adds reasoning to anomaly detection
When the system is tuned correctly, traditional machine learning can effectively spot anomalies within data patterns. GenAI systems promise to add “reasoning” regarding the context of the findings and provide suggestions based on complex variables.
This means that for security practitioners, the AI first identifies a threat, then assesses how that specific threat could impact the environment, and then provides a range of potential mitigation strategies.
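One hypothetical way to structure that identify/assess/mitigate flow in code is sketched below. Every name, tag, asset, and playbook entry here is invented for illustration; a real system would draw the assessment and mitigations from a model rather than a hard-coded playbook.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    threat: str
    affected_assets: list
    mitigations: list = field(default_factory=list)

def triage(anomaly: dict, asset_inventory: dict) -> Finding:
    """Hypothetical three-step triage: identify the threat, assess which
    assets it could impact, then attach candidate mitigations."""
    threat = anomaly["type"]                                   # 1. identify
    affected = [asset for asset, tags in asset_inventory.items()
                if anomaly["target_tag"] in tags]              # 2. assess impact
    playbook = {"credential_stuffing": ["force password reset",
                                        "enable MFA",
                                        "rate-limit logins"]}
    return Finding(threat, affected,
                   playbook.get(threat, ["escalate to analyst"]))  # 3. mitigate

finding = triage({"type": "credential_stuffing", "target_tag": "auth"},
                 {"sso-gateway": {"auth", "prod"}, "build-server": {"ci"}})
print(finding.affected_assets)  # only assets tagged "auth" are in scope
```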
Gruber explains that such GenAI applications are helping to change the way security practitioners think about the use of AI in the context of security operations functions. It’s not just about detecting anomalies but understanding them in the full context of the environment in which they occur. “Are we detecting something suspicious in the context of the infrastructure, the set risk profile, and the threat history the organization has experienced, or the current business climate?” Gruber asks. GenAI can point analysts in the right direction.
Multi-agent GenAI systems show promise
While multi-agent systems (which use multiple components to scan the environment rather than a single, centralized entity) are not new to cybersecurity, GenAI promises to revolutionize how security agents dedicated to disparate tasks can work together. Numerous autonomous agents functioning in unison will be able to assess the full context of an organization’s environment, the threats detected, and the best ways to respond.
With some agents monitoring network traffic, others monitoring system logs, user behaviors, misconfigurations, and vulnerabilities, and still others watching for active threats, a GenAI system could develop a nuanced understanding of a potentially risky situation. It would also spell major relief for threat intelligence teams, who today are buckling under the pressure to assess and respond to every random threat.
The hope is that this will yield a more thorough situational assessment and help prioritize the alerts that require immediate attention.
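A toy coordinator along these lines might look like the following sketch, with hypothetical agent names and findings. The prioritization rule here is an assumption for illustration: an entity rises to the top when independent agents corroborate it.

```python
from collections import defaultdict

# Hypothetical findings emitted by specialized agents; in a real system
# each agent would watch its own telemetry stream.
findings = [
    {"agent": "network", "entity": "host-7", "signal": "beaconing"},
    {"agent": "logs",    "entity": "host-7", "signal": "new admin account"},
    {"agent": "config",  "entity": "host-2", "signal": "open port"},
]

def correlate(findings, min_agents=2):
    """Coordinator: group findings by entity and surface only entities
    flagged by at least `min_agents` independent agents."""
    by_entity = defaultdict(list)
    for f in findings:
        by_entity[f["entity"]].append(f)
    return {entity: fs for entity, fs in by_entity.items()
            if len({f["agent"] for f in fs}) >= min_agents}

print(list(correlate(findings)))  # host-7, corroborated by two agents
```

Correlation across independent observers is what suppresses the single-signal noise that drives alert fatigue: host-2’s lone open port stays below the alerting threshold.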
GenAI’s broader contextual understanding
GenAI systems promise to understand not only the context of the internal technological environment but also broader events, such as geopolitical circumstances, company-specific threats, and the nature of threat actors targeting particular industries.
Because GenAI can represent relationships between external and internal conditions, rather than simply dispatching an alert whenever a threat pattern is identified, it enables a much more sophisticated analysis of threats.
“GenAI can do a lot of the legwork to get this context on behalf of analysts, which puts more contextual information on the screen in front of them and correlates it in a way that it makes it easier for them to get to the big aha moments about what’s happening and the potential impact,” says Gruber.
Human-AI collaboration – and caution
In time, experts say, GenAI systems may partner with human analysts in ways specific to the strengths and inclinations of individual team members. For instance, through analyst feedback, skill assessments, and work-pattern recognition, the AI will understand the strengths, weaknesses, and potential biases of specific team members and work with them at their level of skill and awareness. For a given threat, the system may suggest pairing particular analysts based on their talents.
[Read also: CISO success story – a real-life Marvel superhero on AI fighting cybercrime]
There’s a lot of excitement about the potential of the technology, but it’s still early days in the evolution, and not everyone is convinced it can really deliver. “Unfortunately, I have not seen great success with GenAI-based environment information [anomaly] systems. They only work well in single-vendor environments,” says Remes. He adds that very few organizations, particularly larger ones, have chosen to embrace a monolithic security strategy. “I also don’t think that’s a great strategy, so I don’t expect many to take that leap,” says Remes.
While such caution has its merits, the technology is undeniably promising, and it will only get better from here. Considering how many hours the typical security analyst spends chasing false positives, any future system that promises to dramatically reduce that workload is certainly one most will embrace.