
Seeing Is Believing: How Enterprises Are Using AI to Improve Cybersecurity

AI's ability to chew through scads of data, assess activity, and guide responses makes it a valuable weapon in any cybersecurity arsenal. We explore how leading corporations are using AI and autonomous tools now and what role they’ll play in the near future.

Perspective

Sure, in the last two years, cybercriminals have seemed to be one step ahead in the AI race, deploying large language models and chatbots in an onslaught of phishing and other cyberattacks. But that lead may be faltering, experts say, as those groups increasingly come up against AI-enabled cybersecurity tools that can detect – and thwart – such threats.

With its speed and skill at spotting patterns across scads of data, AI particularly excels at fraud detection and scanning systems for weaknesses. That should come as a relief to those who’ve been tracking the AI-fueled exploits of cyber gangs.


Some recent uses of AI by scammers include:

  • Just last week at the Munich Cyber Security Conference, Ukrainian officials warned that Russian hackers are using machine learning models to sift through masses of content swiped from infiltrated mailboxes and then using AI to craft more credible, tailored phishing campaigns.
  • Last month, Google announced it had identified 57 threat groups with ties to Russia, China, Iran, and North Korea using its Gemini chatbot to conduct reconnaissance on defense experts, troubleshoot code, and find new ways to burrow deep into targeted networks, using techniques like detection evasion and data exfiltration.
  • Cyber gangs have also created their own large language models like WormGPT, FraudGPT, and GhostGPT to create waves of new malware and phishing scams.

Here’s how some organizations are fighting back:

  • Mastercard used AI to scan payment data for nine partner banks, saving over $35 billion in fraudulent payments across three years.
  • Amazon, which sees up to 750 million cyberattacks every day, collates data on those incidents in a central database. It then uses machine learning to trawl that data and identify the incidents it should be most concerned about. It also runs AI-powered predictive analytics on the data to anticipate which threats will loom largest in the future.
  • The Cybersecurity and Infrastructure Security Agency (CISA) uses AI to automatically detect and analyze potential threats to U.S. government networks, flag unusual activity, and identify patterns in vast amounts of data.
  • Europol is also using AI for repetitive tasks, giving human analysts more time for other work and preventing their overexposure to “gruesome” material.

As the field evolves with the added skills of generative AI, it also holds promise for translating its findings into easily understandable results. That could mean condensing a bewildering array of metrics into a simple textual summary that would make more sense to busy security analysts. It could also converse with those analysts, answering natural language questions about its findings in a fluid, intuitive way.

These new capabilities offer all kinds of organizations a performance bump in the fight against digital intruders, says Mary Carmichael, a member of the ISACA Emerging Trends Working Group and president of the ISACA Vancouver Chapter. “Traditional machine learning [one type of AI] has been quite successful in several areas of cybersecurity, offering improvements over older, rule-based systems.”

Below, we outline key use cases where AI and machine learning are showing the most promise.

Detection – why neural networks excel at it

Detection is an area where machine learning has performed well, Carmichael notes. Some models identify malware by analyzing patterns in data, while others monitor network behavior against a baseline of normal activity. When behavior deviates enough from the norm, a flag is raised.

Traditional machine learning has been quite successful in several areas of cybersecurity, offering improvements over older, rule-based systems.

Mary Carmichael, member, ISACA Emerging Trends Working Group

That behavior could be a computer downloading too many files from an unknown IP address at an unusual time of day. Another trigger could be an email arriving for someone in the company with access to financial accounts, but with suspicious characteristics that a human administrator might miss, such as odd phrasing and an unknown sender.

That kind of monitoring and statistical analysis requires scouring large amounts of data from various sources, often in real time. A human could do it, but you’d need thousands of them tirelessly watching multiple related events across your IT infrastructure. It’s not realistic.
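To make that concrete, here’s a minimal sketch of baseline anomaly detection using an isolation forest, one common approach to flagging deviations from normal activity. The features, data, and threshold below are invented for illustration and don’t reflect any particular vendor’s product.

```python
# A minimal sketch of baseline anomaly detection, assuming network events
# have already been reduced to numeric features. Illustrative only.
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical feature rows: [files_downloaded, mb_transferred, hour_of_day]
baseline = np.array([
    [3, 12.0, 10], [5, 20.5, 11], [2, 8.3, 14], [4, 15.1, 9],
    [6, 22.0, 16], [3, 10.7, 13], [5, 18.9, 15], [4, 14.2, 10],
])

model = IsolationForest(contamination=0.05, random_state=0)
model.fit(baseline)  # learn what "normal" activity looks like

# A burst of downloads at 3 a.m. deviates sharply from the baseline
new_event = np.array([[250, 900.0, 3]])
if model.predict(new_event)[0] == -1:  # -1 means anomaly
    print("Flag for review: activity deviates from baseline")
```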

The neural networks that underpin machine learning excel at it, Carmichael explains.

Whereas traditional software processes individual steps in sequence, neural networks pass entire loads of complex data between multiple layers of neurons. They begin by training on existing data that operators have labeled as, for example, legitimate or malicious. The network learns from this labeled data to build a picture of what is legitimate and what isn’t.

[Read also: Ultimate guide to AI cybersecurity – benefits, risks, and rewards]

The trained neural network then uses this knowledge to assess new data, in a process known as “inference.” It’s the basis for a variety of machine learning tasks, from image matching in computer vision through to detecting malicious activity on a network, sorting spam from legitimate email, or recognizing a fraudulent financial transaction.
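Here’s a minimal sketch of that train-then-infer pattern, using a small neural network to sort spam from legitimate email. The features, labels, and data are invented for illustration.

```python
# A minimal sketch of training on operator-labeled data, then running
# inference on new, unlabeled data. Illustrative only.
from sklearn.neural_network import MLPClassifier

# Hypothetical features: [num_links, has_unknown_sender, urgency_words]
X_train = [
    [0, 0, 0], [1, 0, 0], [0, 0, 1], [2, 0, 0],   # legitimate
    [8, 1, 4], [6, 1, 3], [9, 1, 5], [7, 1, 2],   # spam
]
y_train = [0, 0, 0, 0, 1, 1, 1, 1]  # labels supplied by operators

# Training: the network builds a picture of legitimate vs. malicious
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Inference: assess new data against that learned picture
new_email = [[7, 1, 4]]
print("spam" if clf.predict(new_email)[0] == 1 else "legitimate")
```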

Threat modeling – how machine learning gets industry-specific

Machine learning not only helps detect threats but also helps organizations understand the particular threats they face, explains James Stanger, chief technology evangelist at CompTIA. A bank running lots of administrative systems, for example, will face specific types of attacks from particular groups who target that industry. Those groups use techniques tailored to exploit weaknesses in a bank’s infrastructure – techniques that differ from, say, the kinds of attacks other groups would deploy against an oil and gas company.

[Using AI for] log collection and analysis leads to a faster response time.

Curtis W. Dukes, executive VP and general manager, security best practices, Center for Internet Security

Just as your employee security training should be industry- and company-specific, so should your threat modeling. There’s no room here for one-size-fits-all.

Threat modeling is the process of building a framework of threats against an organization, which can be used to analyze risk and prioritize cybersecurity investments. That process, which involves analyzing large amounts of threat hunting and other data from different sources, is essential to any cybersecurity program. It’s also slow and expensive.
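As a simple illustration of the prioritization step, here’s a hypothetical sketch that scores modeled threats by likelihood and impact so that investments can be ranked. The threats and values are invented and would in practice come from industry-specific threat intelligence.

```python
# A minimal, hypothetical sketch of risk-ranking modeled threats.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: float  # 0.0-1.0, from threat intelligence for your industry
    impact: float      # 0.0-1.0, from business impact analysis

    @property
    def risk(self) -> float:
        return self.likelihood * self.impact

# Example threats a bank might model (illustrative values only)
threats = [
    Threat("Wire-transfer fraud via phished credentials", 0.7, 0.9),
    Threat("Ransomware on core banking systems", 0.4, 1.0),
    Threat("Defacement of public website", 0.5, 0.2),
]

# Rank by risk score to guide where cybersecurity spend goes first
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:.2f}  {t.name}")
```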

[Listen also: Microsoft’s Sherrod DeGrippo joins our podcast to discuss how to lead a threat intelligence team]

“Over the past couple of decades, we have been able to create relevant and accurate threat and attack models, but the process for building out those models has for the most part been a manual process,” Stanger says. AI’s ability to chew through lots of data and identify patterns makes it a valuable asset here. “If done correctly, this will be a very welcome development,” he adds.

Automation – it’s more than just speed

The Institute for Security and Technology (IST), a think tank that marries technology and policy to meet emerging security challenges, identified AI’s potential for enhancing cybersecurity operations in a report last October. Automating simple tasks such as network traffic analysis and file classification can help free up time for security administrators, it said.

Recent leaps in AI technology bring the potential to dramatically alter the offense-defense balance by drawing on the home field advantage.

The Implications of Artificial Intelligence in Cybersecurity: Shifting the Offense-Defense Balance, Institute for Security and Technology (October 2024)

“[Using AI for] log collection and analysis leads to a faster response time,” agrees Curtis W. Dukes, executive VP and general manager, security best practices, at the Center for Internet Security. As director of information assurance at the National Security Agency, he spent years handling sensitive systems and understands the importance of speedy insights from system telemetry.

In addition to time savings, AI tooling can provide a higher-fidelity understanding of a network environment. “Recent leaps in AI technology bring the potential to dramatically alter the offense-defense balance by drawing on the home field advantage,” the IST report states.

[Read also: Easy-to-create orchestration and automation at scale]

By automating network inventories and audits, and other key tasks like patch management or the rooting out of shadow IT, defenders will gain an understanding of their network from the inside far greater than any attacker can grasp from outside. That kind of knowledge and understanding will only enhance future decisions as threats and threat tech evolve.
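One of those tasks – rooting out shadow IT – can be as simple as diffing what a network scan finds against the approved asset inventory. Here’s a minimal, hypothetical sketch; the data sources and addresses are invented.

```python
# A minimal sketch of automated shadow-IT detection: compare discovered
# hosts against the approved inventory. Illustrative only.
approved_inventory = {"10.0.0.5", "10.0.0.8", "10.0.0.12"}  # e.g., from a CMDB
discovered_hosts = {"10.0.0.5", "10.0.0.8", "10.0.0.12", "10.0.0.77"}  # scan

shadow_it = discovered_hosts - approved_inventory
missing = approved_inventory - discovered_hosts  # possibly offline assets

for host in sorted(shadow_it):
    print(f"Unapproved device on network: {host}")
for host in sorted(missing):
    print(f"Inventoried device not seen in scan: {host}")
```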

Vulnerability management – just one of the many ways generative AI is a game changer

The rise of GenAI promises still more enhancements. This technology, typified by the foundation models that power chatbots like ChatGPT and Claude, depends on large language models (LLMs). These are based on neural networks but use new algorithms designed to process large amounts of text. They’re better able to analyze creative content including text as well as audio, video, and even program source code. They can then produce new content based on this information.

[Read also: What is device vulnerability management?]

LLMs have profound implications for cybersecurity, particularly in terms of vulnerability management. The IST report highlights their potential to find some of the bugs that cybercriminals exploit in computer programs. At the August 2024 DEF CON cybersecurity conference in Las Vegas, semifinalists in DARPA’s AI Cyber Challenge demonstrated that potential: They used LLMs to find 22 vulnerabilities in source code, automatically patching 15 of them. Separately, Google’s Project Zero security team used AI to enhance “fuzzing,” a process that floods a target system with inputs to find one that compromises it. It found a zero-day vulnerability using this method in what it claimed was a world first.
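To illustrate the core idea behind fuzzing, here’s a minimal sketch that throws random inputs at a deliberately buggy parser and records the ones that crash it. Real fuzzers – including the AI-enhanced tooling described above – are far more sophisticated; the target function here is invented for illustration.

```python
# A minimal random fuzzer: generate inputs, record unhandled crashes.
import random

def fragile_parser(data: bytes) -> int:
    # Deliberately buggy example target: divides by a byte that can be zero
    if len(data) < 2:
        raise ValueError("too short")
    return data[0] // data[1]

random.seed(0)
crashes = []
for _ in range(10_000):
    sample = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        fragile_parser(sample)
    except ValueError:
        pass  # handled error, not interesting
    except Exception as exc:  # unhandled crash: a candidate vulnerability
        crashes.append((sample, exc))

print(f"{len(crashes)} crashing inputs found")
```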

SOC successes – AI assistants save time, negate noise, and tell it to you straight

These systems can also save time for tier-one staff in the security operations center (SOC) by creating more sophisticated workflows, says Dominik Penner, principal security engineer at Toronto-based cybersecurity consulting company Mand Consulting.

Most organizations interested in deploying AI within a SOC will use it for triage.

Jennifer Tang, associate for cybersecurity and emerging technologies at IST

“The logging and the monitoring isn’t that bad, but once you need to start tying it to attacks, you need a little bit more analysis,” he says. “There’s so much noise. Sorting through it and finding actionable inputs and insights ends up being the harder part of it.”

Machine learning excels at sifting through data, classifying it, and identifying anomalies. LLMs then provide a more sophisticated way for SOC analysts to query that data, says Jennifer Tang, associate for cybersecurity and emerging technologies at IST and a co-author of the report. “I think most organizations interested in deploying AI within a SOC will use it for triage,” she says. “[LLMs can] provide organizations with assessments of what alerts mean and what to do, using natural language.”
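Here’s a minimal sketch of what LLM-based alert triage could look like, assuming an OpenAI-compatible API. The model name, alert fields, and prompt are placeholders, and a real deployment would add guardrails and human review.

```python
# A minimal sketch of LLM-assisted alert triage. Assumes the openai
# package and an OPENAI_API_KEY in the environment; illustrative only.
from openai import OpenAI

client = OpenAI()

alert = {
    "rule": "Excessive file downloads",        # hypothetical alert fields
    "host": "finance-ws-12",
    "details": "250 files pulled from unknown IP 203.0.113.9 at 03:14",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a SOC triage assistant. Explain what the alert "
                    "likely means, rate severity low/medium/high, and suggest "
                    "next steps in plain language."},
        {"role": "user", "content": str(alert)},
    ],
)
print(response.choices[0].message.content)
```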

That kind of straight talk is especially important in a time-sensitive situation like a cyberattack, when communication is key and actions in multiple departments (from IT to social media) by various employees with a wide range of security expertise all must be coordinated quickly.

These capabilities are leading to the creation of AI assistants tailored for cybersecurity operations. Carmichael highlights their ability to go beyond triage and recommend remediation actions for SOC operators.

Reality check – why it’s still important to keep humans in the loop

The next logical step along this route is something that IST highlighted in its report: autonomous agents that not only recommend remediations for cyber incidents but also automatically execute them.

AI supports less-experienced analysts by providing guidance and recommendations, which helps in upskilling the workforce and improving overall team performance.

ISACA’s Mary Carmichael

Think of a navigator in the passenger seat reaching over and taking the steering wheel – but instead of making the next left-hand turn, it might quarantine a network segment, downgrade an employee’s access permissions in the corporate directory, or update a firewall rule to block a particular set of IP addresses.

That might be a step too far for organizations still testing the water with AI. “What’s always held us back is organizations’ fear of the unknown,” says Dukes. “They don’t want to automate certain processes.”

And that’s okay. Automation will (and should) vary from enterprise to enterprise. Despite prevailing fears, industry observers believe there are at least five key areas in security operations where humans will remain the central asset.

Tang envisages some governance issues as companies consider handing over more control to autonomous AI agents. “I think there is going to be a problem with explainability, compliance, and where accountability lies,” she says. An automated system that takes an action should explain its reasoning, but AI systems are notoriously bad at explaining how they reached a decision.

[Read also: Racing to deploy GenAI? Security starts with good governance]

Most organizations will want a level of human oversight rather than allowing AI to automate all cybersecurity measures. In a report on AI-based automation for endpoint management, Gartner recommends deploying AI that “enables any organization to tune the level of automation according to its risk appetite.”
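A minimal sketch of what tuning automation to risk appetite might look like in practice: low-impact actions run automatically while higher-impact ones queue for human approval. The actions, risk tiers, and thresholds below are hypothetical, not any product’s actual policy engine.

```python
# A minimal, hypothetical policy gate for AI-recommended remediations.
AUTOMATION_LEVEL = 2  # 1 = recommend only, 2 = auto-run low-risk, 3 = full auto

ACTION_RISK = {
    "block_ip": 1,            # low impact: update a firewall rule
    "quarantine_segment": 2,  # medium: isolate part of the network
    "revoke_credentials": 3,  # high: downgrade an employee's access
}

def execute(action: str) -> None:
    print(f"EXECUTING: {action}")  # placeholder for the real remediation

def handle_recommendation(action: str) -> None:
    risk = ACTION_RISK[action]
    if AUTOMATION_LEVEL == 3 or risk < AUTOMATION_LEVEL:
        execute(action)  # within risk appetite: act autonomously
    else:
        print(f"QUEUED FOR HUMAN APPROVAL: {action}")  # human in the loop

for action in ("block_ip", "quarantine_segment", "revoke_credentials"):
    handle_recommendation(action)
```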

While Tang says board members at some companies have been calling for a “SOC in a box” using end-to-end detection and automation capabilities, she doesn’t believe the industry is there yet. Many organizations she spoke with have pledged to always keep a human in the loop, and even mandate feedback mechanisms to constantly verify AI’s findings. Consequently, she sees the future – at least for the time being – as focused on AI-assisted cybersecurity rather than AI-run.

AI assistants that simply recommend rather than execute still promise a return on investment. “AI supports less-experienced analysts by providing guidance and recommendations, which helps in upskilling the workforce and improving overall team performance,” says ISACA’s Carmichael. With a technology as fast-moving and disruptive as AI, businesses might want to take slow, careful steps before they break into a sprint.

Danny Bradbury

Danny Bradbury is a journalist, editor, and filmmaker who writes about the intersection of technology and business. He has won the prestigious BT Information Security Journalism Award, including for Best Cybercrime Feature.
