AI vs. Humans: Why SecOps May (Not) Be the Next Battleground

Experts see five primary areas in security operations where, for the foreseeable future, humans will remain the central asset or be needed to collaborate with those handy – but sometimes untrustworthy – AI tools.

Perspective

Human penetration testers, those ethical hackers hired to simulate cyberattacks and test an enterprise’s cyberdefenses, are increasingly looking over their shoulders at a wave of new AI pen-testing tools, and they’re worried. With good reason.

With no more guidance than an internet address, these AI tools can map and assess the security posture of a company’s computer network and web applications, spot vulnerabilities, and execute commands or even social-engineering attacks to test those weaknesses, faster than any human team, all the while recording every aspect of the test and writing up a report in seconds.

Of course, that’s not the whole story. Those same tools often fail where humans consistently succeed. The rate of false positives (identifying threats that don’t exist) and false negatives (missing actual threats entirely) is declining but still significant, which can be costly for any enterprise. And these tools don’t yet fully understand the context of the tasks they’re performing – which makes it tough to keep AI pen testers operating within ethical and legal boundaries or to prevent them from causing unintended downtime or system damage.

To be sure, AI will improve in the months ahead, and these limitations will likely become less pronounced or fade entirely. In modern security operations (SecOps), AI currently enhances many functions performed by human analysts, such as alert triage and initial alert analysis to spot potential malicious activity. Modern AI systems can analyze increasingly large datasets to identify attack patterns that may slip past human observers, especially those in departments that are overworked and understaffed. Additionally, AI tools can automate essential and time-consuming tasks (log analysis, data correlation, initial incident response), craft threat intelligence reports, and predict potential attacks.
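For a rough sense of what that kind of automation looks like under the hood, here is a minimal Python sketch of alert triage: it scores incoming alerts by correlating them with recent log activity for the same host, so analysts see the riskiest items first. The field names, weights, and thresholds are illustrative assumptions, not any vendor’s pipeline, and real systems would layer machine-learned models on top of this sort of scaffolding.

```python
# Minimal sketch of AI-assisted alert triage: score alerts by correlating
# them with recent log activity, then surface the riskiest ones to analysts.
# Field names, weights, and thresholds are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    rule: str          # e.g., "suspicious-powershell", "port-scan"
    severity: int      # 1 (low) .. 5 (critical), as emitted by the detector

SEVERITY_WEIGHT = 2.0
REPEAT_WEIGHT = 1.5     # repeated hits on the same host raise suspicion

def triage(alerts: list[Alert], recent_log_events: list[dict]) -> list[tuple[float, Alert]]:
    """Return alerts sorted by a simple correlation-based risk score."""
    events_per_host = Counter(e["host"] for e in recent_log_events)
    scored = []
    for a in alerts:
        score = SEVERITY_WEIGHT * a.severity
        score += REPEAT_WEIGHT * events_per_host.get(a.host, 0)
        scored.append((score, a))
    # Highest-risk alerts first, so analysts spend their time where it matters.
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

if __name__ == "__main__":
    alerts = [Alert("web-01", "port-scan", 2), Alert("db-03", "suspicious-powershell", 4)]
    logs = [{"host": "db-03"}, {"host": "db-03"}, {"host": "web-01"}]
    for score, alert in triage(alerts, logs):
        print(f"{score:5.1f}  {alert.host:8s}  {alert.rule}")
```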

Early indicators show AI is a smart investment. According to an IBM report on data breaches from March 2022 to March 2023, organizations that extensively used security AI and automation saved an average of $1.76 million in data breach costs compared with organizations that used neither.

So with AI taking an increasingly central role in security operations, it’s worth asking: Where will humans fit? Fortunately for security professionals, AI and industry experts believe humans will remain in the mix for some time.

“The technology will be very AI-heavy, but the centrality of people will remain,” says Anton Chuvakin, security adviser at the office of the CISO for Google Cloud. While AI may change how security professionals do their day-to-day work, he acknowledges, “the macro picture is the same as always: people, process, and technology.”

Within the realm of security operations – whether your organization works with SecOps professionals in general or has scaled up a specialized security operations center (SOC) where security teams share info and collaborate – experts see five primary areas where humans will remain the central asset or where workers will likely coexist with AI for the foreseeable future:

1. The analysis of threat intelligence

Threat intel teams have long suffered from security-data overload, and the ever-growing volume and evolving nature of today’s cyber threats are only making things worse. Randy Lariar, practice director for security operations center technology at security services provider Optiv, says he sees security teams using AI to “pore through threat research and get the information out of the noise faster.”

The technology will be very AI-heavy, but the centrality of people will remain.

Anton Chuvakin, security adviser at the office of the CISO, Google Cloud

AI can automate certain tedious, time-consuming processes of data analysis; AI-enabled behavioral analytics tools can quickly establish baselines and spot deviations in user or system activity; and natural language processing, which allows AI tools to understand human communication, can help teams scan the dark web for cyber plots and user profiles. Lariar adds that threat intelligence teams also use AI to help craft their first drafts of reports and to help improve their communications.
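To make “establish baselines and spot deviations” a little more concrete, the simplified Python sketch below builds a per-user baseline from historical daily login counts and flags any day that strays more than a few standard deviations from it. The sample data, the metric, and the 3-sigma threshold are assumptions for illustration; commercial behavioral analytics tools use far richer models.

```python
# Minimal behavioral-analytics sketch: baseline each user's daily login count,
# then flag days that deviate sharply from that baseline (a simple z-score test).
# The sample data and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(history: dict[str, list[int]], today: dict[str, int],
                   z_threshold: float = 3.0) -> list[str]:
    """Return users whose activity today deviates sharply from their baseline."""
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue                      # not enough history to form a baseline
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(user, 0)
        if sigma == 0:
            if observed != mu:
                flagged.append(user)      # any change from a perfectly flat baseline
        elif abs(observed - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

history = {"alice": [4, 5, 6, 5, 4, 5], "bob": [2, 3, 2, 2, 3, 2]}
today = {"alice": 5, "bob": 40}           # bob's spike should stand out
print(flag_anomalies(history, today))     # -> ['bob']
```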

[Read also: Faster threat detection? That’s only half the story. Here’s your ultimate guide to AI cybersecurity – the benefits, risks, and rewards]

While AI will deliver heavy-duty data crunching in the future, humans will remain the ultimate interpreters of the data. Lariar and other experts see human analysts remaining in the threat intelligence cycle to provide critical context to complex intelligence and to make strategic decisions based on AI’s threat-intelligence findings. It will also be humans who create and refine the AI models.

2. Threat detection and analysis

To protect internal systems from potential threats, AI systems can relentlessly monitor the large volumes of application and network activity data needed to identify malicious patterns, and they can do so consistently and without fatigue.

[AI can help SecOps teams] pore through threat research and get the information out of the noise faster.

Randy Lariar, practice director for security operations center technology, Optiv

Still, despite such benefits, AI systems can’t be fully trusted to run without human oversight, says Andrew Storms, vp of security at software distribution platform provider Replicated. AI, which remains prone to so-called “hallucinations” (that is, errors) and can be corrupted by faulty or biased training data, must be kept on a short leash, and humans must continue to decide which alerts warrant deeper investigation and perhaps an immediate response.

[Read also: Chief AI officers can help you sort out AI’s pros and cons – see why more orgs are investing in this specialized role]

“You have to trust AI to a point and must have a sense of [its] validity, but it still takes a human touch to confirm AI’s findings and make the ultimate decisions,” says Storms.
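One common way teams put that “trust, but confirm” principle into practice is a human-in-the-loop gate: the AI can propose, or even automatically run, low-impact actions, but anything disruptive waits for an analyst’s sign-off. The sketch below illustrates the pattern in Python; the action names, confidence field, and thresholds are hypothetical, not any particular product’s API.

```python
# Human-in-the-loop sketch: AI-proposed response actions are only executed
# automatically when they are low-impact; anything disruptive waits for an
# analyst's explicit approval. Action names and thresholds are illustrative.
from dataclasses import dataclass

LOW_IMPACT_ACTIONS = {"tag-for-review", "collect-forensics"}
DISRUPTIVE_ACTIONS = {"isolate-host", "disable-account"}

@dataclass
class ProposedAction:
    action: str
    target: str
    ai_confidence: float   # 0.0 .. 1.0, as reported by the detection model

def execute(action: ProposedAction) -> None:
    print(f"EXECUTING {action.action} on {action.target}")

def handle(action: ProposedAction, analyst_approves) -> None:
    if action.action in LOW_IMPACT_ACTIONS and action.ai_confidence >= 0.9:
        execute(action)                                # safe to automate
    elif action.action in DISRUPTIVE_ACTIONS:
        if analyst_approves(action):                   # the human makes the call
            execute(action)
        else:
            print(f"HELD {action.action} on {action.target} pending review")
    else:
        print(f"QUEUED {action.action} on {action.target} for triage")

# Example: a cautious analyst who approves nothing automatically.
handle(ProposedAction("isolate-host", "db-03", 0.95), analyst_approves=lambda a: False)
```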

3. Penetration testing

AI pen testers will look more like human partners than human replacements. AI tools can scan networks, applications, and systems to spot vulnerabilities and security gaps that attackers could exploit. They can also model complex attack scenarios and predict with high precision how attackers may adapt to changes in defenses.
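Stripped of any AI, the rawest building block of that kind of scanning is simply checking which network ports on a target respond at all. The short Python sketch below does only that, against the loopback address of a machine you control; it is meant to illustrate the plumbing these tools automate and layer intelligence on top of, not to serve as a pen-testing tool.

```python
# Minimal port-probe sketch: check which common TCP ports on a host accept
# connections. Only run this against machines you own or are authorized to
# test; here it targets the local loopback address.
import socket

COMMON_PORTS = [22, 80, 443, 3306, 8080]

def probe(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

print(probe("127.0.0.1", COMMON_PORTS))
```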

As helpful as that will be, Storms contends that AI pen testers will remain extensions of human testers, who will now have more time to enhance those findings with creative remedies and compensating controls. And humans will still be essential when it comes to communicating results to executives.

4. Regulatory compliance reporting

Just as AI can scan and assess enterprise networks and applications for signs of threat actors, it can determine an organization’s regulatory-compliance status and continuously monitor for potential situations where applications, data, or access have fallen outside the tenets of new and evolving regulations.
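As a simplified picture of what continuous compliance monitoring can look like, the Python sketch below checks a handful of system configurations against a few policy rules and reports anything that has drifted out of compliance. The rules and configuration fields are hypothetical stand-ins, not a mapping of any specific regulation.

```python
# Minimal compliance-drift sketch: compare current system configurations
# against simple policy rules and report violations. The rules and config
# fields below are hypothetical examples, not any real regulatory mapping.

POLICY_RULES = {
    "encryption_at_rest": lambda cfg: cfg.get("disk_encrypted") is True,
    "mfa_enforced":       lambda cfg: cfg.get("mfa") == "required",
    "log_retention_days": lambda cfg: cfg.get("log_retention_days", 0) >= 90,
}

def compliance_report(systems: dict[str, dict]) -> dict[str, list[str]]:
    """Return, for each system, the list of policy rules it currently violates."""
    report = {}
    for name, cfg in systems.items():
        violations = [rule for rule, check in POLICY_RULES.items() if not check(cfg)]
        if violations:
            report[name] = violations
    return report

systems = {
    "hr-db":  {"disk_encrypted": True,  "mfa": "required", "log_retention_days": 365},
    "web-01": {"disk_encrypted": False, "mfa": "optional", "log_retention_days": 30},
}
print(compliance_report(systems))
# -> {'web-01': ['encryption_at_rest', 'mfa_enforced', 'log_retention_days']}
```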

You have to trust AI to a point. But it still takes a human touch to confirm AI’s findings and make the ultimate decisions.

Andrew Storms, vp of security, Replicated

For example, by analyzing corporate practices, supply chain activity, and environmental impact, AI can help a multinational corporation “strategize effectively to promote sustainability while reducing compliance risks,” writes Mathura Prasad, CISSP, in a blog post for the cybersecurity training nonprofit ISC2. AI-fueled chatbots like ChatGPT, given the appropriate prompts, can also quickly generate risk statements and policy drafts, summarize laws and regulations in plain English, or translate that material into other languages.

[Read also: Are you up on the world’s first AI law? We deep-dive into the EU’s AI Act]

Humans are still much needed and will be for some time, experts note. Security operations and regulatory compliance team members will continue to provide comprehensive reports highlighting regulatory posture trends over time and perhaps pointing out areas where compensating controls can be enhanced. These reports require the kind of nuance and forward-thinking assessment that AI cannot (and may never be able to) replicate.

5. SOC management

Security managers who augment their SOCs with AI will find their organizations able to handle larger volumes of security data, reduce alert fatigue among human analysts, and maintain a consistently strong security posture.

And yet the overall management of the SOC is unlikely to be taken over by AI any time soon, as the human creative and strategic capabilities vital to any agile cybersecurity defense program can’t be replaced. “I think the trend will be to keep people in the loop,” says Lariar. “Some of the low-level automation driven by AI decision-making will increasingly be trusted, but even then, top-level security teams will be scrutinizing the AI decision-making very carefully. I haven’t heard of a completely automated SOC,” he says.

[Read also: In this CISO success story, a real-life Marvel “superhero” tells how he uses AI to fight cybercrime]

A fully automated SOC or security team is unlikely for some time, if ever.

“If you outsource SOC operations tasks to machines, where do the machines come from?” asks Chuvakin. “Who built the machines? Today, to a considerable extent, humans build machines.”

And even if you have a mostly autonomous SOC, he adds, who would create and manage the AI, connect it to the environment in the right way, and have it assess and manage that environment correctly? “That’s going to be the humans.”

George V. Hulme

George V. Hulme is an information security and business technology writer. He is a former senior editor at InformationWeek magazine, where he covered the IT security and homeland security beats. His work has appeared in CSO Online, Computerworld and Network Computing.
