
How AI Is Redefining Data Loss Prevention (DLP)

Traditional DLP systems designed to prevent data leakage put a big burden on security teams and tend to produce excessive false positives. AI can lighten that load, making it the right choice for many enterprises – as long as security leaders stay on top of the areas where AI can go wrong.

In the digital age, data spills faster than coffee on a Monday morning.

An employee somehow accesses confidential HR data, or accidentally emails a financial report to an external party instead of the CFO, or uploads a customer database to a thumb drive. Or, maybe worse, it’s an outside job: Company data gets exposed by a sophisticated cyberattacker.

From inside or out, organizations’ sensitive information is constantly at risk, especially as generative AI (GenAI) permeates business functions. A leak typically sets off the familiar cycle of breach, response, and recovery – followed by more investment in security, with those costs often passed along to consumers.

The traditional data loss prevention (DLP) systems designed to block such access, sharing, or leaking of sensitive data put the burden on security teams to manually create the rules and policies that dictate what’s allowed and what isn’t. Those teams must then tune and update those policies to keep up with evolving threats, new data types, and changing business needs. This rule-based approach often leads to high rates of false positives, alert fatigue, missed contextual threats, and a heavy administrative burden.
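
To see why the rule-based approach is so brittle, consider a minimal sketch of a hand-written DLP policy (the rules and names here are illustrative, not drawn from any particular product):

```python
import re

# Illustrative hand-written DLP rules: each pattern must be authored,
# tuned, and maintained by a security team.
RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security numbers
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # loose card-number match
    "keyword": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def scan_message(body: str) -> list[str]:
    """Return the names of every rule the message trips."""
    return [name for name, pattern in RULES.items() if pattern.search(body)]

# The keyword rule fires on an approved internal memo just as readily
# as on a genuine leak -- the rule has no notion of context.
print(scan_message("This memo is confidential but approved for all staff."))
# -> ['keyword']
```

Every new data type or business exception means another pattern to write and tune by hand, which is exactly the maintenance load described above.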

Here’s where AI can transform how businesses safeguard their most valuable assets. Through machine learning, natural language processing, and behavioral analytics, AI-powered DLP solutions can be more adaptive and intelligent, able to respond to data threats more effectively.

Trouble is, such systems are only as good as their training. If the AI system is trained on bad data, the results can be disastrous. “It’s like the analogy of a person who goes on a trip and is off by one degree – it doesn’t seem like a big deal one mile out, but when you’re a thousand miles out, it’s a huge deal,” says Eric Vanderburg, president of cybersecurity consulting firm Nexus Cyber, in an exclusive interview with Focal Point.

Today, the stakes are high: According to a report by IBM and Ponemon Institute, data breach costs in 2024 reached an all-time high of $4.88 million, up 10 percent from the prior year. But the report found that organizations that apply security AI and automation lower the cost of breaches by an average of $2.2 million.

Automating DLP has a number of benefits for organizations, though there are still limitations and challenges. Here’s what’s important to know.

The benefits of AI-powered DLP

While AI and GenAI technologies are fueling data breaches and compelling organizations to reassess security measures, they’re also making security smarter, said Kevin Skapinetz, a veteran cybersecurity strategist and former general manager of security software at IBM, when the report was released last year.

“To get ahead, businesses should invest in new AI-driven defenses and develop the skills needed to address the emerging risks and opportunities presented by generative AI,” said Skapinetz, now general partner at the cyber-focused venture capital firm TechOperators.

For one thing, AI streamlines the process of creating the rules that traditional DLP might fumble. “Essentially what AI is trying to do is what the humans were doing before: looking through raw data and creating the policies around it,” he noted. “AI does that very effectively once you go through a training period to help it understand.”

[Read also: What is data loss prevention? A simplified guide to the concept and its benefits]

In addition, AI enables DLP systems to better understand context and behavior. A traditional DLP might block every email containing the word “confidential,” even if the document is approved, while an AI-powered DLP can recognize whether the document is being shared securely within the company.

That contextual awareness reduces false positives.
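
As a rough illustration of the difference (not any vendor’s actual model), a context-aware check weighs the recipient and channel alongside the content before blocking; the domain and logic below are invented for the example:

```python
def should_block(body: str, recipient_domain: str, encrypted: bool) -> bool:
    """Toy context-aware policy: content alone does not decide the outcome.

    A real AI-powered DLP would score context with a trained model;
    this hard-coded version just shows the shape of the decision.
    """
    sensitive = "confidential" in body.lower()
    internal = recipient_domain == "example.com"  # hypothetical company domain
    # Block only when sensitive content leaves the company unprotected.
    return sensitive and not (internal and encrypted)

print(should_block("Confidential roadmap", "example.com", encrypted=True))  # False: safe internal share
print(should_block("Confidential roadmap", "gmail.com", encrypted=False))  # True: risky external send
```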

“If a human receives a lot of alerts and they’ve all been false positives, they might just whitelist the next one that comes through without truly looking at it,” says Vanderburg. “That’s what happens when you’re fatigued. But AIs don’t get fatigued.”

AI can also analyze factors beyond content, such as user behavior and connection patterns, to identify potential data loss incidents. If a user typically connects from a home office in Oklahoma a few days a week, for example, the AI recognizes this pattern. If the user suddenly begins connecting from Liberia, the AI will flag it, Vanderburg says.
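
A minimal sketch of that behavioral-baseline idea, assuming login locations have already been geocoded into region codes (the history window and threshold are arbitrary choices for illustration):

```python
from collections import Counter

def is_anomalous_location(history: list[str], current: str, min_seen: int = 3) -> bool:
    """Flag a login location this user has rarely or never connected from.

    history holds the user's recent login locations; a production system
    would also model time of day, device, and network characteristics.
    """
    counts = Counter(history)
    return counts[current] < min_seen

logins = ["US-OK"] * 40 + ["US-TX"] * 2  # mostly Oklahoma, occasionally Texas
print(is_anomalous_location(logins, "US-OK"))  # False: matches the baseline
print(is_anomalous_location(logins, "LR"))     # True: Liberia is out of pattern
```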

There are compliance benefits, too. Because AIs log every decision, this data provides a clear rationale for security actions – something human-driven processes often lack, Vanderburg says. It’s also valuable in reacting to a data breach, enabling organizations to defend against regulatory fines by showing due diligence and that reasonable precautions were taken, he adds.
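
The audit-trail point can be made concrete with a small sketch in which every verdict is written out with the evidence behind it (the field names and log path are invented for the example):

```python
import datetime
import json

def log_decision(user: str, action: str, verdict: str, reasons: list[str]) -> None:
    """Append a structured record of each DLP decision for later audit."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "verdict": verdict,
        "reasons": reasons,  # the rationale regulators may ask to see
    }
    with open("dlp_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("jdoe", "email_send", "blocked",
             ["sensitive content detected", "external unencrypted recipient"])
```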

[Read also: Learn the essential strategies of compliance management and how it boosts the bottom line]

All of these factors reduce the operational burden for security teams, Vanderburg says. “AI can increase the security posture of the organization overall, and in some cases allows the organization to redirect resources,” he says. “Someone who was responsible for looking at the DLP all the time is able to do this part of their job far more efficiently.”

Enhanced DLP requires smart AI training

While there are a number of benefits to AI-powered DLP, there are important considerations for security teams, says Candy Alexander, CISO and cyber risk practice lead at technology advisory company NeuEon, speaking exclusively with Focal Point. First is the understanding that these solutions must be trained.

“AI is like an intern who has a feel for what they’re supposed to do but doesn’t know exactly,” she says. “AIs are only as intelligent as you train them to be – and that requires good data governance.”

To effectively train the AI, businesses need robust network visibility, which will provide a clear understanding of what data exists, where it resides, and how to protect it, Alexander says. They should be able to identify critical business data, operational data, and regulated data; assess which data poses the greatest security and compliance risks if exposed; and classify the data appropriately so the DLP can accurately detect, track, and protect the sensitive information.
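
One way to picture that classification step is a simple taxonomy that maps data types to sensitivity tiers and handling rules; the categories below are illustrative, not a compliance standard:

```python
# Hypothetical classification taxonomy: each data type gets a tier
# that drives how the DLP detects, tracks, and protects it.
CLASSIFICATION = {
    "customer_pii":      {"tier": "regulated",   "encrypt": True,  "external_share": False},
    "financial_reports": {"tier": "critical",    "encrypt": True,  "external_share": False},
    "support_tickets":   {"tier": "operational", "encrypt": False, "external_share": True},
}

def handling_policy(data_type: str) -> dict:
    """Look up how a given data type must be handled; unknown types
    default to the strictest treatment."""
    return CLASSIFICATION.get(
        data_type,
        {"tier": "unclassified", "encrypt": True, "external_share": False},
    )

print(handling_policy("customer_pii"))  # regulated data: encrypted, never shared externally
```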

[Read also: Drilling down on data privacy – 5 key charts from ISACA’s 2025 report]

In an era when enterprise leaders contemplating AI are feeling serious levels of FOMO, it’s tough to avoid the urge to adopt AI quickly and clean up whatever messes arise later. The need to “proceed with caution” will inevitably fall to security chiefs, who must assess (and effectively communicate to company executives) the long-term consequences of AI.

“With AI, if your training is bad in the beginning, you might not notice it much, but 12 months down the road, suddenly it’s really skewed and awful,” says Vanderburg.

That’s why it’s also crucial to consistently update and validate these systems, he adds; you can’t set it and forget it.

A DLP prerequisite – keeping humans in the loop

Concerns about AI replacing human workers permeate pretty much every conversation about the technology. And while AI solutions are highly advanced, in the realm of DLP, at least, human oversight remains a key requirement.

“Organizations still need a human in the loop,” says Alexander. “Going back to the intern example, is the intern trustworthy? Have they had the right training to carry forth? You don’t automatically give interns admin privileges to all your systems.”

Humans are still needed to vet certain anomalies and false positives, override errors, and adjust certain policies, she says.
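
A minimal sketch of that human-in-the-loop gate, where only high-confidence verdicts are enforced automatically and anything ambiguous is queued for an analyst (the threshold is an arbitrary choice for illustration):

```python
def route_alert(confidence: float, auto_threshold: float = 0.95) -> str:
    """Enforce automatically only when the model is very sure;
    otherwise hand the alert to a human analyst."""
    return "auto_block" if confidence >= auto_threshold else "human_review"

print(route_alert(0.99))  # auto_block: high-confidence detection
print(route_alert(0.70))  # human_review: ambiguous, a person vets it
```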

[Read also: AI vs. humans – why SecOps may (not) be the next battleground]

And while AI is helping organizations boost efficiency in preventing data loss, it also presents new challenges as attackers leverage the same technology to craft more sophisticated threats, Vanderburg says. “It’s going to be a bit of a cat-and-mouse game: We’re using AI, the attackers are using AI,” he says. “I could see something where attackers are trying to put garbage into the AI systems and introduce fake AI alerts to create fatigue. If you don’t have enough human eyes on this, you could get in trouble.”

For organizations exploring AI-powered DLP solutions, the traditional advice stands, Alexander says. “You need to understand what your requirements are: What do you want it to do? Do you have the prerequisites in place? Does your business understand the data?” she says. “It’s all about people, processes, and technology. The technology is really amazing, but where it can go wrong is in implementation and support-level investment.”

Kristin Burnham

Kristin Burnham is a freelance journalist covering IT, business technology, and leadership.
