
Employees Are Embracing ‘Shadow AI’ – and Putting Company Data at Risk

Major companies are trying to ban employee use of outside AI tools, but there are other ways to prevent the potential data leakage – and financial or legal fallout – they can cause. Here’s how to manage the threat.

Perspective

Alarm bells are sounding in IT security departments everywhere, and this time the culprit isn’t a hacker or some sophisticated malware – it’s something much more troubling: shadow AI.

If you’ve heard the term “shadow IT,” where employees sidestep official channels to use their own devices or software, then you’re probably familiar with the concept of shadow AI. The term refers to employees’ unsanctioned use of generative AI (GenAI) tools – like ChatGPT, Google’s Gemini, and other large language models (LLMs) – without IT approval. And it’s a growing threat to company data.

In fact, Gartner predicts that by 2027, 75% of employees will have acquired, modified, or created technology outside of IT’s visibility, up from 41% in 2022. Meanwhile, more than half (55%) of global employees surveyed in 2023 by Salesforce admitted to using unapproved GenAI tools at work. That’s a lot of private or sensitive corporate code, documents, and presentations being developed and fed into models outside corporate firewalls.


If it reminds you of those heady days when employees first started ignoring company policies and using their personal devices to connect to corporate networks, it should. That bring-your-own-device (BYOD) trend raised similar questions for IT staff: Do we ban this tech altogether and provide corporate-vetted alternatives, or do we figure out ways to accommodate it safely?

Although most companies quickly realized they couldn’t stop the fast-moving BYOD train, history tends to repeat itself: Accenture, Amazon, Apple, BofA, Calix, Citigroup, Goldman Sachs, Microsoft, Northrop Grumman, Samsung, Spotify, Verizon, and Wells Fargo are all reportedly trying to ban employee use of outside AI tools like ChatGPT. Two-thirds of the top 20 pharmaceutical companies and many public schools are also doing so.

The growing problem with shadow AI

Why the draconian measures? Because as GenAI tools seize the zeitgeist, employers worry about their private data being shared with external AI systems that could be corrupted or breached, creating competitive, monetary, or legal risks.

It’s not malicious, but it is dangerous. And these aren’t isolated events.

Bernard Marr, technology futurist, author, and consultant

Imagine a marketing team, for example, using AI-powered social media analytics to decode customer behavior, optimize ad campaigns, and predict the next big trend. It’s efficient. But without IT’s approval, such tools could open the door to sensitive customer data or confidential company information being exposed, potentially leading to data breaches and serious violations of regulations like HIPAA or GDPR.

“I’ve seen several alarming incidents among my clients recently where employees inadvertently shared sensitive information with public AI systems, from product roadmaps to financial data,” says Bernard Marr, a technology futurist, author, and consultant. “It’s not malicious, but it is dangerous. And these aren’t isolated events; they are happening more frequently than companies realize or want to admit.”

Tim Morris, chief security adviser for Tanium in the Americas, warns of another, lesser-known shadow AI trend: departments that need GenAI tools for specific tasks aren’t getting what they need from existing options, so they build their own using commercially available LLMs. The trouble with that approach: IT hasn’t evaluated the models for internal use.

“Nobody understands where the model came from, what’s in it, what it was intended to do, and what it is doing in your enterprise,” he says. “It’s impossible to know how secure it is and whether it’s opening up your data – and your organization – to outside threats and exposures.”

[Read also: 4 critical leadership priorities for CISOs in the AI era]

Marr and Morris understand the corporate impulse to shut down such problems by banning outside chatbots. Marr says that building and providing employees with secure alternatives can be a workable solution – as long as they approximate the capabilities of ChatGPT, Gemini, and other tools.

“In-house solutions can work if they’re robust enough,” says Marr. “The key is to match the power and user-friendliness of consumer AI.”

But if companies can’t do that, he says, employees will find ways to use public tools, regardless of company policies.

1 surprising fact about shadow AI – and 4 steps to combat it

The reason behind such rule-breaking may surprise some enterprise leaders.

Your employees using shadow [AI] really have good intentions – more often than not, they’re just trying to figure out how to do their jobs better and faster.

Candy Alexander, CISO and cyber practice lead, NeuEon Inc.

“Your employees using shadow [AI] really have good intentions – more often than not, they’re just trying to figure out how to do their jobs better and faster,” says Candy Alexander, CISO and cyber practice lead at NeuEon Inc. Speaking on a recent episode of Focal Point’s companion podcast, Let’s Converge, Alexander explained the urge to wander outside the bounds of corporate-approved software: “A lot of users find that requesting use of software can be a long and difficult journey to go through in corporate or enterprise America. So they go out and they try free versions of fill-in-the-blank.”

[Listen also: Candy Alexander joins our podcast to talk shadow AI and IT – and smart business strategy – in the first of a two-part series]

Given that, trying to control employees’ AI tools is like herding cats: the technology is simply too ubiquitous and too helpful to keep out of their hands. In fact, 40% of workers admit to using banned GenAI tools at work, the Salesforce survey found.

“Banning ChatGPT outright is likely to backfire,” Marr says. “Employees will find workarounds, and companies risk falling behind on crucial AI capabilities.”

So, if companies accept that they can’t ban third-party GenAI apps forever, how can they get a handle on shadow AI and limit its risks? Here are four strategies to consider:

1. Educate employees about the risks of shadow AI

The first line of defense is education. Many employees don’t understand that public AI tools might expose confidential data: 70% of workers globally told Salesforce they have never received or completed training on using generative AI safely or ethically. Even the professionals – IT security teams – admit this is new territory for them, with 65% of respondents in a recent Splunk survey saying they lack education on GenAI.

Experts like Morris recommend that companies educate teams about the risks of exposing proprietary company information through external LLMs and advise them on how to prevent that from happening. That guidance should cover the basics: how AI chatbots store and process data, what is and is not OK to share with these tools, which tools the organization has authorized, and how to report unauthorized use.

“Companies need to create comprehensive training programs to teach all of their employees about the risks of shadow AI, similar to the cybersecurity awareness training they already received,” says Morris.

2. Develop and enforce AI policies

Sixty percent of digital trust professionals worry that bad actors will exploit GenAI, yet only 15% of organizations have AI policies in place to do anything about it, according to an ISACA pulse poll.

You need to strike a balance between innovation and control, or employees will just go around your restrictions.

Tim Morris, chief security adviser for the Americas, Tanium

“You need to strike a balance between innovation and control, or employees will just go around your restrictions like they did with BYOD,” says Morris.

Specifically, Morris recommends organizations establish AI governance frameworks that set clear standards for using AI tools. Delaying this could push employees to develop their own AI projects, bypassing security protocols, he says. The key is balancing innovation with control, allowing for flexibility within safe boundaries.

[Read also: Explore the essentials of digital experience monitoring, and its role in bolstering employee experience (and reducing rule-breaking)]

One caveat, Morris adds: Every AI policy must include a process for handling exceptions. Employees should be informed of the policy, its reasons, and how to request exceptions or additional tools. He says that clear communication prevents frustration and limits the chances of employees seeking unauthorized AI tools independently.

3. Monitor AI usage with endpoint security

Monitoring shadow AI use is no longer optional – it’s essential, because even if a company offers capable GenAI tools, some dissatisfied employee will inevitably go around them and use, or build, a variant of their own.

Endpoint security solutions, like Tanium (which publishes this magazine), provide visibility into unsanctioned AI usage by detecting LLMs and related scripts on employee devices. These tools can help IT departments track unauthorized downloads, flag suspicious activity, and ensure compliance with company policies. Network-level controls, such as proxy content blocking, can also prevent access to unapproved AI tools.
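To make the idea concrete, here is a minimal, illustrative Python sketch of the kind of signal such tooling looks for – local LLM runtimes running on a device and proxy-log requests to public chatbot domains. The process names, domain list, and log path below are hypothetical placeholders, not Tanium functionality or any particular vendor’s detection logic.

# Illustrative sketch only: flag possible unsanctioned GenAI use on one endpoint.
# Process names, domains, and the log path are assumed placeholders.

import re
from pathlib import Path

import psutil  # third-party process-inspection library (pip install psutil)

LOCAL_LLM_PROCESSES = {"ollama", "lmstudio", "llama"}           # assumed indicators
UNAPPROVED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
PROXY_LOG = Path("/var/log/proxy/access.log")                   # assumed log location

def find_local_llm_processes() -> list[str]:
    """Return names of running processes that match known local-LLM runtimes."""
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info.get("name") or "").lower()
        if any(indicator in name for indicator in LOCAL_LLM_PROCESSES):
            hits.append(name)
    return hits

def find_unapproved_ai_traffic(log_path: Path) -> list[str]:
    """Return proxy-log lines that mention domains on the unapproved list."""
    if not log_path.exists():
        return []
    pattern = re.compile("|".join(re.escape(d) for d in UNAPPROVED_AI_DOMAINS))
    return [line.strip() for line in log_path.read_text().splitlines() if pattern.search(line)]

if __name__ == "__main__":
    for name in find_local_llm_processes():
        print(f"[shadow AI?] local LLM runtime running: {name}")
    for line in find_unapproved_ai_traffic(PROXY_LOG):
        print(f"[shadow AI?] unapproved AI domain in proxy log: {line}")

In practice, an endpoint management platform would push checks like this across an entire fleet and report findings centrally, and network controls would block the offending domains rather than merely log them – but the underlying signals are the same.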

4. Implement regular audits and reviews

The AI landscape is evolving too quickly for a “set it and forget it” approach. IT departments must continuously audit and review AI use across the organization. This includes monitoring compliance with AI policies, analyzing emerging threats, and updating security protocols as new tools and use cases appear.

[Read also: Chief AI officers are hard to find – here’s where to find yours]

AI is moving at breakneck speed, and the security risks are multiplying just as quickly. Continuous audits and risk assessments are crucial to staying ahead of potential breaches.

The path forward

Shadow AI, like shadow IT and BYOD before it, is a natural consequence of the rapid consumerization of technology. Employees want to use AI because it makes them more productive, and IT departments can’t ignore that productivity boost. However, without the proper governance, monitoring, and security controls, shadow AI could become a ticking time bomb for companies worldwide.

“The solution lies not in banning these tools but in establishing clear policies, educating employees, and deploying secure alternatives that meet the business’s needs,” says Morris. “Like all technology, shadow AI is both a challenge and an opportunity. The companies that can strike the right balance between innovation and security will thrive in the AI-powered future.”

Wendy Lowder

Wendy Lowder is a freelance writer based in Southern California. When she’s not reporting on hot topics in business and technology, she writes songs about life, love, and growing up country.
