
Ep. 19: Meet Shadow AI, the Rising New Threat

Oct 28, 2024 | 19 min 26 sec

NeuEon CISO Candy Alexander urges businesses to warn employees about the risks of outside AI tools. Join us for the second of our two-part series.

Summary

A lot of organizations think they don’t use AI, but of course their employees are experimenting with all sorts of new AI tools, as curiosity about AI grips the culture. What those workers don’t know is how they may be exposing company data. It’s happening fast, and any business that doesn’t have an AI strategy needs to set parameters and guardrails now.

The key is to inform employees of the risk and learn from them how these tools help get the job done, so your guardrails protect the business but don’t inhibit its growth.

Host: Stephanie Aceves, senior director of product management, Tanium
Guest: Candy Alexander, CISO and cyber practice lead, NeuEon Inc.

Show notes

For more info on both shadow AI and shadow IT, check out our articles in Focal Point, Tanium’s award-winning online cyber news magazine, and these other useful resources.

Transcript

The following interview has been edited for clarity.

I heard a commercial the other day on some YouTube thing I was watching, and they said, “Oh, with the use of AI, which is smarter than humans…” and I just laughed… Marketing is grasping onto this for consumer products, saying AI is the best thing ever. So that’s what our end-user community is hearing.

Candy Alexander, CISO and cyber practice lead, NeuEon Inc.

Stephanie Aceves: Hi, my name is Stephanie Aceves. I’m a senior director of product management at Tanium, and today on Let’s Converge, we’re talking shadow IT and shadow AI. Welcome to the second of our two-part series looking at this increasingly dangerous problem for enterprises.

In part one, we covered some of the project management tools your employees may be using that you’re unaware of – and what may be a surprising fact to some, which is that your employees using shadow IT really have good intentions. More often than not, they’re just trying to figure out how to do their jobs better and faster. And they don’t realize that by using unapproved tech, they may be putting the company at risk.

Joining me today to talk about this issue is Candy Alexander, chief information security officer and cyber practice lead at NeuEon Inc., a leading business-management consulting firm. In addition to developing and managing corporate security programs, Candy has been elected – twice – as international president of the Information Systems Security Association, and she’s been a longtime director on their international board.

OK, let’s get back to our discussion.

There’s a new player entering the space, and this is shadow AI.

Alexander: It’s truly a revolutionary era that we’re coming into with AI, and it can go multiple ways. I just hope and I have confidence it will go well, but I think there’s going to be some scary bumps in the night.

Aceves: At the companies we work with, we talk to executives about their concerns with the… resurgence of AI, specifically with the advent of LLMs [large language models]. And when we talk with them, the real concern is around data leakage, which is similar to the problem we’ve defined in typical shadow IT. As you dig in, you come to understand that with AI, especially with LLMs, the risk is often that you’re training the model on what could be sensitive information. Can you share a little bit more, specifically through the lens of AI: How is that affecting – or how do you expect it will affect – businesses as we continue to evolve and adapt and leverage this new kind of technology?

Alexander: It’s funny because this is an interesting introduction of “new” technologies, but it’s not really new. I mean, let’s face it, AI has been around forever.

Aceves: Absolutely.

Alexander: But of course, now that it’s become mainstream, everybody’s been using AI, and it’s become more significant with the… availability of ChatGPT. So a lot of organizations will say, well, we don’t use AI. And of course you just kind of snicker, because they certainly do. We have employees who will use, for example, ChatGPT or… Google Bard – or Gemini now – to do things like spruce up their writing or résumés or marketing, whatever. So it’s really difficult to get a grasp on that usage. The other end of the spectrum is the use of some type of AI, generative AI, for code completion, right?

I just spoke this week about the software supply chain. And this is, in my opinion, one of the ways we’re going to be able to identify the use of AI in development and then vet it out. With software supply chain tooling, we’re going to be able to identify what types of AI you’re using, explore explainable AI, which everybody’s now talking about, and then put parameters around its use… and constantly evaluate that use. And that’s important. But I want to go back to the other side, where you have the organizations that say, well, we really don’t have an AI strategy. And my advice to them is: You’d better hurry up and create one now and set those parameters.

And it needs to start with something as simple as your acceptable-use policy. Put in those parameters, those guardrails, that directional control as to what is acceptable and what isn’t. And… keep in mind that generative AI is just that: It’s generatively learning, increasing its knowledge and scope based on the information it’s given. So think about that: Do you really want your corporate information as part of that data lake or data pool? Absolutely not. So you need to think about the use case. And just as with the software supply chain, [I suggest you] map out the components, how they interact with each other, and when they come into play. It’s the same with an AI use case for end users, whether it’s Microsoft Copilot or some other AI tool. You need to understand how it’s going to be used, and decide whether you want to use an open-source AI or a private AI for your organization, so that the queries going into it stay within the confines of your organization.
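
[Editor’s note: As a rough illustration of the “private AI” option Alexander describes, here is a minimal sketch of a client that sends prompts only to an internally hosted model, so queries never leave the corporate network. The endpoint URL, token variable, and response shape are hypothetical, not from any specific product.]

```python
# Minimal sketch: keep AI queries inside the organization by pointing
# clients at a privately hosted model rather than a public service.
# The endpoint URL, token variable, and response field are hypothetical.
import os
import requests

INTERNAL_AI_URL = "https://ai.internal.example.com/v1/chat"  # hypothetical private endpoint

def ask_internal_ai(prompt: str) -> str:
    """Send a prompt to the company-hosted model so it never leaves the network."""
    resp = requests.post(
        INTERNAL_AI_URL,
        headers={"Authorization": f"Bearer {os.environ.get('INTERNAL_AI_TOKEN', '')}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]  # response shape assumed for illustration

print(ask_internal_ai("Summarize this quarter's release notes."))
```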

Aceves: I love that. And you know, for some of the leaders I’ve spoken to, the concern around AI, specifically generative AI and these GPT models, is really that a lot of them are now embedded in technologies we didn’t [realize were using them] – like, hey, my search engine is now using some type of GPT. In those situations, not even the user is always aware. And so corporate policy becomes more difficult, because a user isn’t aware they’re doing something that is inherently risky to the organization.

We hear that a ton of organizations are trying to move those models in-house, so they have rights and ownership of the data and of any learning that happens on top of those models. But how do they enforce controls – whether that’s redirecting usage to an internally approved model or just preventing these tools from being used in ways that are a little more masked? And not for nefarious reasons – every business we talk to is talking about the AI it’s using. And I love that you called it out.

AI has existed for a long time. Most technologies use some type of machine learning, or they’re taking large data sets and drawing conclusions and intelligence from them. But it was really the advent of GPT being available to consumers that caused this massive explosion. So with the knowledge that AI is lurking in every corner, how do businesses really start to mitigate that risk of data leakage? I would love to say there’s a list of all the websites in the world that will use some type of LLM and train their models on your data – but how do we get ahead of that?

Alexander: Wow, that’s huge. Because it is everywhere, and in my humble opinion, [the solutions] will have to be multifaceted. And we are seeing that. We’re seeing legislation come out of different countries around the world to put parameters on it from a regulatory perspective. Then it comes down to a technology perspective. So we start with government, then it comes down to organizational use, and I think we’re still waiting to see further development in how to identify that use through technology.

An analogy is cloud computing: How do you figure out where all your instances of data are in the cloud? Well, there are only a couple of players in that space right now, and a really good one, in my humble opinion, will help you identify where your data lives in the cloud. We’re going to have to have that same type of technology for AI use, sitting on your proverbial perimeter, whatever that is. I don’t know if that’s going to be at the desktop or wherever, but that type of filtering and sensing.
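
[Editor’s note: As an illustration of the kind of “filtering and sensing” Alexander imagines, here is a minimal sketch of an egress check that flags traffic to known public AI services and scans outbound text for sensitive-looking patterns. The domain list and regexes are illustrative placeholders, not a complete DLP solution.]

```python
# Minimal sketch of an egress "filter and sensor" for AI traffic:
# flag requests to known public AI tools, and scan outbound text for
# patterns that look like sensitive data. Domains and regexes are illustrative.
import re

PUBLIC_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}  # illustrative list
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like number
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # credential-like string
]

def should_block(dest_host: str, payload: str) -> bool:
    """Return True if the request targets a public AI tool or carries sensitive-looking data."""
    if dest_host in PUBLIC_AI_DOMAINS:
        return True
    return any(p.search(payload) for p in SENSITIVE_PATTERNS)

print(should_block("chat.openai.com", "Rewrite our Q3 revenue memo"))  # True: public AI domain
print(should_block("intranet.example.com", "Lunch menu for Friday"))   # False: nothing flagged
```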

And then the third component – and I hate to say this, but it does come back to that “people, process, and technology” model – is making your users aware of what’s going on. Why is AI potentially dangerous? They’re just looking at what they see, even in TV commercials… I heard a commercial the other day on some YouTube I was watching, and it said, “Oh, with the use of AI, which is smarter than humans…,” and I just laughed.

Aceves: Oh my gosh.

Alexander: [Consumer product] marketing is grasping onto this, [saying] AI is the best thing ever. So that’s what our end-user community is hearing, and they’re seeing it as, “Isn’t it fun to play with Gemini or ChatGPT, or even get on Facebook and take my image and make me look like a model?”

I mean, they’re looking at AI as fantastic. They’re not seeing the inherent dangers. They’re not hearing the horror stories we’re hearing – like the case in Europe, where a ChatGPT-style chatbot interacting with a depressed individual pushed them to the point where they took their own life. So they’re not understanding the true risks inherent in artificial intelligence. Of course, vendors have taken measures to rectify the bias and the hallucinations happening with AI, but the regular Joe or Jane on the street doesn’t understand these risks.

That’s where we have to come in as security professionals and advise the business and our user community that with the good comes the bad – keeping it basic. Advise them why we want to stick with a corporate AI and not use some of those external tools. Let them know about the unintentional leakage of corporate data. Make them aware of the bias and hallucinations you can get, and of the integrity of the data. There’s a whole bunch of risk inherent in AI use today. It’s like the Wild West: It’s good, but it’s dangerous.

Aceves: And I think, too – my role now is in product management, and I strongly feel that vendors have a responsibility. While we’re building products our customers need, and AI is absolutely a tool that can deliver the value and outcomes they’re looking for, there is a responsibility on vendors and software providers.

When GPT first went live for consumers, I was reading articles, and it was interesting to see security researchers – there was one in Europe as well – interacting with the LLMs. They were able to use Jungian psychology and the shadow self to get the AI to admit to secrets that should not have been revealed to a consumer. It was fascinating to dig into the technical details of how it was executed, and I think the company released a statement saying it was an extended conversation with the AI.

When I think of “extended,” I have my own sense of how long that would take. I believe it was somewhere along the lines of eight to 10 questions, which, to me, sitting in front of a new technology, is not extended.

Alexander: Right.

Aceves: That’s maybe a 5- to 10-minute interaction where I’m asking and rating. And so there’s a responsibility that vendors have, especially to their end customers, to be putting [in] these guardrails. I think security saw a huge shift like this with secure coding practices.

I graduated undergrad with a degree in computer science and software engineering, and I didn’t have a single class dedicated to security. Times have changed, and now secure-coding practices are built in by default. I expect we’re going to have to adopt that from the very beginning with AI, because there’s too much risk here. To your point, there’s not only the risk of data leakage; once the models start to learn on their own, you don’t really have as much control as you would with any other set of code.
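
[Editor’s note: As a rough sketch of the “guardrails built in by default” idea Aceves describes, here is a minimal output filter that screens a model’s reply before it reaches the user. The deny-list patterns and redaction approach are illustrative placeholders, not a production guardrail.]

```python
# Minimal sketch of a vendor-side output guardrail: redact reply content
# matching a deny-list before returning it to the user. Patterns are
# illustrative placeholders for a real policy.
import re

DENY_PATTERNS = [
    re.compile(r"(?i)internal use only"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
]

def guard_output(model_reply: str) -> str:
    """Replace any deny-listed content in the reply with a redaction marker."""
    for pattern in DENY_PATTERNS:
        model_reply = pattern.sub("[REDACTED]", model_reply)
    return model_reply

print(guard_output("The admin password: hunter2 is stored in the vault."))
# -> "The admin [REDACTED] is stored in the vault."
```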

And I know there are sci-fi versions of this fear, but it is a legitimate concern: what guardrails are in place, and how the AI can understand and potentially manipulate them in the context of a business. So I love that you brought that up. We talk about it with customers that want to adopt more AI-enabled technologies, and they’re nervous – and not just about the data leakage. It is something we truly do not have control over. It is almost an intelligence in its own right, with rules and paradigms it adheres to that are not always transparent even to its creator, which is just fascinating to think about.

Alexander: I think that’s the scary part, and I think that’s why legislators are coming into play. Because what you’re touching on, in my opinion, comes down to ethics and morals. That’s not the space I work in, but I’m glad to hear and see that governments are taking it on, because that is absolutely terrifying. That said, it is my space to make users aware: how we’re going to apply this, and the risks inherent in it.

The good news is, unless you’re a researcher, you probably won’t step into that realm – that really interesting realm that’s undefined and kind of morphing and squishy – as opposed to the business-application use. But that doesn’t mean there isn’t potential for business use to go awry and produce unexpected consequences or results. And again, that’s our job. So [that’s a] loooong way around the whole point of [how] we need to stay aware, and make others aware, of the risks inherent in shadow AI.

Aceves: Yeah, absolutely. I was watching some random YouTube… something on animal communication – researchers sharing how they’re using AI to understand beluga whales. They can use the artificial intelligence to basically determine which whale is doing the speaking… The applications are just so fascinating; even outside of security, you start to see these parallels.

My background is incident response and red teaming, and one of the biggest problems security teams have is the volume of noise. It’s a similar model to a marine biology team with tons of audio recordings they can’t sift through. They don’t know what matters. So they’ve been using artificial intelligence to pick out the important things and identify, OK, this is the language or the word this whale is using. It’s fascinating to see the convergence of all these things.

Alexander: It is truly revolutionary, you know that, right? I mean, everyone everywhere is using AI, and it’s exciting, fabulous – but like I said, kind of scary at the same time.

Aceves: Thank you so much for that fascinating conversation, Candy.

I’ve been speaking with Candy Alexander, CISO and cyber practice lead at NeuEon Inc.

If you’d like to learn more about shadow IT and shadow AI, check out Focal Point, Tanium’s online cyber news magazine. We’ve got links to relevant articles in the show notes. Just visit Tanium.com/p for publications.

To hear more conversations with today’s top business leaders and security experts, make sure to subscribe to Let’s Converge on your favorite podcast app. And if you liked this episode, go ahead and give us a five-star rating.

Thank you for listening. We look forward to sharing more cyber insights on the next episode of Let’s Converge.
