
Ep. 16: We Need to Get Proactive About Vulnerability Management

Sep 29, 2024 | 23 min 45 sec

Nick Brown, a senior engineer at True Zero Technologies, explains why criticality, risk acceptance, and AI tools are key to staying ahead of common vulnerabilities and exposures (CVEs).

Summary

With new CVEs popping up virtually every hour – more than 29,000 were documented worldwide in 2023 – it’s unrealistic to think you’ll ever get to zero. And that’s okay, because only a small fraction will actually be exploited by cybergangs. But which? Here, we discuss the best ways to prioritize, and how best to leverage your technology and your people.

Host: Melissa Bischoping, director of endpoint security research, Tanium
Guest: Nick Brown, senior engineer, True Zero Technologies


Show notes

For more info on vulnerability management and the best and safest ways to integrate AI tools into your cybersecurity program, check out these articles in Focal Point, Tanium’s award-winning online cyber news magazine, and these other useful resources.

Transcript

The following interview has been edited for clarity.

Nick Brown: Our ability as humans to look at something and see a pattern and breaks in the pattern, and dynamically remove the breaks to understand the actual relevant pattern, still beats out machines. Now, we can't do it as fast as machines could, if we could get a machine to do it for us, but I don't think that we're at that point yet.

Melissa Bischoping: When it comes to software and vulnerability management, the statistics are astounding. More than 29,000 common vulnerabilities and exposures – CVEs for short – were documented worldwide in 2023. That averages out to about 80 a day, and it’s a year-over-year increase of 15%. The reality is that only a small fraction of these are actually going to be exploited by cybergangs and pose a real problem to your enterprise. But that also means that in today’s complex environment, you need both technology and team members who can give some clarity to what you should focus on, why it matters, and how it’s actually going to affect you.

I’m Melissa Bischoping, director of endpoint security research at Tanium. Today on Let’s Converge, we’re talking vulnerability management, both the challenges and benefits of getting proactive about risk.

Joining me today is Nick Brown, a senior engineer at True Zero Technologies, a veteran-owned business offering comprehensive cybersecurity and consulting services. Nick works with organizations every day to provide critical thinking, analysis, and effective communication strategies on risk. We’re here to talk about why understanding issues like criticality, risk acceptance, and just good old-fashioned business operations is key to the vulnerability management process. Welcome to the podcast, Nick.

Brown: Thank you for having me, Melissa. Definitely appreciate it.

Bischoping: Before we get started, let’s take a look back at how vulnerability management has evolved, because when tech really started rolling out into the business landscape, there wasn’t a discipline of vulnerability management. There wasn’t really a large widespread prevalence of cyberattacks exploiting things 20 or 30 years ago. That has rapidly scaled as more and more technology has entered the ecosystem. We’re now having to scale with that. What are some of your thoughts on how we approach the fact that this is growing so fast?

Brown: Well, very similar to your intro, I think the best way to deal with that type of situation is with a combination of technology and people. We’ve watched just the general environment and the scope of enterprises and their understanding of vulnerability increase as they use more technology. So now that that’s becoming more on the forefront, that understanding and need to do something about that becomes more and more prevalent. And I think the only way you can really do that is with a combination of the two.

Bischoping: One of the things I think a lot about is how we're getting to a point where there is so much data, so many thousands of vulnerabilities on any given report, that we have to add that layer of automation on top of it. Do you think we're at some sort of watershed moment where automation may finally help us catch up and really get ahead of things? Or – and this is my big concern – do we have so much data, so many layers of tooling, and so much added complexity that we're forever playing catch-up?

Brown: I have two thoughts about that. [He chuckles.] One, yes, I think we are close. I think we're close and getting closer, but it's very similar to a limit approaching zero in mathematics. I don't know that we'll ever reach zero, but we're going to get that number down into lots of decimals, and we're going to get really, really close. Some of the things that already exist with some of the automation, some of the integrations of AI – there are plenty of very specific tools, AV tools and vulnerability management tools, that will find those specific CVEs and look for those malicious applications on your endpoints, because they know what they are.

But it's still hard, because just like we're trying to get computers to do what humans do in that space faster, the enemy is doing the same thing, and they're getting faster. So it's a battle, it's a fight, and we're still fighting it, and I don't know that it's ever going to go away as technology evolves. We've just got to keep doing it and stay steadfast.

Bischoping: Yeah, I agree with you. One of the points I try to make when people start talking about AI and automation is that I don’t see it as “artificial intelligence.” I see it as “augmented human intelligence.”

Brown: Mm-hmm.

Bischoping: You still need creative critical thinkers sitting behind the keyboard knowing how to use that automation effectively. Otherwise, automation just allows you to do the wrong thing faster. [They laugh.] Which can be the same problem. And that’s one of the things that I’d be curious to get your take on: the misconceptions about doing vulnerability management at scale. One thing that always sticks out to me is prioritization and how people categorize and triage vulnerabilities to go after in their environments. Can you talk to me a little bit about your experience with that and how you see the automation and scale problem affecting it?

Brown: Sure. I would say probably the biggest misconception is that you will hit zero. I can’t tell you how many conversations go, like, “Alright, so this is our number?… Oh my gosh, that’s pretty big!”

“Don’t worry, sir. It’s OK. It looks that way right now. We can put plans in place. We can take these actions, we can start breaking that down and we can bring that number pretty close. We’ll get low.”

And then they go, “Well, how long does it take to get to zero?”

“Thaaaat’s… I don’t, I can’t – no, sorry. I think for legal reasons, I can’t give you an answer to that.” [Melissa laughs.]

Because then I’ll back myself into a corner. Because there’s just no way. Because they will always change. It always changes, every month.

Bischoping: Every hour, new stuff.

Brown: Every hour! Exactly. New vulnerabilities are released.

Bischoping: That’s the thing that I talk to people about a lot – look, every minute, every hour, you’re having new endpoints come online. Or maybe an old system got re-imaged and introduced CVEs that you had previously patched. Or you’ve got a new contractor that’s hooked into your network. Your risk and vulnerability exposures change dynamically. I think we’re long past the point where things like a weekly vulnerability scan would be sufficient.

Brown: That is it exactly. And so we try to help folks understand the difference in that data and the relevancy of that data. Then, again, you start taking the specifics out and things become more contextualized. Am I within this particular threshold? Is this within our acceptable risk matrix of how much we can have or how much we can't have? And you need diligent, smart engineers on your team who are tracking these and understand those circumstances. Is this new set of vulnerabilities [from] a newly imaged machine, and our image needs to be adjusted? Or is it a rogue device that has entered our environment without authorization? Those things have to be taken into consideration. If you're just looking at that number, you'll be banging your head against a wall for a long time, and it's going to hurt.

Bischoping: I think I see that a lot too, where an organization immediately wants to go into a vulnerability report and they want to drill down by, alright, what’s the highest CVSS score? I’m going to start on those first. [That is, the Common Vulnerability Scoring System, a 0-10 grade that information security teams use to rank vulnerabilities.] I have a personal opinion on whether that’s the right way to approach it, but I want to hear your thoughts on when an organization first sits down and says, “We’ve done our vulnerability scans, we have a hundred thousand vulnerabilities. Here’s how I’m going to start tackling this.” What do you see?

Brown: We like to suggest to folks: High vulnerabilities are important, and critical vulnerabilities are important, but exploited vulnerabilities are probably more important right now. That timing is important if that thing has already been exploited or – my favorite – it was a vulnerability that has a very low score and has existed forever that you didn’t worry about because you didn’t get to the bottom of the list, all of a sudden is exploited. That changes the whole view of what that’s going to be. And so now that understanding is a little bit more ubiquitous, we tend to point folks to look at this first, and then work your way through and out of that into getting the rest of your numbers.
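[Editor's note: The triage Brown describes – exploited vulnerabilities first, severity second – can be sketched as a simple two-key sort. The CVE records and the known-exploited flags below are invented for illustration; in practice that flag would come from a feed such as CISA's Known Exploited Vulnerabilities catalog.]

```python
# Sketch: triage CVEs by exploitation status first, CVSS score second.
# These records are hypothetical examples, not real advisories.
cves = [
    {"id": "CVE-2099-0001", "cvss": 9.8, "known_exploited": False},
    {"id": "CVE-2099-0002", "cvss": 4.3, "known_exploited": True},
    {"id": "CVE-2099-0003", "cvss": 7.5, "known_exploited": False},
]

def triage(vulns):
    """Exploited vulnerabilities first; within each group, highest CVSS first."""
    return sorted(vulns, key=lambda v: (not v["known_exploited"], -v["cvss"]))

for v in triage(cves):
    print(v["id"], v["cvss"], "EXPLOITED" if v["known_exploited"] else "")
```

Note that the low-scoring but actively exploited CVE sorts to the top, ahead of the critical-severity one – exactly the reprioritization Brown describes.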

Bischoping: This is kind of a tangent to that, but I want to touch on it because it’s hitting on something that’s very personally important to me.

When we talk about automation and scale and how we’re going to do business in the future, one of the things that often comes up is, “AI is going to take all of our jobs,” or “AI is going to reduce those entry-level positions” or whatnot. I’m not really a believer in that because I think that AI is just going to give us more data to process faster, and this is going to be true in investigations as well as vulnerability management. It’s going to give us more information, but you’re still going to need educated folks to sit down and actually understand it. So I’m really hopeful that organizations are taking this as an opportunity to start investing in educating their teams, their employees, their partners on how to use those tools to get maximum efficacy and sort of use them as a force multiplier versus individuals being afraid of them as a career killer.

Brown: Agreed, agreed. I don't think it's going to remove as many jobs as people say. In this particular corner of the technical industry, I'm going to say that right now that is the best way to use that kind of automation, to use language models. I mean, in the systems as they exist, you have the ability to control how loosely the model samples with the temperature setting. Just that existing is proof positive that we're not at a spot yet where the machine is going to be able to take that away from a person. Because if that temperature is too low, you might end up with answers that don't really apply because it's trying too hard to match the actual specific data, and if you have it too high, you end up with a bunch of erroneous output that doesn't even matter and you don't know what to do with it. And it's always going to take a person to suss that out.

Our ability as humans to look at something and see a pattern and breaks in the pattern, and dynamically remove the breaks to understand the actual relevant pattern, still beats out machines. Now, we can't do it as fast as machines could, if we could get a machine to do it for us, but I don't think that we're at that point yet.
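[Editor's note: The "temperature" Brown mentions is a sampling parameter in language models: it rescales the model's output scores before they are turned into probabilities. A minimal sketch of the mechanism, using toy logits rather than a real model:]

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by temperature, then normalize to probabilities.
    Low temperature sharpens the distribution (more deterministic, literal);
    high temperature flattens it (more random, less grounded)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
sharp = softmax_with_temperature(logits, 0.2)
flat = softmax_with_temperature(logits, 2.0)
print(max(sharp), max(flat))  # the top token dominates at low temperature
```

At temperature 0.2 the highest-scoring token takes nearly all the probability mass; at 2.0 the three tokens are much closer together, which is why tuning this knob still takes a human who understands the task.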

Bischoping: I want to pivot for just a second. Let's talk a little less about the technical ones and zeros – the bits, as you might call them – and let's talk about the human challenge. We've talked about how you need human beings to analyze data and to be able to separate the signal from the noise, supported by their tooling and technology. But then you have to go down the hall and knock on someone's door and say, "Hey, I have to take your critical business application that makes you a million dollars a day offline because it's now a vulnerability, a risk in your environment." Early in my career, I had a great master plan for how I was going to patch all of these systems, only for an executive to say, "Absolutely not."

Brown: If finding vulnerabilities is a never-ending fight, that circumstance… I don't know that you ever win it. The best I can say for folks is to, again, stay as steadfast as possible. You know what it is, you know what the implications are. The best that we can do is help educate those folks that are involved and then produce a reasonable plan. It doesn't make sense to go, "Your application is so vulnerable that it's going to cause us significant problems and we're going to take it down," and that's the end of the conversation. Never. That can't be true.

That pays your bills and it pays my bills. So let’s work this out together. How do we isolate those machines while we figure out the difference? What do we need to do to update that difference? Do we need to change course entirely? Does a code base need adjusting? Whatever the case is, figure that out, plan it, and then work your way through to the end. And there’s plenty of ways to mitigate these circumstances and adjust what it is you consider good risk. So let’s say that application does end up being vulnerable. Something comes out and the code base in that application has a new vulnerability and they release a new patch, but that changes some of the ways the code works. So your development team does have to do a pretty substantial review and make some changes to the code. What do you do?

Well, we can isolate those endpoints away from the rest of the environment, make them still available to our customers outside of our environment as necessary. Isolate them internally so that we don’t increase the damage and the risk that can come from that.

And if we have to keep that up for whatever reason, again, have a plan, understand the risks. What is that vulnerability? Does that vulnerability let you do arbitrary code execution or does that vulnerability allow them to read data that you don’t necessarily want them to read but they can’t change anything? All of these things factor into understanding what that risk is going to be and how to understand that. And so you have to have those conversations. And there’s really no fighting it. [At] some organizations, that is the way the organization survives. So you have to be creative and figure out ways to deal with that and adjust your expectations until that can be adjusted.

Bischoping: And I think you touched on something really important [where] the nuance gets lost. It’s not just understanding the CVSS score or whether it’s “known exploited” but actually understanding what the impact, if it’s exploited, is. Is it remote code execution? Is it just an arbitrary file read? What is the actual impact? And then also looking at the criticality of the endpoint that it’s on. Is this your public-facing on-premises exchange server, or is this a laptop that a user maybe takes home once a week? Because those will definitely change your risk mitigation strategy and the investment you’re going to want to make in remediation. Am I right?
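[Editor's note: The two dimensions Bischoping names – what the exploit actually lets an attacker do, and how critical the affected endpoint is – can be combined into a rough remediation ranking. The weights below are arbitrary placeholders for illustration, not an established formula:]

```python
# Hypothetical weights; a real program would tune these to its own risk matrix.
IMPACT_WEIGHT = {
    "remote_code_execution": 10,
    "arbitrary_file_read": 5,
    "info_disclosure": 3,
}
CRITICALITY_WEIGHT = {
    "public_facing_server": 10,
    "internal_server": 6,
    "user_laptop": 2,
}

def remediation_score(impact, asset_class):
    """Multiply exploit impact by asset criticality to rank remediation work."""
    return IMPACT_WEIGHT[impact] * CRITICALITY_WEIGHT[asset_class]

# An RCE on a public-facing server far outranks a file read on a laptop,
# even though both might carry similar CVSS scores on paper.
print(remediation_score("remote_code_execution", "public_facing_server"))
print(remediation_score("arbitrary_file_read", "user_laptop"))
```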

Brown: Oh, yes, absolutely. We have those conversations regularly with customers because we have to. They need to continue doing business, and even in the cases where you can’t be fully secure, you need to understand and accept those circumstances. And if you can’t accept those circumstances, well, it sounds like your business needs to change.

Bischoping: And that is one of the things that I’m the most passionate about in my work. It is such a professional philosophy for me to always be able to give credible guidance to an organization that they can trust in. When I tell you something is scary to me or when I tell you this is something I would prioritize, please trust me. I’m not trying to fearmonger you. I am literally just trying to keep you safer.

I think organizations often get very overwhelmed by the amount of data being thrown at them, the amount of shocking headlines on LinkedIn or Bleeping Computer or whatever. (No hate to Bleeping Computer; I read them too.) But I think that having people who can go out and say, “We know this is hard. We’re going to help you walk through this. Here’s what I understand about the vulnerability. Let me break it down for you.” If you’re someone looking to break into this field or accelerate your role in this field, if you can learn how to do that level of translation to say, “Here’s what I know about this vulnerability and why it’s important and why you should listen to me,” that is absolutely a cybersecurity superpower.

Brown: Yeah. That interpersonal is just as important as understanding the technical.

Bischoping: Let’s talk about risk acceptance for a bit. I’m a nerd. I used to play D&D [Dungeons & Dragons] and Magic: The Gathering and stuff back in high school, even into adulthood, and I often say that risk acceptance has to be treated like a mana pool [a place akin to a bank account where players of Magic: The Gathering store resources to pay for spells and abilities]. You can only spend so much before it has to be recharged. You can’t keep writing checks for risk acceptance just because you don’t want the downtime cost, you don’t want the outage cost. If [a program or application] is so important that you can’t take it down to patch it, then it’s so important that you have to have a plan to patch it, which sounds like an oxymoron but it’s true.

Brown: Oh yeah. I mean, if we’re going to stick with Magic [game] references, that would be like using a monster with some mechanism that removes it from the game for a little while. There were several functions in the game where you can spend some mana and remove that monster off the board for a while so your enemy can’t attack it, you can’t use it, and the whole point was they gained some kind of benefit while it was out of play. It gained “+1/+1 counters” [in the game, a way of boosting a creature’s power and toughness] until it came back in and then it comes back in stronger. That is exactly the case, and you have to be able to understand that.
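[Editor's note: Bischoping's mana-pool metaphor can be sketched as a small bookkeeping class: accepting a risk draws the pool down, remediating recharges it, and the pool refuses to overdraw. The capacity and costs are invented for illustration:]

```python
class RiskBudget:
    """Toy model of risk acceptance as a finite, rechargeable pool."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.available = capacity

    def accept(self, cost):
        """Spend budget to accept a risk; refuse if the pool is exhausted."""
        if cost > self.available:
            return False  # no more acceptance until something is remediated
        self.available -= cost
        return True

    def remediate(self, recovered):
        """Patching or mitigating a risk returns budget to the pool."""
        self.available = min(self.capacity, self.available + recovered)

pool = RiskBudget(capacity=10)
print(pool.accept(6))   # True  - accept deferring one patch
print(pool.accept(6))   # False - can't keep writing checks against the pool
pool.remediate(4)
print(pool.accept(6))   # True  - remediation recharged enough budget
```

The point of the metaphor survives the toy model: acceptance is a spend, not a write-off, and only remediation puts mana back in the pool.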

Bischoping: I’ll borrow a phrase from my husband. He often refers to whatever the problem of the day is as the “alligator closest to the boat.” That’s the one you’re worried about the most, the one swimming closest to you, that’s the one you’ve got to deal with. I think so often that whatever the headline is in the CVE today, or the one you’re reading about on Bleeping Computer, those become the alligator closest to the boat when there are a lot of other alligators lurking below the water that are known exploited by a hundred other threat actors, and they’re not actually that far away, and maybe they’re directly beneath the boat.

Brown: That’s right. You’re on the bayou, alligators everywhere. There’s no getting away from the alligators.

Bischoping: A vulnerability guide, I love it.

Brown: You just have to have a really good plan to deal with them when they approach the boat.

Bischoping: You need a hoverboat, one of those hoverboats. You go after the easy-to-patch stuff, and then you have to layer on different levels of expertise and techniques to go after the more complex. Use the automation to close as many of the doors as possible, remove as many alligators as you can, but you're still going to have some that you have to manually wrestle. I'm using a lot of alligator references. This is great. This is hilarious.

Brown: Oh no, that’s good.

Bischoping: So let’s pivot again a little bit. We’re also seeing more action from external notification opportunities – for example, CISA, the Cybersecurity and Infrastructure Security Agency, is doing more notification and letting people know, “Hey, you’re vulnerable,” or, “Hey, we’ve got pre-ransomware notification.” We’re seeing a lot more about that. So organizations are not only learning from their internal scanning but also maybe there’s a third party coming to them and saying, “We believe you are breached or could be breached soon.” This is really helpful but also adds just more noise. Do you think this is going to significantly change the landscape or help organizations close that door faster?

Brown: It’s about the accessibility of that information and the ubiquity of that information. So as that becomes more accessible – actually accessible might not be the right word. Digestible, I think, is the right word. Because you’re not just covering the savvy engineer who might be there looking at what’s going on; you’re covering some of the higher-level folks. You’re talking management, you’re talking C-level folks who still need to have some kind of concept and understanding of these things as well. Because the more coverage you have for understanding and requiring action for these things, the better. We’ve said it many times: We’re all human. Not everybody’s going to be on every feed watching every website looking at every source of data all the time. So the more that becomes aggregated and easier to access, easier to digest, then that’s when that really hits it with the standard organization.

Bischoping: We're coming up on the end of our time here together, but I want to give you a chance to share your hot takes. Anything we've left out, any takeaways, shameless plugs? What do you want to leave us with?

Brown: Well, I’ll say for anybody who’s listening, some of the stuff that we’ve talked about makes this sound grim, like it’s never-ending. We’re not lying; it is. What I want you to understand, though, is that staying strong in that fight is the only way that any of this is going to get done, and any of this is going to be successful.

You can’t back down, you can’t step out of it. You have to keep fighting, keep communicating, keep working together in order to keep these environments safe.

And if you happen to be looking for some folks to help you do that [he then mimics the voice of an old-time radio announcer plugging a product], True Zero provides many cyber and cyber-adjacent services that will cover a bunch of those circumstances for you, including our newest service, actionable intelligence operations, where we can simulate your environments and attack thresholds and help you discern who those attackers are and find them across your environment.

Bischoping: That was an exceptional radio voice. I was going to ask if you had alligator removal services, but [they chuckle], is that coming on the roadmap?

Brown: I'll talk to some folks about it. I think we can work something out.

Bischoping: OK, awesome.

I have been talking with Nick Brown, a senior engineer at True Zero Technologies.

If you’d like to learn more about vulnerability management and automation, check out Focal Point, Tanium’s online cyber news magazine. We’ve got links to relevant articles and reports in the show notes. Just visit tanium.com/p for publications.

To hear more conversations with today’s top business leaders and security experts, make sure to subscribe to Let’s Converge on your favorite podcast app. And if you like this episode, please give us a five-star rating.

Thanks for listening. We look forward to sharing more cyber insights – and maybe more alligators – on the next episode of Let’s Converge.

Hosts & Guests

Melissa Bischoping

Melissa Bischoping is Director, Endpoint Security Research at Tanium. Presenter, author, and cyber SME, she offers guidance on attack behaviors and emerging threats.

Nick Brown

Nick Brown is a systems engineer specializing in system and program analysis. Prior to his work at True Zero Technologies, he worked for Modis, Tanium, and UPS on security and system integration in corporate and retail settings.