Why You Need a Data-Driven Vulnerability Management Framework
Accurate intelligence is a must for IT decision-makers
When it comes to vulnerability management, some things never change. Or as storied American baseball catcher Yogi Berra once said: “It’s like déjà vu all over again.” Whether you’re working in cybersecurity or IT ops, there’s a constant barrage of “fastball” patches and “curveball” zero-days to handle. For any IT player, the pressure to get the job done has never been greater, and the hard work never stops.
A record year for IT security vulnerabilities
To find out why, just take one look at the NIST National Vulnerability Database (NVD). There were 20,136 CVEs published in 2021, the fifth consecutive year the annual total hit an all-time high. Although the number of high-severity bugs fell slightly, there was certainly no room for complacency. With over 55 vulnerabilities published on average every single day, including weekends and holidays, the workload for IT and security teams has never been greater. And 2022 is already on track to at least match last year’s figures.
It’s déjà vu not only because of this steady ramp-up in published CVEs, but because of the process that accompanies each new disclosure. When a ProxyLogon or a Log4Shell drops, security teams will ask the same old questions: “Does this matter to us?” “Where is it?” “Have we been exploited?” “Can we patch it?” And “Are we protected?” From sysadmins to the C-suite, the questions are always the same. And from there, the familiar scramble to mitigate endpoint vulnerability risk begins.
Looking for the North Star
So how do we as an industry come to terms with the growing volume and breadth of vulnerabilities? The best business and IT decisions can only be made with the highest-quality intelligence. And the best intel comes from the cleanest, freshest data.
This is absolutely critical during incident response and crisis management, as anyone who has lived days and nights in the IT ops/cyber war room will attest. Data is your North Star in the tense hours and days following a serious incident. If you can’t answer those critical questions confidently, with real-time, accurate data, the organization is flying blind. Playbooks and foundational processes are difficult, if not impossible, to follow when data is missing, stale, or inaccurate.
In short, accurate data allows teams to scope, pivot, hunt, target, triage, and remediate at speed and scale. That’s why it’s critical for organizations to have a tightly integrated platform delivering real-time data and actionable insights on as few control planes as possible. Anything less will leave critical coverage gaps and add time and cost you can ill afford during a crisis.
Which data matters?
Start with the basics. Much of the baseline data needed in a crisis is simple asset inventory data: which devices are connected to the network, and what applications are running on those devices. Vulnerabilities like Log4Shell, or the ones exploited by the SolarWinds and Equifax hackers, require a still deeper level of understanding: which files, components, and libraries are running as part of those applications?
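To make that deeper question concrete, here is a minimal, illustrative sketch in Python that maps each running process to the files and libraries it has loaded. It assumes the third-party psutil library and sufficient privileges; it is not how Tanium collects its data, just a picture of the kind of visibility involved.

```python
# Illustrative sketch only (not Tanium's implementation).
# Assumes the third-party psutil library: pip install psutil
import psutil

def loaded_files_by_process():
    """Map each running process to the files it has mapped into memory."""
    inventory = {}
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            # memory_maps() lists the files (shared libraries, JARs, etc.)
            # a process has loaded; it needs sufficient privileges and is
            # not available on every platform.
            paths = sorted({m.path for m in proc.memory_maps() if m.path})
        except (psutil.AccessDenied, psutil.NoSuchProcess, AttributeError):
            continue
        inventory[(proc.info["pid"], proc.info["name"])] = paths
    return inventory

if __name__ == "__main__":
    # Flag any process with a suspicious library on board.
    for (pid, name), paths in loaded_files_by_process().items():
        hits = [p for p in paths if "log4j" in p.lower()]
        if hits:
            print(pid, name, hits)
```

Multiply that by hundreds of thousands of endpoints and the value of a platform that already holds this inventory, rather than computing it on demand, becomes obvious.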
The good news is that with Tanium, all those questions can be answered rapidly and at scale, with the best quality, real-time data.
Take the Log4j vulnerabilities. Many organizations were left in the dark precisely because they didn’t have an accurate inventory of devices, apps, application components, dependencies, and configurations. Some tried vendor attestations, fuzzing, and crawling scripts, but none of these techniques provided a complete picture of risk exposure. Some even created false positives and threatened to consume excessive CPU resources.
Tanium has the vulnerability management tools you need
Tanium was able to help customers across many industries and verticals in several ways:
Installed & running applications – Users could quickly determine which critical applications were installed and running across their endpoints.
Files on disk search – Customers with Tanium Index were ahead of the game: their file systems were already indexed, so searches for files with “Log4j” in the name took minutes, if not seconds.
Command line execution search – Tanium Threat Response showed where Log4j appeared in command-line process executions.
Exploit detection – With Threat Response, teams could apply multiple YARA rules and Indicators of Compromise (IoCs) to identify signs of exploitation.
File contents search – Tanium Reveal, a sensitive data discovery tool, was used in a novel way to find versions of Log4j or “JndiLookup.class” inside nested files by inspecting file contents (a minimal standalone sketch of this kind of search follows this list). Reveal proved immensely valuable in dealing with the fallout of Log4Shell.
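To show the underlying detection logic of the filename and file-contents searches above, here is a minimal standalone sketch in Python using only the standard library. It combines a filename match with a recursive content search for JndiLookup.class inside nested archives. It is emphatically not Tanium Index or Reveal; a production tool would manage disk and CPU load far more carefully.

```python
# Illustrative sketch only (not Tanium Index or Reveal).
import io
import os
import zipfile

ARCHIVE_EXTS = (".jar", ".war", ".ear", ".zip")

def scan_archive(data: bytes, origin: str, findings: list) -> None:
    """Recursively inspect an archive (possibly nested) for JndiLookup.class."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for name in zf.namelist():
                if name.lower().endswith("jndilookup.class"):
                    findings.append(f"{origin} -> {name}")
                elif name.lower().endswith(ARCHIVE_EXTS):
                    # Nested archive: recurse into it in memory.
                    scan_archive(zf.read(name), f"{origin} -> {name}", findings)
    except zipfile.BadZipFile:
        pass  # Not a readable archive; skip it.

def scan_tree(root: str) -> list:
    """Walk a directory tree, matching on filename and on archive contents."""
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for fn in filenames:
            path = os.path.join(dirpath, fn)
            if "log4j" in fn.lower():
                findings.append(f"filename match: {path}")
            if fn.lower().endswith(ARCHIVE_EXTS):
                try:
                    with open(path, "rb") as f:
                        scan_archive(f.read(), path, findings)
                except OSError:
                    continue
    return findings

if __name__ == "__main__":
    for hit in scan_tree("/opt"):  # hypothetical starting directory
        print(hit)
```

Even this toy version hints at why the naive crawling scripts mentioned earlier could hammer endpoints: every archive has to be read and unpacked in memory, level by level. Doing that work once, indexing the result, and querying the index is what makes minutes-to-seconds search times possible.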
Be ready for more
Log4Shell has been described as one of the most dangerous exploits ever published. But it won’t be the last. It taught many organizations the value of having a platform with the capability, flexibility, and deep insight to discover what matters most, and then remediate rapidly and at scale. With a platform like Tanium’s at hand, organizations can have the confidence to identify, protect, detect, respond, and recover.
Those aren’t mutually exclusive activities. The reality is we must be prepared to do all of them together. Or to use another “Yogiism”: “When you come to a fork in the road, take it!”
Do you know your organization’s IT risk posture? Get a comprehensive view of your risk posture and proactive ways to protect your organization from growing cyber threats with our five-day, no-cost Risk Assessment. Sign up today.