The Essential Guide to Slow Patching: The Reasons, the Risks, the Remedies
A recent survey shows significant delays in patching critical software vulnerabilities. Here’s why it happens, why security teams are sometimes encouraged to let it happen, and the policies that can reverse this (increasingly dangerous) trend.
Patch Tuesday, the day that Microsoft releases its software updates, comes once a month like clockwork. Almost everybody (in the cybersecurity world, at least) knows that, so you’d think IT teams would update their systems as soon as they get the alert.
Except they don’t. A recent study conducted by Synopsys found that 28% of IT professionals are taking at least three weeks to patch critical vulnerabilities, while another 20% admit to taking up to a month.
At a time when threat actors are developing new attacks targeting vulnerabilities, this slow approach to patching is putting countless networks and data at greater risk. It’s like driving a car with the service light on: You know there’s a problem, but because the car is still running, it’s easy to put off fixing it.
A car can take weeks or months to fail after that service light comes on. In security, when the patch notice arrives, you have a matter of seconds to fix the problem – if it isn’t already too late – because the bad guys are continually scanning for known vulnerabilities.
“Even if you are behind a corporate network and things are firewalled, it’s just a matter of time until someone gets into the perimeter and finds that vulnerability,” said David Brumley, software security adviser to intelligence agencies, cybersecurity professor at Carnegie Mellon University, and CEO of ForAllSecure. “From a security standpoint, you want to patch as quickly as possible.”
Reasons behind slow patching
Over the past year, IT and security teams were faced with vulnerabilities like MOVEit, a zero-day flaw in a file-transfer application that could allow attackers to take control of a system; curl, a zero-day flaw in the open-source tool that uses URL syntax to transfer data; and the Apache Superset exploit, a zero-day vulnerability exposing configurations and databases. While fewer than 100 new zero-day attacks emerge each year, they tend to do outsize damage, as they don’t target just a single organization but everyone who’s using the software. The ability to hit so many targets with minimal effort makes zero-days a popular attack vector for nation-states and cybercrime gangs.
Even if you are behind a corporate network and things are firewalled, it’s just a matter of time until someone gets into the perimeter and finds that vulnerability.
So why is it taking IT and security teams so long to apply patches when they’re introduced?
The first challenge concerns visibility: Security teams need to know the number and types of assets and endpoints that are actually in use on a network, both the enterprise’s officially sanctioned gear and the many other devices and apps that employees introduce on their own. The vulnerability could be in shadow IT used by employees and not revealed to the IT staff, for example. Or the DevOps team could be using environments relying on unsupported tools.
A second factor that delays patching relates to visibility into the software itself – specifically, the components within it where the vulnerability lies. Too often, when a zero-day vulnerability is announced, security teams realize they don’t know enough about their software supply chain and must scramble to determine whether a vulnerable component is built into any of the applications in use on their network.
[Read also: 7 ways to defend your software supply chain]
That’s what happened in December 2021 when a zero-day vulnerability was identified in Apache Log4j, the popular, open-source Java logging library used in tens of thousands of applications. Patches were released shortly thereafter, and yet, more than two years on, a shocking number of enterprises still have not fixed the vulnerability. (A recent survey found that 38% of apps using Log4j rely on an insecure version.)
To locate those culprit components, enterprises are increasingly turning to a software bill of materials (SBOM), which spells out the components or ingredients in all the software apps and programs used by an organization. “Think of the different items in your pantry,” explained Melissa Bischoping, director of endpoint security research at Tanium (which owns this magazine). Every ingredient in every item is well-tracked by food manufacturers, which allows them to – in the case of, say, a listeria or E. coli outbreak – pinpoint exactly when and where a contaminated ingredient was mixed into their product. Similarly, an SBOM makes it easier to track software build components and make that information more readily available when a vulnerability is detected.
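To make the pantry analogy concrete, here is a minimal sketch of how a team might search an SBOM for a vulnerable build component, assuming a CycloneDX-style JSON file. The file name, the component name, and the "fixed in" version are illustrative placeholders, not details from the article.

```python
# Minimal sketch: scan a CycloneDX-style SBOM (JSON) for a vulnerable component.
# The file name, component name, and FIXED_IN version are illustrative assumptions.
import json

VULNERABLE_COMPONENT = "log4j-core"   # hypothetical component of interest
FIXED_IN = (2, 17, 1)                 # hypothetical first safe version

def parse_version(version: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1); non-numeric pieces become 0."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def find_vulnerable(sbom_path: str) -> list[dict]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    # CycloneDX SBOMs list each build ingredient under "components".
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "")
        if VULNERABLE_COMPONENT in name and parse_version(version) < FIXED_IN:
            hits.append({"name": name, "version": version})
    return hits

if __name__ == "__main__":
    for hit in find_vulnerable("sbom.json"):
        print(f"Needs patching: {hit['name']} {hit['version']}")
```

With an SBOM on hand, that kind of lookup replaces the scramble to figure out which applications contain the affected ingredient when a vulnerability is announced.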
A third potential reason for slow patching comes down to business.
“The business has to continue, and many systems require outages or downtime or reboots,” said Bischoping. “That can result in pushback from business owners or operations teams who say the patch has to wait.”
Then there are the times when a vulnerability is known and enterprise leaders are informed and committed to making the patch, but nothing can be done until the vendor releases that patch. This is especially a problem in open-source software or systems that use open-source components.
“You have to wait for the vendors to update their software,” Bischoping adds. “That means waiting for a whole ecosystem to make those updates available, and then having the processes, technologies, and policies in place to facilitate the updates in our environment.”
Why a slow patch is sometimes so tempting
Some patching procrastinators claim they have an excuse: They fear that applying the patch immediately could end up doing more harm.
Outages or downtime or reboots… can result in pushback from business owners or operations teams who say the patch has to wait.
“Patching introduces unknowns,” said Brumley. As a user, you don’t know how long an upgrade will take, whether systems will break when the fix is applied, or exactly what problem you are addressing or preventing.
From a developer’s point of view, it can be a struggle to maintain the stability of existing code. “When you wrote the code, it was working fine. But now when you are required to update, the developer doesn’t know where the dependency has a new instability,” he noted.
Developers get conflicting messages when told to apply a patch. One is to upgrade all the vulnerable pieces as quickly as possible, and another is to make sure nothing breaks. That second one, Brumley added, is how developers evaluate their work – they want to create solid code rather than have to fix it.
To take some of the burden off developers and to reduce the temptation to wait for patching unknowns to be verified, IT and security teams could (and should) take a security-by-design approach, which aims to eliminate vulnerabilities as much as possible through continuous testing and by building authentication protections into the development process. The National Cybersecurity Strategy, released by the White House in 2023, strongly recommends that organizations adopt security-by-design protocols. According to the Cybersecurity and Infrastructure Security Agency (CISA), becoming secure by design will better address the gaps in cybersecurity – including that space between a patch release and a patch application.
Until we’ve reached a point where security-by-design is the norm across organizations, we will need a process that works for IT, security, and development teams to make sure patching picks up the pace.
Patching starts with policies
Unlike an individual sitting at a home computer and applying updates to Chrome and Windows, addressing patches in a business setting begins with creating policies, according to Kyle Miller, a partner in Dentons’ global data privacy and cybersecurity group.
If a tool is monitoring 80% of your applications, then 20% of your applications are going to present an outsized risk of being insecure because they’re not in your patch management tool.
“The place to start is documenting the policies and procedures for patch management,” said Miller. This provides organizational control to ensure that whoever is implementing the patch knows what the company’s expectations are.
“Most companies I work with will set aside a window outside of business hours to apply routine patches,” Miller said. “But they also need a process for approving the patching of critical vulnerabilities as needed, and sometimes that’s as quickly as possible.” That requires conducting an inventory of the systems so you can identify potential risks and the source of the patch.
One reason patching is done outside of business hours is that it minimizes the downtime involved. And if patches are applied manually – a lengthy process – this is often the only time security teams can get to it.
This makes the case for both automation and AI. Automating the patching process not only reduces the need for manual intervention on an otherwise repetitive task but also allows more prompt attention to updates, which, in turn, shortens the window of vulnerability. It can also strengthen the company’s compliance posture by decreasing the risk of an exploit leading to a data breach. Organizations can also turn to AI, which can prioritize vulnerabilities based on critical risks and apply patches according to that priority.
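To make the scheduling side of that automation concrete, here is a minimal sketch of how a process might route pending patches either to an immediate run or to the routine out-of-hours maintenance window Miller describes. The severity labels and the cutoff are illustrative assumptions, not recommendations from the article.

```python
# Minimal sketch: route pending patches to an emergency run or the routine
# out-of-hours maintenance window. Severity labels and the cutoff are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PendingPatch:
    host: str
    package: str
    severity: str  # e.g. "critical", "high", "medium", "low"

EMERGENCY_SEVERITIES = {"critical"}  # patch as quickly as possible

def schedule(patches: list[PendingPatch]) -> dict[str, list[PendingPatch]]:
    queues = {"apply_now": [], "maintenance_window": []}
    for patch in patches:
        if patch.severity.lower() in EMERGENCY_SEVERITIES:
            queues["apply_now"].append(patch)
        else:
            queues["maintenance_window"].append(patch)
    return queues

if __name__ == "__main__":
    pending = [
        PendingPatch("web-01", "openssl", "critical"),
        PendingPatch("hr-db", "libxml2", "medium"),
    ]
    for queue, items in schedule(pending).items():
        for item in items:
            print(f"{queue}: {item.host} {item.package} ({item.severity})")
```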
There’s an array of patch management tools available, but Miller warns that for any tool to be effective, it has to be tailored to your specific needs.
“If a tool is monitoring 80% of your applications, then 20% of your applications are going to present an outsized risk of being insecure because they’re not in your patch management tool,” Miller said. Likewise, patch management tools that can help identify older versions of your internal systems are useful only if a security staffer reviews those notifications and brings those systems up to date.
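As a rough illustration of the coverage gap Miller describes, a team could diff its full asset inventory against the assets its patch management tool actually reports on. The host names below are hypothetical placeholders.

```python
# Minimal sketch: find assets that exist in the inventory but are invisible
# to the patch management tool. Both inventories are hypothetical placeholders.
full_inventory = {"web-01", "web-02", "hr-db", "legacy-erp", "dev-laptop-14"}
patch_tool_assets = {"web-01", "web-02", "hr-db"}

unmanaged = full_inventory - patch_tool_assets
coverage = len(patch_tool_assets & full_inventory) / len(full_inventory)

print(f"Coverage: {coverage:.0%}")
for asset in sorted(unmanaged):
    print(f"Not covered by patch tooling: {asset}")
```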
That said, you also have to beware of tool sprawl. Too many tools can actually hinder effective patching and overall cybersecurity. For the most effective patching process, IT and security teams will want to evaluate the tools they already have on hand, identify redundancies and any uncovered areas, assess the effectiveness of each (do the tools address the security problems you face today, and do they work with the systems currently in place?), and determine how the tools can best reduce manual tasks with automated processes.
“There needs to be a combination of acquiring the tool and making sure it’s tailored to your system and implemented appropriately, and then monitored by your internal IT team,” said Miller.
Improving your patching culture
No matter what tools are used, what policies are put in place, or how solid the code is at the development stage, vulnerabilities will always pop up. Recognizing that reality, businesses need to shift their culture around patching. One way to do that is to create a vulnerability management program.
“Building an actual vulnerability management program is a part of building an effective patch management program,” said Tanium’s Bischoping.
They might sound like the same thing, but they’re not. Patch management is about deploying fixes; vulnerability management assesses threats and how each can impact your environment. Some may be extremely dangerous to your organization’s security, while other high-severity vulnerabilities may not be exploitable in your system.
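One way to picture that distinction is as a scoring exercise: a vulnerability management program weighs raw severity against whether a flaw is actually reachable and exploitable in your environment. The weighting below is a hypothetical sketch, not a standard formula, and the CVE identifiers are placeholders.

```python
# Minimal sketch: rank findings by environmental risk rather than raw severity.
# The weighting is an illustrative assumption, not an industry-standard formula.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float            # base severity, 0-10
    internet_facing: bool  # is the affected asset exposed?
    exploit_available: bool

def environmental_risk(f: Finding) -> float:
    score = f.cvss
    score *= 1.5 if f.internet_facing else 0.5
    score *= 1.5 if f.exploit_available else 1.0
    return score

findings = [
    Finding("CVE-0000-0001", cvss=9.8, internet_facing=False, exploit_available=False),
    Finding("CVE-0000-0002", cvss=7.5, internet_facing=True, exploit_available=True),
]

# A high-severity flaw on an unreachable system can rank below a lower-severity
# one that is exposed and actively exploited, which is the point made above.
for f in sorted(findings, key=environmental_risk, reverse=True):
    print(f"{f.cve}: risk {environmental_risk(f):.1f}")
```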
Besides creating vulnerability management programs, organizations need to improve overall security awareness training. Showing users the effects of unsecured systems, stressing the dangers of unpatched vulnerabilities, and building policies and procedures that streamline the patching process will help IT and security teams better address the challenges that prevent timely patch application.