John Leyden
Senior Writer

Patch management: A dull IT pain that won’t go away

Feature
16 Sep 2024 | 9 mins
Patch Management Software | Risk Management

Better tools and culture shifts only partially solve the challenge of keeping up with basic security practices for patch management, say IT experts.


Enterprise security patching remains a challenge despite improvements in both vulnerability assessment and update technology.

Competing priorities, organizational challenges, and technical debt continue to transform an ostensibly straightforward aim of keeping systems up to date into a major headache, according to IT experts quizzed by CSO.

Because of these and other issues, approximately 60% of enterprise applications remain unpatched six months after a vulnerability is disclosed, according to cloud security vendor Qualys. The industry average for patching critical vulnerabilities within the first 30 days is around 40%.

Application creep isn’t doing enterprise IT any favors. A recent survey from patch management firm Adaptiva found that, on average, organizations manage 2,900 applications. That’s a lot of potential patching to do given that the number of detected vulnerabilities continues to grow.

And it all adds up to a greater likelihood of business disruption.

“The more patches an organization has to deploy, the higher the risk of downtime due to reboots or a patch breaking a business application,” Eran Livne, senior director of product management at Qualys, tells CSO.

Here’s a look at the current state of patching in the enterprise — with advice from IT experts on developing a more robust patch management strategy in the face of ongoing issues.

Automation — and its shortcomings

In recent years, patch management has become a risk reduction practice, with organizations aligning security and remediation workflows and prioritizing which vulnerabilities to fix based on calculations around security exposure, the likelihood of downtime due to misfiring updates, and what that downtime might subsequently cost the business.

Knowing the playing field and having clear remediation policies are key, says Sanjay Macwan, CIO and CISO at communications platform Vonage.

“Businesses should ensure they have an up-to-date asset inventory to keep track of all the components that require patches, as well as formal, stringent policies for each team to follow in order to coordinate timely, effective patches,” Macwan says. “Patches should be assigned risk levels to determine the order in which they are tackled, and teams should be given clear deployment processes, including post-patch monitoring to catch critical errors.”
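As a rough illustration of the risk-level approach Macwan describes, the sketch below assigns each pending patch a coarse level from severity, exploit status, and asset criticality, then orders the queue accordingly. The weights, field names, and sample data are hypothetical, not taken from any specific tool or policy.

```python
# Hypothetical sketch: assign risk levels to pending patches and order the queue.
# The weights, field names, and sample data are illustrative, not from any specific tool.

ORDER = {"critical": 0, "high": 1, "standard": 2}

def risk_level(patch):
    """Combine severity, exploit status, and asset criticality into a coarse level."""
    score = patch["cvss"]                          # base severity, 0-10
    if patch["exploit_known"]:
        score += 3                                 # boost actively exploited issues
    if patch["asset_tier"] == "business-critical":
        score += 2                                 # boost patches touching key systems
    if score >= 11:
        return "critical"
    if score >= 8:
        return "high"
    return "standard"

patches = [
    {"id": "os-update-1",  "cvss": 9.8, "exploit_known": True,  "asset_tier": "business-critical"},
    {"id": "app-update-2", "cvss": 7.5, "exploit_known": False, "asset_tier": "standard"},
    {"id": "app-update-3", "cvss": 5.4, "exploit_known": False, "asset_tier": "standard"},
]

# Tackle patches in order of assigned risk level, highest first.
for p in sorted(patches, key=lambda p: ORDER[risk_level(p)]):
    print(p["id"], "->", risk_level(p))
```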

Here, Qualys’ Livne advocates use of automation tools “to help reduce the manual work involved in responding to the vast number of vulnerabilities, especially vulnerabilities that are ‘easy’ to fix, such as browsers, media players, and document readers.”

With automation targeting lower-priority patching, IT operations and security teams can be freed up to concentrate on critical and time-sensitive security fixes.
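One way to picture that split is a simple routing rule that auto-approves the "easy" third-party updates Livne mentions and sends everything else to human review. The category names and the routing logic below are a hypothetical sketch, not any vendor's actual workflow.

```python
# Hypothetical routing rule: auto-approve "easy" third-party updates and send
# everything else to a manual review queue. Category names are illustrative.

AUTO_PATCH_CATEGORIES = {"browser", "media-player", "document-reader"}

def route(update):
    if update["category"] in AUTO_PATCH_CATEGORIES and not update["requires_reboot"]:
        return "auto-deploy"      # handled by the patching tool without a human in the loop
    return "manual-review"        # business apps, servers, anything needing a reboot

updates = [
    {"name": "browser update",  "category": "browser",         "requires_reboot": False},
    {"name": "PDF reader fix",  "category": "document-reader", "requires_reboot": False},
    {"name": "database patch",  "category": "database",        "requires_reboot": True},
]

for u in updates:
    print(u["name"], "->", route(u))
```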

But despite their promise to reduce workload, cut down on errors, and speed up patch delivery, automated tools have their limitations, says Rich Newton, managing consultant at Pentest People.

“Tool-recommended patch priorities based on vulnerability severity may not always align with the organization’s specific risk tolerance or business objectives, emphasizing the need for human oversight,” Newton tells CSO. “Relying solely on a patch management solution, especially in complex IT environments, can be futile. Not all systems can be fully supported by automated tools, making it essential to have policies and procedures in place for continuous monitoring and assessment of the patch status across the entire IT estate.”

Elie Feghaly, CSO at global broadcast technology company Vizrt, agrees that, although vulnerability assessment and automated patching tools are highly useful, they are no panacea.

“Automated remediation rules in complicated IT environments seldom blend well with highly dynamic and potentially error-prone systems,” Feghaly says.

The legacy factor — and lingering issues

Moreover, the vast majority of complex IT environments also run substantial amounts of legacy software that is no longer patched by the vendor, points out Martin Biggs, vice president and managing director for EMEA and strategic initiatives at Spinnaker.

“Where patches are available, they can be highly disruptive and need extensive regression testing before deploying,” Biggs says.

For sensitive environments, it can be nearly impossible to patch, even when a patch is available. In other scenarios, applying a patch fails to solve the underlying vulnerability, which is only addressed in subsequent updates, Biggs warns.

“It’s quite usual in the Oracle world for the same vulnerability to be re-addressed in patches for many quarters after the original patch,” according to Biggs.

With such factors at play, it’s little wonder that many patch management strategies are broken today.

Testing takes center stage

Vizrt’s Feghaly points out another common issue enterprises face with patch management.

“We have all experienced this: A patch works flawlessly in the staging or test lab, and yet generates great havoc in production due to an unexpected dependency on another application,” Feghaly says.

“External factors or dependencies are why testing is still paramount.”

July’s high-profile outages, caused by CrowdStrike’s problematic Falcon content update that crashed systems across the world, have put the importance of testing prior to patch deployment back in the spotlight.

“Vulnerability assessment and automated patching tools can significantly alleviate the challenges associated with patch management by providing continuous monitoring, identifying vulnerabilities in real-time, and automating the deployment of patches without manual intervention,” says Chris Morgan, senior cyber threat intelligence analyst at ReliaQuest. “However, their effectiveness depends on proper configuration, regular updates, and integration with broader security practices. Patches should be thoroughly tested and initially deployed to a smaller subset of systems to minimize the risk of outages from faulty patches.”
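Deploying first to a smaller subset, as Morgan recommends, is essentially a canary rollout. The sketch below shows the idea in miniature; the deploy and health_check functions are hypothetical stand-ins for real orchestration and post-patch monitoring, and the 5% failure rate is simulated.

```python
import random

# Hypothetical phased (canary) rollout: patch a small ring of systems first,
# check their health, and only widen the rollout if the canary ring stays healthy.
# deploy() and health_check() are stand-ins for real orchestration and monitoring.

def deploy(host):
    print(f"patching {host}")

def health_check(host):
    return random.random() > 0.05      # pretend 5% of patched hosts report problems

def phased_rollout(hosts, canary_fraction=0.05):
    cutoff = max(1, int(len(hosts) * canary_fraction))
    canary, rest = hosts[:cutoff], hosts[cutoff:]

    for host in canary:
        deploy(host)
    if not all(health_check(h) for h in canary):
        print("canary ring unhealthy; halting rollout for investigation")
        return

    for host in rest:                  # widen only after the canary ring passes
        deploy(host)

phased_rollout([f"server-{i:03d}" for i in range(100)])
```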

Here, automated testing environments can help reduce the risk of disruption, says Thomas Richards, associate principal at the Synopsys Software Integrity Group. But not if you have limited visibility into what must be patched in the first place.

“The challenge we often see our customers experience is getting the tools configured properly to scan and patch all the live systems within their organization,” Richards says. “There are a variety of reasons why systems may not be covered by this process, including legacy devices, misconfigurations, shadow IT, and systems that should be decommissioned but remained online.”

Richards concludes: “The most important part of having a patching program is to ensure all systems are covered by it and are being patched regularly.”
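One simple way to act on Richards’ point is to regularly reconcile the asset inventory against what the patching and scanning tools actually report on. The toy sketch below assumes both lists can be exported as hostnames; the names themselves are illustrative.

```python
# Hypothetical coverage check: any system in the asset inventory that the patch
# or scanning tooling has never reported on is a potential blind spot.
# Host names are illustrative; real lists would come from the CMDB and tool exports.

inventory = {"web-01", "web-02", "db-01", "legacy-erp", "branch-printer"}
reported_by_patch_tool = {"web-01", "web-02", "db-01"}

blind_spots = inventory - reported_by_patch_tool
for host in sorted(blind_spots):
    print(f"not covered by patching: {host}")   # legacy devices, shadow IT, forgotten systems
```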

Bridging the cultural divide

Dave Harvey, director of the cyber response team at KPMG UK, says that, in addition to proper prioritization and remediation, a successful patch management strategy depends on “the integration of effective cyber threat intelligence, regular review, and effective collaboration between IT and security teams.”

To that last point, Madeline Lawrence, CBO at Aikido Security, says that engineering teams can often be left feeling “overwhelmed and annoyed” when dealing with security vulnerabilities.

There’s a total mindset disconnect between security teams “taught to consider every possibility” and developers who love “shortcuts and efficiency,” she adds.

“To many developers, the security teams showing up with requests is like chaperones crashing the party,” Lawrence explains. “This fundamental difference in approach and priorities creates significant challenges for organizations trying to get IT operations and security teams to work more closely together in resolving security vulnerabilities.”

“Bridging this gap isn’t just about new tools or processes — it requires addressing the cultural and communication divide between these essential but often misaligned teams,” she says.

At the center of this divide is the fact that IT operations teams prioritize system uptime and performance, while security teams focus on mitigating threats. This tension often leads to conflicts and delays in addressing security issues.

“This challenge is further complicated by the complexity of modern IT environments, which span multiple platforms and make it difficult to maintain visibility and control,” Christiaan Beek, senior director of threat analytics at Rapid7, tells CSO. “It’s also common to see differing risk tolerances between the teams that can lead to disagreements about which vulnerabilities to prioritize, delaying necessary actions.”

To get IT operations, software developers, and security teams on the same page, Qualys’ Livne advises focusing on common goals.

“From a team perspective, look at how you can create shared goals across developer, IT operations, and security teams to work together and deliver better results. Working on common objectives makes it easier to collaborate, communicate and eliminate risks,” he says. “This also improves accountability across all the teams involved, rather than shifting blame between teams, as has happened in the past.”

Pentest People’s Newton adds: “Significant improvements in patching practices can be made by establishing joint ownership of patch delivery between IT and security teams.”

KPMG UK’s Harvey agrees, adding that successful companies infuse secure practices early in their development processes.

“Integrating their security and risk resources into the development process from the start has enabled that improved understanding so that systems are designed and built secure rather than having security applied as an afterthought,” he says.

The bottom line? Data-based decision-making

To understand their risk, enterprises should monitor IT assets in real time, gaining insight into issues across their infrastructure as soon as they arise.

At the same time, not all issues are created equal. Fewer than 1% of the CVEs published this year have been exploited, so it’s best to follow the data and concentrate on the risks that matter to the business.

“This will help you make decisions based on data, and you can communicate around those risks with other teams, too,” Qualys’ Livne says.
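In practice, “following the data” often means cross-referencing open findings against exploitation intelligence, for example a known-exploited-vulnerabilities feed such as CISA’s KEV catalog. The minimal sketch below assumes a locally saved copy of such a feed; the findings list, file name, and CVE IDs are illustrative.

```python
import json

# Hypothetical prioritization pass: flag only the findings whose CVE IDs appear in an
# exploitation feed, such as a locally saved copy of CISA's Known Exploited
# Vulnerabilities catalog. The findings list and the local file name are illustrative.

with open("known_exploited.json") as f:
    exploited = {entry["cveID"] for entry in json.load(f)["vulnerabilities"]}

findings = [
    {"cve": "CVE-2024-11111", "host": "web-01"},   # illustrative IDs only
    {"cve": "CVE-2024-22222", "host": "db-01"},
]

urgent = [f for f in findings if f["cve"] in exploited]
print(f"{len(urgent)} of {len(findings)} open findings are on the exploited list")
```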

John Leyden
Senior Writer

John Leyden is a senior writer for CSO Online. He has written about computer networking and cyber-security for more than 20 years. Prior to the advent of the web, he worked as a crime reporter at a local newspaper in Manchester, UK. John holds an honors degree in electronic engineering from City, University of London.
