Breaking the Patch Sound Barrier: Your Vulnerability Remediation Will Not Keep Up With AI Exploit Speed. So?
Many years ago while at Gartner, I wrote a blog post where I defined the concept of the “Patch Sound Barrier.” (original via Archive if you don’t believe that I was that smart back in 2013 🙂) The idea: there is a maximum speed at which a given organization can fix a given vulnerability. Push the throttle beyond that and the engines will whir louder, but the plane won’t fly any faster.

The discussion arose from people constantly asking about the “optimal” or “desired” speed of patching. In my time as an analyst, I reviewed plenty of policies as well as “operational practices” (which is what people call it when they don’t actually follow their own policy “because reasons” :-)). BTW, I utterly hated “30 days flat” policies that mandate every vulnerability be fixed within 30 days no matter what, and I always steered people toward more nuanced, risk-based policies.
One concept emerged: Given a particular IT environment, there is often a maximum physical speed at which an organization can patch. That is my Patch Sound Barrier.
Why bring this up now? Because the speed of vulnerability discovery is accelerating, and so is the speed of exploit development. But for many organizations, the speed of remediation simply cannot be accelerated. It is not accelerating, because it cannot. Full stop.
In the past, my guidance was to focus on better vulnerability prioritization so that you fix the “real risks” first, using CISA KEV, EPSS, CVSS (OK, maybe not in the 2020s) and various tools that analyze the data and give you a ranked list.
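For the curious, here is what that ranking boils down to in practice. A minimal sketch, assuming you have already pulled KEV membership and EPSS scores for your findings into a list (all the data below is made up for illustration; real tools blend in asset context, reachability, and more):

```python
# Toy prioritization: known-exploited vulns (KEV) outrank everything,
# ties broken by EPSS exploitation probability. Illustrative only.

findings = [  # hypothetical scan output
    {"cve": "CVE-2024-0001", "in_kev": False, "epss": 0.02},
    {"cve": "CVE-2024-0002", "in_kev": True,  "epss": 0.10},
    {"cve": "CVE-2024-0003", "in_kev": False, "epss": 0.91},
]

# Sort by (KEV membership, EPSS score), highest priority first.
ranked = sorted(findings, key=lambda f: (f["in_kev"], f["epss"]), reverse=True)

for f in ranked:
    print(f'{f["cve"]}  KEV={f["in_kev"]}  EPSS={f["epss"]:.2f}')
```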
But today we have ever more vulns, and prioritization tools alone won’t save you. If you have 1,000,000 vulns and 1000 are “risky for you” (however defined; let’s say you have the magical tool that reveals the true and real risk for your organization … ha), you can reduce the risk enough by fixing the 1000, if you have the bandwidth to fix the 1000 (in theory). Now, imagine you have 10 million vulns (thanks, AI!) and say 5000 are risky. But your bandwidth only covers 1000 fixes. So your risk goes up anyway, while you work as hard as before.
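The arithmetic is brutal in its simplicity. A back-of-the-envelope sketch using the illustrative numbers above (your real capacity and vuln counts will, of course, differ):

```python
# Fixed remediation capacity vs. a growing pile of "truly risky" vulns.
# Numbers are the illustrative ones from the post, not real data.

FIX_CAPACITY = 1_000  # risky vulns you can actually fix per cycle: the Patch Sound Barrier

def risky_left_open(risky_found: int, capacity: int = FIX_CAPACITY) -> int:
    """Risky vulns still open after you spend your entire remediation budget."""
    return max(0, risky_found - capacity)

print(risky_left_open(1_000))  # 0    -> yesterday: barely keeping up
print(risky_left_open(5_000))  # 4000 -> today, with AI-found bugs: risk grows anyway
```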
Now, you might say, “Anton, you’re making absolute statements. Surely things are flexible given enough money, enough talented engineers, and these days, enough LLM tokens?”
This is true in theory. But notice I said, “given the IT environment.”
There are definitely methods for accelerating remediation in a modern, beautifully and carefully designed environment (check our podcast episode 109 for those ideas).
But let’s review the scoreboard:
- The speed of vulnerability discovery? Increased.
- The speed of exploit development? Increased.
- The speed of remediation in legacy environments? Unchanged.
OK, some of you might still think “cannot” is too harsh. But people at modern organizations — all DevOps, CI/CD, open source and now AI agents — sometimes cannot comprehend what it takes to deal with a 1990s-era “DBA from Hell” who views his beloved database as a pet, not cattle, and will only allow a patch twice a year on a rigid schedule. Don’t even get me started on OT, or the sea of unpatched edge appliances out there (there are “forti” millions of them, I hear …)
So, yes, I spent years providing recommendations on how to deal with this “vulnerability flood.” This isn’t just about the current fascination with AI; at one point, the “boogeyman” was Metasploit, or something else. Or, as old people told me, SATAN / SANTA in the mid-1990s.
The fact remains: there are more risky vulns than you have the time or capacity to fix. Today. AI can find the bugs in milliseconds, but it still can’t convince a legacy middleware admin to reboot a production server on a Tuesday. Or in July. Or in 2026. Or this freakin’ century …
[…]