Seven Critical Backup Vulnerabilities This Week Prove Self-Managed Infrastructure Is a Liability

Eight CVEs dropped on March 12th for one of the most widely deployed backup platforms in the world. Four of them scored CVSS 9.9 out of 10. The root cause: CRLF injection in configuration parameters for Linux Backup Proxies that lets any domain-authenticated user escalate to system-level code execution.

Read that again. Not an admin. Not someone with backup console credentials. Any user on your Active Directory domain.
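
If CRLF injection sounds abstract, here's the shape of the bug in miniature -- a hypothetical sketch with invented parameter names and an invented config format, illustrating the vulnerability class rather than the vendor's actual code:

```python
# Hypothetical sketch of the CRLF-injection class. The parameter names and
# config format are invented for illustration.

def write_proxy_config(path, hostname):
    # BUG: the user-supplied value is written verbatim. Any "\r\n" or "\n"
    # inside it starts a new line -- i.e., a brand-new config directive.
    with open(path, "w") as f:
        f.write(f"proxy_host={hostname}\n")

# A domain user submits a "hostname" carrying extra directives:
malicious = "backup01\nrun_as=root\npost_job_cmd=/tmp/payload.sh"
write_proxy_config("/tmp/demo.conf", malicious)
print(open("/tmp/demo.conf").read())
# proxy_host=backup01
# run_as=root                     <- injected
# post_job_cmd=/tmp/payload.sh    <- injected: attacker code, elevated context
```

One rejected newline character closes the hole. The hard part is knowing, on disclosure day, that you're running a version that doesn't reject it.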

The affected software runs in 550,000+ organizations. Rapid7 reports that over 20% of their incident response cases this year involved this same backup platform being accessed or exploited. And as of today, roughly 3,000 instances remain directly exposed to the public internet.

That same week, CISA added a separate workflow automation vulnerability (also CVSS 9.9) to the Known Exploited Vulnerabilities catalog, with 24,700+ exposed instances identified by the Shadowserver Foundation. Two critical infrastructure tools. Two near-perfect severity scores. One week.

The gap that actually matters

Here's the number that should keep you up at night: the average time to patch a critical vulnerability across organizations is 38 to 47 days. The average time from disclosure to active exploitation is 14 to 21 days. Do the subtraction: working exploits are circulating for 17 to 33 days before the average organization has the patch installed. And the exploitation side of that gap keeps shrinking.

That's not a technology problem. It's a staffing problem.

At a 50-person company, nobody's job is to monitor CVE feeds at 6pm on a Wednesday. Nobody's testing whether a backup platform patch breaks the restore chain before deploying it to production. Nobody's cross-referencing CISA's KEV catalog against the software inventory.
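
For scale, the cross-referencing part isn't even hard -- it's just a job nobody owns. A minimal sketch (the inventory set here is a hypothetical placeholder; the feed URL is CISA's published KEV feed):

```python
import json
import urllib.request

# CISA's public Known Exploited Vulnerabilities feed.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

# In real life this comes from an asset inventory, not a hard-coded set.
inventory = {"apache", "openssl", "your-backup-vendor"}

with urllib.request.urlopen(KEV_URL) as resp:
    catalog = json.load(resp)

for vuln in catalog["vulnerabilities"]:
    haystack = (vuln["vendorProject"] + " " + vuln["product"]).lower()
    if any(term in haystack for term in inventory):
        print(f"{vuln['cveID']}: {vuln['vulnerabilityName']} "
              f"(remediation due {vuln['dueDate']})")
```

Ten minutes to write. Worthless unless someone runs it, reads it, and acts on it -- every day.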

And that's exactly what attackers count on. 74% of ransomware attacks exploit known vulnerabilities for which a patch was already available. The patches existed. The knowledge existed. The staffing to act on it didn't.

"Authenticated" is doing a lot of heavy lifting

Security advisories love the word "authenticated" because it implies a vulnerability is harder to exploit. In practice, for these four CVSS 9.9 flaws, "authenticated" means any user with basic domain credentials. In most small and mid-size businesses, that's every employee.

Your receptionist's AD login could trigger remote code execution on your backup server. That's what "authenticated" means here.

This distinction matters because it changes the risk calculus. An unauthenticated RCE requires an attacker to find and reach the server. An authenticated one just requires a single phished credential -- something that happens thousands of times a day across every industry.
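
Run the assumed numbers on that. Suppose -- purely for illustration, this is not a measured statistic -- each employee has a 3% chance per year of handing credentials to a phish:

```python
# Back-of-envelope only: the 3% per-employee rate is an assumption
# chosen for illustration.
employees = 50
p_phish = 0.03  # assumed annual credential-compromise rate per person

p_at_least_one = 1 - (1 - p_phish) ** employees
print(f"P(>=1 compromised credential this year) = {p_at_least_one:.0%}")  # ~78%
```

Under those assumptions, an "authenticated-only" RCE is reachable in roughly four years out of five. At SMB scale, "authenticated" is not a mitigation.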

Backup infrastructure is now the primary target

Ransomware operators have figured out something obvious: if you destroy the backups first, the victim has no choice but to pay. Data from the first half of 2025 shows backup systems were compromised in 18% of ransomware incidents. Only 10% of attacked organizations recovered more than 90% of their data. 57% recovered less than half.

The math is brutal. Average ransom payments spiked to $1.13 million in Q2 2025, a 104% increase from Q1. The highest payments came from organizations with no valid backups -- either the backup infrastructure was compromised during the attack, or it was never properly configured in the first place.

This particular backup platform has been patched for critical RCE flaws three times in eighteen months: September 2024, January 2026, and now March 2026. Each time, ransomware groups exploited the previous round's vulnerabilities within weeks. This isn't a one-time event. It's a pattern that demands continuous monitoring -- not quarterly check-ins.

The insurance angle nobody talks about

Most cyber insurance policies now require critical patches within 14 to 30 days. Delayed patching can void coverage entirely. So when the average patch time is 38 to 47 days, organizations aren't just leaving themselves exposed to attackers -- they're potentially voiding the safety net they're paying premiums for.

52% of critical vulnerabilities remain unpatched after 30 days. 63% of patches are delayed because teams fear operational disruption. That fear is rational -- patching a backup server wrong can break your entire recovery chain. But the alternative is worse.
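
The way out isn't skipping the patch; it's making the verification cheap. Here's a sketch of the pattern -- every path, command, and checksum below is hypothetical -- a staging gate that test-restores the latest backup and verifies it before the patch gets promoted:

```python
import hashlib
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical CLI: restore the newest backup to a scratch area in staging.
subprocess.run(["backup-cli", "restore", "--latest", "--to", "/srv/scratch"],
               check=True)

# Checksums recorded at backup time (placeholder values here).
manifest = {"payroll.db": "9f2c...", "configs.tar": "41ab..."}
restore_dir = Path("/srv/scratch")

ok = all(sha256(restore_dir / name) == digest
         for name, digest in manifest.items())
if not ok:
    raise SystemExit("Test restore failed verification -- do NOT promote the patch.")
print("Restore chain intact; patch is safe to promote.")
```

When that gate is green, fear of breaking the recovery chain stops being a reason to sit on a critical patch for 38 days.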

This is a capacity problem, not a knowledge problem

88% of all ransomware data breaches hit small and mid-size businesses, per Verizon's 2025 Data Breach Investigations Report. The median victim organization has 228 employees. Only 14% of SMBs have a cybersecurity plan.

These aren't companies that don't care about security. They're companies that don't have a dedicated ops team watching CVE feeds, testing patches in staging environments, and deploying fixes within the exploitation window. They're running the same enterprise-grade backup software as Fortune 500 companies, without the Fortune 500 security operations center.

The CVE volume is accelerating: an estimated 31,000 to 34,000 new CVEs in 2026, a 21% year-over-year increase -- roughly 85 to 93 new vulnerabilities every single day. No in-house team at a 50-person company can track that. No part-time IT person can maintain patch velocity across backup platforms, web servers, workflow tools, operating systems, and every dependency in between.

What managed infrastructure actually solves

The pitch for managed infrastructure isn't "we're smarter than your team." It's that patch velocity is a function of staffing, tooling, and process -- and those things cost money whether you build them in-house or not.

LTFI runs every client on dedicated, isolated infrastructure. Hardened servers with automated patching, default-deny firewall policies, and continuous monitoring. Not shared hosting where one tenant's missed patch becomes everyone's problem. Dedicated resources where your backup infrastructure, your web stack, and your security posture are someone's actual job -- every day, not just when something breaks.

Zero security incidents across the managed fleet. 30+ automated verification checks per deployment. That's not a sales pitch. That's what happens when infrastructure management is the core business, not a side responsibility for someone who also handles procurement and facilities.
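
Those verification checks aren't exotic; the value is in running the same gate every single time. A toy of the shape -- illustrative stubs only, not LTFI's actual tooling:

```python
# Illustrative stubs: the shape of a deployment gate, not real checks.
# Every check must pass or the deployment is blocked.
checks = {
    "firewall policy is default-deny": lambda: True,      # stub: query firewall state
    "latest backup test-restores cleanly": lambda: True,  # stub: staging restore
    "no critical CVEs pending on this host": lambda: True,  # stub: scanner/KEV diff
}

failed = [name for name, check in checks.items() if not check()]
if failed:
    raise SystemExit(f"Deployment blocked: {failed}")
print(f"All {len(checks)} checks passed; deployment may proceed.")
```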

The 38-day patch gap exists because organizations are asking people to do infrastructure management as their second or third job. When it's your first job -- your only job -- the gap closes.

If your backup platform is running unpatched right now, you already know the answer. Talk to us about your infrastructure.