Comment by BrenBarn
> The practical problem with this is that many large organizations have a security/infosec team that mandates a "zero CVE" posture for all software.
The solution is to fire those teams.
Aren't some of these government regulations for cloud, etc.?
SOC 2/FIPS/HIPAA/etc. don't mandate zero CVEs, but a zero-CVE posture is an easy way to dodge all the paperwork involved in exhaustively documenting why each flagged CVE doesn't actually apply to your specific scenario (and then potentially re-litigating it all in your annual audit).
So it's more of a cost-cutting/cover-your-ass measure than an actual requirement.
There are several layers of translation between public regulations, a company's internal security policy and the processes used to enforce those policies.
Let's say the reg says you're liable for damages caused by software defects you ship due to negligence, giving you broad leeway in how to mitigate risks. The corporate policy then says "CVEs with score X must be fixed in Y days; OWASP best practices; infrastructure audits; MFA; yadda yadda". Finally, enforcement is done by automated tooling like SonarQube, Prisma, Dependabot, Burp Suite, ... and any finding must be fixed with little nuance, because the people running the scans lack the time or expertise to assess whether any particular finding is actually security-relevant.
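To make that concrete, here's a minimal sketch of what "score X must be fixed in Y days" looks like once it's encoded in tooling (thresholds and names are hypothetical, not any real scanner's policy). Note the shape of the rule: there's a slot for the score and the clock, but no slot for whether the finding is actually reachable in your deployment.

```python
from datetime import date, timedelta

# Hypothetical policy-as-code: the kind of blunt rule the tooling enforces.
SLA_DAYS = {
    "CRITICAL": 7,   # CVSS >= 9.0
    "HIGH": 30,      # CVSS >= 7.0
    "MEDIUM": 90,
    "LOW": 180,
}

def severity(cvss_score: float) -> str:
    if cvss_score >= 9.0:
        return "CRITICAL"
    if cvss_score >= 7.0:
        return "HIGH"
    if cvss_score >= 4.0:
        return "MEDIUM"
    return "LOW"

def fix_deadline(cvss_score: float, found: date) -> date:
    # No input for "does this finding actually apply to us?"
    # The deadline is driven by the score alone.
    return found + timedelta(days=SLA_DAYS[severity(cvss_score)])

print(fix_deadline(9.8, date(2024, 1, 15)))  # 2024-01-22, ready or not
```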
On the ground, this automated, inflexible enforcement and the friction it creates then lead devs to choose approaches that won't show up in scans, not necessarily secure ones.
As an example I witnessed recently: a cloud infra scanning tool highlighted that an AppGateway was used as a TLS-terminating reverse proxy, meaning it used HTTP internally. The tool says "HTTP bad", even when it's on an isolated private subnet. But the tool didn't understand Kubernetes clusters, so a public unencrypted ingress, i.e. public plaintext HTTP, didn't show up at all. The former was treated as a critical issue that had to be fixed ASAP or it would get escalated up the management chain. The latter? Nobody cares.
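A toy illustration of that blind spot (entirely made-up rule set, not the actual tool): the scanner has a rule for gateway backends but simply no rule that parses Kubernetes Ingress objects, so the riskier public endpoint produces no finding at all.

```python
# Two resources from the anecdote above, as toy data.
resources = [
    {"kind": "app_gateway_backend", "protocol": "http", "subnet": "private"},
    {"kind": "k8s_ingress", "tls": None, "exposure": "public"},
]

def scan(resource):
    if resource["kind"] == "app_gateway_backend":
        if resource["protocol"] == "http":
            # Flagged even on an isolated private subnet.
            return "CRITICAL: unencrypted backend traffic"
    # No branch for k8s_ingress: public plaintext HTTP is invisible here.
    return None

for r in resources:
    print(r["kind"], "->", scan(r))
# app_gateway_backend -> CRITICAL: unencrypted backend traffic
# k8s_ingress -> None
```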
Another time I got pressure to downgrade from Argon2 to SHA-2 for password hashing because Argon2 wasn't on their whitelist. I resisted the change, but it was a stressful bureaucratic process, with some leadership being very unhelpful and suggesting "can't you just do the compliant thing and stop spending time on this?".
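For anyone wondering why that downgrade was worth resisting: Argon2 is a deliberately slow, salted, memory-hard password hashing function, while bare SHA-2 is a fast digest that invites offline brute force. A minimal sketch using Python's third-party argon2-cffi package (assuming a Python stack; the thread doesn't name one):

```python
# pip install argon2-cffi
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()  # sane defaults: random per-hash salt, tunable memory/time cost

stored = ph.hash("correct horse battery staple")
# Encoded hash embeds algorithm, parameters, and salt,
# e.g. '$argon2id$v=19$m=65536,t=3,p=4$...'

try:
    ph.verify(stored, "correct horse battery staple")  # ok, returns True
    ph.verify(stored, "wrong password")                # raises
except VerifyMismatchError:
    print("rejected")
```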
So I agree with GP that what some security teams do barely correlates with security, sometimes even negatively. A better approach would be to integrate software security engineers into dev teams, but that'd be more expensive and less measurable than "tool says zero CVEs".
Someone has the authority to fix the problem. Maybe it’s them.
This isn’t a serious response. Even if you had the clout to do that, you’d then own dealing with the underlying pressure which led them to require that in the first place. It’s rare that this is someone waking up in the morning and deciding to be insufferable (although you can’t rule that out in infosec); they’re usually responding to requirements added by customers, or by auditors needed to get some kind of compliance status, etc.
What you should do instead is talk with them about SLAs and validation. For example, commit to patching CRITICAL within x days, HIGH within y, etc., but also have a process where those can be cancelled if the bug can be shown not to be exploitable in your environment. Your CISO should be talking about the risk of supply chain attacks and outages caused by rushed updates too, since the latter are pretty common.
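A minimal sketch of what that SLA-plus-waiver process could look like as data (all names and fields hypothetical, not any real tool's schema). The point is that the patching clock is the default, but a finding has room for a documented, approved exception instead of a blanket "fix everything":

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    cve_id: str
    severity: str          # CRITICAL / HIGH / ...
    due: date              # deadline from the severity SLA
    waived: bool = False
    waiver_rationale: str = ""

def waive(finding: Finding, rationale: str, approver: str) -> Finding:
    """Cancel the patching clock when the bug is shown not to be
    exploitable in this environment. The rationale is recorded so the
    annual audit reviews a documented decision, not a silent exception."""
    finding.waived = True
    finding.waiver_rationale = f"{rationale} (approved by {approver})"
    return finding

f = Finding("CVE-2024-0001", "HIGH", date(2024, 3, 1))
waive(f, "vulnerable code path requires a config flag we never enable",
      "ciso@example.com")
print(f.waived, "-", f.waiver_rationale)
```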