Comment by the8472
There are several layers of translation between public regulations, a company's internal security policy and the processes used to enforce those policies.
Let's say the reg says you're liable for damages caused by software defects you ship due to negligence, giving you broad leeway in how to mitigate risks. The corporate policy then says "CVEs with score X must be fixed in Y days; OWASP best practices; infrastructure audits; MFA; yadda yadda". Finally, enforcement is done by automated tooling like SonarQube, Prisma, Dependabot, Burp Suite, ... and any finding must be fixed with little nuance, because the people running the scans lack the time or expertise to assess whether any particular finding is actually security-relevant.
On the ground, the automated, inflexible enforcement and the resulting friction lead devs to choose approaches that won't show up in scans, not necessarily secure ones.
As an example I witnessed recently: a cloud infra scanning tool flagged an AppGateway used as a TLS-terminating reverse proxy, meaning it spoke HTTP internally. The tool says "HTTP bad", even when that traffic stays on an isolated private subnet. But the tool didn't understand Kubernetes clusters, so a public unencrypted ingress, i.e. actual public HTTP, didn't show up at all. The former was treated as a critical issue that must be fixed ASAP or else it gets escalated up the management chain. The latter? Nobody cares.
Another time I got pressure to downgrade from Argon2 to SHA2 for password hashing because Argon2 wasn't on their whitelist. I resisted that change but it was a stressful bureaucratic process with some leadership being very unhelpful and suggesting "can't you just do the compliant thing and stop spending time on this?".
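To make the Argon2-vs-SHA2 point concrete, here's a minimal sketch (my own illustration, not from the original incident) of why a plain fast hash is the wrong tool for passwords. I'm using stdlib scrypt as a stand-in for Argon2, since Argon2 needs a third-party package like argon2-cffi; both are memory-hard KDFs, which is exactly the property SHA2 lacks. The password and parameters are arbitrary examples.

```python
import hashlib
import os
import time

password = b"hunter2"          # illustrative only
salt = os.urandom(16)

# Plain SHA-256: a single fast compression pass. An offline attacker
# with GPUs can test billions of candidate passwords per second, so
# even 100k hashes finish almost instantly here.
t0 = time.perf_counter()
for _ in range(100_000):
    hashlib.sha256(salt + password).digest()
sha256_time = time.perf_counter() - t0

# Memory-hard KDF (scrypt standing in for Argon2): each single guess
# costs roughly 128 * n * r bytes of RAM (~16 MiB with these
# parameters) plus noticeable CPU time, which cripples GPU/ASIC
# brute force in a way raw SHA-2 never can.
t0 = time.perf_counter()
digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
kdf_time = time.perf_counter() - t0

print(f"100k SHA-256 hashes: {sha256_time:.4f}s")
print(f"1 scrypt hash:       {kdf_time:.4f}s")
```

The asymmetry is the whole point: one "compliant" SHA-2 hash is cheaper for the attacker than it is for you, while a tuned memory-hard KDF makes every guess expensive for the attacker at a cost you pay only once per login.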
So I agree with GP that some security teams barely correlate with security, sometimes going into the negative. A better approach would be to integrate software security engineers into dev teams, but that'd be more expensive and less measurable than "tool says zero CVEs".