Comment by alganet a day ago
Something about this sounds fishy. Have these bugs really been found by AI? (I don't think they were.)
If you read the "whitepaper" from Corgea (one of the products used), it seems that AI is not the main show:
> BLAST addresses this problem by using its AI engine to filter out irrelevant findings based on the context of the application.
It seems that AI is being used to post-process the findings of traditional analyzers: it reduces the number of false positives, improving the signal from the more traditional analyzers that actually performed the scan.
Zeropath seems to use similar wording, like "AI-Enabled Triage" and expressions such as "combining Large Language Models with AST analysis". It also highlights that it achieves fewer false positives.
I would expect someone who developed this kind of thing to set up a feedback loop in which the AI output is somehow used to improve the static analysis tool (writing new rules, tweaking existing ones, ...). It seems like the logical next step. This might already be happening in these products as well (lots of in-house rule extensions for more traditional static analysis tools, written or discovered with the help of AI, hence the "build with AI" headline in some of them).
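The feedback loop I'm imagining could be sketched roughly like this: aggregate the LLM's triage verdicts per analyzer rule, and demote rules whose findings are mostly judged false positives. The rule IDs, verdict labels, and 0.5 threshold below are all illustrative assumptions, not anything these products actually do.

```python
from collections import defaultdict

# Illustrative (rule_id, LLM verdict) pairs; in a real loop these would
# come from the AI triage step over many scans.
verdicts = [
    ("python.lang.security.insecure-hash", "TRUE_POSITIVE"),
    ("python.lang.maintainability.unused-import", "FALSE_POSITIVE"),
    ("python.lang.maintainability.unused-import", "FALSE_POSITIVE"),
    ("python.lang.maintainability.unused-import", "TRUE_POSITIVE"),
]

stats = defaultdict(lambda: [0, 0])  # rule_id -> [false_positives, total]
for rule_id, verdict in verdicts:
    stats[rule_id][1] += 1
    if verdict == "FALSE_POSITIVE":
        stats[rule_id][0] += 1

# Rules a human (or the AI) might then disable, downgrade, or rewrite
# in the static analyzer's configuration.
noisy_rules = sorted(
    rule for rule, (fp, total) in stats.items() if fp / total > 0.5
)
print(noisy_rules)  # ['python.lang.maintainability.unused-import']
```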
Don't get me wrong, this is cool. Getting an AI to triage a verbose static analysis report makes sense. However, it does not mean that AI found the bugs. In this model, the ability to find relevant issues is still capped by the static analysis tools.
I wonder if we need to pay for it. Now that I know it is possible (at least in my head), it seems tempting to take open source tools, set them to maximum verbosity, and figure out which prompts to feed (likely vanilla) coding models to get them to triage the output.
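A minimal sketch of that DIY idea: take a verbose findings report (shaped here like Semgrep's JSON output, as an assumption) and turn it into a triage prompt for a coding model. The prompt wording is my own guess, not any vendor's; the actual LLM call is left out since it depends on whichever API you use.

```python
import json

# Illustrative findings, shaped like a static analyzer's JSON report.
SAMPLE_FINDINGS = json.dumps({
    "results": [
        {"check_id": "python.lang.security.insecure-hash",
         "path": "app/auth.py", "start": {"line": 42},
         "extra": {"message": "MD5 used for password hashing"}},
        {"check_id": "python.lang.maintainability.unused-import",
         "path": "app/util.py", "start": {"line": 3},
         "extra": {"message": "Unused import 'os'"}},
    ]
})

def build_triage_prompt(findings_json: str) -> str:
    """Flatten a verbose findings report into a triage prompt."""
    findings = json.loads(findings_json)["results"]
    lines = [
        f"- {f['check_id']} at {f['path']}:{f['start']['line']}: "
        f"{f['extra']['message']}"
        for f in findings
    ]
    return (
        "You are a security triage assistant. For each finding below, "
        "label it TRUE_POSITIVE or FALSE_POSITIVE given the application "
        "context, and briefly justify your answer:\n" + "\n".join(lines)
    )

prompt = build_triage_prompt(SAMPLE_FINDINGS)
print(prompt)
# The prompt would then be sent to whatever (likely vanilla) coding model
# you have access to.
```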
Hi there, I'm Ahmad, CEO at Corgea, and the author of the white paper. We actually use LLMs to find the vulnerabilities AND triage findings. For the majority of our scanning, we don't use traditional static analysis. At the core of our engine is the LLM reading the lines of code to find CWEs in them.