Comment by bArray 6 days ago
LLMs for code review, rather than code writing/design, could be the killer feature. I think code review has been broken for a while now, and this could be a way forward. Of particular interest would be security, undefined behaviour, basic misuse of language features, double-checking compiler warnings against the source code to ensure they aren't something more serious, etc.
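That last idea could be wired up today. Here's a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the gcc flags, model name, and prompt wording are all illustrative placeholders, not a recommended setup:

    import pathlib
    import subprocess
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def review_warnings(source_file: str) -> str:
        # Compile with warnings enabled and capture the compiler's stderr.
        build = subprocess.run(
            ["gcc", "-Wall", "-Wextra", "-c", source_file, "-o", "/dev/null"],
            capture_output=True, text=True,
        )
        source = pathlib.Path(source_file).read_text()
        prompt = (
            "Below is a C file and the warnings gcc emits for it. For each "
            "warning, say whether it likely indicates a real bug (undefined "
            "behaviour, security issue) or is benign, and why.\n\n"
            f"Warnings:\n{build.stderr}\n\nSource:\n{source}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

Even with an imperfect hit rate this seems workable, since the verdicts only rank warnings for a human rather than gating anything automatically.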
My current use of LLMs is typically via a search engine when trying to get information about an error. The hit rate is maybe 50%, which is okay because I'm usually asking about an edge case.
ChatGPT is great for debugging common issues that have been written about extensively on the web (before the training cutoff). It's a synthesizer of Stack Overflow and greatly cuts down on the time it takes to figure out what's going on compared with searching for discussions and reading them individually.
(This IP rightly belongs to the Stack Overflow contributors and is licensed to Stack Overflow. It ought to be those parties who are exploiting it. I have mixed feelings about participating as a user.)
However, the LLM output is also noisy because of hallucinations; it's just less noisy than web searching.
I imagine that an LLM could assess a codebase and find common mistakes, problematic function/API invocations, etc. However, there would also be a lot of false positives. Are people using LLMs that way?
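As a concrete shape for that, here is a minimal sketch of a pre-filter-then-triage approach, again assuming the OpenAI Python SDK; the risky-call pattern list, model name, and prompt are all hypothetical, and the output still needs a human pass precisely because of the false positives:

    import pathlib
    import re
    from openai import OpenAI

    client = OpenAI()
    # Cheap first pass: flag classically risky C calls before involving the LLM.
    RISKY = re.compile(r"\b(strcpy|sprintf|gets|system)\s*\(")

    def triage(repo: str) -> list[tuple[pathlib.Path, int, str]]:
        findings = []
        for path in pathlib.Path(repo).rglob("*.c"):
            lines = path.read_text(errors="ignore").splitlines()
            for i, line in enumerate(lines):
                if not RISKY.search(line):
                    continue
                # Give the model a window of surrounding code, not the whole file.
                context = "\n".join(lines[max(0, i - 10):i + 10])
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder model name
                    messages=[{
                        "role": "user",
                        "content": (
                            "Is the flagged call below a real problem or a "
                            "false positive? Answer PROBLEM or OK, then one "
                            f"sentence of justification.\n\n{context}"
                        ),
                    }],
                )
                findings.append((path, i + 1, resp.choices[0].message.content))
        return findings  # a reviewer still triages these

The cheap regex pass keeps cost down and the LLM call count bounded; the model's job is only to demote obvious false positives, not to certify anything as safe.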