Comment by flohofwoe a day ago
This is exactly what I'd want from an 'AI coding companion'.
Don't write or fix the code for me (thanks but I can manage that on my own with much less hassle), but instead tell me which places in the code look suspicious and where I need to have a closer look.
When I ask Claude to find bugs in my 20kloc C library, it more or less just splits the file(s) into smaller chunks, greps for specific code patterns, and in the end gives me a list of my own FIXME comments (lol), which tbh is quite underwhelming - a simple bash script could do that too.
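To illustrate, here's roughly the kind of 'bug report' I mean, as an untested one-liner sketch (assuming GNU grep):

    # hypothetical stand-in for the LLM's output: just list
    # every FIXME/TODO/HACK marker in the C sources
    grep -rnE --include='*.c' --include='*.h' 'FIXME|TODO|HACK' .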
ChatGPT is even less useful, since it basically just spends a lot of time telling me 'everything's looking great, yay, good job, high-five!'.
So far, traditional static code analysis has been much more helpful in finding actual bugs. But a clean static analysis run doesn't mean there are no logic bugs, and this is exactly where LLMs should be able to shine.
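A made-up example of the kind of bug I mean - this is valid C, compiles without warnings and sails through static analysis, but anyone who knows the intent should flag it immediately:

    /* intended to clamp v into [lo, hi], but the two
       out-of-range return values are swapped */
    int clamp(int v, int lo, int hi) {
        if (v < lo) return hi;  /* should be lo */
        if (v > hi) return lo;  /* should be hi */
        return v;
    }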
If getting useful potential-bug information out of LLMs requires an extensively customized setup, the whole idea becomes much less attractive. It's a similar situation with static code analysis: it doesn't get used if it requires extensive setup or manual build-system integration instead of just being a button or menu item in the IDE, or enabled by default for each build.
This is a point I see discussed surprisingly little. Given that many (most?) programmers like designing and writing code (excluding boilerplate) and don't particularly enjoy reviewing it, it certainly feels backwards to make the AI write the code and relegate the programmer to reviewing it. (I know, of course, that the whole thing is being sold to stakeholders as "LoC machine goes brrrr" - code review? what's that?)