Comment by ruszki
Some gave. Some even recorded it and showed it, because they thought they were good with it. But they weren't good at all.
If you wanted to keep quality, they were slower than coding by hand. Some were almost as quick as copy-pasting from the code just above the generated one, but their quality was worse. They even let bugs slip through their own reviews.
So the "different world" is probably about what counts as an acceptable level of quality. I know a lot of coders who don't give a shit whether what they're doing makes sense, or what their bad solution will cause in the long run. They ignore everything except the "done" state next to their tasks in Jira. They will never solve complex bugs; they simply don't care enough. At a lot of places, they are the majority. For them, an LLM can be an improvement.
Claude Code the other day made a test for me which mocked out everything from the live code. Everything was green, everything was good. On paper. A lot of people simply wouldn't bother to review it properly. That thing can generate a few thousand lines of semi-usable code per hour; nobody is set up to review that properly. Serena MCP, for example, is specifically built not to review what it does. Its creators state this themselves.
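To make the anti-pattern concrete (a hypothetical sketch, not the actual generated test; all names here are invented): when a test stubs out the very function it is supposed to exercise, it stays green no matter what the live code does.

```typescript
// Live code. Imagine it had a bug, e.g. pct is never validated or clamped.
function applyDiscount(price: number, pct: number): number {
  return price - price * pct;
}

// The generated test replaces the unit under test with a stub...
const applyDiscountMock = (_price: number, _pct: number): number => 90;

// ...and then asserts against the stub, not the real function.
const result = applyDiscountMock(100, 0.1);
console.assert(result === 90, "discount applied");
// The assertion passes even if applyDiscount is completely broken,
// because the real implementation is never called. Green on paper.
```

A proper test would call `applyDiscount` directly and only mock dependencies outside the unit under test.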
Honestly, I think LLMs really shine when you're first getting into a language.
I just recently got into JavaScript and TypeScript, and being able to ask the LLM how to do something and get sources and linked examples is really nice.
However, using it in a language I'm much more familiar with really decreases its usefulness. Even more so when your code base is mid to large sized.