spacechild1 2 days ago

> Importantly, there is no need to trust the LLM or review its output when its job is just saving me an hour or two by telling me where the bug is, for me to reason about it and fix it.

Except they regularly come up with "explanations" that are completely bogus and may actually waste an hour or two. Don't get me wrong, LLMs can be incredibly helpful for identifying bugs, but you still have to keep a critical mindset.

danielbln 2 days ago

OP said "for me to reason about it", not for the LLM to reason about it.

I agree though, LLMs can be incredible debugging tools, but they are also incredibly gullible and love to jump to conclusions. The moment you turn your own fleshy brain off is when they go to la-la land.

  • spacechild1 a day ago

    > OP said "for me to reason about it", not for the LLM to reason about it.

    But that's what I meant! Just recently I asked an LLM about a weird backtrace and it pointed me to the supposed source of the issue. It sounded reasonable and I spent 1-2 hours researching the issue, only to find out it was a total red herring. Without the LLM I wouldn't have gone down that road in the first place.

    (But again, there have been many situations where the LLM did point me to the actual bug.)

    • danielbln a day ago

      Yeah that's fair, I've been there before myself. It doesn't help when it throws "This is the smoking gun!" at you. I've started using subagents more, specifically a subagent that shells out to codex. This way I can have Claude throw a problem over to GPT5 and both can come to a consensus. It doesn't completely prevent wild goose chases, but it helps a lot.

      I also agree that far more often the LLM is like a bloodhound leading me to the right thing (which makes it all the more annoying the few times it chases a red herring instead).
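
For anyone wondering what "a subagent that shells out to codex" might look like in practice, here is a minimal sketch of the pattern, not OP's actual setup: a small wrapper the primary agent can call as a tool, which runs a second model's CLI non-interactively on the same problem and returns its independent read for comparison. The `codex exec` invocation, the timeout, and the input file name are assumptions for illustration; check what your CLI actually supports.

```python
import subprocess


def second_opinion(prompt: str, cli: str = "codex") -> str:
    """Run a second model's CLI non-interactively and return its answer.

    Assumes an invocation like `codex exec "<prompt>"`; adjust to whatever
    non-interactive mode your CLI actually provides.
    """
    result = subprocess.run(
        [cli, "exec", prompt],   # assumed non-interactive subcommand
        capture_output=True,
        text=True,
        timeout=300,             # don't let the second opinion hang forever
    )
    if result.returncode != 0:
        raise RuntimeError(f"{cli} failed: {result.stderr.strip()}")
    return result.stdout.strip()


if __name__ == "__main__":
    # Hypothetical usage: get an independent read on a backtrace the primary
    # agent is already analyzing, then compare the two hypotheses yourself.
    with open("crash_backtrace.txt") as f:   # hypothetical input file
        backtrace = f.read()
    print(second_opinion(
        "Here is a backtrace. Name the most likely root cause, say how "
        "confident you are, and flag it clearly if the evidence is ambiguous:\n\n"
        + backtrace
    ))
```

The point of the pattern is that the two models reach their conclusions independently, so agreement carries some signal and disagreement is a prompt for the human to look closer rather than trust either one.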