Comment by lordnacho 2 days ago

I'm not surprised it worked.

Before I used Claude, I would be surprised.

I think it works because Claude takes some standard coding issues and systematizes them. The list is long, but Claude doesn't run out of patience the way a human being does; or at least it has some credulity left after a few initial failed hypotheses. This being a cryptography problem helps a little, in that there are very specific keywords that might hint at a solution, but from my skim of the article it seems like it was mostly a good old-fashioned coding error: taking the high bits twice.

The standard issues are just a vague laundry list:

- Are you using the data you think you're using? (Bingo for this one)

- Could it be an overflow?

- Are the types right?

- Are you calling the function you think you're calling? Check internal, then external dependencies.

- Is there some parameter you didn't consider?

And a bunch of others. When I ask Claude for a debug, it's always something that makes sense as a checklist item, but I'm often impressed by how it diligently followed the path set by the results of the investigation. It's a great donkey, really takes the drudgery out of my work, even if it sometimes takes just as long.
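To make that first checklist item concrete, here is a hypothetical sketch of the kind of "high bits taken twice" bug described above. The function names and the 32-bit serialization scenario are invented for illustration; the article's actual code is not reproduced here.

```python
def serialize_buggy(value: int) -> bytes:
    """Serialize a 32-bit value big-endian, byte by byte.

    Bug: one shift amount is wrong, so the high byte is emitted twice
    and a lower byte never makes it into the output. The code still
    "works" in the sense that it produces four bytes, which is why this
    class of bug is easy to miss.
    """
    return bytes([
        (value >> 24) & 0xFF,
        (value >> 24) & 0xFF,  # should be (value >> 16) & 0xFF
        (value >> 8) & 0xFF,
        value & 0xFF,
    ])


def serialize_fixed(value: int) -> bytes:
    """The intended behavior: all four distinct bytes, high to low."""
    return value.to_bytes(4, "big")


v = 0x12345678
print(serialize_buggy(v).hex())  # 12125678 -- high byte duplicated, 0x34 lost
print(serialize_fixed(v).hex())  # 12345678
```

A quick comparison of the two outputs on a known test vector is exactly the sort of "are you using the data you think you're using?" check that catches this.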

vidarh 2 days ago

> The list is long, but Claude doesn't run out of patience like a human being does

I've flat out had Claude tell me its task was getting tedious, and it will often grasp at straws for excuses to stop a repetitive task and move on to something else.

Keeping it on task is easy when something keeps moving forward, but when the work gets repetitive it takes a lot of effort to make it stick with it.

  • 8note 2 days ago

    I've been getting that experience with both claude-code and gemini, but not with cline and qcli. I wonder why.

    • danielbln a day ago

      Different system prompts. Also, Claude Code is aware of its own context and will sometimes try to simplify or take a shortcut as the context nears exhaustion.

      Some global rules will generally keep it on track, though: telling it to ask me before it simplifies or gives up. I also frequently ask it to ask me clarifying questions, which generally helps keep it chugging in the right direction and uncovers gaps in its understanding.

ay 2 days ago

> Claude doesn't run out of patience like a human being does.

It very much does! I had a debugging session with Claude Code today, and it was about to give up with a message along the lines of "I am sorry I was not able to help you find the problem".

It took some gentle cheering (pretty easy: just saying "you are doing an excellent job, don't give up!") and encouragement, plus a couple of suggestions from me on how to approach the debugging process, for it to continue. Finally "we" (I use the plural because some information Claude "volunteered" was essential to my understanding of the problem) were able to figure out the root cause and the fix.

  • lordnacho 2 days ago

    That's interesting; that has only happened to me with GPT models in Cursor. It would apologize profusely.

  • ericphanson 2 days ago

    Claude told me it had stopped debugging because it would run out of tokens in its context window. I asked how many tokens it had left, and it said it actually had plenty, so it could continue. Then it stopped again and, without my asking about tokens, wrote:

      Context Usage
      • Used: 112K/200K tokens (56%)
      • Remaining: 88K tokens
      • Sufficient for continued debugging, but fresh session recommended for clarity

    lol. I said ok use a subagent for clarity.