Comment by Kim_Bruning 2 days ago
> Isn’t “instruction following” the most important thing you’d want out of a model in general,
No. And for the same reason that pure "instruction following" in humans (working to rule, a.k.a. malicious compliance) is considered a form of protest/sabotage.
I don’t understand the point you’re trying to make. LLMs are not humans.
From my perspective, what I want from an LLM (at least for writing code) is that it shouldn't assume anything: it should follow the instructions faithfully and ask the user for clarification if there is ambiguity in the request.
I find it extremely annoying when the model pushes back / disagrees, instead of asking for clarification. For this reason, I’m not a big fan of Sonnet 4.5.
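For what it's worth, that "don't assume, ask first" behaviour can at least be nudged via the system prompt. A minimal sketch, assuming the Anthropic Python SDK; the model id and the example request are just placeholders, and this is only a nudge, not a guarantee:

```python
import anthropic

# System prompt encoding the behaviour described above:
# follow instructions literally, don't fill gaps, ask before assuming.
SYSTEM = (
    "Follow the user's instructions exactly as written. "
    "Do not fill gaps in the request with your own assumptions. "
    "If anything is ambiguous or underspecified, stop and ask a "
    "clarifying question before writing any code."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; substitute whatever model you actually use
    max_tokens=1024,
    system=SYSTEM,
    messages=[{"role": "user", "content": "Add retry logic to the upload function."}],
)
print(response.content[0].text)
```

In my experience a prompt like this shifts the model toward asking questions on underspecified requests, but it still won't stop a model that is tuned to push back from doing so.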