Comment by stingraycharles 2 days ago
I don’t understand the point you’re trying to make. LLMs are not humans.
From my perspective, the whole issue with LLMs (at least for writing code) is that they shouldn't assume anything: they should follow the instructions faithfully and ask the user for clarification when there is ambiguity in the request.
I find it extremely annoying when the model pushes back / disagrees, instead of asking for clarification. For this reason, I’m not a big fan of Sonnet 4.5.
Full instruction-following looks like a monkey's paw / malicious compliance. A good way to eliminate a bug from a codebase is to delete the codebase, that sort of thing. You want the model to have enough creative freedom to solve the problem; otherwise you're just coding in an imprecise language spec.
I know what you mean: a lot of my prompts include "never use em-dashes," but all models forget this sooner or later. In other circumstances, though, I do want it to push back on what I'm asking: "I can implement what you're asking, but I just want to confirm that you're OK with this feature introducing an SQL injection vulnerability into this API endpoint."
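For anyone unfamiliar with the kind of thing the model should flag there, here's a rough, purely hypothetical sketch (the Flask route, database file, and table names are made up): the literal request interpolates user input straight into SQL, and the pushback would be to suggest a parameterized query instead.

```python
# Hypothetical Flask endpoint, for illustration only.
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/users")
def get_user():
    name = request.args.get("name", "")
    conn = sqlite3.connect("app.db")

    # What was literally asked for: user input concatenated into SQL.
    # A name like "x' OR '1'='1" returns every row -- classic SQL injection.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"
    ).fetchall()

    # What the model ought to push back with: a parameterized query.
    safe_rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

    return {"rows": [list(r) for r in safe_rows]}
```

The point isn't the specific fix; it's that a model confirming "are you sure you want the first version?" is more useful than one that either silently complies or silently rewrites the request.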