Comment by simonw 3 days ago

I think the asking clarifying questions thing is solved already. Tell a coding agent to "ask clarifying questions" and watch what it does!

nightski 3 days ago

Obviously if you instruct the autocomplete engine to fill in questions, it will. That's not the point. The LLM has no model of the problem it is trying to solve, nor does it attempt to understand the problem better. It is merely regurgitating. This can be extremely useful, but it is very limiting when it comes to using it as an agent to write code.

  • wrs 3 days ago

    You can work with the LLM to write down a model for the code (aka a design document) that it can then repeatedly ingest into the context before writing new code. That's what "plan mode" is for. The technique of maintaining a design document and a plan/progress document that get updated after each change seems to make a big difference in keeping the LLM on track. (Which makes sense… exactly the same thing works for human team members too.)
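
    A minimal sketch of that loop (the file names DESIGN.md / PLAN.md and the prompt layout are my own illustration of the technique, not any particular tool's API):

    ```python
    from pathlib import Path

    def build_prompt(task: str) -> str:
        """Prepend the standing design and plan docs to every request."""
        design = Path("DESIGN.md").read_text()  # hypothetical design document
        plan = Path("PLAN.md").read_text()      # hypothetical plan/progress doc
        return (
            f"## Design document\n{design}\n\n"
            f"## Plan and progress\n{plan}\n\n"
            f"## Task\n{task}\n\n"
            "After finishing, update PLAN.md to reflect what changed."
        )
    ```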

    • habinero 2 days ago

      Every time I hear someone say something like this, I think of the pigeons in the Skinner box who developed quirky superstitious behavior when pellets were dispensed at random.

    • troupo 2 days ago

      > that it can then repeatedly ingest into the context

      1. Context isn't infinite

      2. Both Claude and OpenAI models get increasingly dumb after 30-50% of the context has been filled
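
      As a rough illustration of working around point 2 (mine, not from the thread): compact or re-summarize the history before crossing that threshold, using a crude ~4-characters-per-token estimate:

      ```python
      CONTEXT_WINDOW = 200_000  # advertised window size, in tokens
      SOFT_LIMIT = 0.4          # degradation reportedly starts around 30-50% fill

      def approx_tokens(text: str) -> int:
          # Crude heuristic; real tokenizers vary by model and language.
          return len(text) // 4

      def should_compact(history: list[str]) -> bool:
          used = sum(approx_tokens(msg) for msg in history)
          return used > CONTEXT_WINDOW * SOFT_LIMIT
      ```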

      • wrs 20 hours ago

        Not sure how that's relevant... I haven't seen many design documents of infinite size.

        • troupo 12 hours ago

          "Infinite" is a handy shortcut for "large enough".

          Even the "million token context window" becomes useless once it's filled to 30-50% and the model starts "forgetting" useful things like existing components, utility functions, AGENTS.md instructions, etc.

          Even a junior programmer can search and remember instructions and parts of the codebase. All current AI tools have to be reminded to recreate the world from scratch every time, and promptly forget random parts of it.

  • subjectivationx 2 days ago

    I think at some point we will stop pretending we have real AI. We have a breakthrough in natural language processing, but LLMs are much closer to Microsoft Word than to something as fantastical as "AGI". We don't blame Microsoft Word for not having a model of what is being typed into it. It would be great if Microsoft Word could model the world and just do all the work for us, but that is a science fiction fantasy.

    To me, LLMs in practice are largely massively compute-inefficient search engines plus really good language disambiguation. Useful, but we have actually made no progress at all towards "real" AI. This is especially obvious if you ditch "AI" and call it artificial understanding. We have nothing.

danielbln 3 days ago

I've added "amcq means ask me clarifying questions" to my global Claude.md so I can spam "amcq" at various points in time, to great avail.
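
A sketch of what such a global CLAUDE.md entry might look like (the exact wording is my guess; the comment only paraphrases it):

```markdown
## Shorthands

- `amcq`: ask me clarifying questions about the current task, and wait for
  my answers before writing any code.
```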