jamalaramala a day ago

> Even more concerning was Devin’s tendency to press forward with tasks that weren’t actually possible. (...)

> Devin spent over a day attempting various approaches and hallucinating features that didn’t exist.

One of the big problems of GenAI is its inability to know what it doesn't know.

Because of that, it doesn't ask clarifying questions.

Humans, in the same situation, would spend a lot of time learning before they could be truly productive.

iLoveOncall a day ago

Your statement is factually wrong: Claude 3.5v2 asks clarifying questions when needed "natively", and you can add similar instructions to the prompt for any model.
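
For example, a minimal sketch of that kind of instruction using the Anthropic Python SDK (the model snapshot and prompt wording here are illustrative assumptions, not a quote of anyone's actual setup):

    # Sketch: a system prompt that tells the model to ask rather than guess.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed to be the "3.5v2" snapshot meant above
        max_tokens=1024,
        system=(
            "If a request is ambiguous, underspecified, or impossible, do not "
            "guess and do not invent features. Ask a clarifying question first."
        ),
        messages=[{"role": "user", "content": "Wire our app up to the payments API."}],
    )
    print(response.content[0].text)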

  • sitkack a day ago

    The default system prompts are tuned for the naive case. LLMs, being all-purpose text-handling tools, can be reprogrammed for any behavior you wish. This is the crux of skilled use of LLMs.

    The better the LLMs get, the worse the average prompt quality.

    • baobabKoodaa a day ago

      Yep. It's fairly trivial to prompt an LLM to say "I don't know" when it doesn't know something.
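
      A minimal sketch of such a prompt (the wording is illustrative, not from any model's documentation):

          You may only answer from the provided context or from facts you
          are certain of. If you don't know, reply exactly "I don't know"
          instead of guessing.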