Comment by AdieuToLogic 11 hours ago

>> Support engineer ran customer query through Claude (trained on our public and internal docs) and it very, very confidently made a bunch of stuff up in the response.

> Yeah, LLMs are not really good about things that can't be done.

From the GP's description, this situation was not a case of "things that can't be done", but rather a statistically generated document producing exactly the result one should expect:

  It was quite plausible sounding and it would have been 
  great if it worked that way, but it didn't.

Comment by verdverm 3 hours ago

The core issue is likely not with the LLM itself. Given sufficient instructions, purposeful agents, and good grounding context, a DAG of such agents will not consistently produce results this wrong.

There are a lot of devils in the details, and few of them are in the story.
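
To make the claim concrete, here is a minimal, hypothetical sketch of what a "DAG of purposeful agents" with grounding context might look like: a retrieval step supplies source passages, a drafting step answers only from those passages, and a verification step flags claims the passages do not support. All names here (`retrieve`, `draft`, `verify`, `run_dag`) are illustrative assumptions, not anything described in the thread, and `call_llm` is a stub standing in for whatever model API is actually in use.

    from typing import Callable, Dict, List

    # Hypothetical stand-in for a real model call; in practice this would
    # send the prompt to an LLM API and return its text response.
    def call_llm(prompt: str) -> str:
        return f"[model response to: {prompt[:60]}...]"

    def retrieve(state: Dict) -> Dict:
        # Grounding step: pull relevant passages from internal docs.
        # Faked here with a static lookup instead of a real search index.
        corpus = {
            "feature-x": "Feature X only supports batch export, not streaming.",
        }
        state["passages"] = [corpus.get("feature-x", "")]
        return state

    def draft(state: Dict) -> Dict:
        # Drafting agent: instructed to answer *only* from the passages,
        # and to say "not documented" when the passages are silent.
        prompt = (
            "Answer using only these passages; if they do not cover the "
            "question, reply 'not documented'.\n"
            f"Passages: {state['passages']}\nQuestion: {state['question']}"
        )
        state["draft"] = call_llm(prompt)
        return state

    def verify(state: Dict) -> Dict:
        # Verification agent: checks the draft against the same passages
        # and flags unsupported claims instead of letting them through.
        prompt = (
            "Does this draft make any claim not supported by the passages? "
            f"Passages: {state['passages']}\nDraft: {state['draft']}"
        )
        state["verdict"] = call_llm(prompt)
        return state

    # The DAG: each node runs after its dependencies. This example is a
    # simple chain, but nothing prevents fan-out (multiple retrievers) or
    # fan-in (a judge over several drafts).
    DAG: List[Callable[[Dict], Dict]] = [retrieve, draft, verify]

    def run_dag(question: str) -> Dict:
        state: Dict = {"question": question}
        for node in DAG:
            state = node(state)
        return state

    if __name__ == "__main__":
        result = run_dag("Can Feature X stream exports in real time?")
        print(result["draft"])
        print(result["verdict"])

Whether a pipeline like this actually prevents the confident fabrication described above depends entirely on the quality of the retrieved passages and the verification step, which is where those devils in the details live.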