Comment by scotty79 14 hours ago

Yeah, LLMs are not really good at handling things that can't be done.

At some point you'll be better off just implementing the features they hallucinated. Some people with public APIs have already taken this approach.

AdieuToLogic 12 hours ago

>> Support engineer ran customer query through Claude (trained on our public and internal docs) and it very, very confidently made a bunch of stuff up in the response.

> Yeah, LLMs are not really good at handling things that can't be done.

From the GP's description, this was not a case of "things that can't be done"; it was a case of a statistically generated document producing exactly the result one should expect:

  It was quite plausible sounding and it would have been 
  great if it worked that way, but it didn't.
  • verdverm 4 hours ago

    The core issue is likely not with the LLM itself. Given sufficient grounding context, instructions, and purposeful agents, a DAG of these will not produce such consistently wrong results.

    There are a lot of devils in the details, and the story gives few of them.
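
    A minimal sketch of the kind of grounding the parent is describing, assuming a generic chat-completion setup; retrieve_docs and llm_complete are hypothetical stand-ins, not any particular vendor's API:

      # Hypothetical sketch only: ground the model in retrieved docs and
      # instruct it to refuse rather than invent. Names are illustrative.
      def answer_with_grounding(question, retrieve_docs, llm_complete):
          docs = retrieve_docs(question)  # e.g. top-k chunks of internal docs
          context = "\n\n".join(d.text for d in docs)
          prompt = (
              "Answer ONLY from the context below. If the context "
              "does not cover the question, say you don't know.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}"
          )
          return llm_complete(prompt)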

131hn 13 hours ago

They are trained on 100% true facts and successful paths.

We humans build our analysis/reasoning skills on the 99.9999% of failed attempts at everything we do: unsuccessful trials and errors, wasted time and frustration.

So we know that behind a truth, there’s a bigger world of fantasy.

For an LLM, everything is just a fantasy. Everything is as true as its opposite. It will take a lot more than the truth to build intelligence; it will require controlled malice and deception.

  • antinomicus 13 hours ago

    I was with you until the very last line, can you expand on that?

    • abakker 12 hours ago

      I think he was getting at the fact that the Truth is not good news to everyone.