Comment by cdecl 3 months ago

> LLMs are just text prediction. That's what they are.

This sort of glib talking point really doesn't pass muster, because if you showed the current state of affairs to a random developer from 2015, you would absolutely blow their damned socks off.

aredox 3 months ago

Their socks would indeed be blown off by the "Unreasonable Effectiveness of [text prediction]", but it is still text prediction.

That is the very root cause of the problems that remain unsolved: the inability to get the same answer to the same question twice, the inability to do rigorous maths or logic (any question that has only one correct answer, in fact), and hallucinations!
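
On the non-determinism point: an LLM produces a probability distribution over the next token and then typically samples from it. Here is a minimal sketch of that, in plain Python with made-up toy logits rather than any real model's API, showing why the same prompt can yield different answers whenever the sampling temperature is above zero:

```python
import math
import random

# Hypothetical logits a model might assign to candidate next tokens
# for some fixed prompt (toy values, purely for illustration).
logits = {"4": 3.2, "four": 2.9, "5": 0.4}

def sample_next(logits, temperature):
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token (deterministic).
        return max(logits, key=logits.get)
    # Softmax with temperature, then draw one token at random.
    scaled = {t: math.exp(v / temperature) for t, v in logits.items()}
    total = sum(scaled.values())
    r = random.uniform(0, total)
    for token, weight in scaled.items():
        r -= weight
        if r <= 0:
            return token
    return token  # fallback for float rounding

# The same "question", asked five times:
print([sample_next(logits, 0.0) for _ in range(5)])  # always ['4', '4', '4', '4', '4']
print([sample_next(logits, 1.0) for _ in range(5)])  # varies, e.g. ['4', 'four', '4', '5', 'four']
```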

  • cdecl 3 months ago

    The problem is not the text prediction, which nobody denies, but rather the "just", which minimizes its impact.