Comment by jumploops a day ago
> It is absolutely true that AI cannot think, reason, or comprehend anything it has not seen before.
The amazing thing about LLMs is that we still don’t know how (or why) they work!
Yes, they’re magic mirrors that regurgitate the corpus of human knowledge.
But as it turns out, most human knowledge is already regurgitation (see: the patent system).
Novelty is rare, and LLMs have an incredible ability to pattern-match and spot issues in “novel” code, because they’ve seen those same patterns elsewhere.
Do they hallucinate? Absolutely.
Does that mean they’re useless? Or does that mean some bespoke code doesn’t provide the most obvious interface?
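To make that last point concrete, here’s a minimal hypothetical sketch (all names here are invented for illustration): when a model “hallucinates” a call like `get_or_set`, it’s often because that’s the interface nearly every cache it has seen exposes, and the bespoke code picked something less conventional.

```python
# Hypothetical example: a bespoke cache whose interface diverges from
# the convention most models (and most humans) have seen.

class BespokeCache:
    def __init__(self):
        self._store = {}

    # Non-obvious name: most caching libraries call this get_or_set or
    # setdefault, so a model asked to use this class will often
    # "hallucinate" one of those names instead.
    def fetch_with_fallback(self, key, make_value):
        if key not in self._store:
            self._store[key] = make_value()
        return self._store[key]


cache = BespokeCache()
value = cache.fetch_with_fallback("answer", lambda: 42)
print(value)  # 42
```

On that reading, the hallucinated call is less a pure failure than a hint that `fetch_with_fallback` might deserve a more conventional name.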
Having dealt with humans, I can say the confidence problem isn’t unique to LLMs…
> The amazing thing about LLMs is that we still don’t know how (or why) they work!
You may want to take a course in machine learning and read a few papers.