Comment by jeroenhd
People trusting LLMs to tell the truth is the advanced version of people taking the first link on Google as indubitable facts.
This whole trend is going to get much worse before it gets better.
I'm optimistic that hallucination rates will drop quite a bit with the next generation of models (GPT-5 / Claude 4 / Gemini 2 / Llama 4).
I've noticed that newer, more state-of-the-art models hallucinate much less: Claude 3.5 Sonnet hallucinates less than GPT-4, which hallucinates less than GPT-3.5, which hallucinates less than Llama 70B, which hallucinates less than GPT-3.