Comment by tikkun
I'm optimistic that hallucination rates will drop quite a bit again with the next generation of models (GPT-5 / Claude 4 / Gemini 2 / Llama 4).
I've noticed that the hallucination rate of newer, more SOTA models is much lower.
Claude 3.5 Sonnet hallucinates less than GPT-4, which hallucinates less than GPT-3.5, which hallucinates less than Llama 70B, which hallucinates less than GPT-3.
Eventually, won't most training data be AI-generated? Will we see feedback issues from models training on their own outputs?