Comment by jeroenhd 2 days ago

People trusting LLMs to tell the truth is the advanced version of people taking the first link on Google as indubitable facts.

This whole trend is going to get much worse before it gets better.

tikkun 2 days ago

I'm optimistic that hallucination rates will go down quite a bit again with the next gen of models (gpt5 / claude 4 / gemini 2 / llama 4).

I've noticed that the hallucination rate of newer, closer-to-SOTA models is much lower.

3.5 sonnet hallucinates less than gpt 4, which hallucinates less than gpt 3.5, which hallucinates less than llama 70b, which hallucinates less than gpt 3.

  • nytesky 2 days ago

    Eventually won’t most training data be AI generated? Will we see feedback issues?
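The feedback issue nytesky is asking about is often called "model collapse." A toy statistical sketch (not a claim about real LLM training pipelines, and all names here are made up for illustration): if each generation of a model is fit only to samples drawn from the previous generation, estimation noise compounds and the learned distribution tends to degenerate, e.g. a Gaussian's variance drifting toward zero.

```python
import random
import statistics

def fit_and_resample(data):
    # "Train" on the data (fit a Gaussian), then generate the next
    # generation's training set purely from that fitted model.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(len(data))]

random.seed(0)
# Generation 0: real data from a standard normal distribution.
data = [random.gauss(0, 1) for _ in range(10)]
history = [statistics.stdev(data)]

# Each later generation trains only on the previous generation's output.
for _ in range(300):
    data = fit_and_resample(data)
    history.append(statistics.stdev(data))

# The estimated spread shrinks over generations: variance information
# is lost each time a model is fit to its own samples.
print(f"std at gen 0: {history[0]:.3f}, std at gen 300: {history[-1]:.6f}")
```

With a small sample size the collapse is fast; larger samples slow it down but the same drift persists, which is one reason people expect problems if synthetic data dominates training corpora without any fresh human-generated data mixed in.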