Comment by falcor84
> This is a really common problem with science reporting in general. It's often the case that the news will say things about the paper that aren't in the paper, and sometimes what they say is the complete opposite of what the paper's data actually shows.
I wonder whether part of LLM hallucinations can be explained by models being trained on such reporting, with it (mistakenly) tagged as high-quality training data.
Some of them surely can. LLMs are susceptible to priming: the OpenAI demo had an error in the airplane wing part for exactly that reason. It is a very common mistake (included in many textbooks), and the LLM repeated it. More importantly, I saw someone get it to give the right answer without spoiling it through the prompt.