Comment by alganet
My question was very simple, suitable for a simpler model.
I can come up with prompts that make better models hallucinate (see post below).
I don't understand your objection. It's a known fact that LLMs hallucinate regardless of model size.
LLMs are getting better. Are you?
Nothing matters in this business except the first couple of time derivatives.