Comment by aeternum
Hallucinations are also trained by the incentive structure: reward for next-token prediction, no penalty for guessing.
That's not a matter of training; it's inherent to the architecture. The model has no idea of its own confidence in an answer. The inference server receives a full probability distribution over possible output tokens and picks one (often the highest-ranked), but there is no way of knowing whether that token represents reality or just a plausible answer. The distribution is never fed back to the model, so there is no way it could know how confident it was in its own answer.
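To make the point concrete, here's a minimal decoding-loop sketch (Python/NumPy; `model`, `generate`, and the greedy argmax choice are all illustrative assumptions, not any particular server's implementation). Only the chosen token id goes back into the context; the probability attached to it never re-enters the model.

```python
import numpy as np

def softmax(logits):
    """Turn raw logits into a probability distribution over the vocabulary."""
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def generate(model, context, steps):
    """Greedy decoding loop. `model` is a hypothetical callable mapping a
    token-id sequence to next-token logits, standing in for an LLM forward pass."""
    for _ in range(steps):
        probs = softmax(model(context))   # full distribution over next tokens
        token = int(np.argmax(probs))     # the server picks one (here: top-ranked)
        confidence = probs[token]         # 0.93 vs 0.12 -- visible only to the server
        # Only the token id is appended; `confidence` is discarded, so the model
        # never sees how (un)certain its own prediction was.
        context = context + [token]
    return context
```

Swapping argmax for temperature or top-p sampling doesn't change the point: whatever the selection rule, the per-token probability is consumed at the server and dropped.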