Comment by RevEng
That's not a matter of training; it's inherent to the architecture. The model has no notion of its own confidence in an answer. At each step the serving code receives a full probability distribution over possible next tokens and picks one (often the highest-ranked), but there is no way to know whether that token reflects reality or is merely a plausible continuation. The distribution is never fed back to the model, so the model has no possible way of knowing how confident it was in its own answer.
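To make the point concrete, here's a toy sketch of the sampling loop (the vocabulary and function names are illustrative, not any real serving stack): the sampler can see how peaked the distribution is, but only the chosen token id goes back into the context.

```python
import math
import random

def softmax(logits):
    # Convert raw logits into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0):
    # The sampler sees the full distribution -- including how peaked it is --
    # but returns only the chosen token; the probabilities are discarded
    # and never shown to the model on the next step.
    probs = softmax([l / temperature for l in logits])
    token_id = random.choices(range(len(probs)), weights=probs, k=1)[0]
    return token_id, probs[token_id]

# Hypothetical logits for a 4-token vocabulary.
logits = [2.0, 1.0, 0.1, -1.0]
token_id, prob = sample_token(logits)
```

The "confidence" (`prob`) exists only in the sampling code; the model's next forward pass sees just `token_id` appended to the context.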
You could have the model output a confidence estimate alongside each next-token prediction and then weight the training penalty by that confidence.
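One common way to realize that idea is a calibration head trained with binary cross-entropy against whether the answer turned out to be correct; a minimal sketch (the helper name and setup are mine, not an existing API):

```python
import math

def confidence_penalty(confidence, correct):
    # Hypothetical training signal: standard log-loss on a confidence head.
    # A wrong answer asserted with high confidence is penalized heavily,
    # while a wrong answer flagged as uncertain costs much less.
    eps = 1e-12  # avoid log(0)
    if correct:
        return -math.log(max(confidence, eps))
    return -math.log(max(1.0 - confidence, eps))
```

Under this loss the model is rewarded for calibration: saying "I'm not sure" on answers it tends to get wrong strictly lowers its penalty.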