RevEng 17 hours ago

That's not a matter of training; it's inherent to the architecture. The model has no idea of its own confidence in an answer. The server gets a full distribution over possible output tokens and picks one (often the highest-ranking one), but there is no way of knowing whether that token reflects reality or is merely a plausible answer. The distribution is never fed back to the model, so there is no possible way it could know how confident it was in its own answer.
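
To make that concrete, here is a minimal sketch of the decoding loop as described above (the vocabulary and logits are made up, and this is not any particular model's API): the model emits a distribution, the server picks one token, and only the token id goes back into the context, never the probability mass behind the choice.

```python
import numpy as np

vocab = ["Paris", "Lyon", "Madrid", "maybe"]
logits = np.array([2.1, 1.9, 0.3, -0.5])   # hypothetical logits for one decoding step

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax: full distribution over the vocabulary

token_id = int(np.argmax(probs))            # greedy pick (sampling works the same way)
print(vocab[token_id], round(float(probs[token_id]), 3))

# Only the chosen token id is appended to the context for the next forward
# pass; probs (the closest thing to a "confidence" signal) is discarded, so
# the model never gets to condition on how sure it was.
context_ids = [token_id]
```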

aeternum 15 hours ago

You could have the model output a confidence alongside the next token, then weight the penalty by that confidence.
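
As a rough sketch of what that could look like in PyTorch (the function name, the confidence parameterization, and the regularizer weight are assumptions for illustration, not an established recipe): the model emits a scalar confidence per step, the token penalty is scaled by it, and a regularizer keeps the model from dodging the loss by always reporting zero confidence.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(token_logits, confidence_logit, target_ids, lambda_reg=0.1):
    # token_logits: (batch, vocab), confidence_logit: (batch,), target_ids: (batch,)
    c = torch.sigmoid(confidence_logit)                   # confidence in (0, 1)
    ce = F.cross_entropy(token_logits, target_ids, reduction="none")
    weighted = (c * ce).mean()                            # penalty scaled by claimed confidence
    reg = (-torch.log(c + 1e-8)).mean()                   # pushes c up, so "no confidence" isn't free
    return weighted + lambda_reg * reg

# toy usage with random tensors and a hypothetical vocab size
logits = torch.randn(4, 32000)
conf = torch.randn(4)
targets = torch.randint(0, 32000, (4,))
print(confidence_weighted_loss(logits, conf, targets))
```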