datadrivenangel 6 months ago

With an open model, you could probably reverse engineer the token probabilities and get that probability estimate.

Something like: "Is {sentence_a} a plausible answer to {sentence_b}? Respond only with a single yes/no token" and then look at the probabilities of those.
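A minimal sketch of the renormalization step this implies, with made-up token ids and logit values standing in for what an open model (or a logprobs-exposing API) would actually return:

```python
import math

def yes_probability(logits, yes_id, no_id):
    # Renormalize the next-token distribution over just the
    # yes/no tokens to get a probability estimate for "yes".
    p_yes = math.exp(logits[yes_id])
    p_no = math.exp(logits[no_id])
    return p_yes / (p_yes + p_no)

# Hypothetical logits for the "Yes" and "No" tokens; the ids (42, 99)
# are invented and would come from the model's tokenizer in practice.
logits = {42: 1.2, 99: -0.3}
p = yes_probability(logits, yes_id=42, no_id=99)
```

Renormalizing over only the two answer tokens discards probability mass the model put on other continuations, which is exactly what you want for a forced yes/no question.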

  • wongarsu 6 months ago

    If the model is not open, turn up the temperature a bit (if the API allows that) and ask the above question multiple times. The less sure the model is, the more its answer will vary.
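    A sketch of that Monte Carlo approach, with a simulated `query_model` standing in for a real (hypothetical) API call:

```python
import random

def query_model(prompt, temperature):
    # Stand-in for a temperature-sampled API call; here simulated as
    # a Bernoulli draw whose bias plays the role of the model's
    # underlying confidence.
    return "yes" if random.random() < 0.8 else "no"

def estimate_probability(prompt, n=200, temperature=0.7):
    # Ask the same yes/no question n times and use the fraction of
    # "yes" answers as the probability estimate.
    answers = [query_model(prompt, temperature) for _ in range(n)]
    return answers.count("yes") / n

random.seed(0)
p = estimate_probability("Is {sentence_a} a plausible answer to {sentence_b}?")
```

    The spread across repeated calls is the uncertainty signal: at temperature 0 every call returns the same token and the estimate collapses to 0 or 1.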

danielmarkbruce 6 months ago

Absolutely you can. Rip off the last layer, add a regression layer in its place, fine-tune.
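A toy sketch of that recipe, with random vectors standing in for the frozen model body's final hidden states (in practice you would take these from the LLM with its original LM head removed):

```python
import random

random.seed(0)

# Stand-in "final hidden states" for a batch of 4 prompts (dim 4).
hidden = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
targets = [random.random() for _ in range(4)]  # scalar labels

# The new regression head that replaces the LM head.
w = [0.0] * 4
b = 0.0
lr = 0.05

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Fine-tune the head with SGD on squared error.
for _ in range(2000):
    for x, y in zip(hidden, targets):
        err = predict(x) - y
        for i in range(4):
            w[i] -= lr * err * x[i]
        b -= lr * err

mse = sum((predict(x) - y) ** 2 for x, y in zip(hidden, targets)) / 4
```

In a real setup the body would be a pretrained transformer (frozen or LoRA-tuned) and the head a single linear layer trained on your labeled probabilities.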

OutOfHere 6 months ago

Of course one can just ask the LLM for the output probability. It will give a reasonably calibrated output, typically a multiple of 0.05, though I would ask it for an integer percentage instead.
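A sketch of that direct approach; the prompt wording and the sample reply are illustrative, not from any particular API:

```python
import re

PROMPT = (
    "On a scale of 0 to 100, how likely is it that {sentence_a} is a "
    "plausible answer to {sentence_b}? Reply with an integer only."
)

def parse_percentage(reply):
    # Pull the first integer out of the model's reply, clamp it to
    # [0, 100], and convert to a probability.
    m = re.search(r"\d+", reply)
    if m is None:
        return None
    return max(0, min(100, int(m.group()))) / 100

# Stand-in for an actual model reply.
p = parse_percentage("I'd say about 85.")
```

Parsing defensively matters because even with "integer only" instructions the model will sometimes wrap the number in prose.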