Comment by lawrencechen 4 days ago

Happy to hear!

> Is this using some sort of measure of distance between the expected token in this position vs the actual token?

The main implementation is in this file: https://github.com/manaflow-ai/cmux/blob/main/apps/www/lib/s...

EDIT: yeah, it's just an LLM prompt haha

Just a simple prompt right now, but I think we could try an approach where we directly detect which tokens might be hallucinated. Gonna try to find the paper for this idea. Might be kinda analogous to the "distance between the expected token in this position vs the actual token."
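To sketch what I mean: if you have per-token log-probabilities from the model, you can flag tokens the model itself found surprising. This is just a toy illustration with made-up numbers, not anything from cmux; the helper name and threshold are hypothetical.

```python
import math

def flag_suspect_tokens(tokens, logprobs, threshold=math.log(0.05)):
    """Flag tokens whose log-probability falls below a threshold --
    a rough proxy for the 'distance between the expected token and
    the actual token'. (Hypothetical helper, not real cmux code.)"""
    return [(i, tok) for i, (tok, lp) in enumerate(zip(tokens, logprobs))
            if lp < threshold]

# Toy example with invented logprobs: the factual-sounding "1789" gets
# a very low probability, so it gets flagged as a possible hallucination.
tokens = ["The", "capital", "of", "France", "is", "Paris", ".",
          "Founded", "in", "1789"]
logprobs = [-0.1, -0.5, -0.05, -0.2, -0.1, -0.3, -0.2, -2.5, -0.4, -4.1]
print(flag_suspect_tokens(tokens, logprobs))  # → [(9, '1789')]
```

A real version would need calibration (low probability ≠ wrong, e.g. for rare but correct names), which is probably what the paper I'm thinking of addresses.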