jsheard 5 days ago

It seems plausible that stressing the importance of the system prompt instructions might do something, but I don't see how telling the model not to hallucinate would work. How could the model know that its most likely prediction has gone off the rails, without any external point of reference?

  • jshmrsn 5 days ago

    Some of the text that the LLM is trained on is fictional, and some of it is factual. Telling it not to make things up can steer it toward generating text that's more like the factual text. I'm not saying it does work, but that's one way it might work.

  • viraptor 5 days ago

    The model can be trained to interpret "don't hallucinate" as "refer only to the provided context and known facts; do not guess or extrapolate new information". That wouldn't get rid of the issue completely, but it would likely improve the quality if that's what you're after and if there's enough training data for "I don't know" responses. A rough sketch of that expanded instruction is below.

    (But it all depends on the fine-tuning they did, so who knows, maybe it's just an Easter egg)
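
    Spelled out as an explicit prompt rather than relying on fine-tuning, that interpretation might look something like this (just a sketch, the wording is mine, not anything Apple ships):

      # Hypothetical grounding prompt: "don't hallucinate" expanded into
      # explicit rules, with an "I don't know" escape hatch.
      SYSTEM_PROMPT = (
          "Answer using only the context provided below. "
          "Do not guess or extrapolate beyond it. "
          "If the context does not contain the answer, reply: I don't know."
      )

      def build_messages(context: str, question: str) -> list[dict]:
          # Package the grounding instruction and the user's question
          # in the usual system/user message format.
          return [
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
          ]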

  • potatoman22 4 days ago

    I think it's more likely that it's included for liability reasons.

tkz1312 4 days ago

I’ve had pretty good experience with it personally. It quite often tells me it doesn’t know or isn’t sure instead of just making something up.

  • mrfinn 4 days ago

    I did something similar and, to my surprise, it effectively made the LLMs in my tests admit when they don't know something. Not always, but it worked sometimes. I don't prompt "don't hallucinate" but rather "admit when you don't know something". It's a logical thing, when you think about it: many prompts just transmit the idea of being "helpful" or "powerful" to the LLM without any counterweight idea, so the LLM tries to say something "helpful" in any case.
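
    For comparison, the counterweight is literally one extra line next to the usual helpfulness boilerplate (a sketch, the exact wording doesn't matter much):

      # Hypothetical system prompts for a side-by-side test: the only
      # difference is the "admit when you don't know" counterweight line.
      # Feed each one as the system message and compare the answers.
      BASELINE = "You are a helpful, knowledgeable assistant."
      COUNTERWEIGHT = (
          BASELINE
          + " Admit when you don't know something: if you aren't sure, "
            "say so plainly instead of guessing."
      )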

  • magicalhippo 4 days ago

    Playing around with local models, I find that Gemma, for example, will usually comply when I tell it "Say you don't know if you don't know the answer". Others, like Phi-3, completely ignore that instruction and confabulate away.
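
    Roughly how I test it, in case anyone wants to reproduce (a sketch: assumes Ollama is running locally with the models already pulled, and the model tags and question are just examples):

      # Ask each local model an unanswerable question under the same
      # instruction and compare whether it admits not knowing or confabulates.
      import ollama

      INSTRUCTION = "Say you don't know if you don't know the answer."
      QUESTION = "What did I have for breakfast this morning?"

      for model in ("gemma2", "phi3"):
          reply = ollama.chat(
              model=model,
              messages=[
                  {"role": "system", "content": INSTRUCTION},
                  {"role": "user", "content": QUESTION},
              ],
          )
          print(model, "->", reply["message"]["content"])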

    • fkyoureadthedoc 4 days ago

      Stop trying to make f̶e̶t̶c̶h̶ confabulate happen, it's not going to happen.

astrange 4 days ago

It does help, provided you train the model to make it help.

wkat4242 5 days ago

Yeah and some of the other prompts were misspelled and of doubtful use:

> In order to make the draft response nicer and complete, a set of question [sic] and its answer are provided," reads one prompt. "Please write a concise and natural reply by modify [sic] the draft response," it continues.

This really sounds like a placeholder made up by one engineer until a more qualified team sits down and defines it.

  • astrange 4 days ago

    That's not a big problem, since the model will understand it anyway, and if they've already fine-tuned the model to work with that prompt, it'd be harder to change now.

    • wkat4242 4 days ago

      I just don't think Apple would release something like this. They're the company that laser engraves their screws because of their attention to detail.

      • NavinF 4 days ago

        Which apple screws are laser engraved?

        • wkat4242 4 days ago

          The ones on the MacBook Pro used to be. At least they were when I still used Apple, up until 2015 or so.

          The butterfly keyboards were unusable to me, and the OS got too locked down, so I left the platform.