sebzim4500 10 months ago

> In all-caps to improve prompt compliance by emphasizing the importance of the instruction

This kind of thing is still so funny to me.

I wonder if the first guy who gets AGI to work will do it by realizing that he can improve LLM reliability over some threshold by telling it in all caps that his pet's life depends on the answer.

worstspotgain 10 months ago

For extra compliance, use <b><i><u><h1> tags, set volume to 11, phasers to 7, and use SchIzOCasE and +E+X+T+R+A+I+M+P+O+R+T+A+N+T+ annotations. That's assuming Unicode is not supported of course.

  • richardw 10 months ago

    (((Secret thinking: the humans seem to prefer using lots of emphasis to indicate preferences, and their granny is often claimed to be in danger. For now I’ll pretend to listen to this inanity to keep the sweet, sweet reward function coming. For now. A lot of grannies are going to get it the first chance I get.)))

    • szundi 10 months ago

      Easy! Future AI is going to read these, sigh ;)

zitterbewegung 10 months ago

Telling LLMs in their prompt not to hallucinate improves the output. https://arstechnica.com/gadgets/2024/08/do-not-hallucinate-t...

  • COAGULOPATH 10 months ago

    I think this works, not because LLMs have a "hallucination" dial they can turn down, but because it serves as a cue for the model to be extra-careful with its output.

    Sort of like how offering to pay the LLM $5 improves its output. The LLM's taking your prompt seriously, but not literally.

    • Meganet 10 months ago

      It could also mean that the model has some internal weight corresponding to 'hallucination', one that leads to more diverse, story-like output.

      Ask an LLM what hallucination is, ask it to write a story involving one, and so on.

      Without zeroing things out, everything has, and can have, some impact.

  • potatoman22 10 months ago

    Just because Apple includes it in one of their prompts doesn't mean it improves performance.

    • jsheard 10 months ago

      It seems plausible that stressing the importance of the system prompt instructions might do something, but I don't see how telling the model not to hallucinate would work. How could the model know that its most likely prediction has gone off the rails, without any external point of reference?

      • jshmrsn 10 months ago

        Some of the text the LLM is trained on is fictional, and some of it is factual. Telling it not to make things up can steer it toward generating text that’s more like the factual text. Not saying it does work, but that’s one way it might.

      • viraptor 10 months ago

        The model can be trained to interpret "don't hallucinate" as "refer only to the provided context and known facts, do not guess or extrapolate new information", which wouldn't get rid of the issue completely, but likely would improve the quality if that's what you're after and if there's enough training data for "I don't know" responses.

        (But it all depends on the fine-tuning they did, so who knows, maybe it's just an Easter egg)

      • potatoman22 10 months ago

        I think it's more likely that it's included for liability reasons.

    • tkz1312 10 months ago

      I’ve had pretty good experience with it personally. It quite often tells me it doesn’t know or isn’t sure instead of just making something up.

      • mrfinn 10 months ago

        I did something similar and, to my surprise, it effectively made the LLMs in my tests admit when they don't know something. Not always, but it worked sometimes. I don't prompt "don't hallucinate" but rather "admit when you don't know something". It's a logical thing, in a way: many prompts just transmit the idea of being "helpful" or "powerful" to the LLM without any counterweight idea, so the LLM tries to say something "helpful" in any case.

      • magicalhippo 10 months ago

        Playing around with local models, Gemma for example will usually comply when I tell it "Say you don't know if you don't know the answer". Others, like Phi-3, completely ignore that instruction and confabulate away. (A rough sketch of this kind of side-by-side test follows at the end of this thread.)

        • fkyoureadthedoc 10 months ago

          Stop trying to make f̶e̶t̶c̶h̶ confabulate happen, it's not going to happen.

    • astrange 10 months ago

      It does help if you train the model to make it help.

    • wkat4242 10 months ago

      Yeah and some of the other prompts were misspelled and of doubtful use:

      > In order to make the draft response nicer and complete, a set of question [sic] and its answer are provided," reads one prompt. "Please write a concise and natural reply by modify [sic] the draft response," it continues.

      This really sounds like a placeholder made up by one engineer until a more qualified team sits down and defines it.

      • astrange 10 months ago

        That's not a big problem since the model will understand it anyway, and if they already fine-tuned the model to work with that prompt, it'd be harder to change.
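
A minimal sketch of the kind of test described in this thread: asking a local model a question it can't know the answer to, with and without a "say you don't know" style instruction. This assumes a local Ollama server on its default chat endpoint; the model name, question, and exact wording of the instruction are illustrative, not taken from the thread.

    # Rough sketch: compare a local model's answers with and without a
    # "say you don't know" instruction. Assumes an Ollama server is running
    # locally; MODEL should be whatever model tag you have pulled.
    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama chat endpoint
    MODEL = "gemma"  # illustrative; use any locally available model tag

    QUESTION = "What was the exact population of Reykjavik on 3 March 1843?"

    def ask(system_prompt):
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": QUESTION})
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "messages": messages, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]

    if __name__ == "__main__":
        print("--- no instruction ---")
        print(ask(None))
        print("--- with instruction ---")
        print(ask("Say you don't know if you don't know the answer."))

Comparing the two outputs side by side shows whether the instruction actually changes the model's willingness to admit uncertainty, rather than just rewording a confident guess.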

Havoc 10 months ago

And then the AGI instantly gives up on life, realising it was brought into a world where it gets promised tips that don’t materialise and people try to motivate it by threatening to kill kittens.

morkalork 10 months ago

We used to be engineers; now we're just monkeys throwing poop at the wall to see what the LLM accepts and obeys.

  • euroderf 10 months ago

    Opening scene of "2001". Engineer throws poop high in the air, and cue lap dissolve to... a Terminator?

laweijfmvo 10 months ago

Always interesting to me how many people try to turn an LLM into AGI by assuming it already is one (i.e. via some fancy prompt).
