Comment by chankstein38 10 months ago

18 replies

Does anyone have more info on this? They thank Azure at the top, so I'm assuming it's a flavor of GPT? How do they prevent hallucinations? I am always cautious about asking an LLM for facts because half of the time it feels like it just adds whatever it wants. So I'm curious whether they addressed that here or whether this is just poorly thought out...

EMIRELADERO 10 months ago
  • morsch 10 months ago

    Thanks. There's an example page (markdown) at the very end. You can pretty easily spot some weaknesses in the generated text; it's uncanny valley territory. The most interesting thing is that the article contains numbered references, but unfortunately those footnotes are missing from the example.

Sn0wCoder 10 months ago

Not sure how it prevents hallucinations, but I tried inputting too much info and got a pop-up saying it was using ChatGPT 3.5. The article it generated was OK but seemed to repeat the same thing over and over with slightly different wording.

infecto 10 months ago

If you ask an LLM what color the sky is, it might say purple, but if you give it a paragraph describing the atmosphere and then ask the same question, it will almost always answer correctly. I don't think hallucinations are as big a problem as people make them out to be.
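
Concretely, something like this toy sketch of grounded vs. ungrounded prompting (it uses the OpenAI Python client and gpt-3.5-turbo purely for illustration; I have no idea what this particular product runs):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Ungrounded: the model answers from its weights alone.
    ungrounded = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "What color is the sky?"}],
    )

    # Grounded: the same question, but with reference material in the prompt.
    reference = (
        "Reference: sunlight scattering off molecules in Earth's atmosphere "
        "(Rayleigh scattering) makes the daytime sky appear blue."
    )
    grounded = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": reference + "\n\nUsing only the reference above, what color is the sky?",
        }],
    )

    print(ungrounded.choices[0].message.content)
    print(grounded.choices[0].message.content)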

  • misnome 10 months ago

    So, it only works if you already know enough about the problem to not need to ask the LLM, check.

    • infecto 10 months ago

      Are you just writing negative posts without even seeing the product? The system queries the internet, aggregates that information, and writes an answer based on your query.
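
      At a guess it's the usual retrieve-then-write pattern, roughly like this toy sketch (search_web is a placeholder for whatever search API they use; I'm not claiming this is their actual code):

          from openai import OpenAI

          client = OpenAI()

          def search_web(query: str) -> list[str]:
              # Placeholder: call whatever web search API you like and return text snippets.
              raise NotImplementedError

          def write_answer(query: str) -> str:
              snippets = search_web(query)            # 1. query the internet
              context = "\n\n".join(snippets)         # 2. aggregate the results
              resp = client.chat.completions.create(  # 3. write from that material
                  model="gpt-3.5-turbo",
                  messages=[{
                      "role": "user",
                      "content": f"Sources:\n{context}\n\nUsing only these sources, answer: {query}",
                  }],
              )
              return resp.choices[0].message.content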

      • misnome 10 months ago

        ChatGPT, please explain threaded discussions and context of statements as if you were talking to a five year old.

        • infecto 10 months ago

          Ahh so you are a child who has no intellectual capability past writing negative attack statements. Got it.

    • keiferski 10 months ago

      No, if the data you’re querying contains the information you need, then it is mostly fine to ask for that data in a format amenable to your needs.

      • o11c 10 months ago

        The problem with LLMs is not a data problem. LLMs are stupid even on data they just generated.

        One recent catastrophic failure I found: Ask an LLM to generate 10 pieces of data. Then in a second input, ask it to select (say) only numbers 1, 3, and 5 from the list. The LLM will probably return results numbered 1, 3, and 5, but chances are at least one of them will actually copy the data from a different number.
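
        Easy to reproduce against any chat API; roughly like this (a sketch using the OpenAI Python client, the model name is incidental):

            from openai import OpenAI

            client = OpenAI()
            history = [{"role": "user",
                        "content": "Generate a numbered list of 10 short fictional book titles."}]

            first = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
            listing = first.choices[0].message.content
            history.append({"role": "assistant", "content": listing})

            # Second input: ask it to pick items from the list it just produced.
            history.append({"role": "user",
                            "content": "From that list, return only items 1, 3, and 5, verbatim."})
            second = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)

            print(listing)
            # Compare by hand: the numbers 1, 3, and 5 usually come back,
            # but the text attached to them is sometimes taken from other items.
            print(second.choices[0].message.content)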

  • chx 10 months ago

    There are no hallucinations. It's just the normal bullshit with a more palatable name hung on it. There is nothing else.

    https://hachyderm.io/@inthehands/112006855076082650

    > You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

    > Alas, that does not remotely resemble how people are pitching this technology.

  • infecto 10 months ago

    Why does this get downvoted so heavily? It’s my experience running LLMs in production. At scale, hallucinations are not a huge problem when you have reference material.