UqWBcuFx6NV4r 6 hours ago

I am not anti-LLM by any stretch, but your lack of fundamental understanding, coupled with a willingness to assert BS, has reached the point where it's impossible to discuss anything.

You started off by asking a question, and people are responding. Please, instead of assuming that everyone else is missing something, perhaps consider that you are.

  • simianwords 6 hours ago

    You’ve misunderstood my position and you’re resorting to slander.

    Here’s what I mean: LLMs can absolutely be directed to search only trustworthy sources. You can do this yourself - ask ChatGPT a question and tell it to use sources from trustworthy journals. Maybe come up with your own rubric. It will comply.
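
    For example, here is a minimal sketch of that kind of instruction expressed against the OpenAI Python SDK. The model name, the rubric wording, and the question are placeholders, and this only illustrates the prompting pattern - it is not ChatGPT's built-in search feature:

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # Hypothetical rubric steering the model toward higher-quality sources.
        rubric = (
            "Prefer peer-reviewed journals and major medical society guidelines. "
            "Cite every source you rely on, and say explicitly when no suitable "
            "source is available."
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": rubric},
                {"role": "user", "content": "<your question here>"},
            ],
        )
        print(response.choices[0].message.content)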

    Now, do you disagree that ChatGPT can do this much? If you do, that position is almost trivially disprovable.

    One of the posters said that hallucination is a problem, but if you’ve used ChatGPT for search, you would know that it’s not. It grounds its answer in the search results anyway, and in the worst case the physician is going to read the sources themselves. So what does hallucination have to do with this?

    The poster also asked, “can you ask it to not hallucinate?” The answer is obviously no! But that was never my implication. I simply said you can ask it to use higher-quality sources.

    Since you’ve accused me of asserting BS, I’m politely asking you to show me exactly which part of what I said constitutes BS, given the context I have provided.

palmotea 7 hours ago

The point was: will telling it to not hallucinate make it stop hallucinating?

  • simianwords 6 hours ago

    No, but did I suggest this? I only suggested that you can ask ChatGPT to rely on higher-quality sources. ChatGPT has a trade-off to make when performing a search - it can rely on lower-quality sources to answer questions, at the risk of those sources being wrong.

    Please read what I have written carefully instead of assuming the most absurd interpretation.