softwaredoug 2 days ago

The other day I was researching with ChatGPT.

* ChatGPT hallucinated an answer

* ChatGPT put it in my memory, so it persisted between conversations

* When asked for a citation, ChatGPT found two AI-generated articles to back itself up

It took a while, but I eventually found human-written documentation from the organization that created the technical thingy I was investigating.

This happens A LOT for topics at the edge of what's easily found on the Web, where you have to do true research, evaluate sources, and make good decisions about what you trust.

fireflash38 2 days ago

AI reminds me of combing through Stack Overflow answers. The first one might work... Or it might not. Try again, find a different SO question and answer. Maybe the third time's the charm...

Except it's all via the chatbot, and it isn't as easy to get it to move off a broken solution.

visarga 2 days ago

Simple solution: run the same query on three different LLMs with different search integrations; if they concur, the chances of hallucination are low.
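
Something like the minimal sketch below, just to show the shape of the idea. The ask_model_* functions are placeholders for whatever search-enabled LLM APIs you actually use, and the similarity threshold is an arbitrary guess, not a tuned value.

```python
# Sketch: ask the same question to three independent models and only
# trust an answer that at least one pair of models roughly agrees on.
from difflib import SequenceMatcher
from itertools import combinations


def ask_model_a(question: str) -> str:
    # Placeholder: swap in a real API call (e.g. an OpenAI chat completion).
    return "stub answer A"


def ask_model_b(question: str) -> str:
    # Placeholder: swap in a real API call (e.g. an Anthropic message).
    return "stub answer B"


def ask_model_c(question: str) -> str:
    # Placeholder: swap in a real API call (e.g. a Gemini generation).
    return "stub answer C"


def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1]; good enough for a smoke test."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def cross_check(question: str, threshold: float = 0.7) -> dict:
    """Query all three models and report whether any pair roughly agrees."""
    answers = {
        "model_a": ask_model_a(question),
        "model_b": ask_model_b(question),
        "model_c": ask_model_c(question),
    }
    agreements = {
        (m1, m2): similarity(answers[m1], answers[m2])
        for m1, m2 in combinations(answers, 2)
    }
    # If no pair clears the threshold, treat the answer as suspect and go
    # find primary, human-written sources instead.
    consensus = any(score >= threshold for score in agreements.values())
    return {"answers": answers, "agreements": agreements, "consensus": consensus}


if __name__ == "__main__":
    print(cross_check("Which release introduced feature X?"))
```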

  • softwaredoug a day ago

    Ultimately, once I had a spidey sense something was wrong, I double-checked ChatGPT against Claude.

  • asdff 2 days ago

    Or they've converged on the same bullshit