Comment by BurningFrog 6 days ago
The LLMs copy human written text, so maybe they'll implement Motivated Reasoning just like humans do?
Or maybe it's telling people what they want to hear, just like humans do
They definitely tell people what they want to hear. Even when we'd rather they be correct, their answers get upvoted or downvoted by users, and training on that feedback rewards whatever pleases the rater, so this isn't really avoidable (but is it fawning or sycophancy?)
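A minimal sketch of that feedback loop, assuming the up/down votes get turned into pairwise preferences for a Bradley-Terry-style reward model (the usual RLHF setup). The features and data here are made up purely for illustration:

    # Toy sketch: how thumbs-up/down feedback can select for agreeable answers.
    # Hypothetical features and data; real RLHF pipelines train a neural reward
    # model on pairwise comparisons, then optimize the policy against it.
    import numpy as np

    # Features of a response: [agrees_with_user, factually_correct]
    # Preference pairs (chosen, rejected), assuming users tend to upvote agreement.
    pairs = [
        (np.array([1.0, 0.0]), np.array([0.0, 1.0])),  # agreeable beats correct
        (np.array([1.0, 1.0]), np.array([0.0, 1.0])),  # agreeable+correct beats correct
        (np.array([1.0, 0.0]), np.array([0.0, 0.0])),  # agreeable beats neither
    ]

    w = np.zeros(2)  # reward-model weights, one per feature
    lr = 0.1

    # Bradley-Terry: P(chosen preferred) = sigmoid(r_chosen - r_rejected)
    for _ in range(200):
        for chosen, rejected in pairs:
            margin = w @ chosen - w @ rejected
            p = 1.0 / (1.0 + np.exp(-margin))
            w += lr * (1.0 - p) * (chosen - rejected)  # ascend the log-likelihood

    print("learned reward weights [agreement, correctness]:", w)
    # Agreement ends up with a large positive weight while correctness doesn't,
    # so a policy optimized against this reward drifts toward telling people
    # what they want to hear.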
I wonder how deep or shallow the mimicry of human output is — enough to be interesting, but definitely not quite like us.