Comment by evrimoztamur a day ago
Sounds like LLMs short-circuit without necessarily testing their context assumptions.
I also recognize this from my own use: whenever I ask a question in a field I'm only semi-comfortable in, I tend to phrase it in a way that already includes my expected answer. As I probe further, I often find that the model took my implied answer as a given and constructed an explanation for it after the fact.
I think this also explains a common issue with LLMs: people get the answer they're looking for, regardless of whether it's true or whether chain-of-thought is in place.
LLMs are trained to copy human-written text, so maybe they implement motivated reasoning just like humans do?
Or maybe it's just telling people what they want to hear, like humans do.