Comment by namaria
Seriously, do the people around you not normally double-check, proofread, and review what they turn in as finished work?
Maybe I am just very fortunate, but in the organizations I have worked in, people who are not capable of producing factually correct documents do not get to keep producing them.
I am not talking about typos, misspellings, or bad formatting. I am talking about factual content. LLMs can produce perfectly fluent, grammatically correct text, yet they routinely mangle factual content in a way I have never had the misfortune of finding in the work of my colleagues or the teams around me.
A friend of mine asked an AI for a summary of a pending Supreme Court case. It came back with the decision, the majority arguments, the dissent, the whole deal. The only problem was that the case hadn't been decided yet. The AI had made the whole thing up, and admitted as much when called on it.
A human law clerk could make a mistake, like "Oh, I thought you said 'US v. Wilson,' not 'US v. Watson.'" But a human wouldn't invent a case out of whole cloth, complete with pages of detail.
So it seems to me that AI mistakes will be unlike the human mistakes that we're accustomed to and good at spotting from eons of practice. That may make them harder to catch.