Comment by smusamashah 2 days ago
On a similar note, has anyone found themselves absolutely not trusting non-code LLM output?
The code is at least testable and verifiable. For everything else I am left wondering whether it's the truth or a hallucination. It adds the kind of mental burden I was trying to avoid by using an LLM in the first place.
Absolutely. LLM output almost always needs to be verified. For me, LLMs shine at pointing me in the right direction, producing a "first draft", or handling things like code where I can test the result.
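That asymmetry is the whole point: generated code comes with a cheap oracle, prose doesn't. A minimal sketch of what "I can test it" looks like in practice (the `slugify` helper and its expected behavior are hypothetical, not from this thread):

```python
import re

# Suppose an LLM produced this helper. Whether it "works" is not a
# matter of trust: we can check it mechanically in seconds.
def slugify(text: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to hyphens, trim hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Quick verification; a factual claim in prose has no equivalent of this.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  LLMs & testing  ") == "llms-testing"
print("all checks passed")
```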