stevekemp 5 days ago

And because LLMs will "explain" things that contain outright hallucinations, a beginner won't know which parts are real and which parts are suspect.

  • hansmayer 5 days ago

    Exactly this. The thing that irritates and worries me is that I notice a lot of junior folks trying to apply these machines to open-ended problems the machines don't have the context for. The lawsuits with made-up case citations are just the beginning, I'm afraid; we're in for a lot more slop endangering our services and tools.

cpach 6 days ago

Exactly. Nothing wrong with LLMs, but we’re trying to have a human conversation here – which would be impossible if people had all their conversations with LLMs instead.