Comment by yunwal
I totally agree with most of the article, but the hallucinations bit puzzles me. If it's genuinely an unchangeable limitation of the product (as hallucinations are with LLMs), it's better to set the right expectation than to make promises you can't deliver on.
It doesn't matter to the end user whether hallucinations are an unchangeable limitation; the fact that they happen at all undermines people's confidence in LLMs as a tool.
I've wondered the same thing as the author about why we even call them "hallucinations." They're errors: the LLM generated an erroneous output.