Comment by stephen_g
I think the key is how you define “good” — LLMs can certainly turn small amounts of text into larger amounts effortlessly, but if in doing so the meaningful information is diluted or even damaged by hallucinations, irrelevant additions, etc., then that’s clearly not “good” or effective.