Comment by alganet
The part on training is misleading and full of shit.
Training is not a "one-time cost". There is an implied never-ending need for training. LLMs are useless (for one of their main purposes) if the models get stale.
I can use Musk's own argument here. Each model is a plane, fully built, that LLM researchers turn into a disposable asset destined to be replaced by a newly built plane at the next training run. Just incredibly stupid and inefficient.
I know what you're thinking right now: fine-tuning, etc. That's the "reusable" analogue here, is it not? But fine-tuning is far, far from reusability (the major players don't even care much about it). It's not even at the "hopper" stage.
_Stop training new shit, and the argument becomes valid. How about that?_
---
I am sure the more radical environmentalists know that LLMs can be eco-friendly. The point is: they don't believe it will go that way, so they fight it. I can't blame them, this has happened before.
_This monster was made by environmental promises that were not met_. If they're not met again, the monster will grow, and there's nothing anyone can do about it. I've been more moderate than this article on several occasions and still got attacked for it. If not LLMs, it will target something else. Again, I can't blame them.
---

If we're trying to figure out a reasonable number for how much energy a single ChatGPT search uses, it seems weird to factor in all future training of all future models. It would be like estimating the carbon cost of a single plane ride by trying to work out what fraction of additional flights that one ticket incentivizes. It's too murky to put a clear number on and probably doesn't add much to the cost on its own. I tried to make it clear that the post is about your own personal use of ChatGPT, not the AI industry as a whole.