Comment by mlyle
> When doing automations that are perfectly handled by deterministic systems why would I put the outcomes of those in the hands of a non-deterministic one?
The stuff I'm punting isn't stuff I can automate. It's stuff like, "build me a quick command line tool to model passes from this set of possible orbits" or "convert this bulleted list to a course articulation in the format preferred by the University of California" or "Tell me the 5 worst sentences in this draft and give me proposed fixes."
Human assistants that I would punt this stuff to also consume a lot of power. ;)
> We didn't have these tools 5 years ago. 5 years ago you dealt with said "drudgery". On the other hand you then say it can't do "most things I do".
I'm not sure why you think this is paradoxical.
I probably eliminate 20-30% of tasks at this point with AI. Honestly, it probably does these tasks better than I would (not better than I could, but you can't give maximum effort on everything). As a result, I get 30-40% more done, and a bigger proportion of it is higher-value work.
And, AI sometimes helps me with stuff that I -can't- do, like making a good illustration of something. It doesn't surpass top humans at this stuff, but it surpasses me and probably even where I can get to with reasonable effort.
It is absolutely impossible that human assistants given those tasks would use power within even the same order of magnitude as LLMs do.
I am not an anti-LLM’er here, but having models that are this power-hungry and this generalisable makes no sense economically in the long term. Why would the model you use to build a command-line tool also have to be able to produce poetry? You’re paying a premium for seldom-used flexibility.
Either power consumption has to come down, prices at the consumer margin have to go up significantly, or the whole thing comes crashing down like a house of cards.