Comment by ethmarks 2 days ago

I agree wholeheartedly. It irks me when people critique automation because it uses large amounts of resources. Running a machine or a computer almost always uses far fewer resources than a human would to do the same task, so long as you consider the entire resource consumption.

Growing the food that a human eats, running the air conditioning for their home, powering their lights, fueling their car, charging their phone, and all the many many things necessary to keep a human alive and productive in the 21st century are a larger resource cost than almost any machine/system that performs the same work. From an efficiency perspective, automation is almost always the answer. The actual debate comes from the ethical perspective (the innate value of human life).

pepoluan 11 hours ago

Not ALL automation can be more efficient.

Just ask Elon about his efforts to fully automate Tesla production.

Same with A.I. Current LLM-based A.I.s are nowhere near as efficient as a human brain.

runarberg a day ago

I suspect you may be either underestimating how efficient our brains are at computing or severely underestimating how much energy these AI models take to train and run.

Even including our systems of comfort, like refrigerated blueberries in January and AC cooling a 40 °C heat down to 25 °C (but excluding car commutes, because please work from home or take public transit), the human is still far, far more energy efficient at e.g. playing go than AlphaGo. With LLMs this isn’t even close (and we can probably factor in that stupid car commute, because LLMs are just that inefficient).

  • zelphirkalt a day ago

    Hm, that gives me an idea: the next human vs. engine matches in chess, go, and so on should be set at a specific level of energy consumption for the engines, close to that of an extremely good human player, like a world champion or at least a grandmaster. Let's see how the engines keep up then!

    • ethmarks a day ago

      That sounds delightful. Get a Raspberry Pi or something connected to a power supply capped at 20 watts (the approximate electricity consumption of the human brain). It has to be able to run its algorithm within the per-turn time limit for speed chess. Then you'd have to choose an algorithm based on whether it produces high-quality guesses before arriving at its final answer, so that if it runs out of time it can still make a move. I wonder if this is already a thing?
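      (This "always have a move ready" property is what engine authors call an anytime algorithm, usually built on iterative deepening. A minimal sketch, with `legal_moves`, `evaluate`, and `search` as hypothetical engine hooks rather than any real library's API:)

```python
import time

def best_move_anytime(position, legal_moves, evaluate, search, time_budget_s):
    """Pick a move under a hard time budget, iterative-deepening style.

    Each completed depth leaves a usable best move behind, so running
    out of time never means forfeiting the turn.
    """
    deadline = time.monotonic() + time_budget_s
    # Start with a cheap one-ply evaluation so a move always exists.
    best = max(legal_moves(position), key=lambda m: evaluate(position, m))
    depth = 2
    while time.monotonic() < deadline:
        # `search` is assumed to return None if the deadline hits mid-search.
        candidate = search(position, depth, deadline)
        if candidate is None:
            break
        best = candidate  # deeper result supersedes the shallower one
        depth += 1
    return best
```

      The power cap would be enforced by the hardware; the time cap is what the software has to respect.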

  • keeda 13 hours ago

    Wait hold on, let's put some numbers on this. Please correct my calculations if I'm wrong.

    1. The human brain draws 12 - 20 watts [1, 2]. So, taking the lower end, a task taking one hour of our time costs 12 Wh.

    2. An average ChatGPT query is between 0.34 Wh - 3 Wh. A long input query (10K tokens) can go up to 10 Wh. [3] I get the best results by carefully curating the context to be very tight, so optimal usage would be in the average range.

    3. I have had cases where a single prompt has saved me at least an hour of work (e.g. https://news.ycombinator.com/item?id=44892576). Let's be pessimistic and say it takes 3 prompts at 3 Wh (9 Wh) and 10 minutes (2 Wh) of my time prompting and reviewing to complete a task. That is 11 Wh for the same task, which still beats out the human brain unassisted!

    And that's leaving aside the recent case where I vibecoded and deployed a fully-tested endpoint on a cloud platform I had no prior experience in, over the course of 2 - 3 hours. I estimate it would have taken me a whole day just to catch up on the documentation and another 2 days tinkering with the tools, commands and code. That's at least an 8x power savings assuming an 8-hour workday!!

    4. But let's talk data instead of anecdotes. If you do a wide search, there is a ton of empirical evidence that AI assistance improves programmer productivity by 5 - 30% (with a lot of nuance). I've cited some here: https://news.ycombinator.com/item?id=45379452 -- those studies don't measure prompt usage, so we can't estimate energy consumption from them, but those are significant productivity boosts.

    Even the METR study that appeared to show AI coding lowering productivity also showed that AI usage broadly increased idle time in users. That is, calendar time for task completion may have gone up, but that included a lot of idle time where people were doing no cognitive work at all. Someone should run the numbers, but maybe it resulted in lower power consumption!
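    The arithmetic in points 1 - 3 above can be checked in a few lines (the wattage and per-query figures are the estimates quoted from [1, 2, 3], not independent measurements):

```python
BRAIN_WATTS = 12     # lower-end estimate of human brain power draw [1, 2]
WH_PER_PROMPT = 3    # upper end of an average ChatGPT query [3]

# Unassisted: one hour of human thinking.
unassisted_wh = BRAIN_WATTS * 1.0              # 12 Wh

# Assisted: 3 prompts plus 10 minutes of prompting and reviewing.
assisted_wh = 3 * WH_PER_PROMPT + BRAIN_WATTS * 10 / 60   # 9 + 2 = 11 Wh

assert assisted_wh < unassisted_wh
```

    The margin is thin at these pessimistic settings; with one prompt instead of three it widens considerably.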

    ---

    But what about the training costs? Sure, we've burned gazillions of GWh on training already, and the usual counterpoint is "what about the cost involved in evolution?", but let's assume we stopped training all models today. They would still serve all future prompts at the per-query consumption rates discussed above.

    However, every new human takes 15 - 20 years of education to become a novice in a single domain, followed by many more years of experience to become proficient. We're comparing apples and blueberries here, but that's a LOT of energy just to start becoming productive, whereas a trained LLM is instantly productive in multiple domains forever.

    My hunch is that if we do a critical analysis of amortized energy consumption, LLMs will probably beat out humans. If they don't already, they soon will, with token costs plummeting all the time.
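    The amortization argument itself is just a ratio (the figures below are purely illustrative placeholders, not real training or serving numbers):

```python
def amortized_wh_per_query(training_wh, queries_served, wh_per_query):
    """Energy per query once the one-time training cost is spread over usage.

    All three inputs are hypothetical; the claim is only that this ratio
    falls toward the bare serving cost as `queries_served` grows.
    """
    return training_wh / queries_served + wh_per_query

# Illustrative: a 1 GWh (1e9 Wh) training run amortized over a billion
# queries adds only ~1 Wh on top of each query's serving cost.
early = amortized_wh_per_query(1e9, 1e6, 3.0)  # early in deployment
late = amortized_wh_per_query(1e9, 1e9, 3.0)   # after heavy usage

assert late < early
```

    Whatever the real numbers turn out to be, the training term shrinks with every query served, while the human's "training" cost is paid again for each new person.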

    [1] https://psychology.stackexchange.com/questions/12385/how-muc...

    [2] https://press.princeton.edu/ideas/is-the-human-brain-a-biolo...

    [3] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...

  • ethmarks a day ago

    That's a great point, and I think I was being vague before.

    To clarify, I was making a broad statement about automation in general. Running an automated loom is more efficient in every way than getting humans to weave cloth by hand. For most tasks, automation is more efficient.

    However, there are tasks that humans can still do more efficiently than our current engines of automation. Go is a good example because humans are really good at it, and AlphaGo can only sometimes beat the top players despite massive training and inference costs.

    On the other hand, I would dispute that LLMs fall into this category, at least for most tasks, because we have to factor in marginal setup costs too. I think that raising from infancy all of the humans needed to match the output speed of an LLM costs more than training the LLM, even if you include the cost of mining the metal and powering the factories necessary to build the machines that the LLMs run on. I'm not 100% confident in this, but I do think it's much closer than you seem to think. Supporting the systems that support the systems that support humans takes a lot of resources.

    To use your blueberries example: while the cost of keeping the blueberries cold isn't much, growing a single serving of blueberries requires around 95 liters of water[1]. In a similar vein, the efficiency of the human brain is almost irrelevant, because from a resource-consumption perspective the 20 watts consumed by the brain are akin to the electricity consumed by the monitor displaying the LLM's output: it's the last step in the process, but without the resource-guzzling system behind it, it doesn't work. Just as the monitor doesn't work without the data center, which doesn't work without electricity, your brain doesn't work without your body, which doesn't work without food, which doesn't get produced without water.

    As sramam mentioned, these kinds of utilitarian calculations tend to seem pretty inhuman. However, most of the time, the calculations turn out in favor of automation. If they didn't, companies wouldn't be paying for automated systems (this logic doesn't apply to hype-driven markets like AI; I'm talking about stably automated markets like textile manufacturing). If you want an anti-automation argument, you'll have a better time arguing from ethics instead of efficiency.

    Again, thanks for the Go example. I genuinely didn't consider the tasks where humans are more efficient than automation.

    [1]: https://watercalculator.org/water-footprint-of-food-guide/

    • runarberg 20 hours ago

      I’m not convinced this exercise in deciding what to include and what not to include in the cost-benefit analysis will lead to anything. We can always arbitrarily add an extra item to shift the calculation in our favor. For example, I could simply add the cost of creating the data which is fed into an LLM's training set; that creation is done by our human biological machinery and hence carries the cost of the frozen blueberries, the rigid fiber insulation, the machinery that dug the water pipe for their shower, etc.

      Instead I would like to shift the focus to the benefits of LLMs. I know the costs are high, very very very high, but you seem to think the benefits are also high, measured in time saved; that is, that the tasks automated save humans similar work by miles. If that is what you think, I disagree. LLMs have yet to prove themselves in real-world applications. When we actually do measure how many work-hours LLMs save, we see that the effects are at best negligible (see e.g. https://news.ycombinator.com/item?id=44522772). Worse, generative AI is disrupting our systems in other ways: teachers, peer reviewers, etc. now have to put in a bunch of extra work to verify that submitted work was actually written by that person and not simply generated by AI. Just last Friday I read that arXiv will no longer accept submissions unless they have been previously peer-reviewed, because they are overwhelmed by AI-generated submissions[1].

      There are definitely technologies which have saved us time and created a much more efficient system than was previously possible. The loom is a great example, I would claim the railway is another, and certainly the digital calculator. But LLMs, and generative AI more generally, are not that. There may be uses for this technology, but automation and energy/work savings is not one of them.

      1: https://blog.arxiv.org/2025/10/31/attention-authors-updated-...

      • ethmarks 19 hours ago

        You've convinced me. I did not consider the human cost of producing training data, I did not consider whether or not LLMs were actually saving effort, and I did not consider the extra effort to verify LLM output. I have nothing more to add other than to thank you for taking the time to write such a persuasive and high-quality reply. The internet would be a better place if there were more people like you on it. Thank you for making me less wrong.