esperent 12 hours ago

Here's the report. Could you tell me where in it you found the claimed 33x reduction (or any large reduction) for any specific non-tiny model? Because all I can find is lots of references to "median Gemini". In fact, I would say they're being extremely careful in this paper not to mention any particular Google models with regards to energy reduction.

https://services.google.com/fh/files/misc/measuring_the_envi...

  • mgraczyk 12 hours ago

    Figure 4

    I think you are assuming we are talking about swapping API usage from one model to another. That is not what happened. A specific product doing a specific thing uses less energy now.

    To clarify: the way models become more efficient is usually by training a new one with a new architecture, quantization, etc.

    This is analogous to making a computer more efficient by putting a new CPU in it. It would be completely normal to say that you made the computer more efficient, even though you've actually swapped out the hardware.

    • sigilis 12 hours ago

      Don’t they call all their LLMs Gemini? The paper indicates that they specifically used all the AI models to come up with this figure when they describe the methodology. It looks like they even include classification and search models in this estimate.

      I’m inclined to believe that they are issuing a misleading figure here, myself.

      • simianwords 7 hours ago

        “Gemini App” would be the specific Gemini App in the App Store. Why would it be anything different?

      • mgraczyk 12 hours ago

        They reuse the word here for a product, not a model. It's the name of a specific product surface. There is no single model; the models used change over time and vary across requests.

    • esperent 12 hours ago

      > Figure 4: Median Gemini Apps text prompt emissions over time—broken down by Scope 2 MB emissions (top) and Scope 1+3 emissions (bottom). Over 12 months, we see that AI model efficiency efforts have led to a 47x reduction in the Scope 2 MB emissions per prompt, and 36x reduction in the Scope 1+3 emissions per user prompt—equivalent to a 44x reduction in total emissions per prompt.

      Again, it's talking about "median Gemini" while being very careful not to name any specific numbers for any specific models.

      • logicprog 7 hours ago

        You're grouping those words wrong. As another commenter pointed out to you, which you ignored, it's median (Gemini Apps) not (median Gemini) Apps. Gemini Apps is a highly specific thing — with a legal definition even iirc — that does not include search, and encompasses a list of models you can actually see and know.

      • simianwords 7 hours ago

        What do you think the Gemini app means? It can only mean the consumer-facing, actually existing Gemini App that exposes two models.

      • mgraczyk 12 hours ago

        That isn't what that means. Look at the paragraph above that where they explain.

        This is the median model used to serve requests for a specific product surface. It's exactly analogous to upgrading the CPU in a computer over time.