Comment by mgraczyk 16 hours ago
I think you are assuming we are talking about swapping API usage from one model to another. That is not what happened. A specific product doing a specific thing uses less energy now.
To clarify: the way models become more efficient is usually by training a new one with a new architecture, quantization, etc.
This is analogous to making a computer more efficient by putting a new CPU in it. It would be completely normal to say that you made the computer more efficient, even though you've actually swapped out the hardware.
Don’t they call all of their LLM models Gemini? When the paper describes the methodology, it indicates that they used all of their AI models to come up with this figure; it looks like they even include classification and search models in the estimate.
I’m inclined to believe they are presenting a misleading figure here, myself.