Comment by jjani
- The report doesn't name any Gemini models at all, only competitors'. Wonder why that is? If the models got so much more efficient, they'd be eager to show this.
- The report doesn't report any averages (means), only medians. Why oh why would they do this, when other marketing pieces always use the average, because outside of HN 99% of Joes on the street have no idea what a median is or how it differs from the mean? The average is much more relevant here when "measuring the environmental impact of AI inference".
- The report doesn't define what any of the terms "Gemini Apps", "the Gemini AI assistant" or "Gemini Apps text prompt" concretely mean
The report also doesn't define what the word "AI" means. What are they trying to hide?!
In reality, we know what Google means by the term "Gemini Apps", because it's a term they've had to define for e.g. their privacy policies[0].
> The Gemini web app available through gemini.google.com and browser sidebars
> The Gemini mobile apps, which include:
> The Gemini app, including as your mobile assistant, on Android. Note that Gemini is hosted by the Google app, even if you download the Gemini app.
> The Gemini app on iOS
> Gemini in the Google Messages app in specific locations
> The Gemini in Chrome feature. Learn more about availability.
That established definition does not include AI summaries (actually AI Overviews) on Search, as you claimed it does. And it's an area where Google probably is going to be careful -- the "Gemini Apps" name is awkward, but they need a name that distinguishes these use cases from other AI use cases with different data boundaries, policies, and controls.
If the report were talking about "Gemini apps" in a generic sense, your objection might make sense.
[0] https://support.google.com/gemini/answer/13594961?hl=en