Comment by jjani 14 hours ago
Here's what happened:
1. Google rolled out AI summaries on all of their search queries, through some very tiny model.
2. Given worldwide search volume, that model now represents more than 50% of all queries if you throw it on a big heap with "intentional" LLM usage.
3. Google gets to claim "the median is now 33x lower!", as the median is now that tiny model giving summaries nobody asked for.
It's very concerning that this marketing puff piece is being eaten up by HN of all places as evidenced by the other thread.
Google is basing this all on the "median" because there are orders of magnitude of difference between strong models (what most people think of when you talk AI) and tiny models, which Google uses "most" by virtue of running them for every single Google search to produce the summaries. So the "median" will be whatever tiny model they use for those summaries. Never mind that Gemini 2.5 Pro, which is what everyone here would actually be using, may well consume >100x as much.
It's absurdly misleading and rather obvious, but it feels like most are very eager to latch onto this so they can tell themselves their usage and work (for the many here in AI or at Google) is all peachy. I've been reading this place for years and have never before seen such uncritical adoption of an obvious PR piece detached from reality.
It's not what the report says.
> It's very concerning that this marketing puff piece is being eaten up by HN of all places as evidenced by the other thread.
It's very concerning that you can just make shit up on HN and be the top comment as long as it's to bash Google.
> Never mind that Gemini 2.5 Pro, which is what everyone here would actually be using, may well consume >100x as much
Yes, exactly, never mind that. The report is to compare against a data point from May 2024, before Gemini 2.5 Pro became a thing.