Measuring the environmental impact of AI inference
(arstechnica.com) | 105 points by ksec | 4 hours ago
Research paper: https://services.google.com/fh/files/misc/measuring_the_envi...
Google blog post: https://cloud.google.com/blog/products/infrastructure/measur...
Here's what happened:
1. Google rolled out AI summaries on all of their search queries, served by some very tiny model.

2. Given worldwide search volume, that model now accounts for more than 50% of all queries if you throw it on one big heap with "intentional" LLM usage.

3. Google gets to claim "the median is now 33x lower!", because the median is now that tiny model producing summaries nobody asked for (see the sketch after this list).
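To make the median point concrete, here is a minimal sketch with entirely hypothetical per-query energy numbers (the `ENERGY_WH` values and query counts are illustrative assumptions, not figures from the paper): once a large volume of tiny-model summary queries is pooled with ordinary chat traffic, the median energy per prompt collapses to the tiny model's cost, regardless of what the big models burn.

```python
# Sketch only: how a flood of cheap summary queries drags down the
# fleet-wide median energy per prompt. All numbers are made up.
import statistics

# Hypothetical per-query energy in watt-hours (illustrative, not measured).
ENERGY_WH = {
    "search_summary_tiny": 0.03,  # tiny model behind search summaries
    "chat_flash": 0.24,           # mid-size chat model
    "chat_pro": 3.0,              # large "Pro"-class model
}

def median_energy(query_mix: dict[str, int]) -> float:
    """Median energy per query for a given mix of query counts."""
    samples = []
    for model, count in query_mix.items():
        samples.extend([ENERGY_WH[model]] * count)
    return statistics.median(samples)

# "Intentional" LLM usage only: the median sits with the chat models.
before = median_energy({"chat_flash": 60, "chat_pro": 40})

# Add summaries on every search: tiny-model queries dominate the pool,
# so the median collapses to the tiny model's figure.
after = median_energy({"search_summary_tiny": 200,
                       "chat_flash": 60, "chat_pro": 40})

print(f"median before: {before:.2f} Wh, after: {after:.2f} Wh "
      f"({before / after:.0f}x lower)")
```

With these toy numbers the pooled median drops several-fold even though the per-query energy of a Pro-class model hasn't changed at all, which is exactly the objection above.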
It's very concerning that this marketing puff piece is being eaten up by HN of all places, as evidenced by the other thread.
Google is basing this all on the "median" because there are orders of magnitude of difference between strong models (what most people think of when you talk about AI) and tiny models, which Google runs "most" of by virtue of running them for every single Google search to produce the summaries. So the "median" will be whatever tiny model they use for those summaries. Never mind that Gemini 2.5 Pro, which is what everyone here would actually be using, may well consume >100x as much.
It's absurdly misleading and rather obvious, but it feels like most are very eager to latch on to this so they can tell themselves their usage and work (for the many here in AI or at Google) is all peachy. I've been reading this place for years and have never before seen such uncritical adoption of an obvious PR piece detached from reality.