Comment by shoyer 4 days ago

Glad to see that you can make ensemble forecasts of tropical cyclones! This is absolutely essential for useful weather forecasts of uncertain events, and I am a little disappointed by the frequent comparisons (not just yours) of ML models to ECMWF's deterministic HRES model. HRES is more of a single realization of plausible weather than a best estimate of "average" weather, so this is a bit of apples vs oranges.
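A minimal toy sketch of the apples-vs-oranges point (made-up numbers, not anything from the models discussed): under a deterministic score like RMSE, a smoothed ensemble mean typically beats any single plausible realization, which is why scoring a model like HRES with deterministic metrics understates it, while a probabilistic score like CRPS evaluates the whole ensemble instead:

    import numpy as np

    rng = np.random.default_rng(0)

    def rmse(forecast, obs):
        # Root-mean-square error of a single deterministic value/field.
        return np.sqrt(np.mean((np.asarray(forecast) - obs) ** 2))

    def crps(ensemble, obs):
        # Sample-based CRPS at one grid point:
        #   CRPS = E|X - y| - 0.5 * E|X - X'|
        # estimated over ensemble members X, X' and observation y.
        x = np.asarray(ensemble)
        return np.mean(np.abs(x - obs)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))

    obs = rng.normal()                   # stand-in "truth" at one point
    members = obs + rng.normal(size=50)  # a 50-member ensemble around it

    print(rmse(members.mean(), obs))  # smoothed ensemble mean: low RMSE
    print(rmse(members[0], obs))      # one realization (like HRES): usually higher RMSE
    print(crps(members, obs))         # scores the full predictive distribution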

One nit on your framing: NeuralGCM (https://www.nature.com/articles/s41586-024-07744-y), built by my team at Google, is currently at the top of the WeatherBench leaderboard and actually builds in lots of physics :).

We would love to see metrics from your model in WeatherBench for comparison. When/if you have them, please do reach out.

cbodnar 4 days ago

Agreed, looking at ensembles is essential in this context, and that is what the end of our blog post is meant to highlight. At the same time, a good control run is also a prerequisite for good ensembles.

Re NeuralGCM: indeed, our post should have said "*most* of these models". It definitely proves that combining ML and physics models can work really well. Thanks for your comments!

bbor 4 days ago

HN never disappoints, jeez. Thanks for chiming in with some expert context! I highly recommend that any meteoro-noobs like me check out the PDF version of the linked paper; the diagrams are top notch: https://www.nature.com/articles/s41586-024-07744-y.pdf

Main takeaway, which gives me some hope:

  Our results provide strong evidence for the disputed hypothesis that learning to predict short-term weather is an effective way to tune parameterizations for climate. NeuralGCM models trained on 72-hour forecasts are capable of realistic multi-year simulation. When provided with historical SSTs, they capture essential atmospheric dynamics such as seasonal circulation, monsoons and tropical cyclones. 
But I will admit, I clicked the link to answer a more cynical question: why is Google funding a presumably super-expensive team of engineers and meteorologists to work on this without a related product in sight? The answer is both fascinating and boring:

  In recent years, computing has both expanded as a field and grown in its importance to society. Similarly, the research conducted at Google has broadened dramatically, becoming more important than ever to our mission. As such, our research philosophy has become more expansive than the hybrid approach to research we described in our CACM article six years ago and now incorporates a substantial amount of open-ended, long-term research driven more by scientific curiosity than current product needs.
From https://research.google/philosophy/. Talk about a cool job! I hope such programs rode the intimidation-layoff wave somewhat peacefully…
bruckie 3 days ago

Google uses a lot of weather data in their products (Search, Android, Maps, Assistant, probably others). If they license it (they previously used AccuWeather and Weather.com, IIRC), it presumably costs money. Now that they generate it in-house, maybe it costs less?

(Former Google employee, but I have no inside knowledge; this is just speculation from public data.)

Owning your own data and serving systems can also make previously impossible features possible. When I was a Google intern in 2007, I attended a presentation by someone who had worked on Google's then-new in-house routing system for Google Maps (the system that generates directions between two locations). Before that, Google licensed a routing system from a third party, and it was expensive and slow.

The in-house system was almost free in comparison, and it produced results in tens of milliseconds instead of many hundreds or even thousands of milliseconds. That allowed Google to build the amazing-at-the-time "drag to change the route" feature, which would live-update the route to pass through the point under your cursor, running a new routing query many times per second.
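As a toy sketch of why that latency mattered (hypothetical code, not Google's actual router): dragging a via point just turns one shortest-path query into two, origin-to-via plus via-to-destination, so a router that answers in tens of milliseconds can recompute the whole route on every mouse move. The graph, weights, and names below are all made up for illustration:

    import heapq

    def dijkstra(graph, src, dst):
        # Shortest-path cost over a dict-of-dicts weighted digraph.
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float("inf")

    def route_via(graph, src, via, dst):
        # A route forced through a dragged via point is just two queries.
        return dijkstra(graph, src, via) + dijkstra(graph, via, dst)

    # Toy road network; edge weights are travel times.
    g = {"A": {"B": 1.0, "C": 4.0},
         "B": {"C": 1.0, "D": 5.0},
         "C": {"D": 1.0}}
    print(route_via(g, "A", "C", "D"))  # re-run on every cursor move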