Comment by seedie
Imo they explain pretty well what they are trying to achieve with SIMA and Genie in the Google DeepMind Podcast[1]. They see it as the way to get to AGI by letting AI agents learn for themselves in simulated worlds, kind of like how they let AlphaGo train on an enormous number of simulated Go games.
That makes even less sense to me, because an AI agent can't learn effectively from a hallucinated world that has no internal consistency guarantees: if objects or rules shift from moment to moment, whatever the agent learns won't transfer. If anything, that's an even stronger case for leveraging standard game engines instead.