Comment by phailhaus 3 days ago
I have no idea why Google is wasting their time with this. Trying to hallucinate an entire world is a dead-end. There will never be enough predictability in the output for it to be cohesive in any meaningful way, by design. Why are they not training models to help write games instead? You wouldn't have to worry about permanence and consistency at all, since they would be enforced by the code, like all games today.
Look at how much prompting it takes to vibe code a prototype. And they want us to think we'll be able to prompt a whole world?
This was a common argument against LLMs: that the space of possible next tokens is so vast that a long enough sequence will eventually decay into nonsense, or at least that compounding error will have the same effect.
Problem is, that's not what we've observed as these models get better. In practice there seems to be some coarse-grained substrate of physics/semantics/whatever[1] that these models can apparently construct for themselves in pursuit of ~whatever~ goal they're after.
The initially stated position (and yours), that "trying to hallucinate an entire world is a dead-end," amounts to a maximally pessimistic claim that the universe is maximally irreducible.
The truth is much much more complicated.
[1] https://www.arxiv.org/abs/2512.03750