Comment by LarsDu88
I don't think you grasp how many GPUs are used to run world simulation models. It is vastly more compute-intensive than the current dominant paradigm of real-time rendering with rasterized triangles.
Yeah, which is pretty slow due to the need to autoregressively generate each image-frame token in sequence. And leading diffusion models need to progressively denoise each frame. Both are very expensive computationally. Generating the entire world with current techniques is incredibly costly compared to rendering and rasterizing triangles, which is almost completely parallelized by comparison.
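A toy sketch of that contrast (purely illustrative; `predict_next` and `shade` are hypothetical stand-ins for a model call and a pixel shader, not real APIs):

```python
# Why autoregressive frame generation is serial while rasterization
# parallelizes. This is a structural illustration only, not a real
# model or renderer.

def autoregressive_frame(num_tokens, predict_next):
    # Each token depends on every previous one, so the num_tokens
    # model calls must run strictly one after another.
    tokens = []
    for _ in range(num_tokens):
        tokens.append(predict_next(tokens))
    return tokens

def rasterized_frame(num_pixels, shade):
    # Each pixel is independent of the others, so all shade() calls
    # could in principle run at once across GPU cores.
    return [shade(i) for i in range(num_pixels)]

# 256 tokens means 256 sequential model invocations...
frame = autoregressive_frame(256, lambda toks: len(toks))
# ...while the raster pass is embarrassingly parallel regardless of size.
pixels = rasterized_frame(256, lambda i: i * i)
```

The loop-carried dependency in `autoregressive_frame` is the whole point: no amount of extra GPUs removes the sequential chain per frame, whereas the rasterized path scales out with hardware.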
in a few years it's possible that this will run locally in real time
I don't think that will ever happen due to the extreme hardware requirements. What I do see happening is that only an extremely low-fidelity scene is rendered, with only basic shapes and little or no texturing, which is then filled in by AI. DLSS taken to the extreme: not just resolution but the whole stack.
I’m thinking more procedural generation of assets. If done efficiently enough, a game could generate its assets on the fly and plan ahead for future areas of exploration. It doesn’t have to be re-rendered every time the player moves around. Just once, then it’s cached until it’s no longer needed.
I don't think you grasp what I'm saying? I'm talking about next token prediction to generate video frames.