Comment by aurareturn 3 days ago

42 replies

Beyond the PS6, the answer is very clearly graphics generated in real time via a transformer model.

I’d be absolutely shocked if, in 10 years, all AAA games aren’t being rendered by a transformer. Google’s Veo 3 is already extremely impressive. No way games will still be rendered through traditional shaders in 2035.

wartywhoa23 3 days ago

The future of gaming is the Grid-Independent Post-Silicon Chemo-Neural Convergence: the user will be injected with drugs designed by AI from a loose prompt (AI-generated as well, because humans have long lost the ability to formulate their intent) describing the gameplay trip they must induce.

Now that will be peak power efficiency and a real solution for the world where all electricity and silicon are hogged by AI farms.

/s or not, you decide.

  • pavlov 3 days ago

    Stanislaw Lem’s “The Futurological Congress” predicted this in 1971.

    • wartywhoa23 3 days ago

      FYI, it got an amazing film adaptation: Ari Folman's 2013 "The Congress". The most emotionally striking film I've ever watched.

  • speed_spread 3 days ago

    There will be a war between these biogamers and smart consoles that can play themselves.

lm28469 3 days ago

Is this before or after fully autonomous cars and AGI? Both should be here in two years, right?

10 years ago people were predicting VR would be everywhere; it flopped hard.

  • aurareturn 3 days ago

    I've been riding Waymo for years in San Francisco.

    10 years ago, people were predicting that deep learning would change everything. And it did.

    Why just use one example (VR) and apply it to everything? Even then, a good portion of people did not think VR would be everywhere by now.

    • SecretDreams 3 days ago

      > I've been riding Waymo for years in San Francisco.

      Fully autonomous driving in a few select cities, with fleets owned by big corps, is probably a reasonable expectation.

      Fully autonomous driving in the hands of an individual owner, across all driving conditions and working reliably, is likely still a distant goal.

    • Fade_Dance 3 days ago

      Baidu Apollo Go completes millions of rides a year as well, with expansions into Europe and the Middle East. In China they've been active for a long time - during COVID they were making autonomous deliveries.

      It is odd how many people don't realize how developed self-driving taxis are.

      • oblio 3 days ago

        The future isn't evenly distributed.

        I think most people will consider self driving tech to be a thing when it's as widespread as TVs were, 20 years after their introduction.

        • ksec 2 days ago

          TV tech was ready; it just wasn't cheap enough. Self-driving is not widespread, but not because of cost: it is still not quite good enough for universal usage. Give it another 10 years and I think we should be close, especially in places like Japan.

    • raw_anon_1111 3 days ago

      And outside of a few major cities with relatively good weather, self-driving is nonexistent.

  • wartywhoa23 3 days ago

    It did flop, but still a hefty loaf of money was sliced off in the process.

    Those with the real vested interest don't care if it flops, while zealous worshippers of the next brand-new disruptive tech are just a free vehicle to that end.

  • kranke155 3 days ago

    VR is great industrial tech and bad consumer tech. It’s too isolating for consumers.

MarCylinder 3 days ago

Just because it's possible doesn't mean it is clearly the answer. Is a transformer model truly likely to require less compute than current methods? We can't even run models like Veo 3 on consumer hardware at their current level of quality.

  • aurareturn 2 days ago

    I’d imagine AAA games will evolve to hundreds of billions of polygons and full path tracing. There is no realistic way to compute a scene like that on consumer hardware.

    The answer is clearly transformer based.
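For scale, a rough back-of-envelope on why naive real-time path tracing at that fidelity is so expensive. Every number below is an illustrative assumption, not a measurement:

```python
# Rays per second needed for naive real-time path tracing (illustrative).
width, height, fps = 3840, 2160, 60   # 4K at 60 fps
samples_per_pixel = 32                # assumed; offline film renders use far more
bounces = 4                           # assumed average path depth
rays_per_second = width * height * fps * samples_per_pixel * bounces
print(f"{rays_per_second / 1e9:.1f} billion rays/s")  # ~63.7 billion rays/s
```

Even with generous assumptions, that is an enormous ray budget, which is the gap neural approaches are betting they can shortcut.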

fidotron 3 days ago

Transformer maybe not, but neural net yes. This is profoundly uncomfortable for a lot of people, but it's the very clear direction.

The other major success of recent years not discussed much so far is gaussian splats, which tear up the established production pipeline again.

  • aurareturn 3 days ago

    Neural net is already being used via DLSS. Neural rendering is the next step. And finally, a full transformer based rendering pipeline. My guess anyway.
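A minimal sketch of the idea behind DLSS-style neural upscaling: a cheap analytic upscale plus a learned detail residual. The "network" below is a zero-residual stub; DLSS's real architecture is proprietary and far more involved (it also uses motion vectors and depth):

```python
import numpy as np

def bilinear_upscale(frame, scale=2):
    """Naive bilinear upscale of an (H, W, 3) float frame."""
    h, w, _ = frame.shape
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
    bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def neural_upscale(frame, predict_residual):
    """DLSS-style idea: cheap upscale, then add a learned detail residual."""
    base = bilinear_upscale(frame)
    return np.clip(base + predict_residual(base), 0.0, 1.0)

lowres = np.random.rand(360, 640, 3)
# The "network" is stubbed with a zero residual; a trained model would go here.
hires = neural_upscale(lowres, lambda x: np.zeros_like(x))
print(hires.shape)  # (720, 1280, 3)
```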

meindnoch 3 days ago

How much money are you willing to bet?

  • aurareturn 3 days ago

    All my money.

    • CuriouslyC 3 days ago

      Even in a future with generative UIs, those UIs will be composed from pre-created primitives just because it's faster and more consistent; there's literally no reason to re-create primitives every time.

    • bigyabai 3 days ago

      Go short Nintendo and Sony today. I'm the last one who's going to let my technical acumen get in the way of your mistake.

      • aurareturn 3 days ago

        Why would games being rendered with transformers lead one to short Nintendo and Sony?

CuriouslyC 3 days ago

That's just not efficient. AAA games will use AI to pre-render assets, and use AI shaders to make stuff pop more, but on the fly asset generation will still be slow and produce low quality compared to offline asset generation. We might have a ShadCN style asset library that people use AI to tweak to produce "realtime" assets, but there will always be an offline core of templates at the very least.

  • aurareturn 2 days ago

    It is likely a hell of a lot more efficient than path tracing a full, ultra-realistic game with billions of polygons.

Certhas 3 days ago

This _might_ be true, but it's utterly absurd to claim this is a certainty.

The images rendered in a game need to accurately represent a very complex world state. Do we have any examples of Transformer based models doing something in this category? Can they do it in real-time?

I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.
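That hybrid could be sketched roughly as the data flow below. Every function here is a hypothetical stub (names and shapes are illustrative, not any real engine's API); the point is only the shape of the pipeline: cheap render plus game state in, enhanced frame out.

```python
import numpy as np

def raster_pass(world_state, h=360, w=640):
    """Stand-in for a fast traditional render producing a G-buffer
    (color + depth). A real engine would rasterize actual geometry."""
    rng = np.random.default_rng(world_state["seed"])
    return rng.random((h, w, 3)), rng.random((h, w))

def neural_enhance(color, depth):
    """Stand-in for the generative model that fills in detail.
    A real model would have to condition on the G-buffer so the
    output stays faithful to the underlying world state."""
    detail = (depth[..., None] - 0.5) * 0.1   # fake detail derived from depth
    return np.clip(color + detail, 0.0, 1.0)

color, depth = raster_pass({"seed": 42})
frame = neural_enhance(color, depth)
print(frame.shape)  # (360, 640, 3)
```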

  • kgdiem 3 days ago

    Doesn’t this imply that a transformer or NN could fill in details more efficiently than traditional techniques?

    I’m really curious why this would be preferable for a AAA studio game outside of potential cost savings. I’d also imagine it’d come at the cost of deterministic output / consistency in visuals.

  • aurareturn 3 days ago

    > I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.

    Sure. This could be a variation. You do a quick render that any GPU from 2025 can do and then make the frame hyper realistic through a transformer model. It's basically saying the same thing.

    The main rendering would be done by the transformer.

    Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games. I don't see why this wouldn't be the default rendering mode for AAA games in 2035. It's insanity to think it won't be.

    Veo3: https://aistudio.google.com/models/veo-3

    • LtdJorge 3 days ago

      > Google Veo 3 is generating pixels far more realistic than AAA games

      That’s because games are "realtime", meaning they have a tight frame-time budget. AI models do not (and they run on multiple cards, each costing six figures).

      • aurareturn 3 days ago

        I mistook Veo 3 for the Genie model. Genie is the Google model I should have referenced; it is real-time.

    • Certhas 3 days ago

      Well you missed the point. You could call it prompt adherence. I need veo to generate the next frame in a few milliseconds, and correctly represent the position of all the cars in the scene (reacting to player input) reliably to very high accuracy.

      You conflate the challenge of generating realistic pixels with the challenge of generating realistic pixels that represent a highly detailed world state.

      So I don't think your argument is convincing or complete.

    • jsheard 3 days ago

      > Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games.

      Traditional rendering techniques can also easily exceed the quality of AAA games if you don't impose strict time or latency constraints on them. Wake me up when a version of Veo is generating HD frames in less than 16 milliseconds, on consumer hardware, without batching, and then we can talk about whether that inevitably much smaller model is good enough to be a competitive game renderer.
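The 16-millisecond constraint can be made concrete with crude arithmetic. Every number below is an assumption chosen only for illustration (model size, token count, step count, GPU throughput), using the usual rough estimate of ~2 FLOPs per parameter per token per step:

```python
# Can a generative frame model fit a 60 fps budget? (illustrative numbers)
frame_budget_s = 1 / 60                 # ~16.7 ms per frame
gpu_flops = 200e12                      # assumed ~200 TFLOPS consumer GPU
model_params = 2e9                      # assumed 2B-parameter frame model
tokens_per_frame = 4096                 # assumed latent tokens per frame
denoise_steps = 4                       # assumed distilled few-step sampler
flops_per_frame = 2 * model_params * tokens_per_frame * denoise_steps
time_per_frame_s = flops_per_frame / gpu_flops
print(f"{time_per_frame_s * 1e3:.0f} ms needed vs {frame_budget_s * 1e3:.1f} ms budget")
```

Under these assumptions the model overshoots the budget by roughly 20x, which is the core of the objection: the model that fits in 16 ms is necessarily much smaller than the ones producing today's impressive demos.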

  • mdale 3 days ago

    Genie 3 is already a frontier approach to interactive generative world models, no?

    It will be AI all the way down soon. The model's internal world view could be multiple passes and multi-layer with different strategies... In any case, safe to say more AI will be involved in more places ;)

    • Certhas 3 days ago

      I am super intrigued by such world models. But at the same time it's important to understand where they are at. They are celebrating the achievement of keeping the world mostly consistent for 60 seconds, and this is 720p at 24fps.

      I think it's reasonable to assume we won't see this tech replace game engines without significant further breakthroughs...

      For LLMs, agentic workflows ended up being a big breakthrough in making them usable. Maybe these world models will interact with a sort of game engine directly somehow to get the required consistency. But it's not evident that you can just scale your way from "visual memory extending up to one minute ago" to 70+ hour game experiences.
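The scale of that consistency gap is easy to quantify roughly. The 60-second and 24 fps figures are from the Genie 3 discussion above; the 70-hour playthrough length is an assumption:

```python
# How far "one minute of visual memory" is from a full game (rough arithmetic).
fps = 24
memory_s = 60                      # "visual memory extending up to one minute"
game_s = 70 * 3600                 # assumed 70-hour playthrough
frames_remembered = fps * memory_s
frames_needed = fps * game_s
print(frames_remembered)                    # 1440 frames of coherent context today
print(frames_needed // frames_remembered)   # ~4200x more to keep a full game coherent
```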

KeplerBoy 3 days ago

Be prepared to be shocked. This industry moves extremely slow.

  • aurareturn 2 days ago

    They'll have to move fast when a small team can make a graphically richer game than a big, slow AAA studio.

    Competition works wonders.