in-silico 3 days ago

Everyone here seems too caught up in the idea that Genie is the product, and that its purpose is to be a video game, movie, or VR environment.

That is not the goal.

The purpose of world models like Genie is to be the "imagination" of next-generation AI and robotics systems: a way for them to simulate the outcomes of potential actions in order to inform decisions.

benlivengood 3 days ago

Agreed; everyone complained that LLMs have no world model, so here we go. The next logical step is to ground the imagination by backfilling the weights with encoded video from the real world at some reasonable frame rate, then branch the inference on possible interventions (actions) in the near future of the simulation, throw the results into a goal evaluator, and send the winning action-predictions to motors. Getting the timing right will probably require a bit more work than literally gluing the pieces together, but probably not much more.
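
A minimal sketch of that branch-evaluate-act loop, where `world_model` and `goal_score` are hypothetical stand-ins rather than any real API:

```python
def choose_action(world_model, state, candidate_actions, goal_score, horizon=10):
    """Branch the simulation on each candidate action, score the imagined
    futures with a goal evaluator, and return the winning action."""
    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        sim, value = state, 0.0
        for _ in range(horizon):
            sim = world_model.predict_next(sim, action)  # imagined near future
            value += goal_score(sim)                     # goal evaluator
        if value > best_value:
            best_action, best_value = action, value
    return best_action  # the winning action-prediction, bound for the motors
```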

  • patapong 2 days ago

    This is the most convincing take I've heard so far on what might actually get us to AGI :)

avaer 3 days ago

Soft disagree; if you wanted imagination you don't need to make a video model. You probably don't need to decode the latents at all. That seems pretty far from information-theoretic optimality, the kind that you want in a good+fast AI model making decisions.

The whole reason for LLMs inferencing human-processable text, and "world models" inferencing human-interactive video, is precisely so that humans can connect in and debug the thing.

I think the purpose of Genie is to be a video game, but it's a video game for AI researchers developing AIs.

I do agree that the entertainment implications are kind of the research exhaust of the end goal.

  • in-silico 3 days ago

    Sufficiently informative latents can be decoded into video: simulate a stream of them forward, then run the decoder over that stream.

    If you were trying to make an impressive demo for the public, you probably would decode them into video, even if the real applications don't require it.

    Converting the latents to pixel space also makes them compatible with existing image/video models and multimodal LLMs, which (without specialized training) can't interpret the latents directly.
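
    A rough sketch of that split, where `dynamics` and `decoder` stand in for the world model's latent transition function and its video decoder (hypothetical names, not a real API):

    ```python
    def rollout(dynamics, z0, actions):
        """Simulate entirely in latent space; no pixels are produced here."""
        zs = [z0]
        for a in actions:
            zs.append(dynamics(zs[-1], a))
        return zs

    def render(decoder, zs):
        """Decode latents into frames only when a human, or an existing
        pixel-space model, actually needs to look at them."""
        return [decoder(z) for z in zs]
    ```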

    • soulofmischief 2 days ago

      At which point you're training another model on top of the first, and it becomes clear you might as well have made one model from the start!

  • NitpickLawyer 3 days ago

    > I think the purpose of Genie is to be a video game, but it's a video game for AI researchers developing AIs.

    Yeah, I think this is what the person above was saying as well. This is what people at Google have said already (in a few podcasts on GDM's channel, hosted by Hannah Fry). They have their "agents" play in Genie-powered environments. So one system "creates" the environment for the task: say, "place the ball in the basket". Genie creates an env with a ball and a basket, and the other agent learns to wasd its way around, pick up the ball, wasd to the basket, and so on. Pretty powerful combo if you have enough compute to throw at it.
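
    In pseudo-Python, that combo might look something like this (a hedged sketch; `make_env` and the gym-style interface are assumptions, not Genie's actual API):

    ```python
    def train_in_generated_world(make_env, agent, task, episodes=100):
        """One system generates the environment for the task; the other
        agent learns to act inside it."""
        env = make_env(prompt=task)       # e.g. "place the ball in the basket"
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                action = agent.act(obs)   # e.g. wasd + pick up / drop
                obs, reward, done, _ = env.step(action)
                agent.learn(obs, action, reward)
    ```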

  • SequoiaHope 3 days ago

    Didn’t the original world models paper do some training in latent space? (Edit: yes[1])

    I think robots imagining the next step (in latent space) will be useful. It’s useful for people. A great way to validate that a robot is properly imagining the future is to make that latent space renderable in pixels.

    [1] “By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.”

    https://arxiv.org/abs/1803.10122
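
    A minimal sketch of that recipe (hypothetical names; the paper's actual architecture is a VAE plus an MDN-RNN feeding a tiny controller):

    ```python
    def train_in_dream(world_model, policy, z0, steps=10_000):
        """Train a compact policy entirely inside the world model's own
        'hallucinated dream', then transfer it back to the real environment."""
        z = z0
        for _ in range(steps):
            features = world_model.features(z)     # features, not raw pixels
            action = policy.act(features)
            z, reward = world_model.imagine_step(z, action)
            policy.update(features, action, reward)
        return policy
    ```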

  • sailingparrot 3 days ago

    > you don't need to make a video model. You probably don't need to decode the latents at all.

    If you don't decode, how do you judge quality in a world where generative metrics are famously hard and imprecise? And how do you integrate RLHF/RLAIF into your pipeline if you don't decode? That's no longer something you can skip if you want SotA.

    Just look at the companies that are explicitly aiming for robotics/simulation: they *are* doing video models.

  • magospietato 2 days ago

    I wonder what training insights could be gained by having proven general intelligences actively navigate a generative world model?

  • abraxas 3 days ago

    > if you wanted imagination you don't need to make a video model. You probably don't need to decode the latents at all.

    Soft disagree. What is the purpose of that imagination if not to map it to actual real-world outcomes? To compare those imagined outcomes to the real world, and possibly backpropagate through them, you'll need video frames.

  • empath75 3 days ago

    I am not sure we are at the "efficiency" phase of this.

    Even if you just wire this output (or probably multiples running different counterfactuals) into a multimodal LLM that interprets the video and uses it to make decisions, you have something new.

  • ACCount37 3 days ago

    If you train a video model, you by necessity train a world model for 3D worlds. Which can then be reused in robotics, potentially.

    I do wonder if I can frankenstein together a passable VLA using pretrained LTX-2 as a base.

  • koolala 3 days ago

    What model do you need, then, if you want a real-time 3D understanding of how reality works? Or are you using "imagination" in a more abstract sense?

  • thegabriele 3 days ago

    Sure, but at some point you want humans in the loop, I guess?

echelon 3 days ago

Whoa, whoa, whoa. That's just one angle. Please don't bin that as the only use case for "world models"!

First of all, there are a variety of different types of world models: simulation, video, static asset, etc. It's a loaded term, and the use cases are just as varied.

There are world models you can play in your browser, inferred entirely on your CPU:

https://madebyoll.in/posts/game_emulation_via_dnn/ (my favorite, from 2022!)

https://madebyoll.in/posts/world_emulation_via_dnn/ (updated, in 3D)

There are static asset generating world models, like WorldLabs' Marble. These are useful for video games, previz, and filmmaking.

https://marble.worldlabs.ai/

I wrote open source software to leverage Marble for filmmaking (I'm a filmmaker, and this tech is extremely useful for scene consistency):

https://www.youtube.com/watch?v=wJCJYdGdpHg

https://github.com/storytold/artcraft

There are playable video-oriented models, many of which are open source and will run on your 3080 and above:

https://diamond-wm.github.io/

https://github.com/Robbyant/lingbot-world

There are things termed "world models" that really shouldn't be:

https://github.com/Tencent-Hunyuan/HunyuanWorld-1.0

There are robotics training oriented world models:

https://github.com/leggedrobotics/robotic_world_model

Genie is not strictly robotics-oriented.

  • in-silico 3 days ago

    The entertainment industry, as big as it is, just doesn't have as much profit potential as robots and AI agents that can replace human labor. Just look at how Nvidia has pivoted from gaming and rendering to AI.

    The other examples you've given are neat, but for players like Google they are mostly an afterthought.

    • echelon 3 days ago

      Robotics: $88B TAM

      Gaming: $350B TAM

      All media and entertainment: $3T TAM

      Manufacturing: $5T TAM

      Roughly the same story.

      This tech is going to revolutionize "films" and gaming. The entire entertainment industry is going to transform around it.

      When people aren't buying physical things, they're distracting themselves with media. Humans spend more time and money on that than on anything else, machines or otherwise.

      AI impact on manufacturing will be huge. AI impact on media and entertainment will be huge. And these world models can be developed in a way that you develop exposure and competency for both domains.

      edit: You can argue that manufacturing will boom when we have robotics that generalize. But you can also argue that entertainment will boom when we have holodecks people can step into.

      • thecupisblue 2 days ago

        Not so sure about gaming. While it opens some interesting "generate quest on demand" and "quick demo" cases, an infinite world generator wouldn't really vibe with people.

        They would try it once, think it's cool, and stop there. You would probably have a niche group of "world surfers" who would keep playing with it.

        Most people have no idea what they want to play or how it should look; they want a curated experience. As games adapted to the mass market, they became more and more curated experiences, with lots of hand-holding for the player.

        Yeah, a holodeck would be popular, but that's a whole different technology ballpark and akin to talking about flying cars in this context.

        This will have a giant impact on robotics and general models, though: they can now simulate action/reaction inside a world, branch many courses in parallel, and choose the best one, given just a picture of the world plus either a generated image of the end result or "validators" that check whether the task is accomplished (roughly the sketch at the end of this comment).

        And while robotics is $88B TAM nowadays, expect it to hit $888B in the next 5-10 years, with world simulators like this being one of the reasons.

        From the team side, gotta be cool to build this, feels like one of those things all devs dream about.
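
        Something like this sketch (every name here is a hypothetical stand-in; a real system would batch the branches in parallel rather than loop):

        ```python
        def best_plan(world_model, z0, candidate_plans, validator):
            """Imagine each candidate plan as a branch of the world model
            and return the first whose end state passes the validator."""
            for plan in candidate_plans:
                z = z0
                for action in plan:
                    z = world_model.step(z, action)
                if validator(world_model.decode(z)):  # check the imagined outcome
                    return plan
            return None
        ```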

      • in-silico 3 days ago

        The current robotics industry is $88B. You have to take into account the potential future industry of general purpose robots that replace a big chunk of blue-collar work.

        Robots is also just one example. A hypothetically powerful AI agent (which might also use a world model) that controls a mouse and keyboard could replace a big chunk of white-collar work too.

        Those are worth tens of trillions of dollars. You can argue about whether they are actually possible, but the people backing this tech think they are.

wasmainiac 2 days ago

Have a source for that?

I think you are anthropomorphising the AI too much. Imagination is inspired by reality, which AI does not have. Introducing a reality that the AI fully controls (setting aside issues of vision and physics simulation) would just induce psychosis in the AI itself, since false assumptions would only be amplified.

  • ForceBru 2 days ago

    > psychosis in the AI itself

    I think you're anthropomorphising the AI too much: what does it mean for an LLM to have psychosis? This implies that LLMs have a soul, or a consciousness, or a psyche. But... do they?

    Speaking of reality, one can easily become philosophical and say that we humans don't exactly "have" a reality either. All we have are sensor readings. LLMs' sensors are texts and images they get as input. They don't have the "real" world, but they do have access to tons of _representations_ of this world.

    • wasmainiac 2 days ago

      > I think you're anthropomorphising the AI too much

      I don’t get it. Is that supposed to be a gotcha? Have you tried maliciously messing with an LLM? You can get it into a state that resembles psychosis. I mean, you give it a context that is removed from reality, yet close enough to reality to act on, and it will give you crazy output.

      • ForceBru 2 days ago

        Sorry, I was just trying to be funny, no gotcha intended. Yeah, I once found some massive prompt that was supposed to transform the LLM into some kind of spiritual advisor or the next Buddha or whatever. Total gibberish, in my opinion, possibly written by a mentally unstable person. Anyway, I wanted to see if DeepSeek could withstand it and tell me that it was in fact gibberish. Nope, it went crazy, going on about some sort of magic numbers, hidden structure of the Universe and so on. So yeah, a state that resembles psychosis, indeed.

    • ericmcer 2 days ago

      Psychosis is obviously being used in this context to reference the very well documented "hallucinations" that LLMs experience.

oceanplexian 3 days ago

Yeah and the goal of Instagram was to share quirky pictures you took with your friends. Now it’s a platform for influencers and brainrot; arguably it has done more damage than drugs to younger generations.

As soon as this thing is hooked up to VR and reaches a tipping point with the general public we all know exactly what is going to happen. The creation of the most profitable, addictive and ultimately dystopian technology Big Tech has ever come up with.

  • ceejayoz 3 days ago

    The good news is we’ll finally have an answer for the Fermi Paradox.

    • jacquesm 2 days ago

      What's interesting is that, in a very short time, it has gone from an intriguing paradox to something with a multitude of very plausible answers.

    • dryarzeg 3 days ago

      Your positive mindset impresses me, honestly. In a good way.

    • Ozymandias-9 2 days ago

      wait ... how?

      • cellular 2 days ago

        Yeah, how? For it to be a solution, EVERYONE would have to get sucked into the VR world.

        Surely a small percentage, at least, would go on to colonize.

cyanydeez 3 days ago

Like LLMs, though: do you really think a simulation will expose them to all the corner cases robots/AI need to know about? Or will it be largely the same problem: they'll be just good enough to fool the engineers and make the business ops drool, they'll be put into production, and in a year or two we'll see stories about robots crushing people's hands, stepping in drains and falling over, or falling off roofs because of some bizarre mismatch between training and reality.

So, like, it's very important to understand the lineage of the training and not just the "this is it".

rzmmm 3 days ago

I feel that this is too costly for that kind of usage. A quite different architecture is probably needed for robotics.

dyauspitr 3 days ago

That’s part of it, but if you could actually pull 3D models out of these worlds, it would massively speed up game development.

  • avaer 3 days ago

    You already can, check out Marble/World Labs, Meshy, and others.

    It's not really as much of a boon as you'd think though, since throwing together a 3D model is not the bottleneck to making a sellable video game. You've had model marketplaces for a long time now.

    • dyauspitr 2 days ago

      It definitely is. Model marketplaces don’t have ready-to-go custom models for a custom game. You have to pay a real person a significant amount of money for the hundreds of models a truly custom game requires.

    • echelon 3 days ago

      > It's not really as much of a boon as you'd think though

      It is for filmmaking! They're perfect for constructing consistent sets and blocking out how your actors and props are positioned. You can freely position the camera, control the depth of field, and then storyboard your entire scene I2V.

      Example of doing this with Marble: https://www.youtube.com/watch?v=wJCJYdGdpHg

      • avaer 3 days ago

        This I definitely agree with: before, you had to massage the I2I, and now you can just drag the camera.

        Marble definitely changes the game if the game is "move the camera"; it's just that most people would not consider that a game (but hey, there's probably a good game idea in there!).

pizzafeelsright 3 days ago

Environment mapping to AI-generated alternative outcomes is the holodeck.

I prefer real danger, as living in the simulation is derivative.

holografix 2 days ago

Correct, and the more you interact, the more training data you create.

seydor 2 days ago

Creating robots for an imaginary universe? Who needs those?

  • ForceBru 2 days ago

    The military. The robots will roam the battlefield, imagine the consequences of shooting people, and perform the actions that maximize the probability of success according to the results of their "imagination"/simulation.

  • subscribed 2 days ago

    Me! Me! I want to drive a tiny robot through the generated world.

    Read "Stars don't dream" by Chi Hui (vol1 of "Think weirder") :)

whytaka 3 days ago

I think this is the key component of developing subjective experience.

  • realmadludite 2 days ago

    I think a subjective experience is impossible to explain by any substrate-independent phenomenon, which includes software running on a computer.

slashdave 3 days ago

This is a video model, not a world model. Start learning on this, and cascading errors will inevitably creep into all downstream products.

You cannot invent data.

  • kingstnap 3 days ago

    Related: https://arxiv.org/abs/2601.03220

    This is a paper that recently got popular-ish and discusses the counter to your viewpoint.

    > Paradox 1: Information cannot be increased by deterministic processes. For both Shannon entropy and Kolmogorov complexity, deterministic transformations cannot meaningfully increase the information content of an object. And yet, we use pseudorandom number generators to produce randomness, synthetic data improves model capabilities, mathematicians can derive new knowledge by reasoning from axioms without external information, dynamical systems produce emergent phenomena, and self-play loops like AlphaZero learn sophisticated strategies from games

    In theory, yes: something like the rules of chess should be enough for the mythical perfect reasoners that show up in math riddles to deduce everything that *can* be known about the game. And similarly, a math textbook would be no more interesting than a book containing the words true and false and a bunch of true => true statements.

    But I don't think this is the case in practice. There is something about rolling things out and leveraging the results you see that seems to have useful information in it even if the roll out is fully characterizable.
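
    For reference, the Shannon form of that first paradox is the data-processing fact that, for any deterministic function $f$ and random variable $X$,

    $$H(f(X)) \le H(X),$$

    and the Kolmogorov analogue is, roughly, $K(f(x)) \le K(x) + K(f) + O(1)$: a fixed program cannot add description length beyond its own.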

    • slashdave 3 days ago

      Interesting paper, thanks! But the authors escape the three paradoxes they present by introducing training limits (compute, factorization, distribution). Kind of a different problem here.

      What I object to are the "scaling maximalists" who believe that, given enough training data, complicated concepts like a world model will just spontaneously emerge during training. Piling on synthetic data from a general-purpose generative model, as a solution to the lack of training data, is even more untenable.

  • andy12_ 2 days ago

    How is it not a world model? The latents of the model apparently encode enough information to represent a semi-consistent, interactable world. Seems world-model-y enough to me.

    Besides, we already know that agents can be trained with these world models successfully. See[1]:

    > By learning behaviors in imagination, Dreamer 4 is the first agent to obtain diamonds in Minecraft purely from offline data, without environment interaction. Our work provides a scalable recipe for imagination training, marking a step towards intelligent agents

    [1] https://arxiv.org/pdf/2509.24527

  • 2bitencryption 3 days ago

    Given that the video is fully interactive and lets you move around (in a “world” if you will) I don’t think it’s a stretch to call it a world model. It must have at least some notion of physics, cause and effect, etc etc in order to achieve what it does.

    • slashdave 3 days ago

      No, it actually needs none of that.

      • in-silico 2 days ago

        How would it do what it does without those things?

  • whytaka 3 days ago

    They have a feature where you can take a photo and create a world from that.

    If instead of a photo you have a video feed, this is one step closer to implementing subjective experience.

    • realmadludite 2 days ago

      It's not a subjective experience. It's the mimicry of a subjective experience.