slashdave 2 days ago

Pixel by pixel, time-slice by time-slice, in a 2D+T convolution. Provide enough example videos with changing points of view, and the model reproduces what it is given.
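
Mechanically, a 2D+T convolution is just a 3D convolution whose third axis is time. Here is a minimal PyTorch sketch; the shapes and channel counts are made up for illustration and don't come from any particular video model:

```python
import torch
import torch.nn as nn

# A batch of 2 RGB clips: (batch, channels, time, height, width)
clips = torch.randn(2, 3, 16, 64, 64)

# Conv3d slides its kernel over both spatial axes and the time axis,
# so each output activation mixes neighboring pixels *and* neighboring
# frames -- the "2D+T" convolution described above.
spatiotemporal_conv = nn.Conv3d(
    in_channels=3,
    out_channels=32,
    kernel_size=(3, 3, 3),  # (time, height, width)
    padding=1,              # keep T/H/W sizes unchanged
)

features = spatiotemporal_conv(clips)
print(features.shape)  # torch.Size([2, 32, 16, 64, 64])
```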

in-silico 2 days ago

Yes, it reproduces what it is given by modelling the rules of physics, geometry, etc.

For example, image generators like Stable Diffusion carry strong internal representations of depth and geometry, such that performant depth-estimation models can be built from them with minimal retraining. The same holds for video generation models.

Early work on the subject: https://arxiv.org/pdf/2409.09144
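
The "small probe on frozen generator features" recipe alluded to above looks roughly like this with PyTorch and diffusers. This is a hedged sketch, not the linked paper's method: the checkpoint name, the choice of hooked layer, and the one-layer head are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from diffusers import UNet2DConditionModel

# Frozen pretrained U-Net (Stable Diffusion v1.5 weights, as an example
# checkpoint; any compatible diffusion U-Net would do).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.requires_grad_(False)  # the generator itself is never fine-tuned

# Capture an intermediate feature map with a forward hook. Which block
# probes best is an empirical choice; the mid block is one candidate.
feats = {}
unet.mid_block.register_forward_hook(
    lambda module, inputs, output: feats.update(mid=output)
)

# Tiny trainable probe head (SD 1.5's mid block has 1280 channels).
# If accurate depth falls out of training only this, the frozen
# features must already encode scene geometry.
depth_head = nn.Conv2d(1280, 1, kernel_size=1)

# One illustrative forward pass: a 4x64x64 latent (a 512x512 image in
# latent space) with a dummy unconditional text embedding.
latent = torch.randn(1, 4, 64, 64)
timestep = torch.tensor([50])
text_emb = torch.zeros(1, 77, 768)

with torch.no_grad():
    unet(latent, timestep, encoder_hidden_states=text_emb)

depth_logits = depth_head(feats["mid"])  # (1, 1, 8, 8); upsample to taste
# Training would optimize depth_head alone against ground-truth depth maps.
```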