Comment by AnotherGoodName 13 hours ago

Fwiw this absolutely works amazingly well with modern coding assistants. “I want a diagram of equation X morphing into Y” or similar is always a one-shot success for me.

Part of it is the simple syntax and the sheer number of open-source manim examples to train on, but it’s a pretty great demonstration of the time savings from AI coding agents. Especially since the output video looking correct is all you care about here. I.e. I don’t actually care about the specifics of how my explanatory videos were created, just that they were created via a simple prompt and I got what I wanted.
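
For illustration, a minimal manim-ce sketch of the kind of scene such a prompt tends to produce (the class name and the specific equations below are made-up placeholders, not something from this thread):

    from manim import *

    # Hypothetical one-shot output for "a diagram of equation X morphing into Y".
    class EquationMorph(Scene):
        def construct(self):
            eq_x = MathTex(r"e^{i\pi} + 1 = 0")
            eq_y = MathTex(r"e^{i\theta} = \cos\theta + i\sin\theta")

            self.play(Write(eq_x))
            self.wait()
            # TransformMatchingTex morphs one equation into the other,
            # reusing TeX substrings that appear in both.
            self.play(TransformMatchingTex(eq_x, eq_y))
            self.wait()

Something like manim -pql scene.py EquationMorph then renders it to a short video.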

sansseriff 10 hours ago

I remember listening to a podcast where Grant Sanderson basically said the opposite. He tried generating manim code with LLMs and found the results unimpressive. Probably just goes to show that competence in manim looks very different to us laymen than it does to Grant haha

  • apetresc 10 hours ago

    I wonder if that’s also because Grant uses his own branch of manim, from which the mainstream public one (manim-ce) has diverged quite a bit.

    I can imagine LLMs getting confused when asked to write “manim”, since most people talking about “manim” (and the vast majority of public manim code) actually mean the subtly-but-substantially different “manim-ce”.

  • AnotherGoodName 10 hours ago

    I’m having 100% success even when doing transitions between screens etc. on the latest agents. I wonder if this is due to time and the agents vastly improving lately. Possibly also Grant knows manim so well that he can beat the time it takes to type a prompt. For the rest of us, I’m tempted to make a website for educators to type a prompt and get a video out, since it’s been that reliable for me.

  • icelancer 5 hours ago

    Yeah I've mostly had Grant's experience. Some frameworks have hooked in VLMs to "review" the manim animations and drawings but it doesn't help much.

pkoird 11 hours ago

Curious to see how it’ll combine with RAG over its documentation.