tleyden5iwx 18 minutes ago

Agree with LeCun that current AI doesn’t exhibit anything close to actual intelligence.

I think the solution lies in cracking the core algorithms nature uses to build the brain. Too bad it’s such an inscrutable hairball of analog spaghetti code.

intalentive 4 hours ago

Generative world models seem to be doing ok. Dreamer V4 looks promising. I’m not 100% sold on the necessity of EBMs.

Also I’m skeptical that self-supervised learning is sufficient for human level learning. Some of our ability is innate. I don’t believe it’s possible for statistical methods to learn language from raw audiovisual data the way children can.

  • suddenlybananas 5 minutes ago

    I don't know why people dislike the idea of innate knowledge so much; it's obvious other animals have tons of it, so why would we be any different?

SilverElfin 3 hours ago

This seems like the exact same talk LeCun has been giving for years: pushing JEPA and world models while attacking contemporary LLMs. Maybe he’s right, but he also seems to be wrong on timing and impact. LLMs have been going strong for longer than he expected, and providing more value than expected.

  • Philpax 2 hours ago

    This is also my read; JEPA is a genuinely interesting concept, but he's been hawking it for several years, and nothing has come of it in the domains where LLMs are successful. Hoping that changes at some point!

  • charcircuit an hour ago

    >LLMs have been going strong for longer than he expected

    Have they? They still seem to be a dead end toward AGI.