Comment by intalentive 9 hours ago

Generative world models seem to be doing ok. Dreamer V4 looks promising. I’m not 100% sold on the necessity of EBMs.

Also, I’m skeptical that self-supervised learning is sufficient for human-level learning. Some of our ability is innate. I don’t believe it’s possible for statistical methods to learn language from raw audiovisual data the way children can.

ACCount37 24 minutes ago

Human DNA has under 1GB of information content in it, most of which isn't even used in the brain. And the brain doesn't have a mechanism to read data out of the DNA efficiently.

This puts a severe limit on how much "innate knowledge" a human can possibly have.
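
A quick back-of-envelope check on that figure (a sketch only, assuming ~3.1 billion base pairs at a 2-bits-per-base upper bound):

```python
# Rough upper bound on the raw information content of the human genome.
BASE_PAIRS = 3.1e9      # approximate haploid human genome length
BITS_PER_BASE = 2       # 4 nucleotides -> at most 2 bits per base

raw_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"Upper bound: ~{raw_bytes / 1e6:.0f} MB")  # ~775 MB, i.e. under 1 GB

# The true information content is smaller still: much of the genome is
# repetitive and compresses well, and most of it isn't brain-specific.
```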

Sure, the human brain has a strong inductive bias. It also has a developmental plan, and it follows that plan: it guides its own learning and ends up better at self-supervised learning than even the very best of our AIs. But that guidance, that sequencing, and that bias must all be created by the rules encoded in the DNA, and there's only so much data in the DNA.

It's quite possible that the human brain has a bunch of simple and clever learning tricks that, if we pried them out and applied them to our AIs, would give us 100x the learning rate and 1000x the sample efficiency. Or it could be that a single neuron in the human brain is worth 10,000 neurons in an artificial neural network, and thus the biggest part of the "secret" of the human brain is just that it's hilariously overparameterized.
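
For scale, here is the crude parameter-count comparison behind that "overparameterized" guess (all figures are rough order-of-magnitude estimates, not measurements):

```python
# Back-of-envelope: synapse count of a human brain vs. a large ANN.
BRAIN_NEURONS = 8.6e10       # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 1e4    # order-of-magnitude estimate
ANN_PARAMS = 1e12            # a hypothetical trillion-parameter model

brain_synapses = BRAIN_NEURONS * SYNAPSES_PER_NEURON  # ~8.6e14
print(f"Brain synapses: ~{brain_synapses:.0e}")
print(f"vs. a 1T-parameter ANN: ~{brain_synapses / ANN_PARAMS:.0f}x more")
```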

suddenlybananas 6 hours ago

I don't know why people dislike the idea of innate knowledge so much. It's obvious other animals have tons of it; why would we be any different?

  • yorwba 2 hours ago

    The problem with assuming tons of innate knowledge is that it needs to be stored somewhere. DNA contains enough information to determine the development of various neuron types and which kinds of other neurons they connect to, but it certainly cannot specify weights for every individual synapse, except in animals with very low neuron counts (the rough numbers sketched below make the gap concrete).

    So the existence of a sensorimotor feedback loop for a basic behavior is innate (e.g. moving forward to seek food), but the fine-tuning for reliably executing this behavior while adapting to changing conditions (e.g. moving over difficult terrain with an injured limb after spotting a tasty plant) needs to be learned through interaction with the environment (stumbling around eating random stuff to find out what is edible).
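
    A rough sketch of why per-synapse specification can't fit in the genome (every figure below is an order-of-magnitude assumption):

    ```python
    # Bits needed to specify one coarse weight per synapse vs. genome capacity.
    SYNAPSES = 1e14            # order-of-magnitude human synapse count
    BITS_PER_WEIGHT = 8        # assume even a very coarse 8-bit weight
    GENOME_BITS = 3.1e9 * 2    # ~2 bits per base pair, upper bound

    needed_bits = SYNAPSES * BITS_PER_WEIGHT
    print(f"To specify every synapse: ~{needed_bits:.1e} bits")  # ~8.0e14
    print(f"Genome upper bound:       ~{GENOME_BITS:.1e} bits")  # ~6.2e9
    print(f"Shortfall: ~{needed_bits / GENOME_BITS:,.0f}x")      # ~130,000x
    ```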

    • suddenlybananas an hour ago

      >certainly cannot specify weights for every individual synapse

      That's not the only way one could encode innate knowledge. Besides, we have experimentally demonstrated many times that animals have innate knowledge; the only reason we can't do this with humans is that it would be horrifically unethical.

      >Stumbling around eating random stuff to find out what is edible

      Plenty of animals have innate knowledge about what is and isn't edible: it's why, for example, tasty things generally smell good and why things that are bad for us (rotting meat) smell horrific.

      • yorwba 10 minutes ago

        I'm not saying that there's no innate knowledge. This entire list of reflexes https://en.wikipedia.org/wiki/List_of_reflexes is essentially a list of innate knowledge in humans, many of which have been demonstrated in newborns, apparently without such experiments being considered unethical.

        I'm saying that there are limits to how much knowledge can be inherited. I.e. the question isn't "Where could innate knowledge be encoded other than in synapses?" but "Considering the extremely large number of synapses involved in complex behavior far exceeds genetic storage capacity, how are their weights determined?" And since we know that in addition to having innate behaviors, animals are also capable of learning (e.g. responding to artificial stimuli not found in nature), it stands to reason that most synapse weights must be set by a dynamic learning process.
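
        One way to picture "compact rules are innate, weights are learned" is a toy sketch like this (purely illustrative; the wiring rule and the Hebbian update are my assumptions, not a model of biology):

        ```python
        import numpy as np

        rng = np.random.default_rng(42)  # the "genome": a tiny seed plus the rules below
        N = 200                          # toy neuron count

        # Innate part: a compact rule decides who connects to whom (~5% sparsity).
        mask = rng.random((N, N)) < 0.05

        # Learned part: weights start at zero and are shaped by experience.
        weights = np.zeros((N, N))

        def hebbian_step(activity, lr=0.01):
            """Toy Hebbian update: co-active neurons strengthen their connection."""
            weights[:] += lr * np.outer(activity, activity) * mask

        for _ in range(100):  # "experience": random activity patterns
            hebbian_step((rng.random(N) > 0.8).astype(float))

        print(f"Innate spec: a few lines of rules; learned values: {int(mask.sum())}")
        ```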

  • krallistic 4 hours ago

    Various reasons:

    Some people just believe there is no innate knowledge, or that we don't need it if we can scale/learn better (in the direction of the Bitter Lesson).

    (ML) academia is also heavily biased against it, for two main reasons:

    - It's harder to publish: if you learn task X with innate knowledge, the result is less general, so reviewers can claim it's just (feature) engineering, which hurts acceptance. People therefore try to frame their work as generally as possible.

    - Historical reasons, from the old conflict with the symbolic community (which relied heavily on innate knowledge).

geremiiah 5 hours ago

But generative models are always going to seem like they're doing ok. That's how they work: they're good at imitating and producing misleading demos.