Comment by gavinray 3 days ago

Two close friends of mine, math prodigies who went into ML very early (mid-2010s), were always telling me about an algorithm that sounds similar to this:

"NEAT/HyperNEAT" (Neuroevolution of Augmented Topologies) [0]

I'm no ML practitioner, but as I understand it, the primary difference is that NEAT evolves the topology of the network, while this paper seems to evolve the weights.

Seems like two approaches to the same problem -- one evolving the network structure, the other the weights.
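To make the contrast concrete, here is a minimal, hypothetical sketch of the weight-evolution side: a simple truncation-selection evolution strategy that keeps the network topology fixed (a single linear neuron) and evolves only its weights. NEAT would additionally mutate the structure itself (adding nodes and connections); none of the names or parameters below come from the paper.

```python
# Hypothetical sketch: evolving the *weights* of a fixed-topology network
# with a simple evolution strategy. NEAT, by contrast, would also mutate
# the topology (adding nodes/connections over generations).
import random

def net(weights, x):
    # Fixed topology: one linear neuron, y = w0 * x + w1
    return weights[0] * x + weights[1]

def fitness(weights, data):
    # Negative squared error over the dataset (higher is better)
    return -sum((net(weights, x) - y) ** 2 for x, y in data)

def evolve(data, pop_size=50, generations=200, sigma=0.1):
    random.seed(0)
    pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = pop[: pop_size // 5]            # truncation selection: keep top 20%
        pop = [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size)              # Gaussian weight mutation
        ]
    return max(pop, key=lambda w: fitness(w, data))

# Recover the target function y = 2x + 1 from samples
data = [(x, 2 * x + 1) for x in range(-5, 6)]
best = evolve(data)
```

The population converges on weights near (2, 1) purely through selection and mutation, with no gradients involved.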

Those two friends are quite possibly the most intelligent people I've ever met, and they were convinced that RL and evolutionary algorithms were the path forward in ML.

[0] https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t...

khalic 3 days ago

Humans are amazing: we build a hypothetical computing system while trying to understand neurons, find out that's not really how neurons work, but whatever -- we build paradigm-shifting tech around it anyway. And we're still enhancing it with ideas from that imaginary system.

  • zelphirkalt 2 days ago

    Since we lack the knowledge and means to build the real thing, this is what we have to work with for now. I think it is obvious that the industry goes with whatever is available. Though all the uninformed hype from people who think it works like the brain is certainly annoying.

robviren 3 days ago

I just got sucked into this idea recently! After some success with using genetic algorithms to clone voices for Kokoro, I wondered if it would be possible to evolve architectures. I'm so interested in the idea of self-assembled intelligence, but I do wonder how it can be made feasible. A hybrid approach like this might be for the best, given how LLMs have turned out.

  • hdjdbdirbrbtv 2 days ago

    So the issue with genetic algorithms / genetic programming is that you need a good way to handle the path the population takes. It is closer to reinforcement learning than to the y = f(x) setup of deep learning, where f() is what the NN computes and x and y are the training data.

    Finding a good scoring algorithm is hard as it is so easy for a GA to cheat...

    Source: experience
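The GA loop the comment describes can be sketched as below. This is a generic, hypothetical skeleton (shown on the standard OneMax toy problem, maximizing the count of 1-bits), not anyone's production code. The key point sits in `fitness()`: the population optimizes exactly what the scoring function measures, so any gap between the score and the real goal is a loophole the GA will exploit -- the "cheating" the commenter mentions.

```python
# Hypothetical GA skeleton. The population optimizes exactly what
# fitness() measures, not what you *intend* it to measure -- which is
# why a loophole in the scoring function gets exploited ("cheating").
import random

def fitness(genome):
    # The score the GA actually optimizes (here: number of 1-bits).
    # If this diverges from the real goal, the population drifts
    # toward the divergence, not the goal.
    return sum(genome)

def ga(length=32, pop_size=40, generations=60, mutation_rate=0.02):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]            # truncation selection
        children = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            cut = random.randrange(length)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)                # per-bit flip mutation
        pop = children
    return max(pop, key=fitness)

best = ga()
```

On OneMax the score and the goal coincide, so the GA converges cleanly; the hard part in real problems is writing a `fitness()` where that remains true.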