imhoguy 3 days ago

I don't know why I read it as "HN". Then again, HN can solve any problem.

tossandthrow 3 days ago

Feed-forward NNs can approximate any continuous function f: X -> Y, but only on compact (closed and bounded) domains.

But recurrent neural networks can solve any computable problem, given enough precision.
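
A concrete way to see the compact-domain point is to build the approximator by hand. The numpy sketch below (the knot count and spacing are arbitrary choices) sets the weights of a one-hidden-layer ReLU net so that it interpolates f(x) = x^2 on [0, 1]. Inside the interval you can get as accurate as you like by adding knots; outside it, the net just extrapolates linearly, which is the "only on compact domains" caveat in action.

  import numpy as np

  def relu(z):
      return np.maximum(z, 0.0)

  # Knots where the piecewise-linear approximation is allowed to bend.
  knots = np.linspace(0.0, 1.0, 11)
  targets = knots ** 2

  # Slope of each segment, then the change of slope at each knot:
  slopes = np.diff(targets) / np.diff(knots)
  out_w = np.concatenate([[slopes[0]], np.diff(slopes)])

  def mlp(x):
      # Hidden layer: one ReLU unit per knot, with bias -knot.
      hidden = relu(x[:, None] - knots[:-1][None, :])
      return targets[0] + hidden @ out_w

  x = np.linspace(0.0, 1.0, 5)
  print(mlp(x))   # ~ [0. 0.065  0.25 0.565  1.]
  print(x ** 2)   #   [0. 0.0625 0.25 0.5625 1.]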

  • roboboffin 3 days ago

    Does that mean that when we reduce the precision of a NN, for example using bfloat16 instead of float32, we reduce the set of computational problems that can be solved?

    How would that compare with a biological neural network with presumably near-infinite precision?
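
    One way to see what reduced precision takes away (a toy sketch; float16 stands in for bfloat16 here, since stock numpy has no bfloat16, but the effect is the same at a different bit width): lowering precision merges values that used to be distinct states, so any computation that depended on telling them apart is lost.

      import numpy as np

      a = np.float32(1.0)
      b = np.float32(1.0 + 1e-4)

      print(a == b)                          # False: float32 keeps them apart
      print(np.float16(a) == np.float16(b))  # True: both round to 1.0 in float16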

  • wodenokoto 3 days ago

    On the first day of our introduction to NNs, we were asked to build all the logic gates out of artificial neurons, and then told: "If you have all the gates, you can do all computations."

    I've got to admit, I'm sort of taking that at face value, because I don't know enough computer science to a) tell whether it's true and b) know what "f: X -> Y, but only on compact domains" means.
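
    For what it's worth, that exercise looks roughly like this (a sketch with hand-picked weights; the thresholds are one arbitrary choice among many that work). NAND alone is already universal, and XOR needs two layers, which is the classic single-perceptron limitation:

      import numpy as np

      def neuron(weights, bias):
          # One artificial neuron with a step activation.
          return lambda *inputs: int(np.dot(weights, inputs) + bias > 0)

      AND  = neuron([1, 1], -1.5)
      OR   = neuron([1, 1], -0.5)
      NOT  = neuron([-1], 0.5)
      NAND = neuron([-1, -1], 1.5)   # universal on its own

      def XOR(a, b):                 # needs two layers of neurons
          return AND(OR(a, b), NAND(a, b))

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "->", "AND:", AND(a, b), "XOR:", XOR(a, b))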

    • tossandthrow 3 days ago

      I think the easiest way to think about this is in terms of natural numbers, i.e. 1, 2, 3, 4, ...

      When you only have a fixed width, i.e. a static feed-forward network, there is an upper limit on the data you can represent and compute on.

      E.g. if the highest number you can represent is 1,000, then you will need a new NN if you want to do computations on 1,001.

      ... or use an inductive structure, like a recurrent neural network does.
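
      A toy sketch of the difference (the step function here is just XOR written arithmetically, standing in for a small two-layer neuron circuit): a feed-forward net has a fixed input width, so it only ever sees n-bit inputs for one fixed n, while a recurrent cell reuses the same weights at every step and so handles inputs of any length.

        def rnn_step(state, bit):
            # XOR of state and bit, written arithmetically.
            return state + bit - 2 * state * bit

        def parity(bits):
            state = 0
            for b in bits:        # unrolls to as many steps as the input needs
                state = rnn_step(state, b)
            return state

        print(parity([1, 0, 1]))    # 0
        print(parity([1] * 1001))   # 1, with no new network for longer inputs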