Comment by roboboffin 3 days ago
Does that mean that when we reduce the precision of a NN, for example by using bfloat16 instead of float32, we reduce the set of computational problems that can be solved?
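
For concreteness, a minimal sketch (assuming PyTorch is available; torch.finfo reports the spacing between adjacent representable values) of how much coarser the bfloat16 number grid is compared with float32 around 1.0:

    import torch

    # bfloat16 keeps float32's 8 exponent bits but only 7 mantissa bits,
    # so the grid of representable values is much coarser.
    print(torch.finfo(torch.float32).eps)   # ~1.19e-07
    print(torch.finfo(torch.bfloat16).eps)  # ~7.81e-03

    one = torch.tensor(1.0, dtype=torch.bfloat16)
    small = torch.tensor(1e-3, dtype=torch.bfloat16)
    print(one + small)  # rounds back to exactly 1.0: the increment is lost

So any computation that depends on distinctions finer than that step size is simply not expressible at the lower precision.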

How would that compare with a biological neural network, which presumably has near-infinite precision?