Comment by gwern 8 hours ago

My point is more that Schmidhuber is saying the gates or the initialization are the innovation solely because they produce well-behaved gradients, which is why he starts the history at Hochreiter's 1991 thesis and counts nothing before it. But it's not clear to me why we should define it that way when you can solve the gradient misbehavior in other ways, and that's why https://gwern.net/doc/ai/nn/fully-connected/1988-lang.pdf#pa... works and doesn't diverge: if I'm understanding them right, they used warmup, so the gradients don't explode or vanish. So why doesn't that count? They have shortcut layers and a solution to exploding/vanishing gradients, and it works to solve their problem. Is it literally 'well, you didn't use a gate neuron or fancy initialization to train your shortcuts stably, therefore it doesn't count'? Such an argument seems carefully tailored to exclude all prior work...
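
For concreteness, here's a minimal sketch of the mechanism I'm pointing at (to be clear, this is not Lang et al.'s actual 1988 setup; the network, data, and hyperparameters are invented for illustration): a small net with a shortcut layer from input straight to output, trained with linear learning-rate warmup so the earliest, largest gradient steps are damped. No gate units, no fancy initialization:

```python
# Hedged sketch: shortcut connections + learning-rate warmup as a way to
# keep gradients well-behaved, with no gates or special initialization.
import torch
import torch.nn as nn

class ShortcutMLP(nn.Module):
    def __init__(self, d_in=8, d_hidden=32, d_out=1):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Tanh())
        self.out = nn.Linear(d_hidden, d_out)
        # Shortcut layer: input connects directly to output, giving
        # gradients a short path that bypasses the hidden nonlinearity.
        self.shortcut = nn.Linear(d_in, d_out)

    def forward(self, x):
        return self.out(self.hidden(x)) + self.shortcut(x)

torch.manual_seed(0)
model = ShortcutMLP()
peak_lr = 0.1
opt = torch.optim.SGD(model.parameters(), lr=peak_lr)
loss_fn = nn.MSELoss()

# Toy data: y = sum(x) + noise.
x = torch.randn(256, 8)
y = x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)

warmup_steps = 100
for step in range(500):
    # Linear warmup: ramp the learning rate from ~0 up to peak_lr, so
    # the earliest (least-informed) updates can't blow up the weights.
    scale = min(1.0, (step + 1) / warmup_steps)
    for group in opt.param_groups:
        group["lr"] = peak_lr * scale
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(step, loss.item())
```

The point of the toy is only that stable training of shortcuts falls out of damping the early updates; nothing about it requires a gate neuron or a carefully-chosen initialization.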