ansk 4 hours ago

Of all Schmidhuber's credit-attribution grievances, this is the one I am most sympathetic to. I think if he spent less time remarking on how other people didn't actually invent things (e.g. Hinton and backprop, LeCun and CNNs, etc.) or making tenuous arguments about how modern techniques are really just instances of some idea he briefly explored decades ago (GANs, attention), and instead just focused on how this single line of research (namely, gradient flow and training dynamics in deep neural networks) laid the foundation for modern deep learning, he'd have a much better reputation and probably a Turing award. That said, I do respect the extent to which he continues his credit-attribution crusade even to his own reputational detriment.

  • godelski 2 hours ago

    I think one of the best things to learn from Schmidhuber is that progress involves a lot of players over a long stretch of time. Attribution is actually a difficult game, and usually we only assign credit to those at the end of some milestone. It's like giving the gold medal to the runner in the last leg of a relay race, or focusing only on the lead singer of a band. It's never one person who does it alone. Shoulders of giants, but those giants are just a couple of dudes in a really big trenchcoat.

    Another important lesson is that good ideas often get passed over because of hype or politics. We like to pretend that science is all about merit and what is correct. Unfortunately, that isn't true: it is that way in the long run, but in the short run there's a lot of politics, and humans still get in their own way. This is a solvable problem, but we need to acknowledge it and make systematic changes. Unfortunately, a lot of that is coupled to the aforementioned attribution problem.

      > I do respect the extent to which he continues his credit-attribution crusade even to his own reputational detriment.
    
    
    As should we all. Clearly he was upset that others got credit for his contributions. But what I do appreciate is that he has recognized that it is a problem bigger than him, and is trying to combat the problem at large and not just his own little battlefield. That's respectable.
    • dchftcs an hour ago

      It's a bit of an aside but I believe this is one reason Zuckerberg's vision for establishing the superintelligence lab is misguided. Including VCs, too many people get distracted by rock stars in this gold rush.

      • godelski 30 minutes ago

        Just last week I said something in line with that[0]. Many people conflated my claim that Meta has a lot of good people with "Meta /is/ winning the AI race". I only claimed that they employ some of what I think are the best researchers in the field, but don't give them nearly the resources or capacity to further their research that they give to these "rock stars". Tbh, the same is true for any top lab; I just think it happens more at Meta because Meta is so metric- and rock-star-focused.

        So I agree. The vision is misguided. I think they'd have done better had they taken that same money and thrown it at the people they already have who are working in different research areas. Everyone is trying to win by doing the same things. That's not a smart strategy. You got all that money, you gotta take risks. It's all the money dumped into research that got us to this point in the first place.

        It's good to shift funds around and focus on what is working now, but you also have to have a pipeline of people working on what will work tomorrow, next year, in 5 years, and in 10 years. The people who can do that work are there. The people who want to do that work are there. The only thing missing is people who want to fund that work. Unfortunately, it takes time to bake a cake.

        Quite frankly, these companies also have more than enough money to do both. They have enough money to throw cash hand over fist at every wild and crazy idea. But they get caught in the hype, which is no different than an over focus on the attribution rather than the process or pipeline that got us the science in the first place.

        [0] https://news.ycombinator.com/item?id=45554147

gwern 2 hours ago

> Note again that a residual connection is not just an arbitrary shortcut connection or skip connection (e.g., 1988)[LA88][SEG1-3] from one layer to another! No, its weight must be 1.0, like in the 1997 LSTM, or in the 1999 initialized LSTM, or the initialized Highway Net, or the ResNet. If the weight had some other arbitrary real value far from 1.0, then the vanishing/exploding gradient problem[VAN1] would raise its ugly head, unless it was under control by an initially open gate that learns when to keep or temporarily remove the connection's residual property, like in the 1999 initialized LSTM, or the initialized Highway Net.

After reading Lang & Witbrock 1988 https://gwern.net/doc/ai/nn/fully-connected/1988-lang.pdf I'm not sure how convincing I find this explanation.
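To spell out the weight-1.0 point with a toy calculation (my own sketch, with made-up numbers, not something from the post): treat each layer's backward pass as multiplying the gradient by the skip connection's weight, so any fixed weight other than 1.0 compounds exponentially with depth.

```python
# Toy sketch (my own, hypothetical numbers): if each layer scales the
# backward signal by the skip connection's weight w, the gradient after
# `depth` layers is roughly w**depth. Only w = 1.0 keeps it stable;
# anything else vanishes or explodes.

def grad_scale(skip_weight, depth):
    scale = 1.0
    for _ in range(depth):
        scale *= skip_weight  # one backward factor per layer
    return scale

print(grad_scale(1.0, 50))  # 1.0: gradient preserved at any depth
print(grad_scale(0.9, 50))  # ~0.005: vanishing gradient
print(grad_scale(1.1, 50))  # ~117: exploding gradient
```

This deliberately ignores the residual branch's own Jacobian, which is roughly what the initially-open gates in the 1999 initialized LSTM / initialized Highway Net are there to manage.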

  • CamperBob2 30 minutes ago

    That's a cool paper. Super interesting to see how work was progressing at the time, when Convex was the machine everybody wanted on (or rather next to) their desks.

ekjhgkejhgk 3 hours ago

I spent some time in academia.

The person with whom an idea ends up associated often isn't the first person to have it. More often it's the person who explains why the idea is important, finds a killer application for it, or otherwise popularizes it.

That said, you can open what Schmidhuber would say is the paper which invented residual NNs. Try and see if you notice anything about the paper that perhaps would hinder the adoption of its ideas [1].

[1] https://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdv...

aDyslecticCrow 4 hours ago

I thought it was ResNet that invented the technique, so it's interesting to see it rooted back through LSTM, which feels like a very different architecture. ResNet really made massive waves in the field; for a while it was hard to find a paper that didn't reference it.

alyxya 3 hours ago

The notion of inventing or creating something in ML doesn't seem very important as many people can independently come up with the same idea. Conversely, you can create novel results just by reviewing old literature and demonstrating it in a project.

  • ekjhgkejhgk 3 hours ago

    That's how all/most science normally works.

    Conversely, a huge amount of science is just scientists going "here's something I found interesting" that no one can figure out what to do with. Then 30 or 100 years go by and it turns out to be useful in a field that didn't even exist at the time.

    • alyxya an hour ago

      It doesn’t apply as much to empirical science, because there’s a lot more variation in observations. The variation of ideas in ML model architectures is limited by their being theoretical.

ekjhgkejhgk 3 hours ago

To comment on the substance.

It seems that these two people, Schmidhuber and Hochreiter, were perhaps solving the right problem for the wrong reasons. They thought it was important because they expected RNNs to hold memory indefinitely, and because of BPTT you can think of an RNN as a NN with infinitely many layers. At the time, I believe, nobody worried about the vanishing gradient for deep feedforward NNs, because the compute power for networks that deep just didn't exist. But nowadays that's exactly how their solution is applied.
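The "infinitely many layers" picture can be made concrete with a scalar tanh RNN (my own toy sketch, not anything from the thread): backprop through time multiplies one Jacobian factor per timestep, so the gradient with respect to the earliest hidden state shrinks exponentially with sequence length, exactly as it does with depth in a feedforward net.

```python
import math

# Toy sketch (my own, assuming a scalar tanh RNN with recurrent weight w):
# BPTT multiplies one Jacobian factor per step, d h_{t+1}/d h_t =
# w * (1 - tanh(w*h_t)^2), so the gradient w.r.t. h_0 decays
# exponentially in the number of unrolled steps.

def bptt_grad(w, steps, h0=0.5):
    h, grad = h0, 1.0
    for _ in range(steps):
        h = math.tanh(w * h)
        grad *= w * (1.0 - h * h)  # one backward factor per timestep
    return grad

print(bptt_grad(0.9, 5))    # still a usable gradient after a few steps
print(bptt_grad(0.9, 100))  # effectively zero: the vanishing gradient
```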

That's science for you.

bjourne an hour ago

I'm not a giant like Schmidhuber so I might be wrong, but imo there are at least two features that set residual connections and LSTMs apart:

1. In LSTMs skip connections help propagate gradients backwards through time. In ResNets, skip connections help propagate gradients across layers.

2. Forking the dataflow is part of the novelty, not just the residual computation. Shortcuts can contain things like batch norm, downsampling, or any other operation. LSTM "residual learning" is much more rigid.
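For concreteness, here's how I'd caricature the two update rules (my own shorthand, not anyone's actual implementation):

```python
# Caricature (my own shorthand, hypothetical functions): the LSTM cell
# state is carried through *time* by a gated near-identity update, while
# a ResNet shortcut carries activations across *layers* and may itself
# contain an operation such as downsampling or batch norm.

def lstm_cell_state(c_prev, forget_gate, candidate, input_gate):
    # Through-time residual: c_t = f * c_{t-1} + i * c~_t.
    # With forget_gate ~ 1.0 the connection is effectively identity.
    return forget_gate * c_prev + input_gate * candidate

def resnet_block(x, f, shortcut=lambda x: x):
    # Across-layer residual: y = shortcut(x) + f(x). The shortcut is
    # identity by default but can be any operation (point 2 above).
    return shortcut(x) + f(x)

# With the forget gate pinned at 1.0, the two have the same additive form:
print(lstm_cell_state(2.0, 1.0, 0.5, 1.0))  # 2.0 + 0.5 = 2.5
print(resnet_block(2.0, f=lambda x: 0.5))   # 2.0 + 0.5 = 2.5
```

The shared additive form is the core of Schmidhuber's equivalence claim; the free-form `shortcut` argument is where the ResNet-style flexibility from point 2 lives.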

scarmig 4 hours ago

From the domain, I'm guessing the answer is Schmidhuber.

HarHarVeryFunny 2 hours ago

How about Schmidhuber actually inventing the next big thing, rather than waiting for it to come along and then claiming credit for it?