Comment by fc417fc802 3 days ago

> Generalization is the ability for a model to perform well on new unseen data within the same task that it was trained for.

By that logic a chess engine can generalize in the same way that AlphaGo Zero does. It is a black box that has never seen the vast majority of possible board positions. In fact it has never seen anything at all, because unlike an ML model it isn't the result of an optimization algorithm (at least the old engines weren't, back before they started incorporating ML models).

If your definition of "generalize" depends on whether the thing under consideration is an ML model or not, then the definition is broken. You need to treat the thing being tested as a black box, scoring it only on inputs and outputs.

Writing the chess engine is analogous to wiring up the untrained model, the optimization algorithm, and the simulation, and then running the whole thing. Both tasks require thoughtful work by the developer. The finished chess engine is analogous to the trained model.

> They were originally trained for ...

I think you're in danger here of a definition that depends intimately on intent. It isn't clear that they weren't inadvertently trained for those other abilities at the same time. Moreover, unless those additional abilities to be tested for were specified ahead of time you're deep into post hoc territory.

voidspark 3 days ago

You're way off. This is not my personal definition of generalization.

We are talking about a very specific technical term in the context of machine learning.

An explicitly programmed chess engine does not generalize, by definition. It doesn't learn from data. It is an explicitly programmed algorithm.

I recommend you go do some reading about machine learning basics.

https://www.cs.toronto.edu/~lczhang/321/notes/notes09.pdf
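For what it's worth, the textbook sense of the term can be sketched in a few lines: fit a model on one split of the data, then score it on a held-out split the fit never saw. This is a toy illustration with synthetic data (not taken from the linked notes):

```python
import random

# Toy illustration of the textbook definition (all numbers are made up):
# good error on held-out data is "generalization" in the ML sense.
random.seed(0)
xs = [i / 10 for i in range(50)]
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in xs]  # noisy y = 2x + 1
random.shuffle(data)
train, test = data[:40], data[40:]

# Ordinary least squares on the training split only.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def mse(split):
    return sum((y - (slope * x + intercept)) ** 2 for x, y in split) / len(split)

print(f"train MSE: {mse(train):.4f}")  # error on seen data
print(f"test MSE:  {mse(test):.4f}")   # error on unseen, in-distribution data
```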

  • fc417fc802 3 days ago

    I thought we were talking about metrics of intelligence. Regardless, the terminology overlaps.

    As far as metrics of intelligence go, the algorithm is a black box. We don't care how it works or how it was constructed. The only thing we care about is (something like) how well it performs across an array of varied tasks that it hasn't encountered before. That is to say, how general the black box is.

    Notice that in the case of typical ML algorithms the two usages are equivalent. If the approach generalizes (from training) then the resulting black box would necessarily be assessed as similarly general.

    So going back up the thread a ways. Someone quotes Chollet as saying that LLMs can't generalize. You object that he sets the bar too high - that, for example, they generalize just fine at Go. You can interpret that using either definition. The result is the same.

    As far as measuring intelligence is concerned, how is "generalizes on the task of Go" meaningfully better than a procedural chess engine? If you reject the procedural chess engine as "not intelligent" then it seems to me that you must also reject an ML model that does nothing but play Go.

    > An explicitly programmed chess engine does not generalize, by definition. It doesn't learn from data. It is an explicitly programmed algorithm.

    Following from above, I don't see the purpose of drawing this distinction in context since the end result is the same. Sure, without a training task you can't compare performance between the training run and something else. You could use that as a basis to exclude entire classes of algorithms, but to what end?
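    The black-box point can be made concrete with a toy sketch (both "players" below are hypothetical stand-ins, not real engines): a hand-coded rule and a rule inferred from examples can be scored by exactly the same input/output test, and that test cannot tell how either box was built.

```python
# Hypothetical toy task: classify a number as non-negative (+1) or negative (-1).

def handcoded(x):
    """Explicitly programmed rule, analogous to a classical chess engine."""
    return 1 if x >= 0 else -1

# "Trained" counterpart: the same kind of decision rule, but inferred from
# labelled examples rather than written down directly.
examples = [(-3, -1), (-1, -1), (2, 1), (5, 1)]
cutoff = (max(x for x, y in examples if y == -1)
          + min(x for x, y in examples if y == 1)) / 2  # midpoint boundary

def learned(x):
    return 1 if x >= cutoff else -1

def score(black_box, inputs):
    """Black-box metric: fraction of unseen inputs answered correctly."""
    return sum(black_box(x) == (1 if x >= 0 else -1) for x in inputs) / len(inputs)

unseen = [-10, -2, 0.75, 4, 7]  # inputs neither box was built or trained on
print(score(handcoded, unseen), score(learned, unseen))  # identical by this test
```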

    • voidspark 2 days ago

We still have this mix-up with the term "generalize".

      ML generalization is not the same as "generalness".

The model learns from data to infer strategies for its task (generalization). This is a completely different paradigm from an explicitly programmed rules engine, which does not learn and cannot generalize.

  • daveguy 2 days ago

    If you are using the formal definition of generalization in a machine learning context, then you completely misrepresented Chollet's claims. He doesn't say much about generalization in the sense of in-distribution, unseen data. Any AI algorithm worth a damn can do that to some degree. His argument is about transfer learning, which is simply a more robust form of generalization to out-of-distribution data. A network trained on Go cannot generalize to translation and vice versa.

    Maybe you should stick to a single definition of "generalization" and make that definition clear before you accuse people of needing to read ML basics.
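    The distinction being drawn here can be shown with a toy model (synthetic setup, all names illustrative): a fit can do fine on unseen in-distribution inputs while failing badly under distribution shift, which is the harder, transfer-like bar.

```python
import random

random.seed(1)

def true_fn(x):
    return x * x  # the underlying task is quadratic

# Fit a straight line on inputs drawn from [0, 1] (the training distribution).
train = [(x, true_fn(x)) for x in (random.random() for _ in range(200))]
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def mse(points):
    return sum((y - (slope * x + intercept)) ** 2 for x, y in points) / len(points)

# Unseen points from the same range vs. points well outside it.
in_dist = [(x, true_fn(x)) for x in (0.05, 0.3, 0.55, 0.8, 0.95)]
out_dist = [(x, true_fn(x)) for x in (3.0, 4.0, 5.0)]

print(f"in-distribution MSE:     {mse(in_dist):.3f}")   # small
print(f"out-of-distribution MSE: {mse(out_dist):.3f}")  # far worse under shift
```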

    • voidspark 2 days ago

      I was replying to a claim that LLMs "can’t generalize" at all, and I showed they do within their domain. No I haven't completely misrepresented the claims. Chollet is just setting a high bar for generalization.

      • daveguy a day ago

        It is a very basic form of generalization. And one that most people understand as fundamental to general intelligence.