alganet 2 days ago

"I mean, today we can do jet engines in garage shops. Why would they needed a catapult system? They could have used this simple jet engine. Look, here is the proof, there's a YouTuber that did a small tiny jet engine in his garage. They were held back by ideas, not aerodynamics and tooling precision."

See how silly it is?

Now, focus on the simple question. How would you train the 300K model in 1997? To run it, you need someone to train it first.

rahen 2 days ago

Reductio ad absurdum. A 300K-param model was small enough to be trained offline, on curated datasets, with the CPU and RAM capacities that absolutely existed at the time, especially in research centers.

Backprop was known. Data was available. Narrow tasks (completion, summarization, categorization) were relevant. The model that runs on a Pentium II could have been trained on a Cray, or, given more time, on any reasonably powerful 90s workstation. That’s not fantasy: LeNet-5, with its 65K weights, was trained on a mere Sun workstation in the early 90s.
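
Back-of-envelope, just to show the scale involved (every number below is an assumption picked for illustration: the corpus size, the epoch count, the 6-FLOPs-per-parameter-per-token rule of thumb, and the workstation throughput):

```python
# Rough sketch: could a mid-90s machine train a 300K-parameter model?
# All figures are illustrative assumptions, not historical measurements.

params = 300_000          # model size under discussion
tokens = 10_000_000       # assumed small curated text corpus
epochs = 5                # assumed passes over the data

# Rule of thumb: ~6 FLOPs per parameter per token for forward + backward.
flops_needed = 6 * params * tokens * epochs

# Assumed sustained throughput of a high-end mid-90s workstation (~100 MFLOP/s).
workstation_flops = 100e6

seconds = flops_needed / workstation_flops
print(f"~{seconds / 86400:.1f} days of training")  # about 10 days with these numbers
```

Shift any of those assumptions by an order of magnitude and you move between days and months, which is exactly the range this feasibility argument turns on.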

The limiting factor wasn’t compute; it was the conceptual framing, as well as the datasets. No one seriously tried, because the field was dominated by symbolic logic and rule-based AI. That’s the core of the argument.

  • alganet 2 days ago

    > Reductio ad absurdum.

    My dude, you came up with the Wright brothers comparison, not me. If you don't like fallacies, don't use them.

    > on any reasonably powerful 90s workstation

    https://hal.science/hal-03926082/document

    Quoting the paper now:

    > In 1989 a recognizer as complex as LeNet-5 would have required several weeks’ training and more data than were available and was therefore not even considered.

    Their own words seem to match my assessment.

    Training time and data availability determined how much this whole thing could advance, and researchers were aware of those limits.