Comment by blonder a day ago

I appreciate your rabid optimism, but considering that Moore's Law has not held for years now, I am not sure a handwave about being able to scale to infinity is a reasonable way to look at things. Progress has slowed in plenty of areas in our current age; airplanes, for example.

timschmidt a day ago

Someone always crawls out of the woodwork to repeat this supposed "fact", which hasn't been true for the entire half-century it's been repeated. Jim Keller (designer of most of the great CPUs of the last couple of decades) gave a convincing presentation several years ago about just how not-true it is: https://www.youtube.com/watch?v=oIG9ztQw2Gc Everything he says in it still applies today.

Intel struggled for a decade, and folks took that to mean Moore's Law died. But TSMC and Samsung just kept iterating, and hopefully Intel's 18A process will see them back in the game.
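As a rough sanity check on that claim, two widely cited transistor counts already pin down the doubling cadence. The endpoint chips below are my own choice of illustrative data points, not figures from the thread:

```python
import math

# Two well-known transistor counts, ~52 years apart:
#   Intel 4004 (1971):      ~2,300 transistors
#   Apple M2 Ultra (2023):  ~134,000,000,000 transistors
t0, n0 = 1971, 2_300
t1, n1 = 2023, 134_000_000_000

doublings = math.log2(n1 / n0)    # how many times the count doubled
period = (t1 - t0) / doublings    # implied doubling period in years
print(f"{doublings:.1f} doublings, one every {period:.2f} years")
# -> 25.8 doublings, one every 2.02 years
```

An implied doubling period of roughly two years sustained over five decades is about as alive as a trend gets.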

  • eru a day ago

    During the 1990s (and for some years before and after) we got 'Dennard scaling'. The frequency of processors tended to increase exponentially, too, and featured prominently in advertising and branding.

    I suspect many people conflated Dennard scaling with Moore's law and the demise of Dennard scaling is what contributes to the popular imagination that Moore's law is dead: frequencies of processors have essentially stagnated.

    See https://en.wikipedia.org/wiki/Dennard_scaling

    • timschmidt a day ago

      Yup. Since then we've seen scaling primarily in transistor count, though clock speed has increased slowly as well. Increased transistor count has led to increasingly complex and capable instruction decode, branch prediction, out-of-order execution, larger caches, and wider execution pipelines in an attempt to increase single-threaded performance. We've also seen the rise of embarrassingly parallel architectures like GPUs, which make more effective use of additional transistors despite lower clock speeds. But Moore's been with us the whole time.

      Chiplets and advanced packaging are the latest techniques improving scaling and yield, keeping Moore alive, along with continued innovation in transistor design, light sources, computational inverse lithography, and wafer-scale designs like Cerebras.

      • eru a day ago

        Yes. Increase in transistor count is what the original Moore's law was about. But during the golden age of Dennard scaling it was easy to get confused.
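The Dennard-scaling point in this subthread can be sketched with the classic first-order dynamic-power model, P ≈ C · V² · f. This is a simplification (it ignores the leakage current that eventually broke the scaling), but it shows why shrinking transistors once gave "free" frequency:

```python
# Ideal Dennard scaling: shrink every linear dimension and the supply
# voltage by the same factor k, and power *density* stays constant
# even though frequency rises.
k = 0.7            # one process generation: ~0.7x linear shrink

C = k              # device capacitance scales with dimensions
V = k              # supply voltage scales down with dimensions
f = 1 / k          # frequency can rise by 1/k
area = k ** 2      # device area shrinks by k squared

power_per_device = C * V**2 * f          # = k**2, each device uses less
power_density = power_per_device / area  # = 1.0, constant: "free" speed
print(round(power_per_device, 2), round(power_density, 2))
# -> 0.49 1.0
```

Once supply voltage hit a practical floor and leakage power grew (around 2005-2006), the V term stopped shrinking, power density started rising with frequency, and clock speeds flattened, while transistor counts kept doubling.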

observationist a day ago

The Law of Accelerating Returns is a better formulation, since it isn't tied to any particular substrate; it's just not as widely known.

https://imgur.com/a/UOUGYzZ - had chatgpt whip up an updated chart.

LoAR shows remarkably steady improvement. It's not about space or power efficiency, just ops per $1000, so transistor counts served as a very good proxy for a long time.

There's been sufficiently predictable progress that 80-100 TFLOPS in your pocket by 2035 is probably a solid bet, especially if a fully generative AI OS and platform catches on as a product. The LoAR frontier for compute in 2035 is going to be more advanced than the limits of prosumer/flagship handheld products like phones, so there's a bit of lag and variability.
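For what it's worth, the arithmetic behind a pocket-TFLOPS figure like that is a plain exponential extrapolation. Both inputs below are illustrative assumptions, not measurements:

```python
# Assumed starting point: a current flagship phone GPU around 3 FP32
# TFLOPS, improving ~1.4x per year (both numbers are assumptions
# chosen for illustration, not benchmark results).
base_year, base_tflops = 2025, 3.0
growth = 1.4

tflops_2035 = base_tflops * growth ** (2035 - base_year)
print(f"~{tflops_2035:.0f} TFLOPS")
# -> ~87 TFLOPS, in the ballpark of the 80-100 claim
```

The point is less the exact output than how sensitive it is to the growth rate: drop the annual factor to 1.35 and the same arithmetic lands around 60 TFLOPS.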

js8 a day ago

You could put 64 TB of storage into your pocket with current technology. There are 4 TB microSD cards available.

Not sure about the stated TFLOPS, but I suspect we'll find that AI doesn't need that much compute to begin with.
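The storage figure is just multiplication; the card count is my assumption of what comfortably fits in a pocket:

```python
# microSD cards are ~15 x 11 mm, so a pocket easily holds a stack.
card_tb = 4    # largest microSD capacity mentioned above
cards = 16     # a pocketful (assumption)
print(cards * card_tb, "TB")
# -> 64 TB
```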

  • fragmede a day ago

    You can run models locally on high end smartphones today with apps like PocketPal or Local LLM.