slwvx 18 hours ago

My impression is that solar (and maybe wind?) energy have benefited from learning-by-doing [1][2] that has resulted in lower costs and/or improved performance each year. It seems reasonable to me that a similar process will apply to AI (at least in the long run). The rate of learning could be seen as a "pace" of improvement. I'm curious, do you have a reference for the deceleration of pace that you refer to?

[1] https://emp.lbl.gov/news/new-study-refocuses-learning-curve

[2] https://ourworldindata.org/grapher/solar-pv-prices-vs-cumula...

  • Jean-Papoulos 10 hours ago

    Why would the curve of solar prices be in any way correlated with the curve of AI improvements?

    The deceleration of pace is visible to anyone capable of using Google.

  • Arkhaine_kupo 6 hours ago

    > It seems reasonable to me that a similar process will apply to AI

    If it's reasonable, then reason it, because it's a highly apples-to-oranges comparison you are making.

  • specialist 4 hours ago

    u/ipaddr is probably referring to

      1) the dearth of new (novel) training data. Hence the mad scramble to hoover up, buy, or steal any potentially plausible new sources.
    
      2) diminishing returns from embiggening compute clusters for training LLMs and the size of their foundation models.
    
    (As you know) You're referring to Wright's Law, aka the experience curve.

    So there's a tension.

    Some concerns that we're nearing the ceiling for training.

    While the cost of applications using foundation models (implementing inference engines) is decreasing.

    Someone smarter than me will have to provide the slopes of the (misc) learning curves.
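    For anyone unfamiliar with Wright's Law: unit cost falls by a roughly constant fraction each time cumulative production doubles. A minimal sketch, where the 20% learning rate is purely illustrative (not a fitted solar or AI figure):

```python
import math

def wright_cost(cumulative_units, first_unit_cost, learning_rate):
    """Unit cost predicted by Wright's Law after `cumulative_units` produced.

    learning_rate is the fractional cost drop per doubling of cumulative
    production, e.g. 0.20 means each doubling cuts unit cost by 20%.
    """
    b = -math.log2(1.0 - learning_rate)  # experience exponent
    return first_unit_cost * cumulative_units ** (-b)

# Each doubling multiplies cost by (1 - learning_rate):
print(wright_cost(1, 100.0, 0.20))  # 100.0
print(wright_cost(2, 100.0, 0.20))  # 80.0
print(wright_cost(4, 100.0, 0.20))  # 64.0
```

    The "slope" people quote for a learning curve is exactly this per-doubling ratio, here 80%.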

crazygringo 16 hours ago

I don't think anyone really knows, because there's no objective standard for determining progress.

Lots of benchmarks exist where everyone agrees that higher scores are better, but there's no sense in which going from a score of 400 to 500 represents the same amount of progress as going from 600 to 700, or more, or less. They only really have directional validity.
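To make that concrete: any strictly increasing rescaling of a benchmark preserves the ranking of models but changes which score gaps look "equal", so equal raw gaps tell you nothing on their own. A toy illustration with made-up scores:

```python
import math

# Made-up benchmark scores and one of infinitely many monotone rescalings.
raw = [400, 500, 600, 700]
rescaled = [math.exp(s / 100) for s in raw]

# Same ordering either way...
assert sorted(raw) == raw and sorted(rescaled) == rescaled

# ...but the 400->500 gap and the 600->700 gap, equal on the raw scale,
# differ by a factor of e^2 (about 7.4x) after rescaling.
print(rescaled[1] - rescaled[0])  # ~93.8
print(rescaled[3] - rescaled[2])  # ~693.2
```

Unless a benchmark's scale is tied to something external (win rates, dollars, error costs), there's no principled way to prefer the raw scale over the rescaled one.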

I mean, the scores might correspond to real-world productivity rates in some specific domain, but that just begs the question -- productivity rates on a specific task are not intelligence.