IgorPartola 19 hours ago

This is the key. The real issue is that you don’t need superhuman intelligence in a phone AI assistant; most of the time you don’t need it at all. Current SOTA models do a decent job of approximating college-grad-level human intelligence, let’s say 85% of the time, which is helpful and cool but clearly could be better. But the pace at which the models are getting smarter is accelerating AND they are getting more energy efficient and memory efficient. So if something like DeepSeek is roughly 2 years behind the SOTA models from Google and others, then in 2030 you can expect 2028-level performance out of open models. There will come a time when a model capable of college-grad-level intelligence 99.999% of the time will be able to run on a $300 device.

If you are Apple, you do not need to lead the charge on a SOTA model; you can just wait until one is available for much cheaper. Your product is the devices and services consumers buy. If you are OpenAI, you have no other products. You must either become THE AI to have in an industry that will, in the next few years, become dominated by open models that are good enough, or close up shop, or come up with another product that has more of a moat.

ipaddr 19 hours ago

"pace at which the models are getting smart is accelerating". The pace is decelerating.

  • slwvx 17 hours ago

    My impression is that solar (and maybe wind?) energy has benefited from learning-by-doing [1][2] that has resulted in lower costs and/or improved performance each year. It seems reasonable to me that a similar process will apply to AI (at least in the long run). The rate of learning could be seen as a "pace" of improvement (a toy sketch of such a curve is below the references). I'm curious: do you have a reference for the deceleration of pace that you refer to?

    [1] https://emp.lbl.gov/news/new-study-refocuses-learning-curve

    [2] https://ourworldindata.org/grapher/solar-pv-prices-vs-cumula...
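
    To make the learning-by-doing idea concrete, here is a minimal Python sketch of a Wright's-Law style experience curve, where cost falls by a fixed fraction each time cumulative production doubles. The 20% learning rate and $100 starting cost are made-up numbers for illustration, not values taken from the solar or AI data above.

      import math

      def unit_cost(n_cumulative: float, cost_first: float, learning_rate: float) -> float:
          """Cost of the n-th unit, given a per-doubling learning rate (e.g. 0.20 = 20%)."""
          b = -math.log2(1 - learning_rate)   # Wright's-Law exponent
          return cost_first * n_cumulative ** -b

      # 20% cost reduction per doubling of cumulative output (illustrative only).
      for n in (1, 2, 4, 8, 16, 32):
          print(n, round(unit_cost(n, cost_first=100.0, learning_rate=0.20), 1))
      # -> 100.0, 80.0, 64.0, 51.2, 41.0, 32.8

    Whether AI capability or cost actually follows a curve like this is the open question; the sketch only shows the shape of the claim.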

    • Jean-Papoulos 8 hours ago

      Why would the curve of solar prices be in any way correlated with the curve of AI improvements?

      The deceleration of pace is visible to anyone capable of using Google.

    • Arkhaine_kupo 5 hours ago

      > It seems reasonable to me that a similar process will apply to AI

      If it's reasonable, then reason it, because it is a highly apples-to-oranges comparison you are making.

    • specialist 3 hours ago

      u/ipaddr is probably referring to:

        1) The dearth of new (novel) training data; hence the mad scramble to hoover up, buy, or steal any potentially plausible new sources.
      
        2) Diminishing returns from embiggening compute clusters for training LLMs and from growing the size of their foundation models.
      
      (As you know) you're referring to Wright's Law, aka the experience (learning) curve.

      So there's a tension.

      There are concerns that we're nearing the ceiling for training, while the cost of applications built on foundation models (running inference) keeps falling.

      Someone smarter than me will have to provide the slopes of the (misc) learning curves.
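
      To make the tension concrete, here is a toy Python sketch of the training side: a Chinchilla-style power-law loss curve, where each 10x of scale buys a smaller absolute improvement. The constants are approximate values loosely based on the published Chinchilla fit, used purely for illustration.

        # Toy illustration only: approximate constants, not anyone's actual numbers.
        # Chinchilla-style parametric loss: L(N, D) = E + A/N^alpha + B/D^beta
        def loss(n_params: float, n_tokens: float) -> float:
            E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
            return E + A / n_params**alpha + B / n_tokens**beta

        for n in (1e9, 1e10, 1e11, 1e12):   # params, with tokens ~ 20x params
            print(f"{n:.0e} params -> loss {loss(n, 20 * n):.2f}")
        # -> 2.58, 2.13, 1.91, 1.80: each 10x of scale yields a smaller absolute drop.

      The other side of the tension, falling inference cost, looks more like the experience curve in the links above; which slope dominates is exactly the question.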

  • crazygringo 15 hours ago

    I don't think anyone really knows, because there's no objective standard for determining progress.

    Lots of benchmarks exist where everyone agrees that higher scores are better, but there's no sense in which going from a score of 400 to 500 is the same progress as going from 600 to 700, or less, or more. They only really have directional validity.

    I mean, the scores might correspond to real-world productivity rates in some specific domain, but that just begs the question -- productivity rates on a specific task are not intelligence.

jimbokun 11 hours ago

A $300 college student in your pocket sure sounds like the Singularity to me.