miki123211 3 days ago

I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.

This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless, there's a lot you can do with AI already, but many use cases that seem obvious, and not only in retrospect, will only become possible once the tech matures.

bryanlarsen 3 days ago

Some people even figured it out in the 80's. Sears co-founded (with IBM) and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.

Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.

  • tombert 3 days ago

    Today I learned that Sears founded Prodigy!

    Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.

    • kens 2 days ago

      On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time.

      • NordSteve 2 days ago

        Bought my IBM PC from Sears back in the day. Still have the receipt.

      • duderific 2 days ago

        Wow, I hadn't thought about Computerland for quite a while. That was my go-to to kill some time at the mall when I was a teen.

    • dh2022 3 days ago

      My favorite anecdote about Sears is from Starbucks' current HQ, which used to be a warehouse for Sears. Before the renovation, the first-floor walls next to the elevators displayed Sears' "commitment to customers" (or something like that).

      To me it read like it was written by Amazon decades earlier. Something about how Sears promised that customers would be 100% satisfied with their purchase, and if for whatever reason that was not the case, customers could return the purchase to Sears and Sears would pay the return transportation charges.

      • tombert 3 days ago

        Craftsman tools have almost felt like a life-hack sometimes; their no-questions-asked warranties were just incredible.

        My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.

        I haven't tested these warranties since Craftsman was sold to Stanley Black & Decker, but when it was still owned by Sears I almost exclusively bought Craftsman tools as a result of their wonderful warranties.

      • jimbokun 2 days ago

        The Sears Catalog was the Amazon of its day.

    • gcanyon 2 days ago

      :-) Then it's going to blow your mind that CompuServe (while not founded by them) was a product of H&R Block.

    • esaym 2 days ago

      There were quite a few small ISP's in the 1990's. Even Bill Gothard[0] had one.

      [0]https://web.archive.org/web/19990208003742/http://characterl...

      • hollerith 2 days ago

        Prodigy predates ISPs (internet service providers). Until the web matured a little in 1993, the internet was too technically challenging to interest most consumers, except maybe for email. Prodigy was formed in 1984, and although it offered email, it was walled-garden email: a Prodigy user could not exchange email with the internet until the mid-1990s, at which point Prodigy might have become an ISP for a few years before going out of business.

      • tombert 2 days ago

        At a previous job I worked under a guy who started his own ISP in the early 90’s. I would have loved to have been part of that scene but I was only like four when that happened.

    • htrp 3 days ago

      Blame short sighted investors asking Sears to "focus"

      • dehrmann 3 days ago

        They weren't wrong. Its core business collapsed in what is still a viable-enough sector. And if it had been truly well-managed, running an ISP and a retailer should have given it enough insight to become Amazon.

  • djtango 2 days ago

    This is a great example that I hadn't heard of, and it reminds me of Nintendo trying to become an ISP when they built the Family Computer Network System in 1988.

    A16Z once talked about how the scars of being too early cause investors and companies to become fixated on the idea that something will never work. Then some new, younger people who never got burned try the same idea, and it works.

    Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies: they tried things early, got burned, and then were possibly too late to capitalise when it was finally the right time for the idea to flourish.

    • mschuster91 2 days ago

      Reminds me of Elon not taking no for an answer. He did it twice, with massive success.

      A true shame to see how he's completely lost the plot with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up.

      And now he's run out of tricks, and more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch.

      • platevoltage 2 days ago

        Lucky for him, the US government is keeping him from being eaten alive in the USA at least.

        I remember that one time we tried to drastically limit Japanese imports to protect the American car industry, which basically created the Lexus LS400, one of the best cars ever made.

      • nebula8804 2 days ago

        I don't know, you could argue that maybe GM with the EV1 was the 'too early' EV and Tesla just came at the right moment. Same goes for SpaceX: the idea of a reusable launcher was not new and had been studied by NASA. I think they did some test vehicles.

        • bryanlarsen 2 days ago

          SpaceX is an excellent example of this phenomenon. Reusable rockets were "known" to be financially infeasible because the Space Shuttle was so expensive. NASA & oldspace didn't seriously pursue reusable vehicles because the mostly reusable Space Shuttle cost so much more than conventional disposable vehicles.

          Similar to how Sears didn't put their catalog online in the 90's because putting it online on Prodigy failed so badly in the 80's.

  • tracker1 2 days ago

    On the flip side, they didn't actually learn the lesson that it was a matter of immature tech with relatively limited reach... by the time the mid-90's came around, "the internet is just a fad" was pretty much the sentiment from Sears' leadership...

    They literally killed their catalog sales right when they should have been ramping up and putting it online. They could easily have beaten out Amazon for everything other than books.

  • Imustaskforhelp 2 days ago

    My cousin used to tell me that things work because they were the right thing at the right time. I think Amazon was the example he gave.

    But I guess in startup culture, one has to die trying to find the right time; sure, one can do surveys to get a feel for it, but the only way we can ever find out whether it's the right time is user feedback once it's launched, and over time.

  • cyanydeez 2 days ago

    The problem is that ISPs became a utility, not some fountain of unlimited growth.

    What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.

    I agree though, it's fundamentally a utility, which means there's more value in proper government authority than in private interests.

    • bryanlarsen 2 days ago

      Sears started Prodigy to become Amazon, not Comcast.

      • cyanydeez 2 days ago

        The product itself determines whether it's a utility, not the business interest (assuming democracy works correctly). Only a dysfunctional government ignores natural monopolies.

  • outside1234 3 days ago

    Newton at Apple is another great one, though they of course got there.

    • platevoltage 2 days ago

      They sure did. This reminds me of when I was in the local Mac Dealer right after the iPod came out. The employees were laughing together saying “nobody is going to buy this thing”.

deegles 2 days ago

> We're clearly seeing what AI will eventually be able to do

Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.

For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". it's a systemic issue. no amount of data will solve it because LLMs will -never- be sentient.

Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.

  • hnfong 2 days ago

    I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.

    Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.

    • deegles 2 days ago

      You know, you're right. It -also- sucks at those tasks because on top of the issue you mention, unedited LLM text is identifiable if you get used to its patterns.

    • h4ck_th3_pl4n3t 2 days ago

      By definition, transformers can never exceed average.

      That is the thing, and what companies pushing LLMs don't seem to realize yet.

      • janalsncm 2 days ago

        Can you expand on this? For tasks with verifiable rewards you can improve with rejection sampling and search (i.e. test time compute). For things like creative writing it’s harder.

        • miki123211 2 days ago

          For creative writing, you can do the same, you just use human verifiers rather than automatic ones.

          LLMs have encountered the entire spectrum of quality in their training data, from extremely poor writing and sloppy code to absolute masterpieces. Part of what Reinforcement Learning techniques do is reinforce the "produce things that are like the masterpieces" behavior while suppressing the "produce low-quality slop" one.

          Because there are humans in the loop, this is hard to scale. I suspect that the propensity of LLMs for certain kinds of writing (bullet points, bolded text, conclusion) is a direct result of this. If you have to judge 200 LLM outputs per day, you prize different qualities than when you ask for just 3. "Does this look correct at a glance" is then a much more important quality.
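
          A minimal best-of-N sketch of that loop, in Python; `generate` and `score` are hypothetical stand-ins for the sampler and the verifier (automatic for verifiable tasks, a human rating for creative ones):

            import random

            def generate(prompt: str) -> str:
                # Stand-in for sampling one completion from an LLM.
                return f"{prompt} -> draft #{random.randint(0, 9999)}"

            def score(candidate: str) -> float:
                # Stand-in for the verifier: unit tests / a checker for verifiable
                # tasks, a reward model or a human rating for creative ones.
                return random.random()

            def best_of_n(prompt: str, n: int = 16) -> str:
                # Rejection sampling as test-time compute: draw n candidates,
                # keep the one the verifier rates highest, discard the rest.
                candidates = [generate(prompt) for _ in range(n)]
                return max(candidates, key=score)

            print(best_of_n("Write a short story about a lighthouse."))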

    • mdemare 2 days ago

      Exactly. Books are still being translated by human translators.

      I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders.

      GPT-5 output for example:

      Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem. Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart. Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted. They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them. Each bore a respectable, bourgeois name from more carefree days:

        Welgelegen
        Buitenrust
        Nooitgedacht
        Rustenburg

      Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.

      • tschwimmer 2 days ago

        Can you provide a reference translation, or at least call out the issues you see with this passage? I see "far far away in the [time period]", which I imagine should be "a long time ago". What are the other issues?

  • rstuart4133 2 days ago

    > Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.

    I consider myself an LLM skeptic, but gee saying they are a "dead end" seems harsh.

    Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it than most humans, and far faster.

    LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB, and it seems to have pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.

    Merely being able to understand language or having a good memory is not, on its own, sufficient to code or do much else. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.

    • deegles 2 days ago

      > it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.

      That's just it. LLMs are a component; they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say it's sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness.

  • miki123211 2 days ago

    > If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

    When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.

    The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.

    • mrtranscendence 2 days ago

      Microsoft claims that they have an AI setup that outperforms human doctors on diagnosis tasks: https://microsoft.ai/new/the-path-to-medical-superintelligen...

      "MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI’s o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases."

      Of course, AI "doctors" can't do physical examinations and the best performing models cost thousands to run per case. This is also a test of diagnosis, not of treatment.

    • randomNumber7 2 days ago

      If you consider how little time doctors have to look at you (at least in Germany's half-broken public health sector) and how little they actually care ...

      I think x is already higher than y for me.

      • deegles 2 days ago

        That's fair. Reliable access to a 70% expert is better than no access to a 99% expert.

  • [removed] 2 days ago
    [deleted]
me551ah 3 days ago

Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips and Moore’s law is already dead.

So newer chips will not be exponentially better but will offer more incremental improvements, so unless the price of electricity comes down exponentially, we might never see AGI at a price point that's cheaper than hiring a human.

Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.

The reason the internet, smartphones and computers saw exponential growth from the 90s on is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.

  • mindcrime 3 days ago

    > Scaling AI will require an exponential increase in compute and processing power,

    A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that of course, but I take the existence of the human brain as an existence proof that some kind of machine can provide human level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPU's.

    • lawlessone 2 days ago

      DeepMind were experimenting with this https://github.com/google-deepmind/lab a few years ago.

      Having AI agents learn to see, navigate and complete tasks in a 3d environment. I feel like it had more potential than LLMs to become an AGI (if that is possible).

      They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it.

    • fluoridation 2 days ago

      If we suppose that ANNs are more or less accurate models of real neural networks, the reason why they're so inefficient is not algorithmic, but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously, because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms, if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that since it's dead silicon it could not be changed and iterated on.

      • penteract 2 days ago

        If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance gets much closer. From what I can find, neuron membrane time constant is around 10ms [1], meaning 10 billion neurons could have 1 trillion activations per second, which is in the realm of modern hardware.

        People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one.

        [1] https://spectrum.ieee.org/fast-efficient-neural-networks-cop...

        [2] https://aiimpacts.org/brain-performance-in-flops/
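
        A back-of-envelope version of that calculation in Python; the per-neuron synapse count is an extra assumption of mine, which is roughly why estimates like [2] land a couple of orders of magnitude higher:

          # Order-of-magnitude sketch; every input here is an assumption.
          neurons = 1e10              # ~10 billion neurons, as above
          rate_hz = 1.0 / 10e-3       # ~10 ms time constant -> ~100 activations/s each
          synapses_per_neuron = 1e3   # very rough; often quoted as 1e3-1e4

          activations_per_s = neurons * rate_hz                         # ~1e12
          synaptic_ops_per_s = activations_per_s * synapses_per_neuron  # ~1e15

          print(f"~{activations_per_s:.0e} activations/s, ~{synaptic_ops_per_s:.0e} synaptic ops/s")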

      • mindcrime 2 days ago

        > the reason why they're so inefficient is not algorithmic, but purely architectural.

        I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent.

        And yes, fair point that building dedicated hardware might just be part of the solution to making something that runs much more efficiently.

        The only other thing I would add, is that - relative to what I said in the post above - when I talk about "algorithmic advances" I'm looking at everything as potentially being on the table - including maybe something different from ANN's altogether.

      • HarHarVeryFunny 2 days ago

        The energy inefficiency of ANNs vs our brain is mostly because our brain operates in async dataflow mode, with each neuron mostly consuming energy only when it fires. If a neuron's inputs haven't changed then it doesn't redundantly "recalculate its output" like an ANN - it just does nothing.

        You could certainly implement an async dataflow type design in software, although maybe not as power-efficiently as with custom silicon, but individual ANN node throughput would suffer given the need to aggregate the neurons needing updates into a group to be fed into one of the large matrix multiplies that today's hardware is optimized for, although sparse operations are also a possibility. OTOH, conceivably one could save enough FLOPs that it'd still be a win in terms of how fast an input could be processed through an entire neural net.
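
        A toy sketch of that event-driven idea in Python; the names are illustrative, not any particular framework. Each unit only recomputes (and "spends energy") when one of its inputs actually changes:

          class Unit:
              def __init__(self, n_inputs: int):
                  self.inputs = [0.0] * n_inputs
                  self.output = 0.0
                  self.updates = 0  # proxy for energy spent

              def on_input_change(self, idx: int, value: float) -> None:
                  if self.inputs[idx] == value:
                      return            # input unchanged: do nothing, spend nothing
                  self.inputs[idx] = value
                  self.updates += 1     # only now does the unit "fire" and recompute
                  self.output = max(0.0, sum(self.inputs))  # e.g. a ReLU over its inputs

          u = Unit(3)
          u.on_input_change(0, 1.0)   # recompute
          u.on_input_change(0, 1.0)   # unchanged -> skipped
          u.on_input_change(2, -0.5)  # recompute
          print(u.output, u.updates)  # 0.5 2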

      • chasd00 2 days ago

        > If we suppose that ANNs are more or less accurate models of real neural networks

        I believe the problem is that we don't understand actual neurons, let alone actual networks of neurons, well enough to even know whether any model is accurate. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do.

      • eikenberry 2 days ago

        > If we suppose that ANNs are more or less accurate models of real neural networks [..]

        ANNs were inspired by biological neural structures, and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insights into how much it can help will come from this sort of comparison.

  • foobarian 2 days ago

    > Scaling AI will require an exponential increase in compute and processing power,

    I think there is something more happening with AI scaling; the cost per user is a lot higher and scales a lot worse. Compare to the big initial internet companies: you added one server and you could handle thousands more users; the incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue can cover that it's hard to break even, even with an actual paid subscription.

    • RugnirViking 2 days ago

      I don't fully get why; inference costs are way lower than training costs, no?

  • thfuran 2 days ago

    We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.

    • hyperbovine 2 days ago

      The fact that the human brain, heck all brains, are so much more efficient than “state of the art” nnets, in terms of architecture, power consumption, training cost, what have you … while also being way more versatile and robust … is what convinces me that this is not the path that leads to AGI.

  • miki123211 2 days ago

    > We are already at the limit of how small we can scale chips

    I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.

    Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.
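
    A minimal sketch of the distillation objective, assuming PyTorch; shapes and temperature are illustrative. The student is trained to match the teacher's softened output distribution, which is how gains in the big model carry over to the small one:

      import torch
      import torch.nn.functional as F

      def distillation_loss(student_logits, teacher_logits, T=2.0):
          # KL between the teacher's softened distribution and the student's,
          # scaled by T^2 as in the usual soft-label distillation recipe.
          soft_targets = F.softmax(teacher_logits / T, dim=-1)
          log_student = F.log_softmax(student_logits / T, dim=-1)
          return F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)

      teacher_logits = torch.randn(8, 1000)   # toy batch of teacher outputs over a vocab
      student_logits = torch.randn(8, 1000)
      print(distillation_loss(student_logits, teacher_logits))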

    > so unless the price of electricity comes down exponentially

    This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.

    > Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.

    "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." Sam Altman, OpenAI CEO[1].

    [1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...

    • thfuran a day ago

      > doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.

      An implementation of inference on some specific ANN in fixed function analog hardware can probably pretty easily beat a commodity GPU by a couple orders of magnitude in perf per watt too.

    • mrtranscendence 2 days ago

      > "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."

      That's OpenAI (though I'd be curious if that statement holds for subscriptions as opposed to API use). What about the downstream companies that use OpenAI models? I'm not sure the picture is as rosy for them.

armada651 2 days ago

> The groundwork has been laid, and it's not too hard to see the shape of things to come.

The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.

  • jimbokun 2 days ago

    Is it still giving people headaches and making them nauseous?

    • armada651 2 days ago

      Yes, it still gives people headaches because the convergence-accommodation conflict remains unsolved. We have a few different technologies to address that, but they're expensive, don't fully address the issue, and none of them have moved beyond the prototype stage.

      Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR.

      • duderific 2 days ago

        > Mind you, some people also get motion sick by watching a first-person shooter on a flat screen

        Yep I'm that guy. I blame it on being old.

matthewdgreen 2 days ago

As someone who was a customer of Netflix from the dialup world to the broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" phase of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s, when the tech for widespread broadband was just fundamentally not available.

  • tracker1 2 days ago

    Oh, like RealPlayer in the late 90's (buffering... buffering...)

    • matthewdgreen 2 days ago

      RealPlayer in the late 90s turned into (working) Napster, Gnutella and then the iPod in 2001, Podcasts (without the name) immediately after, with the name in 2004, Pandora in 2005, Spotify in 2008. So a decade from crummy idea to the companies we’re familiar with today, but slowed down by tremendous need for new (distributed) broadband infrastructure and complicated by IP arrangements. I guess 10 years seems like a long time from the front end, but looking back it’s nothing. Don’t go buying yourself a Tower Records.

      • tracker1 2 days ago

        While I get the point... to be pedantic though, Napster (first gen), Gnutella and iPod were mostly download and listen offline experiences and not necessarily live streaming.

        Another major difference is that we're near the limits of the approaches being taken for computing capability... most dialup connections, even on "56k" modems, were still lucky to get 33.6kbps down, and that was very common in the late 90's, whereas by the mid-2000's a lot of users had at least 512kbps-10Mbps connections (where available), and even then a lot of people didn't see broadband until the 2010's.

        That's at least a 15x improvement, whereas we are far less likely to see even a 3-5x improvement in computing power over the next decade and a half. That's also a lot of electricity to generate on an ageing infrastructure that barely meets current needs in most of the world... even harder with "green" options.

        • matthewdgreen a day ago

          I moved to NYC in 1999 and got my first cable modem that year. This meant I could stream AAC audio from a jukebox server we maintained at AT&T Labs. So for my unusual case, streaming was a full-fledged reality I could touch back then. Ironically running a free service was easy, but figuring out how to get people (AKA the music industry) to let us charge for the service was impossible. All that extra time was just waiting for infrastructure upgrades to spread across a whole country to the point that there were enough customers that even the music industry couldn’t ignore the economics; none of the fundamental tech was missing. With LLMs I have access to a pretty robust set of models for about $20/mo (I’m assuming these aren’t 10x loss leaders?), plus pretty decent local models for the price of a GPU. What’s missing this time is that the nature of the “business” being offered is much more vague, plus the reliability isn’t quite there yet. But on the bright side, there’s no distributed infrastructure to build.

nutjob2 3 days ago

> We're clearly seeing what AI will eventually be able to do

I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.

Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.

skeezyboy 3 days ago

>I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.

  • dml2135 3 days ago

    It's a logical fallacy that just because some technology experienced some period of exponential growth, all technology will always experience constant exponential growth.

    There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.

    We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.

    Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.

    The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.

    "Progress" moves in fits and starts. It is the furthest thing from inevitable.

    • novembermike 2 days ago

      Most growth is actually logistic: an S-shaped curve that starts out exponential but slows down rapidly as it approaches some asymptote. In fact, basically everything we see as exponential in the real world is logistic.

    • jopsen 2 days ago

      True, but adoption of AI has certainly seen exponential growth.

      Improvement of models may not continue to be exponential.

      But models might be good enough, at this point it seems more like they need integration and context.

      I could be wrong :)

      • tracker1 2 days ago

        At what cost though? Most AI operations are losing money and using a lot of power, with massive infrastructure costs, not to mention the hardware costs to get going. And that isn't even covering the level of usage many/most want; people certainly aren't going to pay the $100s/month per person that it currently costs to operate.

      • BobaFloutist 2 days ago

        > True, but adoption of AI has certainly seen exponential growth.

        I mean, for now. The population of the world is finite, and there's probably a finite number of uses of AI, so it's still probably ultimately logistic

  • echelon 3 days ago

    Speaking of Netflix -

    I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.

    Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.

    I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.

    For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking on ten times the workload.

    • AnotherGoodName 3 days ago

      Fwiw LLMs are also revolutionary. There's currently more anti-AI hype than AI hype imho. As in there's literally people claiming it's completely useless and not going to change a thing. Which is crazy.

      • lokar 3 days ago

        That’s an anecdote about intensity, not volume. The extremes on both sides are indeed very extreme (no value, replacing most white collar jobs next year).

        IME the volume is overwhelming on the pro-LLM side.

        • whatevertrevor 2 days ago

          Yeah the conversation on both extremes feels almost religious at times. The pro LLM hype feels more disconcerting sometimes because there are literally billions if not trillions of dollars riding on this thing, so people like Sam Altman have a strong incentive to hype the shit out of it.

      • Jensson 2 days ago

        One side's extremes say LLMs won't change a thing; the other side's extremes say LLMs will end the world.

        I don't think the ones saying it won't change a thing are the most extreme here.

        • wyre 2 days ago

          Luckily for humanity reality is somewhere in between extremes, right?

    • didibus 3 days ago

      You're right, and I also think LLMs have an impact.

      The issue is the way the market is investing they are looking for massive growth, in the multiples.

      That growth can't really come from cutting costs. It has to come from creating new demand for new things.

      I think that's what hasn't happened yet.

      Are diffusion models increasing the demand for video and image content? Is it having customers spend more on shows, games, and so on? Is it going to lead to the creation of a whole new consumption medium ?

      • jopsen 2 days ago

        > Is it going to lead to the creation of a whole new consumption medium ?

        Good question? Is that necessary, or is it sufficient for AI to be integrated in every kind of CAD/design software out there?

        Because I think most productivity tools whether CAD, EDA, Office, graphic 2d/3d design, etc will benefit from AI. That's a huge market.

        • didibus 2 days ago

          I guess there are two markets to consider.

          The first is the market for the AI foundation models themselves: will they have customers willing, long term, to pay a lot of money for access to the models?

          I think yes, there will be demand for foundational AI models, and a lot of it.

          The second market is the market for CAD, EDA, Office, graphic 2d/3d design, etc. This market will not grow just because they integrate AI into their products; or rather, that is the question: will it? Otherwise, you could almost hypothesize that these markets will shrink, as AI becomes an additional cost of doing business that customers expect to be included. Or maybe they manage to sell their customers a premium for the AI features, taking a cut above what they pay the foundation models under the hood; that's a possibility.

    • jaimebuelta 3 days ago

      I see the point at the moment on “low quality advertising”, but we are still far from high-quality video generated by AI.

      It’s the equivalent of those cheap digital effects. They look bad in a Hollywood movie, but they allow students to shoot their action home movies.

      • echelon 3 days ago

        You're looking at individual generations. These tools aren't for casual users expecting to 1-shot things.

        The value is in having a director, editor, VFX compositor pick and choose from amongst the outputs. Each generation is a single take or simulation, and you're going to do hundreds or thousands. You sift through that and explore the latent space, and that's where you find your 5-person Pixar.

        Human curated AI is an exoskeleton that enables small teams to replace huge studios.

    • mh- 3 days ago

      It's quite incredible how fast the generative media stuff is moving.

      The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 is (text+image to video; fully local if you have the VRAM) would have felt unimaginable last year, when OpenAI released Sora (closed/hosted).

  • dormento 2 days ago

    > I did think the same thing about the 8bit era of video games.

    Can you elaborate? That sounds interesting.

    • skeezyboy 2 days ago

      It was too soon to get it to market, though it obviously all sold perfectly well; people were sufficiently wowed by it.

Q6T46nT668w6i3m 3 days ago

There’s no evidence that it’ll scale like that. Progress in AI has always been a step function.

  • ghurtado 3 days ago

    There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.

    > Progress in AI has always been a step function.

    There's decisively no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.

    • ezst 2 days ago

      > There's also no evidence that it won't

      There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter spent in search of the next one. This one is no different, and it is propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we ran short of it even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the differences explained by fine-tuning techniques rather than technical prowess. Unless a cool new technique comes along to dislodge LLMs, we are in for a new winter.

      • mschuster91 2 days ago

        Oh, I believe that while LLMs are a dead end now, the applications of AI in vision and physical (i.e. robots with limbs) world will usher in yet another wrecking of the lower classes of society.

        Just as AI has killed off all demand for lower-skill work in copywriting, translation, design and coding, it will do so for manufacturing. And that will be a dangerous bloodbath because there will not be enough juniors any more to replace seniors aging out or quitting in frustration of being reduced to cleaning up AI crap.

    • dml2135 3 days ago

      What is your definition of "evidence" here? The evidence, in my view, are physical (as in, available computing power) and algorithmic limitations.

      We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask -- well what is the evidence they won't, but it's a silly question -- the evidence is our knowledge of how things work.

      Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.

    • Q6T46nT668w6i3m 2 days ago

      I don’t follow. We have benchmarks that have survived decades and illustrate the steps.

  • the8472 3 days ago

    rodent -> homo sapiens brain scales just fine? It's tenuous evidence, but not zero.

  • ninetyninenine 3 days ago

    Uh, there have been multiple repeated step-ups in the last 15 years. The trend line is up, up, up.

  • eichin 2 days ago

    The innovation here is that the step function didn't traditionally go down

[removed] 2 days ago
[deleted]
i_love_retros 3 days ago

Is some potential AGI breakthrough in the future going to be from LLMs or will they plateau in terms of capabilities?

It's hard for me to imagine Skynet growing from ChatGPT.

  • whatevaa 2 days ago

    The old story of the paperclip AI shows that AGI is not needed for a sufficiently smart computer to be dangerous.

thefourthchime 3 days ago

I'm starting to agree with this viewpoint. As the technology solidifies to roughly what we can do now, the aspirations are going to have to be cut back until there are a couple more breakthroughs.

  • [removed] 3 days ago
    [deleted]
kokanee 2 days ago

I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.

What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.