Comment by Analemma_ 16 hours ago

Exactly: when was the last time you used ChatGPT-3.5? Its value depreciated to zero after, what, two and a half years? (And the Nvidia chips used to train it have barely retained any value either.)

The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.

falcor84 16 hours ago

I would think that it's more like a general codebase - even if after 2.5 years, 95% of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.

  • spwa4 15 hours ago

    I rejoined a previous employer of mine, someone everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it, despite having joined a different part of the company.

tim333 13 hours ago

OpenAI is now valued at $500bn though. I doubt the investors are too wrecked yet.

It may be like looking at the early Google and saying they are spending loads on compute and haven't even figured how to monetize search, the investors are doomed.

  • oblio 9 hours ago

    Google was founded in 1998 and IPOed in 2004. If OpenAI were feeling confident, they'd have found a way to set up a company and IPO by now, 9 years after founding. It's all mostly fictional money at this point.

    • matwood 5 hours ago

      It's not about confidence. OpenAI would be huge on the public markets, but since they can raise plenty of money in the private market there is no reason to deal with that hassle - yet.

    • aurareturn 3 hours ago

      If OpenAI were a public company today, I would bet almost anything that it'd be a $1+ trillion company immediately on opening day.

CompoundEyes 11 hours ago

I really did use it a few days ago: gpt-3.5-fast is a great model for certain tasks, and cost-wise via the API. Lots of solutions being built on today's latest models are being built on tomorrow's legacy model anyway; if it works, just pin the version.
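
"Pin the version" just means requesting a dated model snapshot instead of a floating alias, so a provider-side upgrade can't silently change behavior. A minimal sketch of that pattern; the snapshot and alias names below are illustrative examples, not a recommendation (check your provider's current model list):

```python
# Pinning sketch: a floating alias tracks whatever the provider serves today,
# while a dated snapshot id keeps behavior frozen. Names are illustrative.
FLOATING_ALIAS = "gpt-3.5-turbo"        # resolves to a moving target
PINNED_SNAPSHOT = "gpt-3.5-turbo-0125"  # a frozen, dated snapshot

def build_request(prompt: str, pinned: bool = True) -> dict:
    """Assemble chat-completion request parameters with an explicit model id."""
    return {
        "model": PINNED_SNAPSHOT if pinned else FLOATING_ALIAS,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Summarize this ticket.")
print(req["model"])  # the pinned id is what actually goes over the wire
```

The trade-off is the one the comment names: a pinned snapshot won't improve, but it also won't regress under you, and it eventually becomes a deprecated legacy model you must migrate off deliberately.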

fooker 15 hours ago

> And the Nvidia chips used to train it have barely retained any value either

Oh, I'd love to get a cheap H100! Where can I find one? You'll find it costs almost as much used as it does new.

cj 15 hours ago

> money on fire forever just to jog in place.

Why?

I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?

I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.

We don't need better models for the vast majority of chats taking place today. E.g. kids using it for help with homework: are today's models really not good enough?

  • MontyCarloHall 11 hours ago

    >I don't see why these companies can't just stop training at some point.

    Because training isn't just about making brand new models with better capabilities, it's also about updating old models to stay current with new information. Even the most sophisticated present-day model with a knowledge cutoff date of 2025 would be severely crippled by 2027 and utterly useless by 2030.

    Unless there is some breakthrough that lets existing models cheaply incrementally update their weights to add new information, I don't see any way around this.

    • fennecbutt an hour ago

      Ain't never hearda rag

      • MontyCarloHall 35 minutes ago

        There is no evidence that RAG delivers equivalent performance to retraining on new data. Merely having information in the context window is very different from having it baked into the model weights. This approach would also degrade with time, as more and more information would have to be incorporated into the context window the further away you are from the knowledge cutoff date.

  • Eisenstein 15 hours ago

    They aren't. They are obsequious. This is much worse than it seems at first glance, and you can tell it is a big deal because a lot of the effort going into training the new models is aimed at mitigating it.

mattmanser 16 hours ago

But is it a bit like a game of musical chairs?

At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.

  • potatolicious 16 hours ago

    Not necessarily? That assumes that the first "good enough" model is a defensible moat - i.e., the first ones to get there becomes the sole purveyors of the Good AI.

    In practice that hasn't borne out. You can download and run open weight models now that are within spitting distance of state-of-the-art, and open weight models are at worst a few months behind the proprietary stuff.

    And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.

    More likely at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem to be a scenario where any player can afford to stop setting cash on fire and start making money.

    • wood_spirit an hour ago

      Perhaps the first thing the owners ask the first true AGI is “how do I dominate the world?” and the AGI outlines how to stop any competitor getting AGI..?