jdoliner 14 hours ago

I've seen a rumor going around that OpenAI hasn't had a successful pre-training run since mid 2024. This seemed insane to me, but if you give ChatGPT 5.1 a query about current events and instruct it not to use the internet, it will tell you its knowledge cutoff is June 2024. Not sure if maybe that's just the smaller model or what, but I don't think it's a good sign to get that from any frontier model today; that's 18 months ago.

  • alecco 14 hours ago

    SemiAnalysis said it last week and AFAIK it wasn't denied.

    https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...

    • RossBencina 9 hours ago

      The SemiAnalysis article that you linked to stated:

      "OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome."

      Given the overall quality of the article, that is an uncharacteristically convoluted sentence. At the risk of stating the obvious, "that was broadly deployed" (or not) is contingent on many factors, most of which are not of the GPU vs. TPU technical variety.

      • alecco an hour ago

        My reading between the lines is that OpenAI's "GPT-5" is really a GPT-4 generation model. And this is aligned with it being unimpressive, not the leap forward Altman promised.

      • nbardy 6 hours ago

        This is misleading. They had 4.5, which was a new scaled-up training run. It was a huge model and only served to Pro users, but the biggest models are always used as teacher models for smaller models. That's how you do distillation. It would be stupid not to use the biggest model you have for distillation, and a waste, since they already have the weights.

        They would have taken some time to calculate the efficiency gains of pretraining vs RL, resumed GPT-4.5 training for whatever budget made sense, and then spent the rest on RL.

        Sure, they chose not to serve the large base models anymore for cost reasons.

        But I’d guess Google is doing the same. Gemini 2.5 samples very fast and seems way too small to be their base pre-train. The efficiency gains in pretraining scale with model scale, so it makes sense to train the largest model possible. But then the models end up super sparse and oversized, and make little sense to serve at inference without distillation.

        In RL the efficiency is very different because you have to run inference on the model to draw online samples. So small models start to make more sense to scale.

        Big model => distill => RL

        This pipeline makes the most theoretical sense nowadays for efficient spending.

        So they already did train a big model, 4.5. Not using it would have been absurd, and they have a known recipe they could return to scaling up if the returns justified it.
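
        For concreteness, here's a minimal sketch of the distillation objective described above, in PyTorch; the temperature and the toy tensors are illustrative assumptions, not anything OpenAI has published:

            import torch
            import torch.nn.functional as F

            def distillation_loss(student_logits, teacher_logits, temperature=2.0):
                # Soften both distributions, then pull the student toward the
                # teacher with KL divergence. Scaling by T^2 keeps gradient
                # magnitudes comparable across temperatures.
                t_probs = F.softmax(teacher_logits / temperature, dim=-1)
                s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
                return F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature ** 2

            # Toy usage: a batch of 4 positions over a 32-token vocabulary.
            teacher_logits = torch.randn(4, 32)  # the frozen big model's outputs
            student_logits = torch.randn(4, 32, requires_grad=True)
            loss = distillation_loss(student_logits, teacher_logits)
            loss.backward()  # gradients flow into the student only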

    • binkHN 8 hours ago

      This is a really great breakdown. With TPUs seemingly more efficient and costing less overall, how does this play for Nvidia? What's to stop them from entering the TPU race with their $5 trillion valuation?

      • matwood 3 hours ago

        As others mentioned, 5T isn't money available to NVDA. It could leverage that to buy a TPU company in an all stock deal though.

        The bigger issue is that entering a 'race' implies a race to the bottom.

        I've noted this before, but one of NVDA's biggest risks is that its primary customers are also technical, also make hardware, also have money, and clearly see NVDA's margin (70% gross!!, 50%+ profit) as something they want to eliminate. Google was first to get there (not a surprise), but Meta is also working on its own hardware along with Amazon.

        This isn't a doom post for NVDA the company, but its stock price is riding a knife's edge. Any margin or growth contraction will not be a good day for their stock or the S&P.

      • dragonwriter 4 hours ago

        > What's to stop them from entering the TPU race with their $5 trillion valuation?

        Valuation isn’t available money. They'd have to raise more money, in an investment environment that is probably tighter for them, to enter the TPU race, since the money they have already raised (which that valuation is based on) is already needed to provide runway for what they are already doing.

      • sysguest 5 hours ago

        $5 trillion valuation doesn't mean it has $5 trillion cash in pocket -- so "it depends"

    • rahimnathwani 10 hours ago

      Dylan Patel joined Dwarkesh recently to interview Satya Nadella: https://www.dwarkesh.com/p/satya-nadella-2

      • embedding-shape 9 hours ago

        And this is relevant how? That interview is 1.5 hours, not something you just casually drop a link to and say "here, listen to this to even understand what point I was trying to make"

    • CamperBob2 12 hours ago

      That is... actually a seriously meaty article from a blog I've never heard of. Thanks for the pointer.

      • Numerlor 12 minutes ago

        This article about them got published just yesterday... https://news.ycombinator.com/item?id=46124883

        There's a lot of misleading information in what they publish, plagiarism, and, I believe, some information that wouldn't be possible to get without breaking NDAs.

      • seatac76 11 hours ago

        SemiAnalysis is great. They typically cover semiconductors, but the reporting is top notch.

        • lanstin 9 hours ago

          Wow, that was a good article. So much detail from financial to optical linking to build various data flow topologies. Makes me less aghast at the $10M salaries for the masters of these techniques.

      • ipnon 3 hours ago

        Dylan Patel founded Semianalysis and he has a great interview with Satya Nadella on Dwarkesh Patel's podcast.

    • [removed] 8 hours ago
      [deleted]
  • mvkel 9 hours ago

    It's not a rumor, it's confirmed by OpenAI. All "models" since 4o are actually just optimizations in prompting and a new routing engine. The actual -model- you are using with 5.1 is 4. Nothing has been pre-trained from scratch since 4o.

    Their own press releases confirm this. They call 5 their best new "AI system", not a new model.

    https://openai.com/index/introducing-gpt-5/

    • krackers 4 hours ago

      I can believe this; DeepSeek V3.2 shows that you can get close to "GPT-5" performance with a GPT-4 level base model just with sufficient post-training.

    • Davidzheng 8 hours ago

      I don't think that counts as confirmation. 4.5, we know, was a new base model. I find it very, very unlikely the base model of 4 (or 4o) is in GPT-5. Also, 4o is a different base model from 4, right? It's multimodal, etc. Pretty sure people have leaked sizes and such, and I don't think they match up.

    • staticman2 6 hours ago

      New AI system doesn't preclude new models. I thought when GPT-5 launched and users hated it, the speculation was that GPT-5 was a cost-cutting model and the routing engine was routing to smaller, specialized dumber models that cost less on inference?

      It certainly was much dumber than 4o on Perplexity when I tried it.

      • vidarh 2 hours ago

        > and the routing engine was routing to smaller, specialized dumber models that cost less on inference?

        That this was part of it was stated outright in their launch announcement, except maybe the "cost less" part, which was left for you to infer (sorry).

        Paying for pro, and setting it to thinking all the time, I saw what seemed like significant improvements, but if your requests got (mis-)routed to one of the dumber models, it's not surprising if people were disappointed.

        I think they made a big mistake in not clearly labelling the responses with which of the models responded to a given request, as it made people complain about GPT 5 in general, instead of complaining about the routing.
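
        To be clear about what "routing" means here, a purely illustrative sketch in Python; the model names and heuristics below are made up, and OpenAI's actual router is not public:

            # Illustrative only: a router in front of two models.
            def route(prompt: str) -> str:
                hard_markers = ("prove", "derive", "debug", "step by step")
                if len(prompt) > 2000 or any(m in prompt.lower() for m in hard_markers):
                    return "large-reasoning-model"  # slow, expensive
                return "small-fast-model"           # cheap, fast

            print(route("What's the capital of France?"))            # small-fast-model
            print(route("Debug this race condition in my scheduler")) # large-reasoning-model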

    • m3kw9 8 hours ago

      Well then 5.x is pretty impressive

    • Forgeties79 8 hours ago

      Maybe this is just armchair bs on my part, but it seems to me that the proliferation of AI-spam and just general carpet bombing of low effort SEO fodder would make a lot of info online from the last few years totally worthless.

      Hardly a hot take. People have theorized about the ouroboros effect for years now. But I do wonder if that’s part of the problem

      • irthomasthomas an hour ago

        Gemini 3 has a similar 2024 cutoff and they claim to have trained it from scratch. I wish they would say more about that.

  • p1necone 13 hours ago

    Every so often I try out a GPT model for coding again, and manage to get tricked by the very sparse conversation style into thinking it's great for a couple of days (when it says nothing, then finishes producing code with an 'I did x, y and z' with no stupid 'you're absolutely right' sucking up, and it works, it feels very good).

    But I always realize it's just smoke and mirrors: the actual quality of the code and the failure modes are just so much worse than Claude and Gemini.

    • kshacker 12 hours ago

      I am a novice programmer -- I have programmed for 35+ years now, but I build and lose the skills moving between coder, manager, and sales roles -- multiple times. Fresh IC since last week again :) I have coded starting with Fortran, RPG and COBOL, and I have also coded Java and Scala. I know modern architecture but haven't done enough grunt work to make it work or to debug (and fix) a complex problem. Needless to say, sometimes my eyes glaze over the code.

      And I write some code for my personal enjoyment, and I gave it to Claude 6-8 months back for improvement; it gave me a massive change log, and it was quite risky, so I abandoned it.

      I tried this again with Gemini last week. I was more prepared and asked it to improve class by class, and for whatever reasons I got better answers -- changed code, with explanations, and when I asked it to split the refactor into smaller steps, it did so. It was a joy working on this over the Thanksgiving holidays. It could break the changes into small pieces, talk through them as I evolved concepts learned previously, took my feedback and prioritization, and also gave me a nuanced explanation of the business objectives I was trying to achieve.

      This is not to downplay Claude; that is just the sequence of events as they happened. So while it may or may not work well for experienced programmers, it is such a helpful tool for people who know the domain or the concepts (or both) but struggle with details, since the tool can iron out a lot of details for you.

      My goal now is to have another project for winter holidays and then think through 4-6 hour AI assisted refactors over the weekends. Do note that this is a project of personal interest so not spending weekends for the big man.

      • Aurornis 7 hours ago

        > I was more prepared and asked it to improve class by class, and for whatever reasons I got better answers

        There is a learning curve with all of the LLM tools. It's basically required for everyone to go through the trough of disillusionment when you realize that the vibecoding magic isn't quite real in the way the influencers talk about it.

        You still have to be involved in the process, steer it in the right direction, and review the output. Rejecting a lot of output and re-prompting is normal. From reading comments I think it's common for new users to expect perfection and reject the tools when it's not vibecoding the app for them autonomously. To be fair, that's what the hype influencers promised, but it's not real.

        If you use it as an extension of yourself that can type and search faster, while also acknowledging that mistakes are common and you need to be on top of it, there is some interesting value for some tasks.

      • altmanaltman 3 hours ago

        Interesting. From my experience, Claude is somehow much better at stuff involving frontend design compared to other models (GPT is pretty bad). Gemini is also good, but the "thinking" mode often adds stuff to my code that I did not ask for, or modifies things to make them "better". It likes to one-up the objective a lot, which is not great when you just want it to do precisely what you asked and nothing else.

      • ikidd 8 hours ago

        My problem with Gemini is how token hungry it is. It does a good job but it ends up being more expensive than any other model because it's so yappy. It sits there and argues with itself and outputs the whole movie.

      • mleo 7 hours ago

        Breaking down requirements, functionality, and changes into smaller chunks is going to give you better results with most of the tools. If it can complete smaller tasks within the context window, the quality will likely hold up. My go-to has been to develop task documents with multiple pieces of functionality and subtasks. Build one piece of functionality at a time. Commit, clear context, and start the next piece of functionality. If something goes off the rails, back up to the commit, fix and rebase future changes, or abandon and branch.

        That’s if I want quality. If I just want to prototype and don’t care, I’ll let it go. See what I like, don’t like and start over as detailed above.

      • bovermyer 12 hours ago

        I have never considered trying to apply Claude/Gemini/etc. to Fortran or COBOL. That would be interesting.

    • tartoran 13 hours ago

      I'm starting with Claude at work but have had an okay experience with OpenAI so far. For clearly delimited tasks it does produce working code more often than not, and I've seen some improvement on their side compared to, say, last year. For something more complex and not clearly defined in advance, yes, it does produce plausible garbage and goes off the rails a lot. I was migrating a project and asked ChatGPT to analyze the original code base and produce a migration plan. The result seemed good and encouraging, because I didn't know much about the project at that time. But I ended up taking a different route, and when I finished the migration (with bits of help from ChatGPT) I looked at the original migration plan out of curiosity, since I had become more familiar with the project by then. The migration plan was an absolutely useless and senseless hallucination.

    • stevedonovan 4 hours ago

      I've been getting great results from Codex. Can be a bit slow, but gets there. Writes good Rust, powers through integration test generation.

      So (again) we are just sharing anecdata

    • herpdyderp 13 hours ago

      On the contrary, I cannot use the top Gemini and Claude models because their outputs are so out of place and hard to integrate with my code bases. The GPT-5 models integrate with my code base's existing patterns seamlessly.

      • ta12653421 3 hours ago

        Supply some relevant files of your codebase in the Claude project area in the right part of the browser. Usually it will then understand your architecture, patterns, and principles.

    • findjashua 12 hours ago

      Not my experience at all -- 5.1 Codex has been the best by far.

      • manmal 12 hours ago

        How can you stand the excruciating slowness? Claude Code is running circles around codex. The most mundane tasks make it think for a minute before doing anything.

    • jpalomaki 13 hours ago

      Can you give a concrete example of a programming task GPT fails to solve?

      Interested, because I’ve been getting pretty good results on different tasks using Codex.

      • gloosx 4 hours ago

        Try asking it to write some GLSL shaders. Just describe what you want to see, then try to run the shaders it outputs. It can get a UV map or a simple gradient right, but with anything more complex, most of the time the shader will not compile or run properly; it sometimes mixes GLSL versions, and sometimes just makes up things that don't work or don't output what you want.

      • throwaway31131 6 hours ago

        I posted this example before, but academic papers on algorithms often have pseudocode but no actual code.

        I thought it would be handy to use AI to produce code from the paper, so a few months ago I tried to use Claude (not GPT, because I only have access to Claude) to write C++ code implementing the algorithms in this paper, as practice for me in LLM use, and it didn’t go well.

        https://users.cs.duke.edu/~reif/paper/chen/graph/graph.pdf

        • threeducks 20 minutes ago

          I just tried it with GPT-5.1-Codex. The compression ratio is not amazing, so not sure if it really worked, but at least it ran without errors.

          A few ideas how to make it work for you:

          1. You gave a link to a PDF, but you did not describe how you provided the content of the PDF to the model. It might only have read the text with something like pdftotext, which for this PDF results in a garbled mess. It is safer to convert the pages to PNG (e.g. with pdftoppm; see the sketch after this list) and let the model read the pages as images. A prompt like "Transcribe these pages as markdown." should be sufficient. If you cannot see what the model did, there is a chance it made things up.

          2. You used C++, but Python is much easier to write. You can tell the model to translate the code to C++ once it works in Python.

          3. Tell the model to write unit tests to verify that the individual components work as intended.

          4. Use Agent Mode and tell the model to print something and to judge whether the output is sensible, so it can debug the code.
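
          For point 1, a small sketch of the page-to-PNG step in Python, assuming poppler's pdftoppm is on the PATH; the file names are made up:

              import subprocess
              from pathlib import Path

              def pdf_to_pngs(pdf_path, out_dir="pages", dpi=150):
                  """Render each PDF page to a PNG with poppler's pdftoppm."""
                  out = Path(out_dir)
                  out.mkdir(exist_ok=True)
                  # Writes pages/page-1.png, pages/page-2.png, ...
                  subprocess.run(
                      ["pdftoppm", "-png", "-r", str(dpi), pdf_path, str(out / "page")],
                      check=True,
                  )
                  return sorted(out.glob("page-*.png"))

              pages = pdf_to_pngs("graph.pdf")  # then attach these images to the model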

      • cmarschner 13 hours ago

        Completely failed for me at running the code it changed in a Docker container I keep running. Claude did it flawlessly. It absolutely rocks at code reviews, but it's terrible in comparison at generating code.

        • peab 11 hours ago

          It really depends on what kind of code. I've found it incredible for frontend dev, and for scripts. It falls apart in more complex projects and monorepos

    • CheeseFromLidl 3 hours ago

      Same experience here. The more commonly known the stuff it regurgitates is, the fewer the errors. But if you venture into RF electronics or embedded land, beware of it turning into a master of bs.

      Which makes sense for something that isn’t AI but an LLM.

    • sharyphil 12 hours ago

      You're absolutely right!

      Somehow it doesn't get on my nerves (unlike Gemini with "Of course").

    • logicchains 12 hours ago

      I find that for difficult math and design questions, GPT-5 tends to produce better answers than Claude and Gemini.

      • munk-a 12 hours ago

        Could you clarify what you mean by design questions? I do agree that GPT5 tends to have a better agentic dispatch style for math questions but I've found it has really struggled with data model design.

    • bsder 6 hours ago

      At this point you are forced to use the "AI"s as code search tools -- and it annoys me to no end.

      The problem is that the "AI"s can cough up code examples based on proprietary codebases that you, as an individual, have no access to. That creates a significant quality differential between coders who only use publicly available search (Google, GitHub, etc.) and those who use "AI" systems.

  • xnx 6 hours ago

    OpenAI is in the "don't look behind the curtain" stage with both their technology and finances.

  • impulser_ 5 hours ago

    OpenAI is the only SOTA model provider that doesn't have a cutoff date in the current year. That's why it performs badly at writing code for any new libraries, or libraries that have had significant updates, like Svelte.

    • rvnx 2 minutes ago

      State Of The Art is maybe a bit exaggerated. It's more like an early model that never really adapted, and only got watered down (smaller network, outdated information, and you cannot see thought/reasoning).

      Their models get dumber and dumber over time.

  • nickff 14 hours ago

    I recall reading that Google had similar 'delay' issues when crawling the web in 2000 and early 2001, but they managed to survive. That said, OpenAI seems much less differentiated (now) than Google was back then, so this may be a much riskier situation.

    • echelon 5 hours ago

      Google didn't raise at a $500 billion valuation.

      The 25x revenue multiple wouldn't be so bad if they weren't burning so much cash on R&D and if they actually had a moat.

      Google caught up quick, the Chinese are spinning up open source models left and right, and the world really just isn't ready to adopt AI everywhere yet. We're in the premature/awkward phase.

      They're just too early, and the AGI is just too far away.

      Doesn't look like their "advertising" idea to increase revenue is working, either.

      • shridharxp 5 hours ago

        There is no moat in selling/renting AI models. They are a commoditized product now. I can't imagine what thought process led investors to pour so much money into OpenAI.

    • redbluered 7 hours ago

      The differentiation should be open source, nonprofit, and ethical.

      As a shady for-profit, there is none. That's the problem with this particular fraud.

      • echelon 5 hours ago

        Why is profit bad? You can be open source, ethical, and for-profit.

        • khafra 4 hours ago

          If you start out as a non-profit, and pull a bunch of shady shenanigans in order to convert to a for-profit, claiming to be ethical after that is a bit of a hard sell.

    • savrajsingh 7 hours ago

      Yes, the story was that Google hadn’t rebuilt their index for something like 8 months, if I recall correctly.

  • mikepurvis 12 hours ago

    I noticed this recently when I asked it whether I should play Indiana Jones on my PS5 or PC with a 9070 XT. It assumed I had made a typo until I clarified, then it went off to the internet and came back telling me what a sick rig I have.

  • amluto 12 hours ago

    I asked ChatGPT 5.1 to help me solve a silly installation issue with the codex command line tool (I’m not an npm user and the recommended installation method is some kludge using npm), and ChatGPT told me, with a straight face, that codex was discontinued and that I must have meant the “openai” command.

  • hn_throwaway_99 8 hours ago

    Just a minor correction, but I think it's important because some comments here seem to be giving bad information: OpenAI's model page (https://platform.openai.com/docs/models/compare) says that the knowledge cutoff for gpt-5 is Sept 30, 2024, which is later than the June 01, 2024 date of GPT-4.1.

    Now I don't know if this means that OpenAI was able to add those 3 months of data to earlier models by tuning, or if it was a "from scratch" pre-training run, but either way there has to be a substantial difference between the models.

  • kristianp 3 hours ago

    I doubt it's that important that their dataset of current events is up to date. At this stage, I believe private and synthetic data comprises a large fraction of pretraining. Web search substitutes for current event pretraining.

    • f311a an hour ago

      I tried OpenAI models for coding in Go, but they constantly say things like "your syntax is not correct, let me rewrite your whole file without `any`". `any` was introduced in 2022. It takes some time to adopt it in codebases, but they should not be doing stuff like that at the end of 2025.

  • searls 14 hours ago

    Funny, I had it tell me the same thing twice yesterday, and that was _with_ thinking + search enabled on the request (it apparently refused to carry out the search, which happens once in a blue moon).

    I hadn't made the connection that the training data is that old, but that would indeed augur poorly.

  • mr_00ff00 12 hours ago

    What is a pre-training run?

    • nodja 12 hours ago

      Pre-training is just training. It got the name because most models also have a post-training stage, so people call the first stage pre-training to differentiate the two.

      Pre-training: You train on a vast amount of data, as varied and high quality as possible. This determines the distribution the model can operate with, so LLMs are usually trained on a curated dataset of the whole internet. The output of pre-training is usually called the base model.

      Post-training: You narrow the model down to the specific behavior you want. You can do this in several ways:

      - Supervised Finetuning (SFT): Training on a strict, high-quality dataset of the task you want. For example, if you wanted a summarization model, you'd finetune the model on high-quality text->summary pairs, and the model would be able to summarize much better than the base model.

      - Reinforcement Learning (RL): You train a separate model that ranks outputs, use it to rate the outputs of the model, then use that data to train the model further.

      - Direct Preference Optimization (DPO): You have pairs of good/bad generations and use them to align the model toward/away from the kinds of responses you want (a minimal sketch follows at the end of this comment).

      Post-training is what makes the models easy to use. The most common form is instruction tuning, which teaches the model to talk in turns, but post-training can be used for anything: e.g. if you want a translation model that always translates a certain way, or a model that knows how to use tools, you'd achieve all that through post-training. Post-training is where most of the secret sauce in current models is nowadays.
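
      As promised above, a minimal sketch of the DPO loss in PyTorch, assuming you have already computed summed per-sequence log-probabilities from the policy and a frozen reference model; the beta value and the numbers are toy choices:

          import torch
          import torch.nn.functional as F

          def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
              # Push the policy to prefer the chosen response over the rejected
              # one, measured relative to a frozen reference model.
              chosen_margin = policy_chosen - ref_chosen
              rejected_margin = policy_rejected - ref_rejected
              return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

          # Toy usage: summed log-probs for 3 chosen/rejected completion pairs.
          loss = dpo_loss(torch.tensor([-10.0, -12.0, -9.5], requires_grad=True),
                          torch.tensor([-11.0, -11.5, -10.0]),
                          torch.tensor([-10.5, -12.5, -9.0]),
                          torch.tensor([-10.5, -11.0, -10.5]))
          loss.backward()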

      • mrweasel an hour ago

        If pre-training is just training, then how on earth can OpenAI not have had "a successful pre-training run"? The word successful indicates that they tried, but failed.

        It might be me misunderstanding how this works, but I assumed that the training phase was fairly reproducible. You might get different results on each run, due to changes in the input, but not massively so. If OpenAI can't continuously and reliably train new models, then they are even more overvalued than I previously assumed.

      • cocogoatmain 11 hours ago

        Want to also add that the model doesn’t know how to respond in a user -> assistant style conversation after its pretraining; it’s a pure text predictor (look at the open-source base models).

        There’s also what is being called mid-training, where the model is trained on higher-quality traces; it acts as a bridge between pre- and post-training.

    • abixb 12 hours ago

      The first step in building a large language model. That's when the model is initiated and trained on a huge dataset to learn patterns and whatnot. The "P" in "GPT" stands for "pre-trained."

    • bckr 12 hours ago

      That’s where they take their big pile of data and train the model to do next-token-prediction.
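
      In code, that whole objective is one cross-entropy loss against the token sequence shifted by one position; a toy sketch, where random tensors stand in for a real model and corpus:

          import torch
          import torch.nn.functional as F

          vocab, seq = 100, 8
          logits = torch.randn(seq, vocab, requires_grad=True)  # model outputs, one per position
          tokens = torch.randint(0, vocab, (seq + 1,))          # toy token ids from the "pile of data"
          loss = F.cross_entropy(logits, tokens[1:])            # predict token t+1 from the prefix
          loss.backward()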

  • manmal 12 hours ago

    That would explain why it’s so bad with new Swift features and more recent ast-grep rules.

  • mips_avatar 8 hours ago

    Usually current events get taught through mid-training, so even with old pre-training, current events could still be added.

  • nextworddev 13 hours ago

    Don’t forget SemiAnalysis’s founder Dylan Patel is supposedly roommates with Anthropic's RL tech lead Sholto.

    • nickysielicki 13 hours ago

      The fundamental problem with bubbles like this is that you get people like this who are able to take advantage of the Gell-Mann amnesia effect, except the details they’re wrong about are so niche that there’s a vanishingly small group of people qualified to call them out on it, and there’s simultaneously so much more attention on what they say, because investors and speculators are so desperate and anxious for new information.

      I followed him on Twitter. He said some very interesting things, I thought. Then he started talking about the niche of ML/AI I work near, and he was completely wrong about it. I became enlightened.

  • throwaway314155 13 hours ago

    It has no idea what its own knowledge cutoff is.

    • octoberfranklin 11 hours ago

      Knowledge cutoff date is usually part of the system prompt.

      Helps you get useful answers like "I don't know that's too recent" when you ask questions like "who won the basketball game last night".
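
      A sketch of what that looks like from the application side, in Python, assuming an OpenAI-style chat API; the system prompt wording and the model name here are illustrative assumptions, since production system prompts are not public:

          from openai import OpenAI

          client = OpenAI()

          # The provider prepends something along these lines; the exact
          # wording of production system prompts is not public.
          system = ("You are a helpful assistant. Knowledge cutoff: 2024-06. "
                    "Current date: 2025-12-02. For events after the cutoff, "
                    "say you don't know rather than guessing.")

          resp = client.chat.completions.create(
              model="gpt-5.1",  # hypothetical model name for illustration
              messages=[
                  {"role": "system", "content": system},
                  {"role": "user", "content": "Who won the basketball game last night?"},
              ],
          )
          print(resp.choices[0].message.content)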

      • vel0city 10 hours ago

        I opened up the Gemini app and asked:

        Who are the Dallas Stars playing tonight? Is there anyone that will be out of the lineup for the Stars? What have their recent injuries been like? Can you get the current score?

        It's response:

        The Dallas Stars are playing the New York Rangers tonight, Tuesday, December 2, 2025, at Madison Square Garden.

        Here is the information available regarding the rest of your questions:

        Out of the Lineup & Recent Injuries

        * Lian Bichsel (Defenseman): He suffered a lower-body injury (ankle) in the recent game against the Senators and is expected to miss 6-8 weeks and require surgery. His absence will be felt on the blueline.

        * Jake Oettinger (Goaltender): He is dealing with an illness and is not expected to start tonight; Casey DeSmith is projected to start in his place. Oettinger is expected to play tomorrow night in New Jersey.

        * Thomas Harley (Defenseman): He remains week-to-week with a lower-body injury and is not with the team on their current road trip.

        Other injured players who will likely be out of the lineup include Matt Duchene (Upper Body - on IR-LT, but joining the road trip), Adam Erne (Lower Body - on IR-LT), and Nils Lundkvist (Lower Body - on IR-LT).

        Current Score

        The game is in progress, and the score is currently New York Rangers 1 - 1 Dallas Stars.

        Would you like an update on the final score later, or news on a specific player's status?

        I ask it things like updates on teams all the time, and it usually pulls the right information.

  • simianparrot 13 hours ago

    Any data after that is contaminated with vast amounts of AI slop. Is anyone training on anything newer?

    • [removed] an hour ago
      [deleted]
    • bakugo 9 hours ago

      Both Claude 4.x and Gemini 3 go up to early 2025. Still not that recent, but not as bad as GPT.

  • mingodad 9 hours ago

    Interestingly enough, today I found that https://huggingface.co/openai/gpt-oss-20b, when downloaded and run locally and asked:

    ====

    when have you been built?

    I’m a GPT‑4‑based model that OpenAI released on March 14 2023. The underlying training data goes up to early 2023, and the model was trained in the months leading up to that release. If you’re asking about the ChatGPT product that ships the model to users, it went live in November 2022 and has since received updates (GPT‑3.5, GPT‑4, etc.) that keep it current.

    ====

    But when supposedly running it from https://huggingface.co/chat/models/openai/gpt-oss-20b:

    ====

    when have you been built?

    I’m a language model created by OpenAI. The current generation (GPT‑4) that powers this chat was first released in March 2023 and has been updated and fine‑tuned up through the end of 2024. My training data runs up to the beginning of June 2025, so I’m built on knowledge available up to that point.

    ====

    And that makes me think that although https://huggingface.co/chat claims to be using the models available to the public at https://huggingface.co, that doesn't seem to be true, so I raised this question here: https://huggingface.co/ggml-org/gpt-oss-20b-GGUF/discussions... , https://github.com/huggingface/inference-playground/issues/1... and https://github.com/ggml-org/llama.cpp/discussions/15396#disc... .

felixfurtak 15 hours ago

OpenAI is basically just Netscape at this point. An innovative product with no means of significant revenue generation.

On one side it's up against large competitors with an already established user base and product line that can simply bundle their AI offerings into those products. Google will do just what Microsoft did with Internet Explorer and bundle Gemini in for 'Free' with their other already-profitable products and established ad-funded revenue streams.

At the same time, DeepSeek/Qwen, etc. are open-sourcing stuff to undercut them on the other side. It's a classic squeeze on their already fairly dubious business model.

  • edouard-harris 14 hours ago

    > with no means of significant revenue generation.

    OpenAI will top $20 billion in ARR this year, which certainly seems like significant revenue generation. [1]

    [1] https://www.cnbc.com/2025/11/06/sam-altman-says-openai-will-...

    • stack_framer 14 hours ago

      I can generate $20 billion in ARR this year too! I just need you to give me $100 billion and allow me to sell each of your dollars for 0.2 dollars.

      • bgirard 14 hours ago

        It's a fun trope to repeat, but that's not what OpenAI is doing. I get a ton of value from ChatGPT and Codex from my subscription. As long as the inference is not done at a loss, this analogy doesn't hold. They're not paying me to use it; they are generating output that is very valuable to me, much more than my subscription cost.

        I've been able to set up cross-app automation for my partner's business, remodel my house, plan a trip to Japan with help across the cultural barrier, vibe code apps, get technical support, and so much more.

      • umanwizard 13 hours ago

        This analogy only really works for companies whose gross margin is negative, which as far as I know isn’t the case for OpenAI (though I could be wrong).

        It’s an especially good analogy if there is no plausible path to positive gross margin (e.g. the old MoviePass) which I think is even less likely to be true for OpenAI.

      • eli_gottlieb 12 hours ago

        We should perhaps say "profit" when we are talking about revenue minus cost, and "revenue" when we only mean the first term in the subtraction.

      • postflopclarity 14 hours ago

        very clever! I hadn't seen anybody make this point before in any of these threads /s

        Obviously the nature of OpenAI's revenue is very different from selling $1 for $0.2, because their customers are buying an actual service, not anything with resale value or obviously fungible for dollars.

      • m3kw9 8 hours ago

        You sell a dollar for 1 penny; they sell it for more like 70. Different skill level.

      • signatoremo 14 hours ago

        Can you? What are you selling? Who are you and why should I believe in you? What would I get in return?

        • stavros 13 hours ago

          He can. He's selling dollars. He's a person who sells dollars for fewer dollars. You'd get dollars.

    • blitz_skull 7 hours ago

      Revenue != Profit

      OpenAI is hemorrhaging cash at an astronomical rate.

    • brazukadev 32 minutes ago

      No, they won't; those are fake numbers from his arse, the same way ChatGPT does not have 800 million users.

    • riku_iki 14 hours ago

      > Altman says that OpenAI will top $20 billion in ARR this year, which certainly seems like significant revenue generation. [1]

      fixed this for you

      • unsupp0rted 14 hours ago

        Can he safely lie about that? Or would that be a slam-dunk lawsuit against him? He's already got Elon Musk on his enemies list.

    • echelon 14 hours ago

      In 2024, OpenAI claimed the bulk of its revenue, 70-80%, came from consumer ChatGPT subscriptions. That's wildly impressive.

      But now they've had an order of magnitude of revenue growth. That can't still be consumer subscriptions, right? They have to have saturated that by now?

      I haven't seen reports of the revenue breakdown, but I imagine it must be enterprise sales.

      If it's enterprise sales, I'd imagine that was sold to F500 companies in bulk during peak AI hype. Most of those integrations are probably of the "the CEO has tasked us with `implementing an AI strategy`" kind. If so, I can't imagine they will survive in the face of a recession or economic downturn. To be frank, most of those projects probably won't pan out even under the rosiest of economic pictures.

      We just don't know how to apply AI to most enterprise automation tasks yet. We have a long way to go.

      I'd be very curious to see what their revenue spread looks like today, because that will be indicative of future growth and the health of the company.

      • cheschire 14 hours ago

        With less than 10% of users paying for a subscription, I doubt they have saturated.

        • debugnik 14 hours ago

          I'm reading 5% on a quick search. Isn't that an unsurprising conversion rate for a successful app with a free tier? Why would it increase further in ChatGPT's case, other than by losing non-paying customers?

      • HDThoreaun 7 hours ago

        Consumer subs aren't even close to saturated, and business subs are where the real money is anyway. Most white-collar workers are still on free-tier Copilot, not paying OpenAI.

  • searls 14 hours ago

    It would be funny if OpenAI turns for-profit, faceplants, and then finds new life (as Mozilla did) as a non-profit sharing its tools for free.

    • felixfurtak 14 hours ago

      This is pretty much all that OpenAI is at the moment.

      Mozilla is a non-profit that is only sustained by a generous wealthy benefactor (Google) to give the illusion that there is competition in the browser market.

      OpenAI is a non-profit funded by a generous wealthy benefactor (Microsoft).

      Ideas of IPO and profitability are all just pipe dreams in Altman's imagination.

      • elAhmo 13 hours ago

        > Mozilla is a non-profit that is only sustained by the generous wealthy benefactor (Google) to give the illusion that there is competition in the browser market.

        Good way of phrasing things. It's kinda sad to read this. I wanted to react with 'wait, there is competition in the browser market', but it's not a great argument to make -- without the money for being Google's default search engine, Mozilla would effectively collapse.

      • shridharxp 5 hours ago

        A few months ago, the founder was talking about "AGI" and ridiculous universal basic compute. At this point, I don't even know whom to believe. My first-hand experience tells me ChatGPT and even Claude Code are nowhere near the expertise they are touted to have. Yet the marketing by these companies is so immense that you get washed away; you don't know who are agents and who are giving their true opinions.

        • fragmede 4 hours ago

          > My first-hand experience tells me ChatGPT and even Claude Code are nowhere near the expertise they are touted to have

          Not doubting you, but where specifically have the latest models fallen short for you?

    • [removed] 5 hours ago
      [deleted]
  • bibimsz 13 hours ago

    Anecdotal, but my wife wasn't interested in switching to Claude from ChatGPT. As far as she's concerned, ChatGPT knows her, and she's got her assistant perfectly tuned to her liking.

    • bncndn0956 6 hours ago

      This is my horror as well. I wouldn't mind my YouTube account being blocked, but what about all the recommendations that I have curated to my liking? It would be a huge chunk of lost time to rebuild and insert my preferences into the algorithm. Increasingly, our preferences, shaped by time and by influences and encounters both digital and offline, are as much about us as our physical selves are.

      • curioussquirrel 4 hours ago

        You could ask GPT for what it knows about you and use it to seed your personal preferences to a new model/app. Not perfect and probably quite lossy, but likely much better than starting from scratch.

    • munchler 13 hours ago

      ChatGPT is to AI as Facebook is to social media. OpenAI captured a significant number of users due to first-mover advantage, but that advantage is long gone now.

      • jimbokun 8 hours ago

        1. ChatGPT would be MySpace as the first mover. 2. Facebook has insane lock-in: your entire graph of friends and family.

      • felixfurtak 12 hours ago

        And Facebook only makes money because it is essentially just an advertising platform. Same with Google. It's fundamentally just ads.

        The only way OpenAI can survive is to replicate this model. But it probably doesn't have the traffic to pull it off unless it can differentiate itself from the already crowded competition.

        • wavemode 8 hours ago

          Ads make sense in an AI search engine product like Perplexity. ChatGPT could try to make a UI like that.

          But the thing is, the world already has an AI search engine. It's called Google, and it's already heavily integrated with Gemini. Why would people switch?

    • tofuahdude 13 hours ago

      Same situation over here. Multiple family members only know chatgpt / think that chatgpt knows them and have never heard of the competitors.

  • dragonwriter 13 hours ago

    > Google will do just what Microsoft did with Internet Explorer and bundle Gemini in for 'Free' with their other already-profitable products and established ad-funded revenue streams.

    “will do”? Is there any Google product they haven't done that with already?

  • asdfman123 12 hours ago

    I know it's been said before, but it's slightly insane that they're trying to compete on a hot new tech against a company with 1) a top-notch reputation for AI and 2) the largest money printer that has ever existed on the planet.

    Feel like the end result would always be that while Google is slow to adjust, once they're in the race, they're in it.

    • margorczynski 11 hours ago

      The problem for Google is that there is no sensible way to monetize this tech, and it undercuts their main money source, which is search.

      On top of that, the Chinese seem hellbent on destroying any possible moat the US companies might create by flooding the market with SOTA open-source models.

      Although this tech might be good for software companies in general, since it reduces their main cost, which is personnel. But in the long run Google will need to reinvent itself or die.

      • kelipso 10 hours ago

        Gemini has been in Google search for a while now. I use it somewhat often when I search for something and want follow-up questions. I don’t see any ads in Gemini, but maybe I would if I searched for ad-relevant things, I don't know. But I definitely use Google search more often because Gemini is there, and that probably goes for a lot of people.

  • woopwoop 14 hours ago

    Maybe? But you could have written this same thing in 1999 with OpenAI and Google replaced by Google and Yahoo, respectively.

    • raw_anon_1111 14 hours ago

      And Google had profits - not just revenue - early on, and wasn’t setting $10 on fire to have $1 in revenue.

      • dmoy 14 hours ago

        Well maybe not in 1999. Adwords didn't launch until 2000? Google's 1999 revenue was...... I forget, but it was incredibly small. Costs were also incredibly small too though, so this isn't a good analogy given the stated year of 1999.

    • TulliusCicero 3 hours ago

      Google was immediately better than Yahoo, that's why people switched en masse.

      Same thing happened with Internet Explorer and Chrome, or going from Yahoo Mail/Hotmail to Gmail.

    • wat10000 14 hours ago

      Google in 1999 was already far superior to Yahoo and other competitors. I don't think OpenAI is in a similar position there. It seems debatable as to whether they're even the best, let alone a massive leap ahead of everyone else the way Google was.

      • ur-whale 14 hours ago

        Agree.

        And GOOG is not a one-trick pony any more, by far, especially when it comes to revenue.

        Can't say the same of OpenAI

  • mips_avatar 10 hours ago

    Gemini can't be bundled for free unless they figure out how to make Gemini Flash 3.0 significantly cheaper to run inference on than 2.5.

    • HDThoreaun 7 hours ago

      It can be bundled for "free" if they raise the price of google workspace. LLMs are right now most valuable as an enterprise productivity software assistant. Very useful to have a full suite of enterprise productivity software in order to sell them.

  • vondur 10 hours ago

    I don't think the Government would let them fail, so long as the specter of the Chinese becoming dominant in AI is a thing.

  • jmyeet 12 hours ago

    Oh God I love the analogy of OpenAI being Netscape. As someone who was an adult in the 1990s, this is so apt. Companies at that time were trying to build a moat around the World Wide Web. They obviously failed. I've thought that OpenAI too would fail but I've never thought about it like Netscape and WWW.

    OpenAI should be looking at how Google built a moat around search. Anyone can write a Web crawler. Lots of people have. But no one else has turned search into the money printing machine that Google has. And they've used that to fund their search advantage.

    I've long thought the moat-buster here will be China, because they simply won't want the US to own this future. It's a national security issue. I see things like DeepSeek as moat-busting activity, and I expect that to intensify.

    Currently China can't buy the latest NVidia chips or ASML lithography equipment. Why? Because the US said so. I don't expect China to tolerate this long term, and of any country, China has demonstrated the long-term commitment needed for this kind of project.

  • TacticalCoder 12 hours ago

    > Google will do just what Microsoft did with Internet Explorer and bundle Gemini in for 'Free' with their other already-profitable products and established ad-funded revenue streams.

    Just some numbers to show what OpenAI is against:

        GMail users: nearing 2 billion
        Youtube MAU: 2.5 billion
        active Android devices: 4 billion (!)
        Market cap: 3.8 trillion (at a P/E of 31)
    
    So on one side you've got this behemoth with, compared to OpenAI's size, unlimited funding. The $25 bn per year OpenAI is after is basically a parking ticket for Google (only slightly exaggerating). A behemoth that came out with Gemini 3 Pro "thinking" and Nano Banana (that name though), which are SOTA.

    And on the other side you've got the open-source weights you mentioned.

    When OpenAI had its big moment, HN was full of comments about how it was game over for Google, that search was done for. Three years later, and the (arguably) best model gives the best answer when you search... using Google search.

    Funny how these things turn out.

    Google is at the moment the 3rd biggest cap in the world: only Apple and NVidia are slightly ahead. If Google is serious about its AI chips (and it looks like they are), and seeing fuck-up after fuck-up from Apple, I wouldn't be surprised at all if Alphabet were to regain the number one spot.

    That's the company OpenAI is fighting: a company that's already been the biggest cap in the entire world, that's probably going to regain that spot sooner rather than later, and that happens to have crushed every single AI benchmark when Gemini 3 Pro came out.

    I had a ChatGPT subscription. Now I'm using Gemini 3 Pro.

    • redwood 11 hours ago

      You just made it clear who needs to acquire OpenAI: it's going to be Apple! (Jony Ive is already there.)

      And great points on the Google history. Let's not forget they wrote the original Transformer paper, after all.

      • adgjlsfhk1 6 hours ago

        The branding is all wrong. I could see Apple buying Anthropic, but OpenAI is exactly the wrong AI company for Apple. OpenAI is the tacky, slop-based AI company; their main value is the brand and the users, but Apple already has a strong brand and billions of users. Apple needs an AI company with deployment experience and a good model, but paying for a brand and users doesn't make sense for them.

  • ascorbic 15 hours ago

    > An innovative product with no means of significant revenue generation.

    OpenAI has annualized revenue of $20bn. That's not Google, but it's not insignificant.

    • ethin 14 hours ago

      It is insignificant when they're spending more than $115bn to offer their service. And yes, I say "more than," not because I have any inside knowledge but because I'm pretty sure $115bn is a "kind" estimate and the expenditure is probably higher. But either way, they're running at a loss. And for a company like them, that loss is huge. Google could take the loss as could Microsoft or Amazon because they have lots of other revenue sources. OAI does not.

    • Spooky23 14 hours ago

      Google is embedding Gemini into Chrome Developer Tools. You can ask for an analysis of individual network calls in your browser by clicking a checkbox. That's just an example of the power of platform. They seem to be better at integration than Microsoft.

      OpenAI has this amazing technology and a great app, but the company feels like some sort of financial engineering nightmare.

      • cruffle_duffle 7 hours ago

        To be fair, the CEO of OpenAI is also a crypto bro. Financial engineering is right in their wheelhouse.

    • cmiles8 14 hours ago

      We live in crazy times, but given what they’ve spent and committed to, that’s a drop in the bucket relative to what they need to be pulling in. They’re history if they can’t pump up the revenue much, much faster.

      Given that we’re likely at peak AI hype at the moment they’re not well positioned at all to survive the coming “trough of disillusionment” that happens like clockwork on every hype cycle. Google, by comparison, is very well positioned to weather a coming storm.

      • XorNot 13 hours ago

        Google survives because I still Google things, and the phone I'm typing this on is a Google product.

        Whereas I haven't opened the ChatGPT bookmark in months and will probably delete it now that I think about it.

        • scrollop 13 hours ago

          RIP privacy.

          Hello Stasi Google and its full personalised file on XorNot.

          Google knows when you're about to sneeze.

    • cheald 14 hours ago

      And a $115b burn rate. They're toast if they can't figure out how to stay on top.

      • nfRfqX5n 14 hours ago

        Could say that about any AI company that isn’t at the top as well

    • echelon 14 hours ago

      Every F500 CEO told their team "have an AI strategy ASAP".

      In a year, when the economy might be in worse shape, they'll ask their team if the AI thing is working out.

      What do you think happens to all the enterprise OpenAI contracts at that point? (Especially if the same tech layperson CEOs keep reading Forbes and hearing Scott Galloway dump on OpenAI and call the AI thing a "bubble"?)

      • raw_anon_1111 14 hours ago

        I will change a few lines of code and use another AI model?

      • riku_iki 14 hours ago

        > What do you think happens to all the enterprise OpenAI contracts at that point?

        They will go to Google if it wins the AI race.

twothreeone 15 hours ago

The way I've experienced "Code Red" is mostly as a euphemism for an ongoing company-wide lack of focus, and a band-aid for mid-level management having absolutely no clue how to make meaningful progress, upper management panicking, and ultimately putting engineers and ICs on the spot to bear the brunt of that organizational mess.

Interestingly enough, apart from Google, I've never seen an organization take the actual proper steps (fire mid-management and PMs) to prevent the same thing from happening again. Will be interesting to see how OAI handles this.

  • chem83 12 hours ago

    > fire mid-management and PMs to prevent the same thing from happening again

    Firing PMs and mid-management would not prevent any of the code reds you may have read about from Google or OAI lately. This is a very naive perspective on how decision making is done at the scale of those two companies. I'm sorry you had bad experiences working with people in those positions, and I hope you have the opportunity to collaborate with great ones in the future.

  • avrionov 14 hours ago

    "Code Red" if implemented correctly should provide a single priority for the company. Engineers will be moved to the most important project(s).

    • azemetre 13 hours ago

      There should already be a single priority for a company...

      Why is the bar so low for the billionaire magnate fuck ups? Might as well implement workplace democracy and be done with it, it can't be any worse for the company and at least the workers understand what needs to be done.

      • dymk 12 hours ago

        You think a company the size of OAI should have a single priority? That makes no sense; that’s putting all their eggs in one basket.

  • protocolture 12 hours ago

    >I've never seen an organization take the actual proper steps (fire mid-management and PMs) to prevent the same thing from happening again.

    Only once in my entire career have I seen this done, and it was as successful as you imagine it to be. Lots of weird problems came out of having done it, but those are being treated as "wow, we are so glad we know about this problem" rather than "I hope those idiots come back to keep pulling the wool over my eyes".

  • jimbokun 8 hours ago

    The one successful example I can think of is Bill Gates writing a memo to re-orient Microsoft to put the Internet at the center of everything they were doing.

  • miltonlost 14 hours ago

    Your proper steps also leave out firing the higher-level executives. But then new ones would be hired, a re-org would occur, and another Code Red would follow in a few months.

  • NewEntryHN 12 hours ago

    "Software engineer complains bearing the burden of everything and concludes everything would be fixed by firing everybody except themselves."

  • vkou 15 hours ago

    This code red also has the convenient benefit of giving an excuse to stop work on more monetization features... Which, when implemented, would have the downside of tethering OpenAI's valuation to reality.

    • twothreeone 15 hours ago

      Good point too. Though it makes me wonder if "We declared Code Red" is really enough to justify eye-watering valuations.

    • rvba 15 hours ago

    Isn't Copilot the de facto OpenAI monetization?

      And Microsoft gets the models for free (?)

      • vkou 14 hours ago

      They have some monetization, but as long as they don't expand into other sectors, they can plausibly claim that, say, their ad business will be bringing in $10 trillion/year in revenue, or whatever other imagined number.