jdoliner 17 hours ago

157 replies

I've seen a rumor going around that OpenAI hasn't had a successful pre-training run since mid 2024. This seemed insane to me but if you give ChatGPT 5.1 a query about current events and instruct it not to use the internet it will tell you its knowledge cutoff is June 2024. Not sure if maybe that's just the smaller model or what. But I don't think it's a good sign to get that from any frontier model today, that's 18 months ago.

alecco 16 hours ago

SemiAnalysis said it last week and AFAIK it wasn't denied.

https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...

  • RossBencina 12 hours ago

    The SemiAnalysis article that you linked to stated:

    "OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome."

    Given the overall quality of the article, that is an uncharacteristically convoluted sentence. At the risk of stating the obvious, "that was broadly deployed" (or not) is contingent on many factors, most of which are not of the GPU vs. TPU technical variety.

    • alecco 4 hours ago

      My reading between the lines is that OpenAI's "GPT-5" is really a GPT-4-generation model. And this is aligned with it being unimpressive. Not the leap forward Altman promised.

      • [removed] 2 hours ago
        [deleted]
    • nbardy 9 hours ago

      This is misleading. They had 4.5, which was a new scaled-up training run. It was a huge model and only served to Pro users, but the biggest models are always used as teacher models for smaller models. That's how you do distillation. It would be stupid not to use the biggest model you have in distillation, and a waste, since they have the weights.

      They would have taken some time to calculate the efficiency gains of pretraining vs RL, resumed the GPT-4.5 run for whatever budget made sense, and then spent the rest on RL.

      Sure they chose to not serve the large base models anymore for cost reasons.

      But I'd guess Google is doing the same. Gemini 2.5 samples very fast and seems way too small to be their base pre-train. The efficiency gains in pretraining scale with model scale, so it makes sense to train the largest model possible. But then the models end up super sparse and oversized and make little sense to serve in inference without distillation.

      In RL the efficiency is very different because you have to inference sample the model to draw online samples. So small models start to make more sense to scale.

      Big model => distill => RL

      Makes the most theoretical sense for efficient training spend nowadays.

      So they already did train a big model, 4.5. Not using it would have been absurd, and they have a known recipe they could go back to scaling up if the returns justified it.
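
      To make the "big model => distill" step concrete: distillation usually means training a small student to match the big teacher's output distribution. A minimal PyTorch sketch of that loss (shapes and temperature are illustrative; nothing here is OpenAI's actual recipe):

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, temperature=2.0):
            # Soften both distributions, then pull the student toward the teacher.
            # The T^2 factor keeps gradient magnitudes comparable across temperatures.
            s = F.log_softmax(student_logits / temperature, dim=-1)
            t = F.softmax(teacher_logits / temperature, dim=-1)
            return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

        # Toy shapes: (batch, sequence, vocab). The teacher is frozen ("big model");
        # only the student gets gradients.
        student_logits = torch.randn(2, 16, 1000, requires_grad=True)
        with torch.no_grad():
            teacher_logits = torch.randn(2, 16, 1000)
        distillation_loss(student_logits, teacher_logits).backward()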

      • barrell 2 hours ago

        My understanding of 4.5 was that it was released long, long after the initial training run finished. It also had an older cutoff date than the newer 4o models

        • tim333 2 hours ago

          Cutoff dates seem to be Oct 2024 for GPT-4.5, and Jan 2025 for the Gemini models.

          It kind of explains a coding issue I had with TradingView, who update their Pine Script language quite frequently. ChatGPT seemed to have issues with v4 vs v5.

  • binkHN 11 hours ago

    This is a really great breakdown. With TPUs seemingly more efficient and costing less overall, how does this play for Nvidia? What's to stop them from entering the TPU race with their $5 trillion valuation?

    • matwood 6 hours ago

      As others mentioned, 5T isn't money available to NVDA. It could leverage that to buy a TPU company in an all stock deal though.

      The bigger issue is that entering a 'race' implies a race to the bottom.

      I've noted this before, but one of NVDA's biggest risks is that its primary customers are also technical, also make hardware, also have money, and clearly see NVDA's margin (70% gross!!, 50%+ profit) as something they want to eliminate. Google was first to get there (not a surprise), but Meta is also working on its own hardware along with Amazon.

      This isn't a doom post for NVDA the company, but its stock price is riding a knife's edge. Any margin or growth contraction will not be a good day for their stock or the S&P.

      • Glemkloksdjf 2 hours ago

        Nvidia has everything they need to build the most advanced GPU Chip in the world and mass produce it.

        Everything.

        They can easily just do this for more optimized Chips.

        "easily" in sense of that wouldn't require that much investment. Nvidia knows how to invest and has done this for a long time. Their Ominiverse or robots platform isaac are all epxensive. Nvidia has 10x more software engineers than AMD

        • farseer 16 minutes ago

          They still go to TSMC for fab, and so does everyone else.

      • sigmoid10 6 hours ago

        Making the hardware is actually the easy part. Everyone and their uncle who had some cash has tried by now: Microsoft, Meta, Tesla, Huawei, Amazon, Intel - the list goes on and on. But Nvidia is not a chip company. Huang himself said they are mostly a software company. And that is how they were able to build a gigantic moat, because no one else has even come close on the software side. Google is the only one who has had some success there, because they have also spent tons of money and time on software refinement by now, while all the other chips vanished into obscurity.

    • captainbland 43 minutes ago

      Nvidia is already in the TPU race aren't they? This is exactly what the tensor cores on their current products are supposed to do, but they're just more heterogeneous GPU based architectures and exist with CUDA cores etc. on the same die. I think it should be within their capability to make a device which devotes an even higher ratio of transistors to tensor processing.

    • randomNumber7 an hour ago

      If you look at the history of how GPUs evolved:

      1. First there was fixed-function hardware for certain graphics stages.

      2. Programmable massively parallel hardware took over. Nvidia was at the forefront of this.

      TPUs seem to me similar to fixed-function hardware. For Nvidia it's a step backwards, and even though they have moved in this direction recently, I can't see them going all the way.

      Otherwise you don't need CUDA, just hardware guys who write Verilog or VHDL. Nvidia doesn't have that much of an edge there.

    • dragonwriter 7 hours ago

      > What's to stop them from entering the TPU race with their $5 trillion valuation?

      Valuation isn't available money; they'd have to raise more money, in an investment environment that is probably tighter for them now, to enter the TPU race. The money they have already raised (which that valuation is based on) is already needed to provide runway for what they are already doing, without putting money into the TPU race.

    • sysguest 8 hours ago

      $5 trillion valuation doesn't mean it has $5 trillion cash in pocket -- so "it depends"

  • CamperBob2 15 hours ago

    That is.... actually a seriously meaty article from a blog I've never heard of. Thanks for the pointer.

    • seatac76 14 hours ago

      SemiAnalysis is great; they typically cover semiconductors, but the reporting is top notch.

      • lanstin 11 hours ago

        Wow, that was a good article. So much detail from financial to optical linking to build various data flow topologies. Makes me less aghast at the $10M salaries for the masters of these techniques.

    • Numerlor 3 hours ago

      This article about them got published just yesterday... https://news.ycombinator.com/item?id=46124883

      There's a lot of misleading information in what they publish, plagiarism, and I believe some information that wouldn't be possible to get without breaking NDAs

      • girvo 14 minutes ago

        > I believe some information that wouldn't be possible to get without breaking NDAs

        …why would I care about this in the slightest?

    • ipnon 5 hours ago

      Dylan Patel founded Semianalysis and he has a great interview with Satya Nadella on Dwarkesh Patel's podcast.

  • rahimnathwani 12 hours ago

    Dylan Patel joined Dwarkesh recently to interview Satya Nadella: https://www.dwarkesh.com/p/satya-nadella-2

    • embedding-shape 12 hours ago

      And this is relevant how? That interview is 1.5 hours, not something you just casually drop a link to and say "here, listen to this to even understand what point I was trying to make"

  • [removed] 11 hours ago
    [deleted]
mvkel 11 hours ago

It's not a rumor, it's confirmed by OpenAI. All "models" since 4o are actually just optimizations in prompting and a new routing engine. The actual -model- you are using with 5.1 is 4. Nothing has been pre-trained from scratch since 4o.

Their own press releases confirm this. They call 5 their best new "AI system", not a new model.

https://openai.com/index/introducing-gpt-5/

  • krackers 6 hours ago

    I can believe this; DeepSeek V3.2 shows that you can get close to "gpt-5" performance with a gpt-4-level base model just with sufficient post-training.

  • staticman2 9 hours ago

    A new AI system doesn't preclude new models. I thought that when GPT-5 launched and users hated it, the speculation was that GPT-5 was a cost-cutting model, and that the routing engine was routing to smaller, specialized, dumber models that cost less on inference?

    It certainly was much dumber than 4o on Perplexity when I tried it.

    • vidarh 4 hours ago

      > and the routing engine was routing to smaller, specialized dumber models that cost less on inference?

      That this was part of it was stated outright in their launch announcement, except maybe the "cost less" part, which was left for you to infer (sorry).

      Paying for Pro, and setting it to thinking all the time, I saw what seemed like significant improvements, but if your requests got (mis-)routed to one of the dumber models, it's not surprising that you'd be disappointed.

      I think they made a big mistake in not clearly labelling the responses with which of the models responded to a given request, as it made people complain about GPT 5 in general, instead of complaining about the routing.

  • Davidzheng 10 hours ago

    I don't think that counts as confirmation. 4.5 we know was a new base-model. I find it very very unlikely the base model of 4 (or 4o) is in gpt5. Also 4o is a different base model from 4 right? it's multimodal etc. Pretty sure people have leaked sizes etc and I don't think it matches up.

  • m3kw9 11 hours ago

    Well then 5.x is pretty impressive

  • Forgeties79 11 hours ago

    Maybe this is just armchair bs on my part, but it seems to me that the proliferation of AI-spam and just general carpet bombing of low effort SEO fodder would make a lot of info online from the last few years totally worthless.

    Hardly a hot take. People have theorized about the ouroboros effect for years now. But I do wonder if that’s part of the problem

    • irthomasthomas 4 hours ago

      Gemini 3 has a similar 2024 cutoff and they claim to have trained it from scratch. I wish they would say more about that.

p1necone 16 hours ago

Every so often I try out a GPT model for coding again, and manage to get tricked by the very sparse conversation style into thinking it's great for a couple of days (when it says nothing, then finishes producing code with an 'I did x, y and z' with no stupid 'you're absolutely right' sucking up, and it works, it feels very good).

But I always realize it's just smoke and mirrors - the actual quality of the code and the failure modes and stuff are just so much worse than Claude and Gemini.

  • kshacker 15 hours ago

    I am a novice programmer -- I have programmed for 35+ years now but I build and lose the skills moving between coder to manager to sales -- multiple times. Fresh IC since last week again :) I have coded starting with Fortran, RPG and COBOL and I have also coded Java and Scala. I know modern architecture but haven't done enough grunt work to make it work or to debug (and fix) a complex problem. Needless to say sometimes my eyes glaze over the code.

    And I write some code for my personal enjoyment, and I gave it to Claude 6-8 months back for improvement; it gave me a massive change log that was quite risky, so I abandoned it.

    I tried this again with Gemini last week, I was more prepared and asked it to improve class by class, and for whatever reasons I got better answers -- changed code, with explanations, and when I asked it to split the refactor in smaller steps, it did so. Was a joy working on this over the thanksgiving holidays. It could break the changes in small pieces, talk through them as I evolved concepts learned previously, took my feedback and prioritization, and also gave me nuanced explanation of the business objectives I was trying to achieve.

    This is not to downplay claude, that is just the sequence of events narration. So while it may or may not work well for experienced programmers, it is such a helpful tool for people who know the domain or the concepts (or both) and struggle with details, since the tool can iron out a lot of details for you.

    My goal now is to have another project for winter holidays and then think through 4-6 hour AI assisted refactors over the weekends. Do note that this is a project of personal interest so not spending weekends for the big man.

    • Aurornis 9 hours ago

      > I was more prepared and asked it to improve class by class, and for whatever reasons I got better answers

      There is a learning curve with all of the LLM tools. It's basically required for everyone to go through the trough of disillusionment when you realize that the vibecoding magic isn't quite real in the way the influencers talk about it.

      You still have to be involved in the process, steer it in the right direction, and review the output. Rejecting a lot of output and re-prompting is normal. From reading comments I think it's common for new users to expect perfection and reject the tools when it's not vibecoding the app for them autonomously. To be fair, that's what the hype influencers promised, but it's not real.

      If you use it as an extension of yourself that can type and search faster, while also acknowledging that mistakes are common and you need to be on top of it, there is some interesting value for some tasks.

      • vidarh 4 hours ago

        It really depends on what you're building. As an experiment, I started having Claude Code build a real-time strategy game a bit over a week ago, and it's done an amazing job, with me writing no code whatsoever. It's an area with lots of tutorials for code structure etc., and I'm guessing that helps. And so while I've had to read the code and tell it to refactor things, it has managed to do a good job of it with just relatively high level prodding, and produced a well-architected engine with traits based agents for the NPCs and a lot of well-functioning game mechanics. It started as an experiment, but now I'm seriously toying with building an actual (but small) game with it just to see how far it can get.

        In other areas, it is as you say and you need to be on top of it constantly.

        You're absolutely right re: the learning curve, and you're much more likely to hit an area where you need to be on top of it than one it can do autonomously, at least without a lot of scaffolding in the form of sub-agents, rules to follow, agent loops with reviews, etc., which takes a lot of time to build up and often includes a lot of things specific to what you want to achieve. Sorting out how much of that effort is worth it for a given project also takes time.

        • FuckButtons 3 hours ago

          I suspect the meta-architecture can also be done autonomously, though no one has got there yet; figuring out the right fractal dimension for sub-agents and the right prompt context can itself be thought of as a learning problem.

      • wiz21c 5 hours ago

        For me the learning curve was learning to choose what is worth asking Claude. After 3 months on it, I can reap the benefit: Claude gets the code I want right 80% of the time. I usually ask it to create new functions from scratch (it truly shines at understanding the context of these functions by reusing other parts of the code I wrote), to refactor code, and to create little tools (for example a chart viewer).

      • boie0025 8 hours ago

        I appreciate this narrative; relatable to me in how I have experienced and watched others around me experience the last few years. It's as if we're all kinda-sorta following a similar "Dunning–Kruger effect" curve at the same time. It feels similar to growing up mucking around with a ppp connection and Netscape in some regards. I'll stretch it: "multimodal", meet your distant analog "hypermedia".

    • altmanaltman 6 hours ago

      Interesting. From my experience, Claude is somehow much better at stuff involving frontend design compared to other models (GPT is pretty bad). Gemini is also good, but often the "thinking" mode just adds stuff to my code that I did not ask it to add, or modifies stuff to make it "better". It likes to one-up the objective a lot, which is not great when you're just looking for it to do precisely what you asked and nothing else.

    • ikidd 11 hours ago

      My problem with Gemini is how token hungry it is. It does a good job but it ends up being more expensive than any other model because it's so yappy. It sits there and argues with itself and outputs the whole movie.

    • mleo 9 hours ago

      Breaking down requirements, functionality and changes into smaller chunks is going to give you better results with most of the tools. If it can complete smaller tasks in the context window, the quality will likely hold up. My go to has been to develop task documents with multiple pieces of functionality and sub tasks. Build one piece of functionality at a time. Commit, clear context and start the next piece of functionality. If something goes off the rails, back up to the commit, fix and rebase future changes or abandon and branch.

      That’s if I want quality. If I just want to prototype and don’t care, I’ll let it go. See what I like, don’t like and start over as detailed above.

    • bovermyer 14 hours ago

      I have never considered trying to apply Claude/Gemini/etc. to Fortran or COBOL. That would be interesting.

      • Aurornis 9 hours ago

        You can actually use Claude Code (and presumably the other tools) on non-code projects, too. If you launch claude code in a directory of files you want to work on, like CSVs or other data, you can ask it to do planning and analysis tasks, editing, and other things. It's fun to experiment with, though for obvious reasons I prefer to operate on a copy of the data I'm using rather than let Claude Code go wild.

      • kshacker 14 hours ago

        I was just giving my history :) but yes, I am sure this could actually get us out of the COBOL lock-in, which requires 70-year-old programmers to continue working.

        The last article I could find on this is from 2020 though: https://www.cnbc.com/2020/04/06/new-jersey-seeks-cobol-progr...

        • chasd00 an hour ago

          Or you could just learn cobol. Using an LLM with a language you don’t know is pretty risky. How do you spot the subtle but fatal mistakes they make?

  • tartoran 15 hours ago

    I'm starting with Claude at work but did have an okay experience with OpenAi so far. For clearly delimited tasks it does produce working code more often than not. I've seen some improvement on their side compared to say, last year. For something more complex and not clearly defined in advance, yes, it does produce plausible garbage and it goes off the rails a lot. I was migrating a project and asked ChatGPT to analyze the original code base and produce a migration plan. The result seemed good and encouraging because I didn't know much about that project at that time. But I ended up taking a different route and when I finished the migration (with bits of help from ChatGPT) I looked at the original migration plan out of curiosity since I had become more familiar with the project by now. And the migration plan was an absolutely useless and senseless hallucination.

  • herpdyderp 15 hours ago

    On the contrary, I cannot use the top Gemini and Claude models because their outputs are so out of place and hard to integrate with my code bases. The GPT-5 models integrate with my code base's existing patterns seamlessly.

    • ta12653421 5 hours ago

      Supply some relevant files of your codebase in the ClaudeAI project area in the right part of the browser. Usually it will understand your architecture, patterns, and principles.

    • inquirerGeneral 15 hours ago

      You realize on some level that all of these sorts of anecdotes, though, are simply random coincidence.

  • stevedonovan 7 hours ago

    I've been getting great results from Codex. Can be a bit slow, but gets there. Writes good Rust, powers through integration test generation.

    So (again) we are just sharing anecdata

  • findjashua 15 hours ago

    NME at all - 5.1 codex has been the best by far.

    • pshirshov 11 hours ago

      By my tests (https://github.com/7mind/jopa) Gemini 3 is somewhat better than Claude with Opus 4.5. Both obliterate Codex with 5.1

      • Incipient 10 hours ago

        What's - roughly - your monthly spend when using ppt models? I only use fixed priced copilot, and my napkin maths says I'd be spending something crazy like $200/mo if I went ppt on the more expensive models.

      • viking123 6 hours ago

        Codex is super cheap though; even with the cheapest GPT subscription you get lots of tokens. I use Opus 4.5 at work and Codex at home, and tbh the differences are not that big if you know what you are doing.

    • manmal 15 hours ago

      How can you stand the excruciating slowness? Claude Code is running circles around codex. The most mundane tasks make it think for a minute before doing anything.

      • aschobel 14 hours ago

        I use it on medium reasoning and it's decently quick. I only switch to gpt-5.1-codex-max xhigh for the most annoying problems.

      • wahnfrieden 15 hours ago

        By learning to parallelize my work. This also solved my problem with slow Xcode builds.

    • andybak 2 hours ago

      NME = "not my experience" I presume.

      JFC TLA OD...

  • sharyphil 14 hours ago

    You're absolutely right!

    Somehow it doesn't get on my nerves (unlike Gemini with "Of course").

  • jpalomaki 15 hours ago

    Can you give a concrete example of a programming task GPT fails to solve?

    Interested, because I've been getting pretty good results on different tasks using Codex.

    • kriro 2 hours ago

      Library/API conflicts are the biggest pain point for me usually. Especially breaking changes. RLlib (currently 2.41.0) and Gymnasium (currently 0.29.0+) have ended in circles many times for me because they tend to be out of sync (for multi-agent environments). My go to test now is a simple hello world type card game like war, competitive multi-agent with rllib and gymnasium (pettingzoo tends to cause even more issues).

      Claude Sonnet 4.5 was able to figure out a way to resolve it eventually (around 7 fixes) and I let it create an rllib.md with all the fixes and pitfalls and am curious if feeding this file to the next experiment will lead to a one-shot. GPT-5 struggled more but haven't tried Codex on this yet so it's not exactly fair.

      All done with Copilot in agent mode, just prompting, no specs or anything.

    • gloosx 6 hours ago

      Try asking it to write some GLSL shaders. Just describe what you want to see and then try to run the shaders it outputs. It can get a UV-map or a simple gradient right, but for slightly more complex shaders, most of the time the output will not compile or run properly; it sometimes mixes GLSL versions, and sometimes just straight makes up things that don't compile or don't output what you want.

    • throwaway31131 8 hours ago

      I posted this example before but academic papers on algorithms often have pseudo code but no actual code.

      I thought it would be handy to use AI to produce the code from the paper, so a few months ago I tried to use Claude (not GPT, because I only have access to Claude) to recreate C++ code implementing the algorithms in this paper, as practice for me in LLM use, and it didn't go well.

      https://users.cs.duke.edu/~reif/paper/chen/graph/graph.pdf

      • threeducks 3 hours ago

        I just tried it with GPT-5.1-Codex. The compression ratio is not amazing, so not sure if it really worked, but at least it ran without errors.

        A few ideas how to make it work for you:

        1. You gave a link to a PDF, but you did not describe how you provided the content of the PDF to the model. It might only have read the text with something like pdftotext, which for this PDF results in a garbled mess. It is safer to convert the pages to PNG (e.g. with pdftoppm) and let the model read the pages as images; see the sketch after this list. A prompt like "Transcribe these pages as markdown." should be sufficient. If you cannot see what the model did, there is a chance it made things up.

        2. You used C++, but Python is much easier to write. You can tell the model to translate the code to C++ once it works in Python.

        3. Tell the model to write unit tests to verify that the individual components work as intended.

        4. Use Agent Mode and tell the model to print something and to judge whether the output is sensible, so it can debug the code.
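
        For idea 1, here is a minimal sketch of the PDF-to-PNG route, assuming poppler's pdftoppm is installed; the model call itself is stubbed out since it depends on which client or agent you use:

          import base64
          import subprocess
          from pathlib import Path

          # Render each page of the paper to a PNG at 150 dpi (needs poppler-utils).
          subprocess.run(["pdftoppm", "-png", "-r", "150", "graph.pdf", "page"], check=True)

          # Base64-encode the page images so they can be attached to a vision-capable model.
          pages = [base64.b64encode(p.read_bytes()).decode("ascii")
                   for p in sorted(Path(".").glob("page-*.png"))]

          # send_to_model() is a placeholder for whatever client you use; the prompt
          # mirrors the suggestion above.
          # send_to_model("Transcribe these pages as markdown.", images=pages)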

    • cmarschner 15 hours ago

      Completely failed for me at running the code it changed in a Docker container I keep running. Claude did it flawlessly. It absolutely rocks at code reviews, but it's terrible in comparison at generating code.

      • peab 13 hours ago

        It really depends on what kind of code. I've found it incredible for frontend dev, and for scripts. It falls apart in more complex projects and monorepos

  • CheeseFromLidl 5 hours ago

    Same experience here. The more commonly known the stuff it regurgitates is, the fewer errors. But if you venture into RF electronics or embedded land, beware of it turning into a master of bs.

    Which makes sense for something that isn't AI but an LLM.

  • logicchains 15 hours ago

    I find that for difficult math and design questions, GPT-5 tends to produce better answers than Claude and Gemini.

    • munk-a 15 hours ago

      Could you clarify what you mean by design questions? I do agree that GPT5 tends to have a better agentic dispatch style for math questions but I've found it has really struggled with data model design.

  • bsder 9 hours ago

    At this point you are now forced to use the "AI"s as code search tools--and it annoys me to no end.

    The problem is that the "AI"s can cough up code examples based upon proprietary codebases that you, as an individual, have no access to. That creates a significant quality differential between coders who only use publicly available search (Google, Github, etc.) vs those who use "AI" systems.

xnx 9 hours ago

OpenAI is in the "don't look behind the curtain" stage with both their technology and finances.

impulser_ 8 hours ago

OpenAI is the only SOTA model provider that doesn't have a cutoff date in the current year. That's why it performs badly at writing code for any new libraries, or for libraries that have had significant updates, like Svelte.

  • rvnx 2 hours ago

    State Of The Art is maybe a bit exaggerated. It's more like an early model that never really adapted, and only got watered down (smaller network, outdated information, and you cannot see thought/reasoning).

    Also their models get dumber and dumber over time.

nickff 16 hours ago

I recall reading that Google had similar 'delay' issues when crawling the web in 2000 and early 2001, but they managed to survive. That said, OpenAI seems much less differentiated (now) than Google was back then, so this may be a much riskier situation.

  • redbluered 10 hours ago

    The differentiation should be open source, nonprofit, and ethical.

    As a shady for-profit, there is none. That's the problem with this particular fraud.

    • echelon 8 hours ago

      Why is profit bad? You can be open source, ethical, and for-profit.

      • khafra 7 hours ago

        If you start out as a non-profit, and pull a bunch of shady shenanigans in order to convert to a for-profit, claiming to be ethical after that is a bit of a hard sell.

  • echelon 8 hours ago

    Google didn't raise at a $500 billion valuation.

    The 25x revenue multiple wouldn't be so bad if they weren't burning so much cash on R&D and if they actually had a moat.

    Google caught up quick, the Chinese are spinning up open source models left and right, and the world really just isn't ready to adopt AI everywhere yet. We're in the premature/awkward phase.

    They're just too early, and the AGI is just too far away.

    Doesn't look like their "advertising" idea to increase revenue is working, either.

    • shridharxp 7 hours ago

      There is no moat in selling/renting AI models. They are a commoditized product now. I can't imagine what thought process led investors to pour such money into OpenAI.

      • fzzzy 2 hours ago

        Tulip mania is a mania because it short circuits thought.

  • savrajsingh 9 hours ago

    Yes, the story was that Google hadn't rebuilt their index for something like 8 months, if I recall correctly.

jimbohn 2 hours ago

I wonder if the failures to pretrain are the result of our understanding of neural networks being more akin to alchemy than chemistry.

mikepurvis 14 hours ago

I noticed this recently when I asked it whether I should play Indiana Jones on my PS5 or PC with a 9070 XT. It assumed I had made a typo until I clarified, then it went off to the internet and came back telling me what a sick rig I have.

amluto 15 hours ago

I asked ChatGPT 5.1 to help me solve a silly installation issue with the codex command line tool (I’m not an npm user and the recommended installation method is some kludge using npm), and ChatGPT told me, with a straight face, that codex was discontinued and that I must have meant the “openai” command.

hn_throwaway_99 10 hours ago

Just a minor correction, but I think it's important because some comments here seem to be giving bad information: OpenAI's model page says that the knowledge cutoff for gpt-5 is Sept 30, 2024, https://platform.openai.com/docs/models/compare, which is later than the June 01, 2024 date of GPT-4.1.

Now I don't know if this means that OpenAI was able to add that 3 months of data to earlier models by tuning or if it was a "from scratch" pre-training run, but it has to be a substantial difference in the models.

searls 16 hours ago

Funny, it told me the same thing twice yesterday, and that was _with_ thinking + search enabled on the request (it apparently refused to carry out the search, which it does once in a blue moon).

I didn't make this connection that the training data is that old, but that would indeed augur poorly.

kristianp 5 hours ago

I doubt it's that important that their dataset of current events is up to date. At this stage, I believe private and synthetic data comprises a large fraction of pretraining. Web search substitutes for current event pretraining.

  • f311a 3 hours ago

    I tried OpenAI models for coding in Go, but they constantly say "your syntax is not correct, let me rewrite your whole file without `any`". `any` was introduced in 2022. It takes some time to adopt it in codebases, but they should not be doing stuff like that at the end of 2025.

mr_00ff00 15 hours ago

What is a pre-training run?

  • nodja 14 hours ago

    Pre-training is just training; it got the name because most models have a post-training stage, so to differentiate, people call the first stage pre-training.

    Pre-training: You train on a vast amount of data, as varied and high quality as possible. This determines the distribution the model can operate with, so LLMs are usually trained on a curated dataset of the whole internet. The output of pre-training is usually called the base model.

    Post-training: You narrow the model down to the specific behaviors you need. You can do this in several ways:

    - Supervised Finetuning (SFT): Training on a strict high quality dataset of the task you want. For example if you wanted a summarization model, you'd finetune the model on high quality text->summary pairs and the model would be able to summarize much better than the base model.

    - Reinforcement Learning (RL): You train a separate reward model that ranks outputs, use it to rate the model's outputs, and then use that data to train the model.

    - Direct Preference Optimization (DPO): You have pairs of good/bad generations and use them to align the model toward or away from the kinds of responses you want (a minimal sketch of this objective follows below).

    Post-training is what makes the models able to be easily used. The most common form is instruction tuning, which teaches the model to talk in turns, but post-training can be used for anything. E.g. if you want a translation model that always translates a certain way, or a model that knows how to use tools, etc., you'd achieve all that through post-training. Post-training is where most of the secret sauce in current models is nowadays.
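
    To make the DPO bullet concrete, here is a minimal sketch of the objective, assuming you already have summed per-response log-probs from the policy and from a frozen reference model (purely illustrative):

      import torch
      import torch.nn.functional as F

      def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
          # Each argument is the summed log-prob of a full response, shape (batch,).
          # Push the policy toward the "chosen" response and away from the "rejected"
          # one, measured relative to the frozen reference model.
          chosen_margin = policy_chosen - ref_chosen
          rejected_margin = policy_rejected - ref_rejected
          return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

      # Toy numbers: the chosen response became relatively more likely under the policy.
      loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                      torch.tensor([-13.0]), torch.tensor([-14.0]))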

    • cocogoatmain 13 hours ago

      Want to also add that the model doesn't know how to respond in a user -> assistant style conversation after its pretraining; it's a pure text predictor (look at the open-source base models).

      There’s also what is being called mid-training where the model is trained on high(er) quality traces and acts as a bridge between pre and post training

      • amypetrik8 11 minutes ago

        Just to go off of this, there is also the stochastic random overfit retraining process (SRORP). The idea behind SRORP is to avoid overfitting. SRORP will take data points from -any- aspect of the past process, with replacement, and create usually 3-9 bootstrap models randomly. The median is then taken over all model weights to wipe out outliers. This SRORP polishing - if done carefully - is usually good for a 3-4% gain in all benchmarks.

    • mrweasel 4 hours ago

      If pre-training is just training, then how on earth can OpenAI not have "a successful pre-training run"? The word successful indicates that they tried, but failed.

      It might be me misunderstanding how this works, but I assumed that the training phase was fairly reproducible. You might get different results on each run, due to changes in the input, but not massively so. If OpenAI can't continuously and reliably train new models, then they are even more overvalued than I previously assumed.

      • nodja 3 hours ago

        Because success for them doesn't mean it works, it means it works much better than what they currently have. If a 1% improvement comes at the cost of spending 10x more on training and 2x more on inference then you're failing at runs. (numbers out of ass)

        • mrweasel 2 hours ago

          That makes sense. It's not that the training didn't complete or returned a moronic model, but the capabilities have plateaued.

      • immibis 3 hours ago

        Maybe this has something to do with why they're declaring "code red".

    • fzzzy 2 hours ago

      - Reinforcement learning with verifiable rewards (RLVR): instead of using a grader model you use a domain that can be deterministically graded, such as math problems.
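
      A minimal sketch of such a verifiable reward for math answers, assuming the model is told to put its final answer after "####" (the marker is just a common convention, not any provider's actual setup):

        import re

        def math_reward(completion: str, ground_truth: str) -> float:
            # Deterministic grading: extract the final answer and compare exactly.
            # No grader model is involved, so fluent-but-wrong prose earns nothing.
            match = re.search(r"####\s*(-?[\d.,]+)", completion)
            if match is None:
                return 0.0
            return 1.0 if match.group(1).replace(",", "") == ground_truth else 0.0

        print(math_reward("...so the total is #### 42", "42"))  # 1.0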

  • abixb 15 hours ago

    The first step in building a large language model. That's when the model is initialized and trained on a huge dataset to learn patterns and whatnot. The "P" in "GPT" stands for "pre-trained."

  • bckr 15 hours ago

    That’s where they take their big pile of data and train the model to do next-token-prediction.
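
    A minimal sketch of that objective, where the "model" is just a stand-in for a real transformer; only the loss wiring is the point:

      import torch
      import torch.nn.functional as F

      vocab, batch, seq = 1000, 4, 32
      tokens = torch.randint(0, vocab, (batch, seq))     # stand-in for tokenized text
      model = torch.nn.Embedding(vocab, vocab)           # stand-in for a transformer

      # Predict token t+1 from token t, scored with cross-entropy over the vocab.
      logits = model(tokens[:, :-1])                     # (batch, seq-1, vocab)
      targets = tokens[:, 1:]
      loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))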

manmal 15 hours ago

That would explain why it’s so bad with new Swift features and more recent ast-grep rules.

mips_avatar 11 hours ago

Usually current events get taught through mid-training, so even with old pre-training current events still could be added

nextworddev 15 hours ago

Don't forget SemiAnalysis's founder Dylan Patel is supposedly roommates with Anthropic's RL tech lead Sholto...

  • nickysielicki 15 hours ago

    The fundamental problem with bubbles like this is that you get people like this who are able to take advantage of the Gell-Mann amnesia effect, except the details they're wrong about are so niche that there's a vanishingly small group of people qualified to call them out on it, and there's simultaneously so much more attention on what they say because investors and speculators are so desperate and anxious for new information.

    I followed him on Twitter. He said some very interesting things, I thought. Then he started talking about the niche of ML/AI I work near, and he was completely wrong about it. I became enlightened.

throwaway314155 15 hours ago

It has no idea what its own knowledge cutoff is.

  • octoberfranklin 14 hours ago

    Knowledge cutoff date is usually part of the system prompt.

    Helps you get useful answers like "I don't know that's too recent" when you ask questions like "who won the basketball game last night".
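
    A minimal sketch of what that looks like if you assemble the prompt yourself; the openai Python client and the exact wording are illustrative, since the production system prompts aren't public:

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      system_prompt = (
          # The cutoff line is what lets the model answer "that's too recent for me"
          # instead of guessing; the wording here is illustrative.
          "Knowledge cutoff: 2024-06. Current date: 2025-12-02. "
          "If asked about events after the cutoff, say you don't know."
      )
      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": "Who won the basketball game last night?"},
          ],
      )
      print(response.choices[0].message.content)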

    • vel0city 12 hours ago

      I opened up the Gemini app and asked:

      Who are the Dallas Stars playing tonight? Is there anyone that will be out of the lineup for the Stars? What have their recent injuries been like? Can you get the current score?

      It's response:

      The Dallas Stars are playing the New York Rangers tonight, Tuesday, December 2, 2025, at Madison Square Garden.

      Here is the information available regarding the rest of your questions:

      Out of the Lineup & Recent Injuries

      * Lian Bichsel (Defenseman): He suffered a lower-body injury (ankle) in the recent game against the Senators and is expected to miss 6-8 weeks and require surgery. His absence will be felt on the blueline.

      * Jake Oettinger (Goaltender): He is dealing with an illness and is not expected to start tonight; Casey DeSmith is projected to start in his place. Oettinger is expected to play tomorrow night in New Jersey.

      * Thomas Harley (Defenseman): He remains week-to-week with a lower-body injury and is not with the team on their current road trip.

      Other injured players who will likely be out of the lineup include Matt Duchene (Upper Body - on IR-LT, but joining the road trip), Adam Erne (Lower Body - on IR-LT), and Nils Lundkvist (Lower Body - on IR-LT).

      Current Score

      The game is in progress, and the score is currently New York Rangers 1 - 1 Dallas Stars.

      Would you like an update on the final score later, or news on a specific player's status?

      I ask it things like updates on teams all the time, and it usually pulls the right information.

      • warkdarrior 11 hours ago

        The consumer apps use RAG and traditional search to give the LLM recent information in the prompt when it answers your query. This basically bridges over the knowledge gap between the end of training and today.
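
        A minimal sketch of that bridge, with both the search backend and the model call stubbed out (web_search and call_llm are placeholders, not a real API):

          def web_search(query: str, top_k: int = 5) -> list[str]:
              # Stand-in for a real search/retrieval backend.
              return [f"[search result {i} for: {query}]" for i in range(top_k)]

          def call_llm(prompt: str) -> str:
              # Stand-in for the actual model call.
              return f"(answer grounded in {prompt.count('[search result')} snippets)"

          def answer_with_rag(question: str) -> str:
              # Retrieve fresh context, then stuff it into the prompt so the model
              # can answer about events after its training cutoff.
              context = "\n\n".join(web_search(question))
              prompt = ("Answer using only the sources below.\n\n"
                        f"Sources:\n{context}\n\nQuestion: {question}")
              return call_llm(prompt)

          print(answer_with_rag("Who are the Dallas Stars playing tonight?"))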

simianparrot 15 hours ago

Any data after that is contaminated with vast amounts of AI slop. Is anyone training on anything newer?

  • [removed] 3 hours ago
    [deleted]
  • bakugo 11 hours ago

    Both Claude 4.x and Gemini 3 go up to early 2025. Still not that recent, but not as bad as GPT.

mingodad 11 hours ago

Interestingly enough, today I found that https://huggingface.co/openai/gpt-oss-20b, when downloaded and run locally and asked:

====

when have you been built?

I’m a GPT‑4‑based model that OpenAI released on March 14 2023. The underlying training data goes up to early 2023, and the model was trained in the months leading up to that release. If you’re asking about the ChatGPT product that ships the model to users, it went live in November 2022 and has since received updates (GPT‑3.5, GPT‑4, etc.) that keep it current.

====

But when supposedly running it from https://huggingface.co/chat/models/openai/gpt-oss-20b:

====

when have you been built?

I’m a language model created by OpenAI. The current generation (GPT‑4) that powers this chat was first released in March 2023 and has been updated and fine‑tuned up through the end of 2024. My training data runs up to the beginning of June 2025, so I’m built on knowledge available up to that point.

====

And that makes me think that although https://huggingface.co/chat claims to be using the models available to the public at https://huggingface.co, it doesn't seem to be true, and I raised this question here https://huggingface.co/ggml-org/gpt-oss-20b-GGUF/discussions... , https://github.com/huggingface/inference-playground/issues/1... and https://github.com/ggml-org/llama.cpp/discussions/15396#disc... .
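
For what it's worth, a minimal sketch of the kind of local run described above, using the transformers text-generation pipeline (assumes a recent transformers install and enough GPU memory; gpt-oss may also want its recommended quantization/kernel setup):

  from transformers import pipeline

  # Downloads ~20B parameters on first use; device_map="auto" spreads it over available GPUs.
  pipe = pipeline("text-generation", model="openai/gpt-oss-20b", device_map="auto")

  messages = [{"role": "user", "content": "when have you been built?"}]
  out = pipe(messages, max_new_tokens=128)
  print(out[0]["generated_text"][-1]["content"])  # the assistant's reply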