Comment by mvkel 9 hours ago

It's not a rumor; it's confirmed by OpenAI. All "models" since 4o are actually just optimizations in prompting plus a new routing engine. The actual *model* you are using with 5.1 is 4. Nothing has been pre-trained from scratch since 4o.

Their own press releases confirm this: they call 5 their best new "AI system", not a new model.

https://openai.com/index/introducing-gpt-5/

krackers 4 hours ago

I can believe this; DeepSeek V3.2 shows that you can get close to "GPT-5" performance with a GPT-4-level base model just with sufficient post-training.

Davidzheng 8 hours ago

I don't think that counts as confirmation. We know 4.5 was a new base model, and I find it very unlikely that the base model of 4 (or 4o) is in GPT-5. Also, 4o is a different base model from 4, right? It's multimodal, etc. Pretty sure people have leaked sizes, and I don't think they match up.

staticman2 7 hours ago

"New AI system" doesn't preclude new models. I thought that when GPT-5 launched and users hated it, the speculation was that GPT-5 was a cost-cutting release, and that the routing engine was routing to smaller, specialized, dumber models that cost less at inference?

It certainly was much dumber than 4o on Perplexity when I tried it.

  • vidarh 2 hours ago

    > and the routing engine was routing to smaller, specialized dumber models that cost less on inference?

    That this was part of it was stated outright in their launch announcement, except maybe that they "cost less", which was left for you to infer (sorry).

    Paying for Pro and setting it to thinking all the time, I saw what seemed like significant improvements; but if your requests got (mis-)routed to one of the dumber models, it's not surprising that you were disappointed.

    I think they made a big mistake in not clearly labelling each response with which model produced it, as it made people complain about GPT-5 in general instead of complaining about the routing.
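The routing setup described above can be sketched in a few lines. This is purely illustrative; OpenAI has not published how its router works, and every name, heuristic, and model label here is invented. It shows the two ideas in the comment: dispatching a request to a cheaper or more capable model, and tagging the response with the model that actually served it.

```python
# Hypothetical sketch of a model-routing layer. All names and the
# difficulty heuristic are invented; a real router would use a learned
# classifier and actual model APIs.

def classify(prompt: str) -> str:
    """Crude difficulty heuristic: long or proof-like prompts count as hard."""
    return "hard" if len(prompt) > 200 or "prove" in prompt.lower() else "easy"

MODELS = {
    "easy": "small-fast-model",      # cheaper inference
    "hard": "large-thinking-model",  # slower, more capable
}

def route(prompt: str) -> dict:
    tier = classify(prompt)
    model = MODELS[tier]
    answer = f"[{model} would answer here]"  # placeholder for a real API call
    # Tagging the response with the serving model is the labelling vidarh
    # wants: users could then blame the routing, not "GPT-5" as a whole.
    return {"model": model, "answer": answer}

print(route("What is 2+2?")["model"])  # small-fast-model
```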

m3kw9 9 hours ago

Well then, 5.x is pretty impressive.

Forgeties79 9 hours ago

Maybe this is just armchair BS on my part, but it seems to me that the proliferation of AI spam and the general carpet-bombing of low-effort SEO fodder would make a lot of info online from the last few years totally worthless as training data.

Hardly a hot take; people have theorized about the ouroboros effect for years now. But I do wonder if that's part of the problem.

  • irthomasthomas 2 hours ago

    Gemini 3 has a similar 2024 cutoff and they claim to have trained it from scratch. I wish they would say more about that.