Comment by Roark66
I'm especially annoyed that this is most likely intentional.
(Not at all.) OpenAI saw they were falling behind their competitors (GPT-5 and 5.1 were progressively worse for my use case, actual problem solving and tweaking existing scripts, while Claude and Sonnet were miles ahead; I used GPT only because of the lower price). Now not only have open-weights models like Qwen3 and Kimi K2 exceeded their capability, you can run them at home if you have the hardware, or for peanuts on a variety of providers. Cheaper hardware like Strix Halo (and Nvidia's DGX) has made 128GB of VRAM achievable for enthusiasts. And Google is eating their lunch with Gemini.
All while their CFO starts talking about the government bailing them out of spending they cannot possibly fund.
Of course they will attempt to blow up the entire hardware market, so that if their AI flops they will at least be able to rent out hardware like AWS does.
Small correction: Strix has at most 96GB available for the GPU. But that's still plenty.
But yeah, both AMD and Intel are also pushing built-in NPUs in their higher-end offerings, so there is a very good chance that a good portion of AI will be happening closer and closer to users.