OpenAI declares 'code red' as Google catches up in AI race
(theverge.com)
674 points by goplayoutside 20 hours ago
That looks pretty... amateurish. I can't imagine selling customers a service that doesn't even hit the third nine.
That's because you don't have anything to sell that's high enough in demand.
What are devs using to run Gemini agents in VS Code? 2.5 Pro on Cline/Roo was pretty buggy compared to Claude/GPT-4/5 (also via Cline/Roo): it kept getting stuck in loops outputting repeated text, had lots of editing issues, and was much, much worse than Claude Code or Codex. Has it gotten better? Is there a better way of using Gemini in VS Code?
Listen, I just had to go through numerous prompt cycles to 'prove' to 5.1 that we had a new Pope. ChatGPT was dead set that I was reading 'unreliable sources'. The data is _old_.
This sounds like the wrong move: focusing on the product layer and counter-positioning on ads is the way to beat G.
Most discussion focused on capabilities. But I wonder: does OpenAI's "make an even bigger and costlier model" strategy even work in the long term? They are already losing money at the current size. Unless we have some breakthrough in chip efficiency (which doesn't seem likely for now), they are only going to lose even more.
I can just imagine Sam Altman's own chats with ChatGPT.
ChatGPT: "I have created a moat and future proofed the business. Investors should now be satisfied."
Sam: "You aren't AGI yet and don't make us enough money"
ChatGPT: "You're right. I'm terribly sorry. I'll double investment in R&D and scale up the infrastructure, and that will keep the investors at bay _seahorse-emoji_, _pink-dolphin-emoji_. Here's why this works..."
Is anyone actually getting good results out of GPT Pro? For coding problems, GPT Thinking seems faster and more accurate. Pro has actually given me some very dumb answers, totally misunderstanding the question. Once I asked it to design a reverse osmosis system for our home, and it suggested a 7k system that can produce 400 liters per minute, even though I explicitly told it that a couple of liters per minute would suffice.
ChatGPT seems like a huge distraction for OpenAI if their goal is transformative AI
IMO: the largest value creation from AGI won’t come from building a better shopping or travel assistant. The real pot of gold is in workflow / labor automation but obviously they can’t admit that openly.
OpenAI is toast. Google has a model advantage, hardware advantage (TPUs), and business advantage (I hear they are good at selling ads).
It is all physics from here.
In one of the Indian movies, there is a rather funny line that goes like this "tu jiss school se padh kar aaya hai mein uss school ka headmaster hoon". It would translate like this "The school from which you studied and came? I am the principal of that school". Looks like Google is about to show who the true principal is
AI creates the possibility to disrupt existing power structures - this is the only reason it gathers so much focus. If it were merely a tool that increased efficiency of work, few would care so much. We already frequently get such tools, and they draw far less attention.
So far all it has done is entrench existing power structures by dis-empowering people who are struggling the most in current economic conditions. How exactly do you suppose that's going to change in the future if currently it's simply making the rich richer & the poor poorer?
What will it do to Jony Ive’s legacy if his OpenAI device is no more successful than Snapchat’s foray into hardware?
If OpenAI becomes an also-ran by the time the hardware is released, this seems like a real possibility no matter how well-designed it is.
> What will it do to Jony Ive’s legacy if his OpenAI device is no more successful than Snapchat’s foray into hardware?
Well, in my opinion his legacy is already pretty tarnished by his last few years at Apple, his Love From company, and his partnership with OpenAI. If he somehow knocks it out of the park with OpenAI (something I don’t think will happen nor do I want it to) then maybe he can redeem himself a little bit but, again IMHO, he is already about as low as he can go. Whatever respect I had left for him vanished after the OpenAI/IO announcement video.
Not sure what you mean. His legacy to date is ruining the iphone because he couldn’t think of anything to do beyond “thinner”.
If OpenAI is smart here, they would figure out that you can make more money on a flop than with a hit. I bet an AI would figure that out.
Isn't MSFT the one screwed here? Who is on the line to provide more compute for them?
Code red?
Altman should know better. This sends terrible signals to employees, stakeholders and customers.
You don’t solve quality problems by scrambling teams and increasing pressure.
This reeks of terrible management. I can imagine Stanford graduates grinding it past midnight for "the mission". If any of you are reading this: don't do it. Altman is screwing you over. There are plenty of other places that won't code-red your Christmas season while having hundreds of billions of dollars in cash.
History doesn't always repeat... but it sure as hell rhymes.
When I was playing poker for a living there was a spreadsheet meme. There was always some guy who was losing consistently but declared that everything would change from tomorrow, because he had now made a spreadsheet with an exact plan going forward. The spreadsheet usually contained general things like 8 hours of sleep, healthy food, "be disciplined", "study the game for 2 hours a day", etc.
Of course it never worked because if he knew what he should be doing he would be doing it already instead of hoping for spreadsheet magic to change the course.
>>There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.
Sam Altman clearly didn't get the memo.
The fate of OpenAI is effectively sealed - it will go bankrupt and the scraps will get absorbed by Microsoft, for further enshittification. Not necessarily the "end" of AI, but enjoy your account while it's useful.
The problem is, there is a whole ecosystem of businesses operating as OpenAI API wrappers, and those are gonna get screeeeewed.
This is the system working.
Competition is all you need.
Googling OPAI.PVT brings me to https://finance.yahoo.com/quote/OPAI.PVT , which has links to equityzen and forgeglobal. How accurate are those valuations though?
It’s funny because it wasn’t long ago that OpenAI was telling everyone else it’s game over.
I’ve preferred Claude over ChatGPT for over a year so not sure what he’s on about.
Google is too big to fail. It's the backbone of the Internet. Just YouTube is synonymous with online video.
I have the research to win the race. These people are masters of the fog.
Related:
TPUs vs. GPUs and why Google is positioned to win AI race in the long term
https://news.ycombinator.com/item?id=46069048
Google, Nvidia, and OpenAI
We are in a pretty amazing situation. If you're willing to go down 10% in benchmark scores, you can easily cut your costs to roughly 25%. And now DeepSeek 3.2 is another shot across the bow.
But if ML, if SOTA intelligence, becomes basically a price war, won't that mean that Google (and OpenAI and Microsoft and any other big-model provider) lose big? Especially Google, since the margin that even Google Cloud (famously a lot lower than Google's other businesses) requires to survive has got to be sizeable.
Google trains its own AI with TPUs, which are designed in house. Google doesn't have to pay retail rates for Nvidia GPUs like the other hyperscalers in the AI rat race. Therefore, Google trains its AI more cheaply than everyone else. I think everyone else "loses big" other than Google.
But ... I don't understand why this is supposedly such a big deal. Look into it, do the math, and a very different picture emerges: nVidia reportedly makes about 70% margin on their sales (that's over COGS; in other words, nVidia still pays about $1400 for chips and memory to produce a $4500 RTX 5090 card, and that cost is rising fast).
When you include research for current and future cards, that margin drops to 55-60%.
When you include everything on their cash flow statement it drops to about 50%.
And this is disregarding what Michael Burry pointed out: you really should subtract their stock dilution which is due to stock-based compensation, or about 0.2% of 4.6 trillion dollars per year. Michael Burry's point is of course that this makes for slightly negative shareholders' equity, ie. brings the margin to just under 0, which is mathematically true. But for this argument let's very generously say it eats about another 10% out of that margin. As opposed to the 50% it mathematically eats.
Google and Amazon will have to be less efficient than nVidia, because they're making up ground. Let's very generously say that's another 10%, maybe 20%.
So really, for Google making their own chips saves them at best 30% to 40% on the price, generously. And let's again ignore that Google's claim is that they're 30% to 50% less efficient than nVidia chips, which for large training runs translates directly to dollars.
So for Google, TPUs are just about revenue neutral. It probably allows them to have more chips, more compute than they'd otherwise have, but it doesn't save them money over buying nVidia chips. Frankly, this conclusion sounds "very Google" to me.
It's exactly the sort of thing I'd expect Google to do. VERY impressive technical accomplishment ... but can be criticized for being beside the point. It doesn't actually matter. As an engineer I applaud that they do it, please keep doing it, but it's not building a moat, not building revenue or profit, so the finance guy in me is screaming "WHY????????"
At best, for Google, TPUs mean certainty of supply, relative to nVidia (whereas supplier contracts could build certainty of supply down the chain)
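For anyone who wants to follow the arithmetic, here is the same chain of deductions as a quick back-of-envelope script. It's a sketch only: every number in it is the generous guess from this comment, not an audited figure.

    # Back-of-envelope sketch of the deductions above; every number is a rough,
    # "generous" guess from this comment, not an audited figure.
    margin = 0.70      # reported nVidia margin over COGS
    margin -= 0.125    # R&D for current and future cards -> ~55-60%
    margin -= 0.075    # rest of the cash flow statement -> ~50%
    margin -= 0.10     # generous allowance for stock-based compensation
    print(f"nVidia's 'real' margin under these assumptions: ~{margin:.0%}")  # ~40%

    # Google/Amazon playing catch-up is assumed to cost another 10%, maybe 20%,
    # so the best-case saving from building your own chips lands around 30-40%,
    # and lower if the catch-up penalty is taken at the high end.
    for penalty in (0.0, 0.10, 0.20):
        print(f"catch-up penalty {penalty:.0%} -> saving ~{margin - penalty:.0%}")

    # And if TPUs really are 30-50% less efficient per chip on big training runs,
    # most of that saving evaporates -> roughly revenue neutral.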
OpenAI fragmented into multiple companies that are now competing against them. OpenAI is buying compute and data.
Meanwhile, Google consolidated their AI operations under Google Deepmind and doubled down on TPUs.
The strategy "solve AGI and then solve everything else" is an all-in gamble that somehow AGI is within reach. This is not true.
Google fragmented into multiple competing companies as well; that's where OpenAI itself came from. The problem is that even after shedding employees into all these startups or established competitors trying to catch up, Google has way more people, money, and compute to throw at things and see what works than the rest of the industry. It's demoralizing and tempting for people to go back, which is also demoralizing.
To be honest, this is the first month in almost a year when I didn't pay for ChatGPT Pro and instead went for Gemini Ultra. It's still not there for programming, where I use Claude Max, but for my 'daily driver' (count this, advice on that, 'is this cancer or just a headache' kind of thing), Gemini has finally surpassed ChatGPT for me. And I used to consider it to be the worst of the bunch.
I used to consider Gemini the worst of the bunch, it constantly refused to help me in the past, but not only has it improved, ChatGPT seems to have gone down the 'nerfing' road where it now flat out refuses to do what I ask it to do quite often.
> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.
It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded. Throw in a knee-jerk solution of a "daily call" (sound familiar?) for people already wading knee-deep through work and you have a perfect storm of terrible working conditions. My money is on Google, who in my opinion have not only caught up but surpassed OpenAI with the latest iteration of their AI offerings.
Besides, can't they just allocate more ChatGPT instances to accelerating their development?
> It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded.
A lot of advice is that way, which is why it is advice. If following it were easy everyone would just do it all the time, but if it's hard or there are temptations in the other direction, it has to be endlessly repeated.
Plus, there are always those special-snowflake guys who are "that's good advice for you, but for me it's different!"
Also, it wouldn't surprise me if Sam Altman's talents aren't in management or successfully running a large organization, but in Machiavellian manipulation and maneuvering.
Not exactly. Infra will win the race. In this aspect, Google is miles ahead of the competition. Their DC solutions scale very well. Their only risk is that the hardware and low level software stack is EXTREMELY custom. They don't even fully leverage OCP. Having said that, this has never been a major problem for Google over their 20+ years of moving away from OTS parts.
Amazing how the bubble pops from the technology either being too simple or being too complex to make a profit.
You can, but then your model of the world will be less accurate.
They are paid exceptionally well though, way above what the market rate for their skill set was at any point in history. Work long hours for a few years and enjoy freedom for the rest of your life. That's a deal a lot of people would take. No need to feel sorry for the ones in a position to actually get the choice.
They have a stated goal of an AI researcher by 2028. Several years away.
> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.
Truly brilliant software development management going on here. Daily update meetings and temporary staff transfers. Well-known strategies for increasing velocity! Don't forget scuttling all the projects the staff has been working overtime to complete so that they can focus on "make it better!" *waves hands frantically*
"The results of this quarter were already baked in a couple of quarters ago"
- Jeff Bezos
Quite right tbh.
I've had ideas for how to improve all the different chatbots for like 3 years, and nobody has implemented any of them (usually my ideas get implemented in software, as if the devs somehow read my mind, but AI seems to be stuck with the same UI for LLMs). None of these AI shops are run by people with vision, it feels like. Everyone's just remaking a slightly better version of SmarterChild.
I really want a UI that visualises branching. I would like to branch out of specific parts of the responses and continue the conversation there, while also keeping the original conversation. This seems like it should be a very standard feature, but no one has built it.
Would require something like snapshotting context windows, but I agree, something like this would be nice.
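For what it's worth, here's a minimal sketch of one way that could work (purely illustrative, not any product's actual API): each message is a node, a branch is just a fork from an earlier node, and the "context window snapshot" falls out of walking back to the root.

    # Illustrative sketch only: a tree of messages where any node can be forked.
    from dataclasses import dataclass, field
    from typing import Optional, List, Tuple

    @dataclass
    class Msg:
        role: str                      # "user" or "assistant"
        text: str
        parent: Optional["Msg"] = None
        children: List["Msg"] = field(default_factory=list)

        def reply(self, role: str, text: str) -> "Msg":
            child = Msg(role, text, parent=self)
            self.children.append(child)
            return child

        def context(self) -> List[Tuple[str, str]]:
            # Rebuild the context window for this branch by walking to the root.
            path, node = [], self
            while node is not None:
                path.append((node.role, node.text))
                node = node.parent
            return list(reversed(path))

    root = Msg("user", "Summarize this paper")
    answer = root.reply("assistant", "It argues that ...")
    deeper = answer.reply("user", "Go deeper on section 3")      # original thread
    fork = answer.reply("user", "Rewrite it for a 10-year-old")  # branch from the same reply
    print(fork.context())   # includes root and answer, but not the "deeper" branch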
Oh man, I hadn’t thought of SmarterChild in dog’s years! It was an early AIM chatbot, and felt like magic at the time. Looking back it feels like there’s a clear through-line from it (and the rest of ActiveBuddy’s menagerie) to the ChatGPTs of the world today…
For today’s lucky 10,000, here’s a Vice retrospective from 2016:
I'm not giving any of these people my ideas for free. Though I did think of making my own UI for some of these services at some point.
i agree - it shows a remarkable lack of creativity that we're still stuck with a fairly subpar UX for interacting with these tools
It's easy to dismiss it, but what would you do instead?
What if they make 2 daily calls, that would surely improve the velocity by 2 times!
I think most people are aligned on AI being in a bubble right now with the disagreement being over which companies (if any) will weather the storm through the burst and come out profitable on the far side.
OpenAI, imo, is absolutely going to crash and burn - it has absolutely underwhelming revenue and model performance compared to others and has made astronomical expenditure commitments. It's very possible that a government bailout partially covers those debts but the chance of the company surviving the burst when it has dug such a deep hole seems slim to none.
I am genuinely surprised that generally fiscally conservative and grounded people like Jensen are still accepting any of that crash risk.
Jensen cashed out on a billion dollars. Why would he even care anymore at this point?
What do you mean, "catches up"?
Gemini has been as good as GPT for more than a year
OpenAI still somehow gets the edge on the initial veneer of hype, and that's running thin
Conspiracy time.
>be Google
>watch regulators circle like vultures
>realize antitrust heat is rising faster than stock buybacks can hide
>notice a small lab called OpenAI making exotic tech and attracting political fascination
>calculate that nothing freezes regulators like an unpredictable new frontier
>decide to treat OpenAI as an accidental firebreak
>let them sprint ahead unchecked
>watch lawmakers panic about hypothetical robot uprisings instead of market concentration
>antitrust hearings shift from “break up the giants” to “what is AGI and should we fear it”
>Google emerges looking ancient, harmless, almost quaint
>pressure dissipates
>execute phase two: acceleration
>roll out model updates in compressed cycles
>flood the web with AI-powered services
>redefine “the internet” as “whatever Google’s infrastructure indexes”
>regulators exhausted from chasing OpenAI’s shadow
>Google walks back onto the throne, not by hiding power, but by reframing it as inevitability
>conspiracy theorists argue whether this was 5D chess or simple opportunism
>Google search trends spike for “how did this happen”
>the answer sits in plain sight:
>attention is all you need
It's a fun idea but there's ample public reporting about how Google reacted to the rise of ChatGPT. There is reporting that Google was taken by surprise. You can be skeptical of that, but that's what the reporting says. ChatGPT went viral in Nov/Dec 2022, and by February or March Google was scrambling to stand up Bard as a viable competitor.
https://web.archive.org/web/20221221100606/https://www.nytim...
https://web.archive.org/web/20230512133437/https://www.theve...
There is enough proof that they had a quite competitive chatbot internally which was not pushed out because of all these fears. It seems they were always confident that they could catch up, and scaling laws were their internal defense.
What neither of them might have expected, though, is Chinese labs catching up so fast.
China releasing open models only helps the big companies make more efficient inference.
Maybe they don't realize that the money will be in inference compute and that there is limited applicability for low-FLOPs inference.
I.e., all the breakthroughs they share for free will immediately improve the profitability of AI compute clusters.
Not sure why people think otherwise.
This is one conspiracy theory I've actually considered. Google waited until the Chrome outcome to come out swinging.
OpenAI is at risk of complete collapse if it cannot fulfill its financial obligations. If the people willing to give them money no longer have faith in their ability to win the AI race, then they're going out of business.
Exactly. They aren't going to win the AI race chasing rabbits at the expense of long-term goals. We're 3 years into a 10-year build-out. OpenAI and its financiers are too impatient, clearly, and they're fucking themselves. OpenAI doesn't need to double its revenue to meet expectations; they need to 50x their revenue to meet expectations. That's not the kind of problem you solve by working through the weekend.
I cannot imagine how they are going to be able to meet their obligations at this point unless they pull off a massive hail mary, via a bailout or finding someone to provide tens of billions of dollars in funding.
You can't make a baby in 1 month with 9 women, Sam.
The world needs OpenAI- and Anthropic-like startups to drive AI forward. Imagine if only Google, Meta, MS, and AWS had these capabilities. On one hand, they would never be able to do it all themselves; on the other hand, it would be monopolistic. We need more AI startups, not monopolies.
"We’re currently experiencing issues" https://status.openai.com/