OpenAI Grove
(openai.com) | 95 points by manveerc 10 hours ago
The number one worst thing they've done was when Sam tried to get the US government to regulate AI so only a handful of companies could pursue research. They wanted to protect their moat.
What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.
I'd expect to see a balance though, at least on the notion that people would be attracted to posting on a YC forum over other forums due to them supporting or having an interest in YC.
Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.
I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?
Big tech (not just AI companies) have been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.
Regardless of where you stand on the concept of copyright law, it is an indisputable fact that in order for these companies to get to where they are today - they deliberately HOOVERED up terabytes of copyrighted materials without the consent or even knowledge of the original authors.
True, though it seems most people on HN think AGI is impossible thus would consider OpenAI's quest a lost cause.
They released a near-SOTA open source model not too long ago
I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.
I’d do some self reflection and ask yourself why you need to carry water for them.
I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.
Can someone give the counter argument to my initial cynical read of this? That read being: OpenAI has more money than it can invest productively within its own company and is trying to cast a net to find new product ideas via an incubator? I can't imagine SoftBank or Microsoft is happy about their money being funneled into something like this, and it implies they have run out of ideas internally. But I think I'm probably being too reflexively cynical.
I think that MIT study finding 95% of internal AI projects fail has scared off a lot of corporations from risking time on it. I think they also see they are hitting a limit of profitable intelligence from their services (with the growth in intelligence over the past 6–8 months being more realistic, not the unbelievable leaps of the past few years).
I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft and its 'developers, developers, developers' target audience.)
I think OpenAI sees it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network to make sure there is a solid ecosystem of AI options that corporations and people can use.
The MIT study found 90% of workers were regularly using LLMs.
The gap was that workers were using their own implementation instead of the company's implementation.
The MIT study as released also does not really provide any support for the 95% failure rate claim. Until we have more details, we really don't know where that number came from:
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
Yea, from what I understand, chats and AI coding are things where they already have market domination/are a leader, and are a good/okay product. It's the other use cases they haven't delivered on, in terms of other companies using them as a platform to deliver AI apps, which I would imagine was a huge vertical in their pitches to investors and internal plans.
These third-party apps drive huge token usage with agentic patterns. So losing out on them and being forced to make more internal products tuned to specific use cases is not something they want to build out or explore.
I think it’s more like OpenAI has the name to throw around and a lot of credibility, but no products that are profitable. They are burning cash and need to show a curve by which they can reach profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot.
Without putting my weight behind them, here's some counterarguments:
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be acquihired, some will just give up and try to join OpenAI anyway
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
It seems almost like... an internship program for would-be AI founders?
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
We don't invest in ideas, we invest in founders. That's why OpenAI partnered with Y Combinator to bring you investments at the pre-founder stage.
We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.
Feels like the next logical move to me: they need to build and grow the demand for their product and API.
What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.
This feels like a program to see what sticks.
Isn't that how we got (and eventually lost) most Google products?
There’s a difference between having product ideas rooted in compelling hypotheses on the one hand, and, on the other, throwing random ideas against a wall to see what sticks.
I suspect, but could be wrong, that in OpenAI’s case it is because they believed they would reach AGI imminently and then “all problems are solved” - in other words, the ultimate product. However, since that isn’t going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.
Was forced to choose between OpenAI and YC by Paul Graham and Jessica. Sama chose OpenAI.
The really odd thing was when he got fired for like 3 days in 2023 because he refused to let Y Combinator have preferential representation of its startups in OpenAI models.
Indeed.
Exactly what I read between the lines on this.
I ask questions like that in my head all the time. My metric is: once their AI is smart enough to make their website not throw up an error half the time, I'll more deeply consider any AGI claims.
In 10 years, people will apply for jobs for their children before conception, and wisely not have kids if they can’t line one up (at least as a backup.)
It looks like application submission isn't functioning.
I first misread it as "OpenAI Grave" where someone would put the list of all discontinued models.
I don't know man?
To me, it sounded like, "let's find all the idea guys who can't afford a tech founder. Then we'll see which ones have the best ideas, and move forward with those. As a bonus, we'll know exactly where we'd be able to acquihire a product manager for it!"
I mean, I get it.
I'm highly capable of building some great things, but at my day job I'm filled to the brim with things to do and a never-ending list of tasks in front of me.
I've built cool stuff before, and if given a little push and some support could probably come up with something useful - and I can implement much of it myself.
Put me in the room with cool people, throw out some conversation starters, shake it up and I'll come up with something.
Do you have to be in the US or can they help to get in?
Looks like they want to build up and support middlemen to do the apps more than doing them themselves, and act more like a platform or operating system. Which makes sense: giant corporations are reporting 95% failed AI projects, and the core success cases are specialist companies tuning the platform to a specific problem. Then there are a ton of snake-oil AI apps that over-promise and under-deliver, hurting the image of AI's usefulness.
This is probably purely a pivot in market strategy toward profitability, aimed at increasing token usage and consumer/public trust, more than farming ideas for internal projects.
It's clearly a talent grab. Where talent = creativity.
Most will submit the application with dime-a-dozen ideas. (Or, at internet scale, a dime a few hundred thousand, I guess?) No need to even consider those guys.
But it will be a pyramid. There will likely be 20-30 submissions that are at once, truly novel, and "why didn't I think of that!"-type ideas.
Finally, a handful of the submissions will be groundbreaking.
Et voilà. Right there you've identified the guys and gals thinking outside the LLM box about LLMs. Or even AI in general.
Capitalists can't solve problems, they can seek out rent and put meters on things. These "builders" and "innovators" are the reason the web you dearly miss is dead.
The entire internet is now structured to sell to you: premium subscriptions for simple things that aren't technical problems, but are instead artificial complexity to monetize your every move. They profit from the fact that it's artificially difficult to host your own data, sync your own devices, or connect to each other without an intermediary.
All of this becomes worse with AI stratifying hardware power again. AI is great, but in the hands of American capitalists it's pearls before swine.
The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
The world really benefits from well funded institutions doing research and development. Medicine has also largely advanced due in part to this.
What’s lost is the recapture. I don’t think governments are typically the best candidate to bring a new technology to marketable applications, but I do think they should be able to force terms of licensure and royalties. Keeping both those costs predictable and flat across industry would drive even more innovation to market.
What happens instead is private entities take public research and capture it almost entirely in as few hands as possible.
In short, the loss of civic pride and shared responsibility to society has created the nickel-and-dime-you-to-death capitalism we are seeing on the rise today: externalize every cost possible and capture as much profit as possible, with no thought to second-order effects, or to how the very system being dodged gave people the ability to so grossly take advantage of it in the first place.
> The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
^ This is the secret sauce. For decades the arrangement was exactly that: defense projects would create new technologies, then once those were finished, they were handed to private industry to figure out how to make a $20,000 MIL-spec LCD screen cheap enough and in vast enough quantities that you can buy 3 of them for less than $1,000 while the manufacturer, distributor, and retailer make a solid profit each. That's not an easy thing to do and it's what corporations have historically been good at. And it makes things better for the defense industry too, because they can then apply those lessons to their own hardware where appropriate. Win/win.
But we don't fund research anymore, or at least not that sort of it. Or perhaps there's just not much else to find. I think it's a bit of both. But in any case nothing new is getting made which is why technology feels so dull right now. The most innovative products right now are just thinner, dumber, lighter versions of things we already have, and that's not nothing but it isn't very interesting either.
Labor, FOSS... can you not imagine anything besides wealthy people creating artificial scarcity to force others to work for them?
Edit: if you don't think this is true, look at the history of truly any country and see what happens when subsistence farmers and indigenous communities refuse to work for capitalists
BRB, waiting for capitalists to solve the housing and healthcare crisis, shouldn't be long...
Capitalists would be over the moon if they could build more housing, I assure you.
I mean they already solved that, they're raking in even more billions. The only issue was their solution was for them, not us.
Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
It seems that there is a constant motive on this forum to view any decision made by any big AI company at best with extreme cynicism and at worst with virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.