Comment by atleastoptimal 3 hours ago
What are the worst things OpenAI has done
It's actually worse than that.
First, when they thought they had a big lead, OpenAI argued for AI regulations (aiming for regulatory capture).
Then, when that lead evaporated thanks to Anthropic and others, OpenAI argued against AI regulations (so they could catch up, and presumably argue for regulations again).
Do you believe AI should not be regulated?
Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google before smaller upstarts.
They released a near-SOTA open-source model recently.
Their prerogative is to make money via closed-source offerings so they can afford safety work and their open-source releases. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models while giving everything away for free when its competitor is Google, a huge for-profit company.
As for your claim that they are scammy, what about them is scammy?
Their contribution to open source and open research is far behind that of other organisations like Meta and Mistral, as welcome as their recent model release is. Former safety researchers like Jan Leike commonly cite a lack of organisational focus on safety as a reason for leaving.
Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.
Oh yeah, that reminds me. The company did research on how to train a model that games the metrics, allowing them to tick the open-source box with a seemingly good score while releasing something that serves no real purpose. [1] [2]
GPT-OSS is not a near-state-of-the-art model: it was deliberately trained so that it looks great in evaluations but is unusable in practice and far underperforms actual open-weight models like those you can run via Ollama. That's scammy.
[1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...
[2] https://huggingface.co/openai/gpt-oss-20b/discussions/14
That explains why gpt-oss wasn't working anywhere near as well for me as other similarly sized and smaller models. gemma3 27b, 12b, and phi4 (14b?) all significantly outperformed it when transforming unstructured data into structured data.
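For anyone who wants to reproduce this kind of comparison, here's a minimal sketch against Ollama's REST API. It assumes Ollama is running locally on its default port with the models already pulled; the extraction prompt and model tags are illustrative, not what the commenter actually used:

```python
import json
import urllib.request

# Hypothetical extraction task: pull structured fields out of free-form
# text using a local model served by Ollama (default: localhost:11434).
PROMPT = (
    "Extract the name, date, and amount from this text as JSON with "
    'keys "name", "date", "amount":\n\n'
    "Acme Corp invoiced $1,200 to Jane Doe on 2024-03-15."
)

def extract(model: str) -> dict:
    payload = json.dumps({
        "model": model,
        "prompt": PROMPT,
        "format": "json",   # ask Ollama to constrain output to valid JSON
        "stream": False,    # return one complete response, not a stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The generated text lives in the "response" field; parse it as JSON.
    return json.loads(body["response"])

# Run the same extraction across the models mentioned above and eyeball
# which ones return clean, correct structures.
for model in ("gpt-oss:20b", "gemma3:12b", "phi4:14b"):
    print(model, extract(model))
```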
The worst thing they've done was when Sam tried to get the US government to regulate AI so that only a handful of companies could pursue research. They wanted to protect their moat.
What's even scarier is that if they actually had the direct line of sight to AGI they claimed, many businesses and lines of work would have been immediately replaced by OpenAI. They knew this and wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait-and-see approach.