Comment by atleastoptimal 5 hours ago

39 replies

Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

It seems that there is a constant impulse on this forum to view any decision made by any big AI company at best with extreme cynicism and at worst with virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.

PostOnce 4 hours ago

People remember things, and consistently behaving like an asshole gets you treated like an asshole.

OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.

  • atleastoptimal 3 hours ago

    What are the worst things OpenAI has done?

    • echelon 3 hours ago

      The number one worst thing they've done was when Sam tried to get the US government to regulate AI so only a handful of companies could pursue research. They wanted to protect their moat.

      What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.

      Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.

      • g42gregory 2 hours ago

        It's actually worse than that.

        First, when they thought they had a big lead, OpenAI argued for AI regulations (targeting regulatory capture).

        Then, when that lead evaporated thanks to Anthropic and others, OpenAI argued against AI regulations (so that they could catch up, and presumably argue for regulations again).

      • atleastoptimal 3 hours ago

        Do you believe AI should not be regulated?

        Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google before smaller upstarts would be affected.

        • moregrist 2 hours ago

          I think you can both think there's a need for some regulation and also want to avoid regulation that effectively locks out competition. When only one company is pushing for regulation, it's a good bet that they see this as a competitive advantage.

    • Lionga 3 hours ago

      Dude, they completely betrayed everything in their "mission". The irony of the name OpenAI for a closed, scammy, for-profit company cannot be lost on you.

      • atleastoptimal 3 hours ago

        They released a near-SOTA open-source model recently.

        Their strategy is to make money via closed-source offerings so they can afford safety work and their open-source releases. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models while giving everything away for free when its competitor is Google, a huge for-profit company.

        As per your claim that they are scammy, what about them is scammy?

bitpush 4 hours ago

> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

Isn't that a good thing? The comments here are not sponsored, nor endorsed by YC.

  • atleastoptimal 4 hours ago

    I'd expect to see a balance though, since people would presumably be attracted to posting on a YC forum over other forums because they support, or have an interest in, YC.

    • bhhaskin 4 hours ago

      I think the majority of people don't care about YC. It just happens to be the most popular tech forum.

    • MegaButts 4 hours ago

      Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.

      • tptacek 3 hours ago

        Sam Altman wasn't fired from YC.

dcreater 3 hours ago

My takeaway is actually the opposite: major props to YC for allowing this unfettered free speech. I can't think of any other organization or country on the planet where such a free setup exists.

cootsnuck 34 minutes ago

I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice sounding ideas that require everyone to just trust in their sincerity despite prior evidence.

Hadriel 4 hours ago

Why do you assume that a forum run by X needs to or should support X? And why is it unwise - from what metrics do you measure wisdom?

peishang 3 hours ago

I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?

Big tech companies (not just AI companies) have been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.

Regardless of where you stand on the concept of copyright law, it is an indisputable fact that, in order to get where they are today, these companies deliberately HOOVERED up terabytes of copyrighted material without the consent or even knowledge of the original authors.

  • austhrow743 38 minutes ago

    I don’t think anyone’s disputing that these companies are evil or that when they’re changing the world it’s generally for the worse.

    The question is: why are people who have a problem with that hanging out at evil-technologists-making-the-world-a-worse-place-for-money HQ?

makk 3 hours ago

These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.

  • atleastoptimal 3 hours ago

    True, though it seems most people on HN think AGI is impossible and thus would consider OpenAI's quest a lost cause.

    • xpe 2 hours ago

      I don’t think one can validly draw any such conclusion.

typon 4 hours ago

When you call yourself "Open"AI and then turn around and backstab the entire open community, it's pretty hard to recover from that.

  • xpe 2 hours ago

    They undermined their not-for-profit mission by changing their governance structure. This changed their very DNA.

  • atleastoptimal 3 hours ago

    They released a near-SOTA open source model not too long ago

    • xpe 2 hours ago

      open weights != open source

mrcwinn 3 hours ago

This. I’ve been on HN for a while, and I am barely hanging on to this community. It is near-constant negativity and the questioning of every potential motive.

Skepticism is healthy. Cynicism is exhausting.

Thank you for posting this.

  • dcreater 3 hours ago

    In the current echo chamber and unprecedented hype, I'll take cynicism over hollow positivity and sycophancy

dyauspitr 3 hours ago

People here are directly in the line of fire for their jobs. It’s not surprising.

  • chrishare 2 hours ago

    True, but there are many reasons besides. Meta and Anthropic attract less criticism for a reason.

theideaofcoffee 3 hours ago

I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?

Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.

I’d do some self reflection and ask yourself why you need to carry water for them.

  • atleastoptimal 3 hours ago

    I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.

    I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.