energy123 a day ago

It's like AMD open-sourcing FSR or Meta open-sourcing Llama. It's good for us, but it's nothing more than a situational and temporary alignment of self-interest with the public good. When the tables turn (they become the best instead of 4th best, or AMD develops the best upscaler, etc), the decision that aligns with self-interest will change, and people will start complaining that they've lost their moral compass.

  • orbital-decay a day ago

    >situational and temporary alignment of self-interest with the public good

    That's how it's supposed to work.

  • re-thc a day ago

    It's not. This isn't about competition in a company sense but sanctions and wider macro issues.

    • energy123 a day ago

      It's similar in the sense that it's done because it aligns with self-interest, even if the nature of that self-interest differs.

twelvechairs 2 days ago

The bar is incredibly low considering what OpenAI has done as a "not for profit"

  • kopirgan 2 days ago

    You need to get a bunch of accountants to agree on what's profit first..

    • komali2 a day ago

      Agree against their best interest, mind you!

echelon 2 days ago

I don't care if this kills Google and OpenAI.

I hope it does, though I'm doubtful because distribution is important. You can't beat "ChatGPT" as a brand in laypeople's minds (unless perhaps you give them a massive "Temu: Shop Like A Billionaire" commercial campaign).

Closed source AI is almost by design morphing into an industrial, infrastructure-heavy rocket science that commoners can't keep up with. The companies pushing it are building an industry we can't participate or share in. They're cordoning off areas of tech and staking ground for themselves. It's placing a steep fence around tech.

I hope every such closed source AI effort is met with equivalent open source and that the investments made into closed AI go to zero.

The most likely outcome is that Google, OpenAI, and Anthropic win and every other "lab"-shaped company dies an expensive death. RunwayML spent hundreds of millions and they're barely noticeable now.

These open source models hasten the deaths of the second-tier also-ran companies. As much as I hope for dents in the big three, I'm doubtful.

  • raw_anon_1111 2 days ago

    I can’t think of a single company I’ve worked with as a consultant that I could convince to use DeepSeek because of its ties with China even if I explained that it was hosted on AWS and none of the information would go to China.

    Even when the technical people understood that, it would be too much of a political quagmire within their company when it became known to the higher ups. It just isn’t worth the political capital.

    They would feel the same way about using xAI or maybe even Facebook models.

    • JSR_FDED 2 days ago
      • raw_anon_1111 a day ago

        TIL that Chinese models are considered better at multiple languages than non-Chinese models.

      • tayo42 a day ago

        It's a customer service bot? And Airbnb is a vacation home booking site. It's pretty inconsequential.

        • antonvs a day ago

          Airbnb has ~$12 bn annual revenue, and is a counterexample to the idea that no companies can be "convinced to use DeepSeek".

          The fact that it's customer service means it's dealing with text entered by customers, which has privacy and other consequences.

          So no, it's not "pretty inconsequential". Many more companies fit a profile like that than whatever arbitrary criteria you might have in mind for "consequential".

    • StealthyStart 2 days ago

      This is the real cause. At the enterprise level, trust outweighs cost. My company hires agencies and consultants who give the same advice as our internal team. That's not to imply our internal team is wrong; rather, the outside firms bring credibility, and if something goes wrong, responsibility for the decision can be shifted. There's a reason companies keep hiring the same four consulting firms: trust, whether real or perceived.

      • raw_anon_1111 2 days ago

        I have seen it much more nuanced than that.

        2020 - I was a mid level (L5) cloud consultant at AWS with only two years of total AWS experience and that was only at a small startup before then. Yet every customer took my (what in hindsight might not have been the best) advice all of the time without questioning it as long as it met their business goals. Just because I had @amazon.com as my email address.

        Late 2023 - I was the subject matter expert in a niche of a niche in AWS that the customer focused on and it was still almost impossible to get someone to listen to a consultant from a shitty third rate consulting company.

        2025 - I left the shitty consulting company last year after only a year and now work for one with a much better reputation, and I have a better title: “staff consultant”. I also play the game and make sure to mention that I’m former “AWS ProServe” when I’m doing introductions. Now people listen to me again.

      • 0xWTF 2 days ago

        Children do the same thing intuitively: parents continually complain that their children don't listen to them. But as soon as someone else tells them to "cover their nose", "chew with their mouth closed", "don't run with scissors", whatever, they listen and integrate that guidance into their behavior. What's harder to observe is all the external guidance they get that they don't integrate until their parents tell them. It's internal vs external validation.

      • coliveira 2 days ago

        So much the worse for American companies. It just means they will be uncompetitive with similar companies that use models with realistic costs.

    • tokioyoyo 2 days ago

      If a Chinese model becomes better than its competitors, these worries will suddenly disappear. Also, there are plenty of startups and enterprises running fine-tuned versions of various open-source models.

      • raw_anon_1111 2 days ago

        Yeah that’s not how Big Enterprise works…

        And most startups are just doing prompt engineering that will never go anywhere. The big companies will just throw a couple of developers at the feature and add it to their existing business.

      • hhh 2 days ago

        No… Nobody I work for will touch these models. The fear is real that they have been poisoned or have some underlying bomb. Plus y’know, they’re produced by China, so they would never make it past a review board in most mega enterprises IME.

      • subroutine 2 days ago

        As a government contractor, using a Chinese model is a non-starter.

    • deaux 2 days ago

      > Even when the technical people understood that

      I'm not sure if technical people who don't understand this deserve the moniker technical in this context.

    • nylonstrung a day ago

      The average person has been programmed to be distrustful of open source in general, thinking it is inferior quality or in service of some ulterior motive

    • register 2 days ago

      That might be the perspective of a US based company. But there is also Europe and basically it's a choice between Trump and China.

      • Muromec 2 days ago

        Europe has Mistral. It feels like governments that can do things without a fax machine treat this as a sovereignty issue and either roll their own or use a provider in their own jurisdiction.

    • tehjoker 2 days ago

      really a testament to how easily the us govt has spun a china bad narrative even though it is mostly fiction and american exceptionalism

    • littlestymaar 2 days ago

      > I can’t think of a single company I’ve worked with as a consultant that I could convince to use DeepSeek because of its ties with China even if I explained that it was hosted on AWS and none of the information would go to China.

      Well, for non-American companies, the choice is between Chinese models that don't send data home and American ones that do, with both countries being more or less equally threatening.

      I think if Mistral can just stay close enough to the race it will win many customers by not doing anything.

    • siliconc0w 2 days ago

      [flagged]

      • deaux 2 days ago

        > For example, a small random percentage of the time, it could add a subtle security vulnerability to any code generation.

        Now on the HN frontpage: "Google Antigravity just wiped my hard drive"

        Sure going to be hard to distinguish these Chinese models' "intentionally malicious actions"!

        And the cherry on top:

        - Written from my iPhone 16 Pro Max (Made in China)

      • nagaiaida 2 days ago

        on what hypothetical grounds would you be more meaningfully able to sue the american maker of a self-hosted statistical language model that you select your own runtime sampling parameters for after random subtle security vulnerabilities came out the other side when you asked it for very secure code?

        put another way, how do you propose to tell this subtle nefarious chinese sabotage you baselessly imply to be commonplace from the very real limitations of this technology in the first place?

      • nylonstrung a day ago

        Literally every time a Chinese model is discussed here we get this completely braindead take

        There has never been a shred of evidence from security researchers, model analysis, benchmarks, etc. that supports this.

        It's a complete delusion in every sense.