victorbuilds 2 days ago

Notable: they open-sourced the weights under Apache 2.0, unlike OpenAI and DeepMind whose IMO gold models are still proprietary.

  • PunchyHamster 2 days ago

    I think we should treat copyright for the weights the same way the AI companies treat source material ;)

    • littlestymaar 2 days ago

We don't even have to do that: since the weights are entirely machine-generated without human intervention, they are likely not copyrightable in the first place.

In fact, we should collectively refuse to abide by these fantasy licenses before weight copyrightability gets created out of thin air just because it's been commonplace for long enough.

      • mitthrowaway2 2 days ago

        There's an argument by which machine-learned neural network weights are a lossy compression of (as well as a smooth interpolator over) the training set.

        An mp3 file is also a machine-generated lossy compression of a cd-quality .wav file, but it's clearly copyrightable.

To that extent, the main difference between a neural network and an .mp3 is that mp3 compression cannot be used to interpolate between two copyrighted works and output something in the middle. That, on the other hand, is perhaps the most common use case for genAI, and it's actually tricky (though not impossible) to get it to not output something "in the middle".

        I think the copyright argument could really go either way here.

    • larodi 2 days ago

Of course we should! And everyone who says otherwise must be delusional, or sort of a gaslighter, as this whole "innovation" (or remix, or compression) is enabled by the creative value of the source material. Given that AI companies never respected this copyright, we should give them similar treatment.

  • SilverElfin 2 days ago

    If they open source just weights and not the training code and data, then it’s still proprietary.

    • ekianjo 2 days ago

It's just open weights; "source" has no place in this expression.

    • mips_avatar 2 days ago

      Yeah but you can distill

      • littlestymaar 2 days ago

You can distill closed-weights models as well (just not via logit distillation).

      • amelius 2 days ago

        Is that the equivalent of decompile?

        • c0balt 2 days ago

          No, that is the equivalent of lossy compression.
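A toy sketch of the distinction littlestymaar is drawing above, with made-up numbers and a five-token vocabulary (no real model involved): with open weights you can match the teacher's full output distribution, while a closed API only hands back sampled tokens, leaving the student with hard-label cross-entropy.

```python
# Toy illustration (invented numbers, not any real model): logit
# distillation vs. the hard-label distillation possible against a
# closed API that only returns sampled tokens.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
vocab = 5
teacher_logits = rng.normal(size=vocab)   # open weights: full logits visible
teacher_probs = softmax(teacher_logits)

student_logits = rng.normal(size=vocab)
student_probs = softmax(student_logits)

# Logit distillation: minimize KL(teacher || student) over the
# whole distribution, one term per vocabulary entry.
kl = float(np.sum(teacher_probs * np.log(teacher_probs / student_probs)))

# API-only distillation: the teacher exposes just one sampled token,
# so the student can only do cross-entropy on that single hard label.
sampled_token = int(np.argmax(teacher_probs))  # stand-in for one API sample
hard_ce = float(-np.log(student_probs[sampled_token]))

print(f"KL against full teacher distribution: {kl:.3f}")
print(f"Cross-entropy on one sampled token:   {hard_ce:.3f}")
```

The KL term uses every entry of the teacher's distribution; the hard-label term sees only which token came back, which is why distilling a closed model loses signal per example.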

    • falcor84 2 days ago

      Isn't that a bit like saying that if I open source a tool, but not a full compendium of all the code that I had read, which led me to develop it, then it's not really open source?

      • KaiserPro 2 days ago

No, it's like releasing a binary. I can hook into it and its API and make it do other things, but I can't rebuild it from scratch.

      • exe34 2 days ago

        "open source" as a verb is doing too much work here. are you proposing to release the human readable code or the object/machine code?

        if it's the latter, it's not the source. it's free as in beer. not freedom.

        • falcor84 2 days ago

          Yes, I 100% agree. Open Source is a lot more about not paying than about liberty.

This is exactly the tradeoff we made in the industry a couple of decades ago. We could have gone all-in on Stallman's vision and the FSF's definition of Free Software, but we (collectively) decided that it was more important to get the practical benefits of having all these repos up on GitHub and of not suing each other over copyright infringement. It's absolutely legitimate to say that we made the wrong choice, and I might agree, but a choice was made, and Open Source != Free Software.

          https://www.gnu.org/philosophy/open-source-misses-the-point....

      • fragmede 2 days ago

No. In that case, you're providing two things: a binary version of your tool, and the tool's source. That source is available for others to inspect and to build their own copy. Given just the weights, however, we don't have the source and can't inspect what alignment went into the model. In the case of DeepSeek, we know they purposefully made their model treat Tiananmen Square as something it shouldn't discuss. But without the source used to create the model, we don't know what else is lurking inside it.

      • nextaccountic 2 days ago

        No, it's like saying that if you release under Apache license, it's not open source even though it's under an open source license

For something to be open source, its sources need to be released. The source is whatever is in the preferred form for making modifications. So the code used for training is obviously source: people can edit the training code to change something about the released weights. The same rationale covers the training data: people can select which data is used for training to change the weights.

      • nurettin 2 days ago

        Is this a troll? They don't want to reproduce your open source code, they want to reproduce the weights.

    • amelius 2 days ago

      True. But the headline says open weights.

    • jimmydoe 2 days ago

      you are absolutely right. I'd rather use true closed models, not fake open source ones from China.

ilmj8426 2 days ago

It's impressive to see how fast open-weights models are catching up in specialized domains like math and reasoning. I'm curious if anyone has tested this model for complex logic tasks in coding? Sometimes strong math performance correlates well with debugging or algorithm generation.

WhitneyLand 2 days ago

Shouldn’t there be a lot of skepticism here?

All the problems they claim to have solved are on the Internet, and they explicitly say they crawled them. They do not mention doing any benchmark decontamination or excluding 2024/2025 competition problems from training.

IIRC, OpenAI/Google did not have access to the 2025 problems before testing their experimental math models.

terespuwash 2 days ago

Why isn’t OpenAI’s gold medal-winning model available to the public yet?

  • esafak 2 days ago

    'coz it was for advertisement. They'll roll their lessons into the next general purpose model.

simianwords 2 days ago

It matters a bit that this model is not general purpose, whereas the ones Google and OpenAI used were.

H8crilA 2 days ago

How do you run this kind of a model at home? On a CPU on a machine that has about 1TB of RAM?

  • pixelpoet 2 days ago

    Wow, it's 690GB of downloaded data, so yeah, 1TB sounds about right. Not even my two Strix Halo machines paired can do this, damn.

  • Gracana 2 days ago

    You can do it slowly with ik_llama.cpp, lots of RAM, and one good GPU. Also regular llama.cpp, but the ik fork has some enhancements that make this sort of thing more tolerable.

  • bertili 2 days ago

    Two 512GB Mac Studios connected with thunderbolt 5.
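A back-of-the-envelope sketch of why ~1TB of RAM is in the right ballpark. It assumes (a guess, not stated in the thread) that the 690 GB download is 16-bit weights; q8_0 and q4_0 are standard llama.cpp quantization formats that shrink the on-disk and in-memory footprint roughly in proportion to their bit width. KV cache and activations add overhead on top of the weights.

```python
# Rough weight-memory estimate, assuming the 690 GB checkpoint
# is 16-bit weights (an assumption; actual precision may differ).
CHECKPOINT_GB = 690
BASE_BITS = 16

for name, bits in [("bf16", 16), ("q8_0", 8), ("q4_0", 4)]:
    gb = CHECKPOINT_GB * bits / BASE_BITS
    print(f"{name:>5}: ~{gb:.0f} GB of weights")
```

Under that assumption, an 8-bit quant needs roughly 345 GB and a 4-bit quant roughly 173 GB, which is why quantized GGUF builds on a big-RAM box with one good GPU are the usual way to run models of this size at home.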

sschueller 2 days ago

How is OpenAI going to be able to serve ads in chatgpt without everyone immediately jumping ship to another model?

  • Coffeewine 2 days ago

    I suppose the hope is that they don’t, and we wind up with commodity frontier models from multiple providers at market rates.

  • miroljub 2 days ago

    I don't care about OpenAI even if they don't serve ads.

    I can't trust any of their output until they become honest enough to change their name to CloseAI.

  • astrange 2 days ago

    ChatGPT is a website. There's nothing unusual about ads on a website.

    People use Instagram too.

  • dist-epoch 2 days ago

    The same way people stayed on Google despite DuckDuckGo existing.

  • PunchyHamster 2 days ago

    by having datacenters with GPUs and API everyone uses.

    So they are either earning money directly or on the API calls.

Now, competition can come and compete on that, but they will probably still be the first choice for the foreseeable future.

  • KeplerBoy 2 days ago

    Google served ads for decades and no one ever jumped ship to another search engine.

    • sschueller 2 days ago

      Because Google gave the best results for a long time.

      • PunchyHamster 2 days ago

        and now, when they are not, everyone else's results are also pretty terrible...

    • bootsmann 2 days ago

They pay $30bn each year (more than OpenAI's lifetime revenue) to make sure no one does.

LZ_Khan 2 days ago

Don't they distill directly off OpenAI/Google outputs?