Comment by victorbuilds 2 days ago
Notable: they open-sourced the weights under Apache 2.0, unlike OpenAI and DeepMind whose IMO gold models are still proprietary.
We don't even have to do that: since the weights are entirely machine-generated, without human intervention, they are likely not copyrightable in the first place.
In fact, we should collectively refuse to abide by these fantasy licenses before copyrightability of weights gets created out of thin air simply because treating them as copyrighted has been commonplace for long enough.
There's an argument by which machine-learned neural network weights are a lossy compression of (as well as a smooth interpolator over) the training set.
An mp3 file is also a machine-generated lossy compression of a cd-quality .wav file, but it's clearly copyrightable.
To that extent, the main difference between a neural network and an .mp3 is that the mp3 compression cannot be used to interpolate between two copyrighted works to output something in the middle. This is, on the other hand, perhaps the most common use case for genAI, and it's actually tricky to get it to not output something "in the middle" (but also not impossible).
I think the copyright argument could really go either way here.
> An mp3 file is also a machine-generated lossy compression of a cd-quality .wav file, but it's clearly copyrightable.
Not the .mp3 itself, but the creative piece of art that it encodes.
You can't record Taylor Swift at a concert and claim copyright on that. Nor can you claim copyright on an mp3 re-encoding of old audio footage that belongs to the public domain.
Whether LLMs fall in the first category (copyright infringement against the copyright holders of the training data) or the second (public domain or fair use) is an open question that jurisprudence is slowly resolving, depending on the jurisdiction, but that doesn't address the question of the weights themselves.
Of course we should! And everyone who says otherwise must be delusional or sort of a gaslighter, as this whole "innovation" (or remix (or compression)) is enabled by the creative value of the source product. Given that AI companies have never respected this copyright, we should give them similar treatment.
If they open-source just the weights and not the training code and data, then it's still proprietary.
You can distill closed-weights models as well (just not via logit distillation).
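To illustrate the distinction (a rough PyTorch sketch with made-up tensor shapes, not any particular model): logit distillation needs the teacher's full output distribution, which a closed-weights API typically won't expose, while black-box distillation only needs text sampled from the teacher.

    import torch
    import torch.nn.functional as F

    vocab, batch, seq = 32000, 4, 128
    student_logits = torch.randn(batch, seq, vocab, requires_grad=True)

    # (a) Logit distillation: requires the teacher's logits, i.e. open access to the model.
    teacher_logits = torch.randn(batch, seq, vocab)   # not obtainable from a closed API
    T = 2.0                                           # softening temperature
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # (b) Black-box distillation: only needs token ids sampled from the teacher's API.
    teacher_samples = torch.randint(0, vocab, (batch, seq))   # text the teacher generated
    ce_loss = F.cross_entropy(student_logits.view(-1, vocab), teacher_samples.view(-1))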
Isn't that a bit like saying that if I open source a tool, but not a full compendium of all the code that I had read, which led me to develop it, then it's not really open source?
> rebuild it from scratch
That's beyond the definition of Open Source. Doing a bit of license research now: only the GPL has such a requirement. GPLv3:
> The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities.
But all other Open Source compliant licenses I checked don't, and just refer to making whatever is in the repo available to others.
Yes, I 100% agree. Open Source is a lot more about not paying than about liberty.
This is exactly the tradeoff that we had made in the industry a couple of decades ago. We could have pushed all-in on Stallman's vision and the FSF's definition of Free Software, but we (collectively) decided that it's more important to get the practical benefits of having all these repos up there on GitHub and us not suing each other over copyright infringement. It's absolutely legitimate to say that we made the wrong choice, and I might agree, but a choice was made, and Open Source != Free Software.
https://www.gnu.org/philosophy/open-source-misses-the-point....
No. In that case, you're providing two things: a binary version of your tool, and the tool's source. The tool's source is available for others to inspect and to build their own copy. However, given just the weights, we don't have the source, and can't inspect what alignment went into it. In the case of DeepSeek, we know they had to purposefully cause their model to consider Tiananmen Square something it shouldn't discuss. But without the source used to create the model, we don't know what else is lurking around inside the model.
> However, given just the weights, we don't have the source
This is incorrect, given the definitions in the license.
> (Apache 2.0) "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
(emphasis mine)
In LLMs, the weights are the preferred form for making modifications. Weights are not compiled from something else. You start with the weights (randomly initialised) and at every step of training you adjust the weights. That is not akin to compilation, for many reasons (both theoretical and practical).
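To put it concretely (a minimal sketch using the Hugging Face transformers API; the model name is a placeholder, not a reference to any specific release): modifying an open-weights model just means loading the released weights and taking further training steps on them, which is exactly what the original authors did.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("some-org/released-weights")   # placeholder id
    tokenizer = AutoTokenizer.from_pretrained("some-org/released-weights")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # One fine-tuning step on your own data: the released weights are edited directly;
    # no separate "source" is compiled into them.
    batch = tokenizer("any text you want to tune on", return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()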
In general, licenses do not give you rights over the "know-how" or "processes" by which the licensed parts were created. What you get is the ability to inspect, modify, and redistribute the work as you see fit. And most importantly, you modify the work just like the creators modify the work (hence "the preferred form"). Just not with the same data (i.e. you can modify the source of Chrome all you want, just not with the know-how and knowledge of a Google engineer - the license cannot offer that).
This is also covered in the EU AI act btw.
> General-purpose AI models released under free and open-source licences should be considered to ensure high levels of transparency and openness if their parameters, including the weights, the information on the model architecture, and the information on model usage are made publicly available. The licence should be considered to be free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, including models under the condition that the original provider of the model is credited, the identical or comparable terms of distribution are respected.
No, it's like saying that even if you release it under the Apache license, it's not open source, despite being under an open source license.
For something to be open source it needs to have its sources released. Sources are the things in the preferred form to be edited. So the code used for training is obviously source (people can edit the training code to change something about the released weights). The same rationale applies to the training data: people can select which data is used for training to change the weights.
Well, this is just semantics. I can have a repo that includes a collection of json files that I had generated via a semi-manual build process that depends on everything from the state of my microbiome to my cat's scratching pattern during Mercury's last retrograde. If I attach an open source license to it, then that's the source - do with it what you will. Otherwise, I don't see how this discussion doesn't lead to "you must first invent the universe".
What does open sourcing have to do with "reproducing"? Last I checked, open sourcing is about allowing others to modify and to distribute the modified version, which you can do with these. Yes, having the full training data and tooling would make it significantly easier, and it is a requirement for GPL, but not for Open Source licenses in general. You may add this as another argument in favor of going back in time and doing more to support Richard Stallman's vision, but this is the world in which we live now.
I think we should treat copyright for the weights the same way the AI companies treat source material ;)