Comment by caseyy a day ago
There are many things to be said about open-source projects and, more broadly, the capabilities of the open-source community.
The most capable parts are for-profit organizations that release open-source software as a business imperative, public benefit companies that write open-source software for ideological reasons but still operate as businesses, and a tiny number of public benefit organizations with unstable cash flow. Most other efforts are unorganized and plagued by bickering.
Llama itself is challenging to take over. The weights are public, but the training data and process are not. It could be evolved, but not fully iterated by anyone else. For a full iteration, the training process and inputs would need to be replicated, with improvements made there.
But could another open-source model, as capable as Llama, be produced? Yes. Just like Meta, other companies, such as Google and Microsoft, have the incentive to create a moat around their AI business by offering a free model to the public, one that's just barely below their commercial model's capabilities. That way, no competitor can organically emerge. After all, who would pay for a product that's inferior to the open-source one? It's a classic barrier to entry in the market - a thing highly sought after by monopolistic companies.
Public benefit companies leading in privacy could develop a model to run offline for privacy purposes, to avoid mass consumer data harvesting. A new open-source ideological project without a stable business could also, in theory, pop up in the same pattern as the Linux project. But these are like unicorns - "one in a million years (maybe)."
So, to answer your question, yes, Llama weights could be evolved; no, an entirely new version cannot be made outside of Meta. Yes, someone else could create such a wholly new open-source model from scratch, and different open-source groups have different incentives. The most likely incentive is monopolistic, to my mind.
I think you've kind of answered a different question. Yes, more LLMs could be created. But specifically Llama? Since it's an open source model, the assumption is that we could (given access to the same compute, of course) train one from scratch ourselves, just like we can build our own binaries of open source software.
But this obviously isn't true for Llama, hence the uncertainty about whether Llama is even open source in the first place. If we cannot recreate something ourselves (again, given access to compute), how could it possibly be considered open source by anyone?