Comment by diggan
I think you've kind of answered a different question. Yes, more LLMs could be created. But specifically Llama? Since it's an open source model, the assumption is that we could (given access to the same compute, of course) train one from scratch ourselves, just like we can build our own binaries of open source software.
But this obviously isn't true for Llama, hence the uncertainty about whether Llama even is open source in the first place. If we cannot create something ourselves (again, given access to compute), how could it possibly be considered open source by anyone?
I understand I was expected to say “no” and question the open-source label. We’ve heard many arguments that if something can’t be reproduced from scratch, it isn’t truly open source.
To me, they sound a bit like “no true Scotsman”. Llama is open source compared to commercial models with closed weights, even if it could be more open.
That’s why I looked at it in a broader sense: what could happen in an open-source world to improve on or replace Llama. Quite a lot could happen, and largely thanks to Llama’s open nature.