Comment by strogonoff 21 hours ago
> China is doing pretty well on the innovation front, including with AI.
From transistors to transformers, most of it builds on a foundation that comes from guess where. The innovative layer you speak of is fairly thin.
> Correlation does not imply causation.
I'll give you that. However, without being able to re-run history, correlation is all we have.
> I would even suggest that it is the disregarding of IP rights that enables much of the web
I would suggest that much of the tech that powers the Web, including probably the most popular operating system on which most servers run, is enabled by copyleft, and copyleft cannot exist without the ability to defend it granted by IP rights, the very concept under fire.
> the point of copyright is to encourage new works
I agree on this.
> All this becomes somewhat irrelevant when AI can also create new works
I don’t agree with the phrase “AI can create new works”, for reasons such as: 1) “AI” is a meaningless term (let it be my revenge for consciousness), or 2) a tool without agency or will should not be X in a sentence “X can Y” (sure, we can maybe on occasion say “hammers can break things”, but if hammers having agency and will were a popular misconception, then I would definitely prefer to stick to “hammers can be used to break things”). The “create new works” part is also questionable on a few levels, but that might exceed the scope of this argument.
That aside, I believe lack of copyright enforcement discourages the creation of new works even in the presence of these tools, through the mechanism known as “why would I put effort into new work if I don’t effectively own the result”.
> Does that mean you accept that, in principle, there could be some AI which would deserve rights in the category currently (but in principle inaccurately) called "human rights"?
I think if we believe an LLM or some other software is sufficiently close to a human that it deserves human-like rights, or at least strong abuse protections (cf. the octopus in some countries; whether I believe that possible is orthogonal), then we could excuse it reciting some part of Harry Potter in the right context (probably not as work for hire), but it would be moot because we would also be ethically compelled to not subject it to the training and use that enables such recitation in the first place.
> From transistors to transformers, most of it builds on a foundation that comes from guess where. The innovative layer you speak of is fairly thin.
China rises to first place in most cited papers: https://www.science.org/content/article/china-rises-first-pl...
> 1) “AI” is a meaningless term (let it be my revenge for consciousness)
Fair.
But let's say "computer program" in that case. I'm not fussed about definitions.
> 2) a tool without agency or will should not be X in a sentence “X can Y”. Sure, we can maybe on occasion say “hammers can break things”, but if hammers having agency and will was a popular misconception then I would definitely prefer to stick to “hammers can be used to break things”.
Careful.
If you say that humans can only create things with copyright (even if to support copyleft), then the proletariat are the tool that the bourgeoisie use to create things.
I do not think this is what you intended :P
> That aside, I believe lack of copyright enforcement discourages the creation of new works even in presence of these tools, through the mechanism known as “why would I put effort into new work if I don’t effectively own the result”.
For the same reason you commission a work, or even just buy one from a shop: because then you have the thing.
I mean, the cost of getting o3 to create a novel's worth of text is about the same as the price of a generic book by an unknown author in a second-hand shop: https://openai.com/api/pricing/
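That back-of-envelope can be made explicit. A minimal sketch, assuming a ~90k-word novel, a rough 1.33 tokens-per-word ratio for English, and an illustrative per-million-token output price (not the actual o3 rate, which is on the linked pricing page and changes over time):

```python
# Rough cost of generating a novel's worth of text via an LLM API.
# All constants below are illustrative assumptions, not actual o3 rates.
NOVEL_WORDS = 90_000             # typical novel length (assumption)
TOKENS_PER_WORD = 1.33           # rough English tokenization ratio (assumption)
USD_PER_1M_OUTPUT_TOKENS = 40.0  # hypothetical output price per million tokens

output_tokens = NOVEL_WORDS * TOKENS_PER_WORD
cost_usd = output_tokens / 1_000_000 * USD_PER_1M_OUTPUT_TOKENS
print(f"{output_tokens:,.0f} tokens ≈ ${cost_usd:.2f}")  # → 119,700 tokens ≈ $4.79
```

Under those assumptions the cost lands in second-hand-paperback territory, which is the point; plug in the currently published rates to update it.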
I've not tried o3 yet, but I have tried o1, and as I've said on a different thread today, o1's output is merely OK, not worth publishing as a book — and I don't know how long it will take to get there. But it is displacing blog writers and podcast writers: https://news.ycombinator.com/item?id=44287953
> but it would be moot because we would also be ethically compelled to not subject it to the training and use that enables such recitation in the first place.
Surprising.
While I would seriously consider the possibility that it may be unethical to force such an AI to work if it didn't want to, I think giving it the capability (the education) to make that choice, rather than leaving it to say "it doesn't matter if I wanted to or not, I can't", is just education, as with our own.
Still, I think that's coherent. I'm not sure I've fully internalised the implications so I will let it be.