Comment by ben_w
> I keep waiting for the day when software stops being compared to a human person (a being with agency, free will, consciousness, and human rights of its own) for the purposes of justifying IP law circumvention.
I mean, "agency" is a goal of some AI; "free will" is incoherent*; the word "consciousness" has about 40 different definitions, some of which are so broad they include thermostats and others so narrow that it's provably impossible for anything (including humans) to have it; and "human rights" are a purely legal concept.
> What’s even worse, is that imaginably they train (or would train) the models to specifically not output those things verbatim specifically to thwart attempts to detect the presence of said works in training dataset (which would naturally reveal the model and its output being a derivative work).
Some of the makers certainly do as you say; but also, the more verbatim quotations a model can reproduce, the more of its capacity goes to memorization instead of the far more useful general-purpose results.
* I'm not a fan of Aleister Crowley, but I think he was right to say that there's only one thing you can actually do that's truly your own will and not merely you allowing others to influence you: https://en.wikipedia.org/wiki/True_Will
> and "human rights" are a purely legal concept.
Yep, and if you claim that a thing can reproduce IP like a human, then you should explain why its operators aren't held to the same legal standards (using a human the way these models are used would be considered torture and slavery).