Comment by jbs789
I enjoyed reading this.
If we think about applicability to AI (as the footnote suggests was on the author's mind)… I found myself thinking about the motivations and incentives that existed at the time, to understand why these legal structures emerged.
Ships - my read is that major naval powers essentially reduced the downside for shipowners (by making the ship itself the responsible party) while giving the owners salvage rights ("curing" the problem of a wreck that might impede passage). That seems to make sense if you put your "I'm a shipowner" hat on. It's balanced by the governments also saying: if your ship sank and someone else recovers it, they get a cut, because hey, you didn't recover it.
And the others (the rivers) seem largely about modern governments appeasing religious and indigenous groups. It's interesting that this acknowledgement seems to be part of a broader solution (a trust, ongoing governance, etc.).
The former seems more financially motivated (capping downside and clearing shipwrecks). The latter seems more about protecting a natural resource/asset.
So then you think about who is leading AI and what their incentives may be… Conveniently, we already have modern companies that limit liability. Do they just use that structure (as they are now), or do they seek to go further and say this agent or LLM is its own thing, so that as a company I'm not responsible for it?
Maybe that is convenient for the companies, and for the governments in the countries leading the charge…? Looks more like ships than rivers…?
I think AI personhood will come about via another path: the same one as animal rights.
We humans, in general, suffer when we perceive animals suffering. It's an entirely emotional response. Humans are developing emotional attachments to LLMs, so it follows, to an extent, that people will try to shore up the rights of LLMs simply to assuage their own emotions. It doesn't actually matter whether the model can feel pain, only whether it can express pain in a way that evokes a sympathetic emotion in a person.