estebarb 2 hours ago

However, the lyrics are shown because the user requested them, so shouldn't the user be liable instead? The same way social networks are not liable for content uploaded by users? I think there is something of a double standard here.

Of course, maybe OpenAI et al. should have gotten a license before training on the lyrics, or avoided training on copyrighted content altogether. But the former would be expensive and the latter would require them to develop actual intelligence.

hrimfaxi 2 hours ago

Why should the user be liable? They didn't reproduce the copyrighted work, and the machine is perfectly capable of refusing to output it (like it already does for other categories of material).

At the very least, the users being liable instead of OpenAI makes no sense. Like arresting only drug users and not dealers.

  • estebarb 2 hours ago

    There are countries where drug consumption/possession is penalized too. There is a similar example in another area: in Sweden, Norway and Belize, selling sex (i.e. prostitution) is legal, but buying it is not. So your example actually exists in real-world legislation.

    I'm just asking where we are going to draw the line, and why.

    • hrimfaxi an hour ago

      You had originally said the user should be liable instead of OpenAI being liable.

      > However, the lyrics are shown because the user requested them, so shouldn't the user be liable instead?

      I would imagine the sociological rationale for allowing sex work would not map to a multi-billion-dollar company.

      And to add, the social network example doesn't map because the user is producing the content and sharing it with the network. In OpenAI's case, they are creating and distributing copyrighted works.

      • estebarb an hour ago

        No, the edited wording still conveys the same meaning. My edit was to fix another grammar typo.

        Social networks are distributing such content AND benefiting from selling ads alongside it. Adding ads on top is a derivative work.

        Personally, I'm on the side of penalizing the party that provides the input, not the output:

        - OpenAI training on copyrighted works
        - Users requesting custom works based on copyrighted IP

        That is my opinion on how it should be layered; that's it. I'm happy to discuss why it should or shouldn't be that way. As I said in another comment, my concern is that mandating copyright filtering on each generative tool would end up propagating to every single digital tool, which as a society we don't really want.

        • hrimfaxi 25 minutes ago

          I am curious why you are of the opinion that the user should be in trouble for requesting the copyrighted material and not the provider of the material. I feel like there is a distinction between something that is local-first and a SaaS. A local AI model that reproduces copyrighted works for your own use might not be problematic, compared to a remote model reproducing a copyrighted work and distributing it over the internet to you. Most jurisdictions treat remote access across jurisdictional boundaries differently than completely local acts.

embedding-shape 2 hours ago

> However, the lyrics are shown because an action is the user so, shouldn't be the user be liable instead?

Same goes for websites where you can watch pirated streams. "The action is the user pressing play" sounds like it might win you an internet argument, but I'm 99% sure none of the courts will play those games; you, as the operator who enabled whatever the user could do, end up liable.

  • estebarb 2 hours ago

    I think that is completely different. Piracy websites do only one thing. Chatbots are different.

    My concern is where we are going to draw the line: if I type a copyrighted song into Word, is Microsoft liable? If I upload lyrics to ChatGPT and ask it to analyze or translate them, is that a copyright violation?

    I totally understand your line of thinking. However, the one I'm suggesting could be applied as well and it has precedents in law (intellectual authors of crimes are punishable, not only the perpetrators).

    • dpoloncsak 2 hours ago

      > I think that is completely different. Piracy websites do only one thing. Chatbots are different.

      Well... YouTube is liable for any copyrighted material on their site, and they do 'more than one thing'.

      • estebarb an hour ago

        Not really. YouTube is not liable as long as they remove the content after a copyright complaint and comply with other such mechanisms.

        The problem is that if OpenAI is liable for reproducing copyrighted content, then so are other products such as word processors, video editors and so on. So, as a society, where will we draw the line?

        Are we going to tolerate some copyright infringement in these tools, or are we going to pursue copyright infringement even in other tools, now that we already have the tools to detect it?

        We cannot have double standards; the law should be applied equally to everyone.

        I do think that, overall, making OpenAI liable for output is a bad precedent because of the repercussions beyond AI tools. I'm all fine with making them liable for having trained on copyrighted content and so on...

thisisit 2 hours ago

This is such a bad take.

If that were the case, then Google wouldn't receive DMCA takedowns for piracy links; instead, it would offer up the users searching for piracy content. The former is more prevalent than the latter because, one, the latter requires an invasion of privacy (you would have to serve up everyone's search results) and, two, it requires understanding intent.

The same issue applies here. OpenAI would then need to share all chats for courts to sift through, and second, how do you judge intent? If someone asks for a German pop song and OpenAI decides to output Bochum, whose fault is that?