Comment by abtinf 17 hours ago

Of course it is.

It's just a form of compression.

If I train an autoencoder on an image, and distribute the weights, that would obviously be the same as distributing the content. Just because the content is commingled with lots of other content doesn't make it disappear.
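
To make that concrete, a toy sketch (hypothetical sizes, PyTorch assumed): overfit a tiny autoencoder on one image, and the shipped weights plus a 64-number latent code are enough to regenerate it.

  import torch
  import torch.nn as nn

  # Stand-in for one 32x32 RGB image (a real image would be loaded here).
  image = torch.rand(3 * 32 * 32)

  encoder = nn.Linear(3 * 32 * 32, 64)   # squeeze the image to 64 numbers
  decoder = nn.Linear(64, 3 * 32 * 32)   # expand the 64 numbers back out

  opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
  for _ in range(5000):                  # overfit: one sample, many steps
      opt.zero_grad()
      loss = nn.functional.mse_loss(decoder(encoder(image)), image)
      loss.backward()
      opt.step()

  # "Distributing the weights": ship the decoder plus the tiny latent code.
  # Whoever holds them can regenerate the image without the original file.
  latent = encoder(image).detach()
  reconstruction = decoder(latent)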

Besides, where did the sections of text from the input works that show up in the output text come from? Divine inspiration? God whispering to the machine?

aschobel 16 hours ago

Indeed! It is a form of massive lossy compression.

> Llama 3 70B was trained on 15 trillion tokens

That's roughly a 200x "compression" ratio, compared to 3-7x for traditional lossless text compressors like bzip2 and friends.
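
Back-of-envelope on where that number comes from (the byte sizes here are my assumptions, not official specs):

  tokens = 15e12          # training tokens reported for Llama 3
  bytes_per_token = 4     # rough average for English text
  params = 70e9           # Llama 3 70B parameter count
  bytes_per_param = 4     # fp32; fp16 weights would double the ratio

  training_text = tokens * bytes_per_token   # ~60 TB of raw text
  weights = params * bytes_per_param         # ~280 GB of weights
  print(training_text / weights)             # ~214, i.e. "roughly 200x"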

LLMs don't just compress, they generalize. If they could only recite Harry Potter perfectly but couldn't write code or explain math, they wouldn't be very useful.

imgabe 17 hours ago

[flagged]

  • tsimionescu 15 hours ago

    > For one thing, they are probabilistic, so you wouldn't get the same content back every time like you would with a compression algorithm.

    There is nothing inherently probabilistic in a neural network. A neural net always produces the exact same outputs for the same input. We typically use those outputs in a larger program as probabilities over tokens, but that is not required to get data out. You could just as easily take the highest-valued output deterministically, with an extra rule for the rare case where multiple outputs are exactly tied (e.g. pick the one from the output neuron with the lowest index).
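
    A minimal sketch of that decoding rule (assuming numpy; the scores are made up):

      import numpy as np

      def pick_token(logits):
          # Greedy, fully deterministic decoding: same logits in, same
          # token out, every time. np.argmax breaks ties by returning
          # the first (lowest-index) maximum, which is exactly the
          # extra rule described above.
          return int(np.argmax(logits))

      logits = np.array([0.1, 2.5, 2.5, 0.3])  # made-up output values
      assert pick_token(logits) == 1           # tie broken toward lower index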

  • vrighter 15 hours ago

    I have, but I never tried to make any money off of it either

  • xigoi 10 hours ago

    > For one thing, they are probabilistic, so you wouldn't get the same content back every time like you would with a compression algorithm.

    If I make a compression algorithm that randomly changes some pixels, can I use it to distribute pirated movies?

  • bakugo 16 hours ago

    > Have you ever repeated a line from your favorite movie or TV show? Memorized a poem? Guess the rights holders better sue you for stealing their content by encoding it in your wetware neural network.

    I see this absolute non-argument regurgitated ad infinitum in every single discussion on this topic, and at this point I can't help but wonder: doesn't it say more about the person who says it than anything else?

    Do you really consider your own human speech no different than that of a computer algorithm doing a bunch of matrix operations and outputting numbers that then get turned into text? Do you truly believe ChatGPT deserves the same rights to freedom of speech as you do?

    • imgabe 16 hours ago

      Who said anything about freedom of speech? Nobody is claiming the LLM has free speech rights, which don't even apply to infringing copyright anyway. Freedom of speech doesn't give me the right to make copies of copyrighted works.

      The question is whether the model weights constitute a copy of the work. I contend that they do not; and if they did, then so do the analogous weights (reinforced neural pathways) in your brain, which is clearly absurd. The analogy is meant to demonstrate the absurdity of considering a probabilistic weighting that produces similar text to be a copy.

      • bakugo 16 hours ago

        > Freedom of speech doesn't give me the right to make copies of copyrighted works.

        No, but it gives you the right to quote a line from a movie or TV show without being charged with copyright infringement. You argued that an LLM deserves that same right, even if you didn't realize it.

        > than so do the analogous weights (reinforced neural pathways) in your brain

        Did your brain consume millions of copyrighted books in order to develop into what it is today? Would your brain be unable to exist in its current form if it had not consumed those millions of books?

      • lern_too_spel 14 hours ago

        Making personal copies is generally permitted. If I were to distribute the neural pathways in my brain enabling others to reproduce copyrighted works verbatim, the owners of the copyrighted works would have a case against me.

  • homebrewer 14 hours ago

    Repeating half of the book verbatim is not nearly the same as repeating a line.

    • imgabe 13 hours ago

      If you prompt the LLM to output a book verbatim, then you violated the copyright, not the LLM. Just like if you take a book to a copier and make a copy of it, you are violating the copyright, not Xerox.

      • whattheheckheck 12 hours ago

        What if the printer had a button that printed a copy of the book on demand?

  • invalidusernam3 13 hours ago

    The difference is whether it's used commercially or not. Me singing my favourite song at karaoke is fine, but recording that and releasing it on Spotify is not.

  • abtinf 16 hours ago

    [flagged]

    • imgabe 16 hours ago

      No, the second point does not concede the argument. You were talking about the model output infringing the copyright; the second point is about the model input infringing the copyright, e.g. if they made unauthorized copies in the process of gathering training data, such as by pirating the content. That is unrelated to whether the model output is infringing.

      You don't seem to be in a very good position to judge what is and is not obtuse.