Comment by alephnerd 18 hours ago

> While the Harry Potter series may be fun reading, it doesn't provide information about anything that isn't better covered elsewhere

It has copyright implications - if Claude can recollect 42% of a copyrighted product without attribution or royalties, how did Anthropic train it?

> Train scientific LLMs to the level of a good early 20th century English major and then use science texts and research papers for the remainder

Plenty of in-stealth companies are taking this approach to LLMs ;)

For those of us who studied the natural sciences and CS in the 2000s and early 2010s, there was a bit of a trend where certain PIs would simply translate German and Russian papers from the early-to-mid 20th century and attribute them to themselves in fields like CS (especially in what became ML).

epgui 17 hours ago

> It has copyright implications - if Claude can recollect 42% of a copyrighted product without attribution or royalties, how did Anthropic train it?

Personally I’m assuming the worst.

That being said, Harry Potter was such a big cultural phenomenon that I wonder to what degree one might actually be able to reconstruct the books based solely on publicly accessible derivative material.

weird-eye-issue 18 hours ago

Why are you talking about Claude and Anthropic?

  • cshimmin 17 hours ago

    It’s not unreasonable to suspect they are doing the same. The article starts with a description of a lawsuit the NY Times brought against OpenAI for similar reasons. The big difference is that the research presented here is only possible with open-weight models. OAI and Anthropic don’t make their base models available, so it’s easier to hide the fact that you’ve used copyrighted material via instruction post-training. And I’m not sure you can get the logprobs for specific tokens from their APIs either (which is what the researchers did to make the figures and come up with a concrete number like 42%).

  • alephnerd 16 hours ago

    Good call! I brain farted and wrote Claude/Anthropic instead of Meta/Llama.
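The measurement cshimmin describes above can be illustrated with a toy sketch. This is not the paper's actual logprob-based method; it is a simplified verbatim-match variant, and `model_continue` is a hypothetical stand-in for querying an open-weight model for a greedy continuation:

```python
def memorized_fraction(model_continue, text_tokens, prefix_len=10, span_len=20):
    """Estimate what fraction of a text a model reproduces verbatim.

    model_continue(prefix, n) is a hypothetical callable returning the
    model's n greedily-decoded next tokens for a given prefix.
    Slides over the text in span_len steps, prompts with a prefix, and
    counts how often the model emits the true continuation exactly.
    """
    hits = total = 0
    last_start = len(text_tokens) - prefix_len - span_len
    for start in range(0, last_start + 1, span_len):
        prefix = text_tokens[start:start + prefix_len]
        target = text_tokens[start + prefix_len:start + prefix_len + span_len]
        if model_continue(prefix, span_len) == target:
            hits += 1
        total += 1
    return hits / total if total else 0.0
```

A model that has fully memorized the text scores 1.0 under this metric; one that never reproduces a span scores 0.0. The real study's logprob approach is finer-grained, since it scores how probable each true token is rather than demanding an exact greedy match.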

ninetyninenine 17 hours ago

So if I memorized Harry Potter the physical encoding which definitely exists in my brain is a copyright violation?

  • dvt 17 hours ago

    > the physical encoding which definitely exists in my brain is a copyright violation

    First of all, we don't really know how the brain works. I get that you're being a snarky physicalist, but there are plenty of substance dualists, panpsychists, etc. out there. So, some might say, this is a reductive description of what happens in our brains.

    Second of all, yes, if you tried to publish Harry Potter (even if it was from memory), you would get in trouble for copyright violation.

    • ninetyninenine 17 hours ago

      Right, but the physical encoding already exists in my brain; otherwise, how could I reproduce it in the first place? We may not know how the encoding works, but we do know that an encoding exists, because a decoding is possible.

      My question is… is that in itself a violation of copyright?

      If not, then as long as LLMs don’t make a publication, it shouldn’t be a copyright violation, right? Because we don’t understand how it’s encoded in LLMs either. It is literally the same concept.

      • Jaygles 17 hours ago

        To me the primary difference between the potential "copy" that exists in your brain and a potential "copy" that exists in the LLM, is that you can't make copies and distribute your brain to billions of people.

        If you compressed a copy of HP as a .rar, you couldn't read that as is, but you could press a button and get HP out of it. To distribute that .rar would clearly be a copyright violation.

        Likewise, you can't read whatever of HP exists in the LLM model directly, but you seemingly can press a bunch of buttons and get parts of it out. For some models, maybe you can get the entire thing. And I'm guessing you could train a model whose purpose is to output HP verbatim and get the book out of it as easily as de-compressing a .rar.

        So, the question in my mind is: how similar is distributing the LLM model, or giving access to it, to distributing a .rar of HP? There's likely a spectrum of answers depending on the LLM.

      • numpad0 16 hours ago

        Copyright is actually not so much about the right to copy as it is about redistribution permissions.

        If you trained an LLM on real copyrighted data, benchmarked it, wrote up a report, and then destroyed the weights, that's transformative use and legal in most places.

        If you then put up that gguf on HuggingFace for anyone to download and enjoy, well... IANAL. But maybe that's a bit questionable, especially long term.

      • bitmasher9 17 hours ago

        I don’t think the lawyers are going to buy arguments that compare LLMs with human biology like this.

  • lithiumii 17 hours ago

    You are not selling or distributing copies of your brain.

  • harry8 17 hours ago

    If you perform it from memory in public without paying royalties, then yes, yes it is.

    Should it be? Different question.

  • JKCalhoun 17 hours ago

    The end of "Fahrenheit 451" set a horrible precedent. Damn you, Bradbury!

  • beowulfey 17 hours ago

    Only if you charge someone to reproduce it for them.