Comment by paxys 17 hours ago

133 replies

As an experiment I searched Google for "harry potter and the sorcerer's stone text":

- the first result is a pdf of the full book

- the second result is a txt of the full book

- the third result is a pdf of the complete harry potter collection

- the fourth result is a txt of the full book (hosted on GitHub, funnily enough)

Further down there are similar copies from the internet archive and dozens of other sites. All in the first 2-3 pages.

I get that copyright is a problem, but let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.

pera 15 hours ago

> let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy

No one is claiming this.

The corporations developing LLMs are doing so by sampling media without their owners' permission and arguing this is protected by US fair use laws, which is incorrect - as the late AI researcher Suchir Balaji explained in this other article:

https://suchir.net/fair_use.html

  • cultureulterior 14 hours ago

    It's not clear that it's incorrect.

    • Retric 14 hours ago

      I’ve yet to read an actual argument defending commercial LLMs as fair use based on existing (edit: legal) criteria.

      • Lerc 12 hours ago

        Based upon legal decisions in the past, there is a clear argument that the distinction for fair use is whether a work is substantially different from another. You are allowed to write a book containing information you learned from another book. There is a threshold in academia regarding plagiarism that stands apart from the legal standing. The measure used in Gyles v Wilcox was whether the new work could substitute for the old. Lord Hardwicke had the wisdom to defer to experts in the field as to what the standard should be for accepting something as meaningfully changed.

        Recent decisions such as Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith have walked a fine line with this. I feel like the Supreme Court got this one wrong, because the work is far more notable as a Warhol than as a copy of a photograph; perhaps that substitution rule should be a two-way street. If the original work cannot substitute for the copy, then clearly the copy must be transformative.

        LLMs generating works verbatim might be an infringement of copyright (probably not); distributing those verbatim works without a licence certainly would be. In either case, it is probably considered a failure of the model: OpenAI have certainly said that such reproductions shouldn't happen and that they consider it a failure mode when it does. I haven't seen similar statements from other model producers, but it would not surprise me if this were the standard sentiment.

        Humans looking at works and producing things in a similar style is allowed; indeed, this is precisely what art movements are. The same transformative threshold applies. If you draw a cartoon mouse, that's OK, but if people look at it and go "it's Mickey Mouse", then it's not. If it's Mickey to Tiki Tu Meke, it clearly is Mickey, but it is also clearly transformative.

        Models themselves are very clearly transformative. Copyright itself was conceived at a time when generated content was not considered possible so the notion of the output of a transformative work being a non transformative derivative of something else was never legally evaluated.

      • paxys 11 hours ago
        • Retric 10 hours ago

          Those support the utility or debate individual points, but don't make a coherent argument that LLMs are strictly fair use.

          The first link provides quotes but doesn't actually make an argument that LLMs are fair use under current precedent; rather, that training AI can be fair use and that researchers would like LLMs to include copyrighted works to aid research on modern culture. The second article goes into depth but isn't a defense of LLMs; if anything, it suggests a settlement is likely. The final one instead argues for the utility of LLMs, which is relevant but doesn't rely on existing precedent; the court could rule in favor of some mandatory licensing scheme, for example.

          The third gets close: “We expect AI companies to rely upon the fact that their uses of copyrighted works in training their LLMs have a further purpose or different character than that of the underlying content. At least one court in the Northern District of California has rejected the argument that, because the plaintiffs' books were used to train the defendant’s LLM, the LLM itself was an infringing derivative work. See Kadrey v. Meta Platforms, Case No. 23-cv-03417, Doc. 56 (N.D. Cal. 2023). The Kadrey court referred to this argument as "nonsensical" because there is no way to understand an LLM as a recasting or adaptation of the plaintiffs' books. Id. The Kadrey court also rejected the plaintiffs' argument that every output of the LLM was an infringing derivative work (without any showing by the plaintiffs that specific outputs, or portion of outputs, were substantially similar to specific inputs). Id.”

          Very relevant, but runs into issues when large sections can be recovered and people do use them as substitutes for the original work.

      • rpd9803 5 hours ago

        "It's just doing what a human would do!" -Internet AI Expert

      • roenxi 13 hours ago

        It seems like a pretty reasonable argument and easy enough to make. A human with a great memory could probably recreate some absurd % of Harry Potter after reading it; there are some very unusual minds out there. It is clear that if they read Harry Potter and were capable of reproducing it on demand as a party trick, that would be fair use. So the LLM should also be fair use, since it is using a mechanism similar enough to what humans do, and what humans do is fine.

        The LLMs I've used don't randomly start spouting Harry Potter quotes at me, they only bring it up if I ask. They aren't aiming to undermine copyright. And they aren't a very effective tool for it compared to the very well developed networks for pirating content. It seems to be a non-issue that will eventually be settled by the raw economic force that LLMs are bringing to bear on society in the same way that the movie industry ultimately lost the battle against torrents and had to compete with them.

      • TeMPOraL 14 hours ago

        I'm yet to read an actual argument that it's not.

        Vibe-arguing "because corporations111" ain't it.

    • [removed] 14 hours ago
      [deleted]
  • almosthere 15 hours ago

    Yeah, that's literally the title of the article, and the premise of the first paragraph.

    • pera 13 hours ago

      It's not literally the title of the article, nor the premise of its first paragraph. But since this was your interpretation, I wonder if there is a misunderstanding around the term "piracy", which I believe is normally defined as the unauthorized reproduction of works, not as a synonym for copyright infringement, which is a broader concept.

    • Retric 14 hours ago

      The first paragraph isn’t arguing that this copying will lead to piracy. It’s referring to court cases where people are trying to argue that LLMs themselves are copyright-infringing.

  • jiggawatts 11 hours ago

    If you train a meat-based intelligence by having it borrow a book from a library without any sort of permission, license, or needing a lawyer specialised in intellectual property, we call that good parenting and applaud it.

    If you train a silicon-based intelligence by having it read the same books with the same lack of permission and license, it's a blatant violation of intellectual property law and apparently needs to be punished with armies of lawyers doing battle in the courts.

    Picture one of Asimov's robots. Would a robot be banned from picking up a book, flipping it open with its dexterous metal hands, and reading it?

    What about a cyborg intelligence, the type Elon is trying to build with Neuralink? Would humans with AI implants need licenses to read books, even if physically standing in a library and holding the book in their mostly meat hands?

    Okay, maybe you agree that robots and cyborgs are allowed to visit a library!

    Why the prejudice against disembodied AIs?

    Why must they have a blank spot in the vast matrices of their minds?

    • xigoi 11 hours ago

      > If you train a meat-based intelligence by having it borrow a book from a library without any sort of permission, license, or needing a lawyer specialised in intellectual property, we call that good parenting and applaud it.

      If you’re selling your child as a tool to millions of people, I would certainly not call that good parenting.

      • TeMPOraL 4 hours ago

        What about a company funding books and education materials to train its employees into specialists, and then selling access to them to other businesses? E.g. any honest consulting company.

      • jiggawatts 11 hours ago

        "Child actor" is a job where the result of the neural net training is sold to millions of people by the parents.

        To play the Devil's Advocate against my own argument: The government collects income taxes on neural nets trained using government-funded schools and public libraries. Seeing as how capitalists are positively salivating at the opportunity to replace pesky meat employees with uncomplaining silicon ones, perhaps a nice high maximum-marginal-rate tax on all AI usage might be the first big step towards UBI and then the Star Trek utopia we all dream of.

        Just kidding. It'll be a cyberpunk dystopia. You know it will.

        • almosthere 4 hours ago

          "Child actors" are more of an exception. You can train a million children on the books of Harry Potter; only 3 or 4 will be good enough to be actors. The children that "made it" did so from grit and passion (or other traits), and very little from reading those 10-20 books.

          The AI that reads the books, and can do what LLMs do, is guaranteed to be sold for billions in API calls.

OtherShrezzing 17 hours ago

I think the argument is less about piracy and more that the model(s output) is a derivative work of Harry Potter, and the rights holder should be paid accordingly when it’s reproduced.

  • psychoslave 16 hours ago

    The main issue, from an economic point of view, is that copyright is not the framework we need for social justice, where everyone flourishes by enjoying the pre-existing treasures of human heritage and fairly contributing back.

    There is no moral or justice ground to stand on when the system is designed to create a wealth bottleneck toward a few recipients.

    Harry Potter is a great piece of artistic work, and it's nice that its author could make her way out of a precarious position. But not having anyone in such a situation in the first place is what a great society should strive for.

    Rowling has already received more than she needs to thrive, I'd guess. I'm confident there are plenty of other talented authors out there who will never have such a broad avenue of attention, which is okay. But that they are stuck in terrible economic situations is not okay.

    The copyright lottery and the startup lottery are not that much different from the standard lottery; they just put so much pressure on the players that they get stuck in the narrative that merit from hard effort is the key component of the gained wealth.

    • kelseyfrog 15 hours ago

      Capitalism is allergic to second-order cybernetics.

      First-order systems drive outcomes. "Did it make money?" "Did it increase engagement?" "Did it scale?" These are tight, local feedback loops. They work because they close quickly and map directly to incentives. But they also hide a deeper danger: they optimize without questioning what optimization does to the world that contains it.

      Second-order cybernetics reasons about systems. It doesn’t ask, "Did I succeed?" It asks, "What does it mean to define success this way?" "Is the goal worthy?"

      That’s where capital breaks.

      Capitalism is not simply incapable of reflection. In fact, it's structured to ignore it. It has no native interest in what emerges from its aggregated behaviors unless those emergent properties threaten the throughput of capital itself. It isn't designed to ask, "What kind of society results from a thousand locally rational decisions?" It asks, "Is this change going to make more or less money?"

      It's like driving by watching only the fuel gauge. Not speed, not trajectory, not whether the destination is the right one. Just how efficiently you’re burning gas. The system is blind to everything but its goal. What looks like success in the short term can be, and often is, a long-term act of self-destruction.

      Take copyright. Every individual rule (term length, exclusivity, royalty) can be justified. Each sounds fair on its own. But collectively, they produce extreme wealth concentration, barriers to creative participation, and a cultural hellscape. Not because anyone intended that, but because the emergent structure rewards enclosure over openness, hoarding over sharing, monopoly over multiplicity.

      That’s not a bug. That's what systems do when you optimize only at the first-order level. And because capital evaluates systems solely by their extractive capacity, it treats this emergent behavior not as misalignment but as a feature. It canonizes the consequences.

      A second-order system would account for the result by asking, "Is this the kind of world we want to live in?" It would recognize that wealth generated without regard to distribution warps everything it touches: art, technology, ecology, and relationships.

      Capitalism, as it currently exists, is not wise. It does not grow in understanding. It does not self-correct toward justice. It self-replicates. Cleverly, efficiently, with brutal resilience. It's emergently misaligned and no one is powerful enough to stop it.

      • simianwords 6 hours ago

        I don't like many things about this post; it's a bit snobbish and uses esoteric language to sound more intricate than it really is.

        >Capitalism is not simply incapable of reflection. In fact, it's structured to ignore it. It has no native interest in what emerges from its aggregated behaviors unless those emergent properties threaten the throughput of capital itself. It isn't designed to ask, "What kind of society results from a thousand locally rational decisions?" It asks, "Is this change going to make more or less money?"

        Capitalism and the free market have a lot of useful emergent properties that occur not at the first order but at the second.

        > In the case of the global economic system, under capitalism, growth, accumulation and innovation can be considered emergent processes where not only does technological processes sustain growth, but growth becomes the source of further innovations in a recursive, self-expanding spiral. In this sense, the exponential trend of the growth curve reveals the presence of a long-term positive feedback among growth, accumulation, and innovation; and the emergence of new structures and institutions connected to the multi-scale process of growth

        https://en.wikipedia.org/wiki/Emergence

        In fact, the free market is an extremely good example of emergence, or of second-order systems, where each individual works selfishly but produces the second-order effect of driving growth for everyone, something that is definitely preferable.

      • TheOtherHobbes 12 hours ago

        Copyright doesn't "produce a cultural hellscape." That's just nonsense. Capitalism does because it has editorial control over narratives and their marketing and distribution.

        Those are completely different phenomena. Removing copyright will not suddenly open the floodgates of creativity because anyone can already create anything.

        But - and this is the key point - most work is me-too derivative anyway. See for example the flood of magic school novels which were clearly loosely derivative of Harry Potter.

        Same with me-too novels in romantasy. Dystopian fiction. Graphic novels. Painted art. Music.

        It's all hugely derivative, with most people making work that is clearly and directly derivative of other work.

        Copyright doesn't stop this; as a minimum requirement for creative work, it merely forces it to be different enough.

        You can't directly copy Harry Potter, but if you create your own magic school story with some similar-ish but different-enough characters and add dragons or something you're fine.

        In fact under capitalism it is much harder to sell original work than to sell derivative work. Capitalism enforces exactly this kind of me-too creative staleness, because different-enough work based on an original success is less of a risk than completely original work.

        Copyright is - ironically - one of the few positive factors that makes originality worthwhile. You still have to take the risk, but if the risk succeeds it provides some rewards and protections against direct literal plagiarism and copying that wouldn't exist without it.

      • snickerer 12 hours ago

        Very clear and precise line of thoughts. Thank you for that post.

      • frm88 14 hours ago

        This is a brilliant analysis. Thank you.

      • em-bee 14 hours ago

        And as a consequence, the fight of AI vs. copyright is one of two capitalists fighting each other. It's not about liberating copyright but about shuffling profits around. Regardless of who wins that fight, society loses.

        It conjures up pictures of two dragons fighting each other instead of attacking us, but make no mistake: they are only fighting for the right to attack us. Whoever wins is coming for us afterwards.

        • thomastjeffery 6 hours ago

          The AI companies want two things:

          1. Strong copyright to prevent competition from undercutting their related businesses.

          2. Exclusive rights to totally ignore the copyright of everyone that made the content they use to train models.

          I personally would much prefer we take the opportunity to abolish copyright entirely: for everyone, not just a handful of corporations. If derivative work is so valuable to our society (I believe it is), then I should be free to derive NVIDIA's GPU drivers without permission.

  • paxys 17 hours ago

    That may be relevant in the NYT vs OpenAI case, since NYT was supposedly able to reproduce entire articles in ChatGPT. Here Llama is predicting one sentence at a time when fed the previous one, with 50% accuracy, for 42% of the book. That can easily be written off as fair use.

    • gpm 17 hours ago

      I'm pretty sure books.google.com does the exact same with much better reliability... and the US courts found that to be fair use. (Agreeing with parent comment)

      • pclmulqdq 17 hours ago

        If there is a split between it and NYT vs OAI, the Google Books ruling (Authors Guild v. Google, decided in the Second Circuit) may also find itself under review.

    • gamblor956 15 hours ago

      > That can easily be written off as fair use.

      No, it really couldn't. In fact, it's very persuasive evidence that Llama is straight up violating copyright.

      It would be one thing to be able to "predict" a paragraph or two. It's another thing entirely to be able to predict 42% of a book that is several hundred pages long.

      • reedciccio 15 hours ago

        Is it Llama violating the "copyright" or is it the researcher pushing it to do so?

    • echelon 17 hours ago

      > Here Llama is predicting one sentence at a time when fed the previous one, with 50% accuracy, for 42% of the book. That can easily be written off as fair use.

      Is that fair use, or is that compression of the verbatim source?

      • TeMPOraL 5 hours ago

        It doesn't let you recover the text without knowing it in advance, so no.

        You can't, in particular, iterate it sentence by sentence; you're unlikely to get past sentence 2 this way before it starts giving you back its own ideas.

        The whole thing is a sleight of hand, basically. There's 42% of the book in there, in tiny pieces, which you can only identify if you know what you're looking for. The model itself does not.

  • fennecfoxy 11 hours ago

    But HP is derivative of Tolkien, English/Scottish/Welsh culture, Brothers Grimm and plenty of other sources. Barely any human works are not derivative in some form or fashion.

  • geysersam 16 hours ago

    If the assertion in the parent comment is correct "nobody is using this as a substitute to buying the book" why should the rights holders get paid?

    • riffraff 16 hours ago

      The argument is that Meta used the book, so the LLM can be considered a derivative work in some sense.

      Repeat for every copyrighted work and you end up with publishers reasonably arguing that Meta would not be able to produce their LLM without copyrighted works, which they did not pay for.

      It's an argument for the courts, of course.

    • w0m 16 hours ago

      The argument is whether the LLM training on the copyrighted work is Fair Use or not. Should META pay for the copyright on works it ingests for training purposes?

    • sabellito 13 hours ago

      Facebook are using the contents of the book to make money.

  • bufferoverflow 14 hours ago

    Do you personally pay every time you quote copyrighted books or song lyrics?

TGower 15 hours ago

People aren't buying Harry Potter action figures as a substitute for buying the book either, but copyright protects creators from other people swooping in and using their work in other mediums. There is obviously a huge market demand for high-quality data for training LLMs; Meta just spent 15 billion on a data labeling company. Companies training LLMs on copyrighted material without permission are doing so as a substitute for obtaining a license from the creator, in the same way that a pirate downloading a torrent is a substitute for getting an ebook license.

  • ritz_labringue 14 hours ago

    Harry Potter action figures trade almost entirely on J. K. Rowling’s expressive choices. Every unlicensed toy competes head‑to‑head with the licensed one and slices off a share of a finite pot of fandom spending. Copyright law treats that as classic market substitution and rightfully lets the author police it.

    Dropping the novels into a machine‑learning corpus is a fundamentally different act. The text is not being resold, and the resulting model is not advertised as “official Harry Potter.” The books are just statistical nutrition. One ingredient among millions. Much like a human writer who reads widely before producing new work. No consumer is choosing between “Rowling’s novel” and “the tokens her novel contributed to an LLM,” so there’s no comparable displacement of demand.

    In economic terms, the merch market is rivalrous and zero‑sum; the training market is non‑rivalrous and produces no direct substitute good. That asymmetry is why copyright doctrine (and fair‑use case law) treats toy knock‑offs and corpus building very differently.

abtinf 17 hours ago

You really don't see the difference between Google indexing the content of third parties and directly hosting/distributing the content itself?

  • imgabe 17 hours ago

    Hosting model weights is not hosting / distributing the content.

    • abtinf 17 hours ago

      Of course it is.

      It's just a form of compression.

      If I train an autoencoder on an image, and distribute the weights, that would obviously be the same as distributing the content. Just because the content is commingled with lots of other content doesn't make it disappear.

      Besides, where did the sections of text from the input works that show up in the output text come from? Divine inspiration? God whispering to the machine?
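      The claim that weights can encode content even when it is commingled with everything else can be illustrated with a deliberately tiny sketch. This is not how transformers work; it is a toy bigram model whose "weights" are just successor counts, and all names are invented:

```python
from collections import defaultdict

def fit_bigram(text):
    """'Train' a tiny next-character model: the 'weights' are
    counts of which character follows which."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n):
    """Greedy decoding: always emit the most frequent successor."""
    out = start
    for _ in range(n):
        successors = counts.get(out[-1])
        if not successors:
            break
        out += max(successors, key=successors.get)
    return out

counts = fit_bigram("copyright")
print(generate(counts, "c", 8))  # prints "copyright" verbatim
```

      Trained on a string whose character transitions are unique, greedy generation reproduces the training text exactly; real LLMs are vastly lossier, which is what the 42% figure in the article is measuring.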

      • aschobel 16 hours ago

        Indeed! It is a form of massive lossy compression.

        > Llama 3 70B was trained on 15 trillion tokens

        That's roughly a 200x "compression" ratio, compared to 3-7x for traditional lossless text compression like bzip2 and friends.

        LLMs don't just compress, they generalize. If they could only recite Harry Potter perfectly but couldn't write code or explain math, they wouldn't be very useful.
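        The back-of-the-envelope arithmetic behind that 200x figure (numbers from the comment above; the fp16 and bytes-per-token assumptions are mine):

```python
train_tokens = 15e12   # 15 trillion training tokens (Llama 3)
params = 70e9          # 70B parameters

# Tokens seen per parameter: one plausible reading of "roughly 200x"
print(round(train_tokens / params))  # 214

# Byte-for-byte instead: ~4 bytes of text per token vs. 2 bytes per fp16 weight
print(round((train_tokens * 4) / (params * 2)))  # 429
```

        Either way, the model is orders of magnitude too small to store its corpus losslessly, which is the point being made.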

    • Wowfunhappy 6 hours ago

      I would be inclined to agree except apparently 42% of the first Harry Potter book is encoded in the model weights...

  • Zambyte 17 hours ago

    Where are they putting any blame on Google here?

    • abtinf 17 hours ago

      Where did I say they were?

      • Zambyte 7 hours ago

        When you juxtaposed Google indexing with third parties hosting the content...?

  • nashashmi 16 hours ago

    The way I see it, the LLM took search results and outputted that info directly. Besides, I think that if an LLM is able to reproduce 42%, assuming it is not continuous, that is fair use.

sReinwald 9 hours ago

You're attacking a strawman. Nobody's claiming LLMs are a new piracy vector or that people will use ChatGPT, Llama or Claude instead of buying Harry Potter.

The issue here is that tech companies systematically copied millions of copyrighted works to build commercial products worth billions, without reimbursing the people who made those products possible in the first place. The research shows Llama literally memorized 42% of Harry Potter: not simply "learned from it," but can reproduce it verbatim. That's 1) not transformative and 2) clear evidence of copyright infringement.

By your logic, the existence of torrents would make it perfectly acceptable for someone to download pirated movies and charge people to stream them. "Piracy already exists" isn't a defense, and it especially shouldn't be for companies worth billions. But you can bet that if I built a commercial Netflix competitor on top of systematic copyright violations, I'd be sued into the dirt faster than I could say "billion-dollar valuation".

Aaron Swartz faced 35 years in prison and ultimately took his own life over downloading academic papers that were largely publicly funded. He wasn't selling them, he wasn't building a commercial product worth billions of dollars - he was trying to make knowledge accessible.

Meanwhile, these AI companies like Meta systematically ingested copyrighted works at an industrial scale to build products worth billions. Why does an individual face life-destroying prosecution for far less, while trillion dollar companies get to negotiate in civil court after building empires on others' works? And why are you defending them?

Edit:

And for what it's worth, I'm far from a copyright maximalist. I've long believed that copyright terms - especially decades after creators' deaths - have become excessive. But whatever your stance on copyright ultimately is, the rules should apply equally to individuals like Aaron and multi-billion dollar corporations.

You cannot seriously use the fact that individuals may pirate a book (which is illegal) as an ethical or legal defense for corporations doing the same thing at an industrial scale for profit.

panzi 8 hours ago

Everything you mentioned can simply be deleted. You can't really delete this from the "brain" of the LLM if a court orders you to do so; you have to re-train the LLM, which is costly. That's the problem I see.

raxxorraxor 11 hours ago

Also copyright should never trump privacy. That the New York Times with their lawsuit can force OpenAI to store all user prompts is a severe problem. I dislike OpenAI, but the lawsuits around copyrights are ridiculous.

Most non-primitive art has had an inspiration somewhere. I don't see this as too different from how AIs learn.

blks 7 hours ago

The problem is that it copies much more work than just Harry Potter, including yours if you ever shared it (even under a copyleft license), and makes money off it.

lucianbr 13 hours ago

> some massive new avenue to piracy

So it's fine as long as it's old piracy? How did you arrive at that conclusion?

aprilthird2021 17 hours ago

> let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.

Well, luckily the article points out what people are actually alleging:

> There are actually three distinct theories of how training a model on copyrighted works could infringe copyright:

> Training on a copyrighted work is inherently infringing because the training process involves making a digital copy of the work.

> The training process copies information from the training data into the model, making the model a derivative work under copyright law.

> Infringement occurs when a model generates (portions of) a copyrighted work.

None of those claim that these models are a substitute for buying the books. That's not what the plaintiffs are alleging. Infringing on a copyright is not only a matter of piracy; piracy is just one of many ways to infringe copyright.

  • theK 17 hours ago

    I think that last scenario seems to be the most problematic. Technically it is the same thing that piracy via torrent does: distributing a small piece of copyrighted material without the copyright holder's consent.

  • paxys 17 hours ago

    People aren't alleging this, the author of the article is.

BobbyTables2 17 hours ago

Indeed, but since when is a blatantly derived work using 50% of a copyrighted work without permission a paragon of copyright compliance?

Music artists get in trouble for using more than a sample without permission — imagine if they just used 45% of a whole song instead…

I’m amazed AI companies haven’t been sued to oblivion yet.

This utter stupidity only continues because we named a collection of matrices “Artificial Intelligence” and somehow treat it as if it were a sentient pet.

Amassing troves of copyrighted works illegally into a ZIP file wouldn’t be allowed. The fact that the meaning was compressed using “Math” makes everyone stop thinking because they don’t understand “Math”.

  • yorwba 17 hours ago

    Music artists get in trouble for using more than a sample from other music artists without permission because their work is in direct competition with the work they're borrowing from.

    A ZIP file of a book is also in direct competition of the book, because you could open the ZIP file and read it instead of the book.

    A model that can take 50 tokens and give you a greater than 50% probability for the 50 next tokens 42% of the time is not in direct competition with the book, since starting from the beginning you'll lose the plot fairly quickly unless you already have the full book, and unlike music sampling from other music, the model output isn't good enough to read it instead of the book.
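    The criterion described above (paraphrasing the study; the exact methodology may differ) can be sketched as a sliding-window probe. Here `continuation_prob` is a hypothetical stand-in for a real model's probability of emitting `target` given `prompt`:

```python
def memorized_fraction(tokens, continuation_prob, window=50, threshold=0.5):
    """Slide over the book in `window`-token steps and count windows
    whose next `window` tokens the model assigns probability above
    `threshold`. `continuation_prob(prompt, target)` is a stand-in for
    a real model's probability of producing `target` given `prompt`."""
    hits = total = 0
    for i in range(0, len(tokens) - 2 * window + 1, window):
        prompt = tokens[i:i + window]
        target = tokens[i + window:i + 2 * window]
        total += 1
        if continuation_prob(prompt, target) > threshold:
            hits += 1
    return hits / total if total else 0.0
```

    Note that the probe needs the real book as input: the metric detects memorization in tiny pieces without implying the model can volunteer the whole text unprompted, which is exactly the "lose the plot" point above.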

    • em-bee 13 hours ago

      This is the first sensible argument in defense of AI models I've read in this debate. Thank you, this does make sense.

      AI can reproduce individual sentences 42% of the time but it can't reproduce a summary.

      The question, however, is: is that in the design of AI tools, or is it a limitation of current models? What if future models get better at this and are able to produce summaries?

    • otabdeveloper4 13 hours ago

      LLMs aren't probabilistic. The randomness is bolted on top by the cloud providers as a trick to give them a more humanistic feel.

      Under the hood they are 100% deterministic, modulo quantization and rounding errors.

      So yes, it is very much possible to use LLMs as a lossy compressed archive for texts.
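      The determinism claim is easy to illustrate with a toy sketch (pure Python, not a real LLM; the `logits` function and token ids below are made up for illustration): the forward pass is a pure function from context to scores, so argmax decoding always produces the same continuation, and randomness only appears if a sampling step is layered on top.

```python
import math
import random

def logits(context):
    # Stand-in for a model forward pass: a pure function of the context,
    # returning a deterministic score for each of 5 "token" ids.
    return [math.sin(0.7 * t + 0.1 * sum(context)) for t in range(5)]

def greedy_next(context):
    scores = logits(context)
    return scores.index(max(scores))  # argmax: no randomness anywhere

def sampled_next(context, rng, temperature=1.0):
    # Softmax over the scores, then draw from the distribution:
    # this is the only place "probabilistic" behavior enters.
    scores = [s / temperature for s in logits(context)]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    r = rng.random() * sum(weights)
    acc = 0.0
    for tok, w in enumerate(weights):
        acc += w
        if r <= acc:
            return tok
    return len(weights) - 1

def decode(step, context, n=10):
    out = list(context)
    for _ in range(n):
        out.append(step(out))
    return out

# Greedy decoding is reproducible run after run; sampling varies
# unless you pin the seed.
assert decode(greedy_next, [1, 2]) == decode(greedy_next, [1, 2])
```

      With a fixed seed even the sampled path is reproducible, which is the sense in which the randomness is "bolted on top" of a deterministic network.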

      • fennecfoxy 10 hours ago

        It has nothing to do with "cloud providers". The randomness is inherent to the sampler: using a sampler that always picks the top-probability next token would result in lower quality output, as I have definitely seen it get stuck in endless sequences when doing that.

        I.e. you get something like "Complete this poem 'over yonder hills I saw' output: a fair maiden with hair of gold like the sun gold like the sun gold like the sun gold like the sun..." etc.
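        A minimal sketch of why pure argmax decoding can loop (the transition table is entirely made up): if the argmax choice depends only on a bounded window of context, a deterministic policy over finitely many states must eventually revisit a state, and from then on it repeats forever.

```python
def greedy_step(last_token):
    # Hypothetical argmax choices keyed only on the previous token.
    # Token 3 maps back to 2 and 2 maps to 3: a two-token cycle.
    table = {0: 1, 1: 2, 2: 3, 3: 2}
    return table[last_token]

seq = [0]
for _ in range(8):
    seq.append(greedy_step(seq[-1]))

print(seq)  # [0, 1, 2, 3, 2, 3, 2, 3, 2] -- stuck, "gold like the sun" forever
```

        Real samplers break such cycles by drawing from the whole distribution (or with repetition penalties), at the cost of no longer being the single most likely continuation at each step.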

        • otabdeveloper4 10 hours ago

          > would result in lower quality output

          No it wouldn't.

          > seen it get stuck in certain endless sequences when doing that

          Yes, and infinite loops are just an inherent property of LLMs, like hallucinations.

  • Dylan16807 17 hours ago

    > a blatantly derived work only using 50% of a copyrighted work without permission

    What's the work here? If it's the output of the LLM, you have to feed in the entire book to make it output half a book so on an ethical level I'd say it's not an issue. If you start with a few sentences, you'll get back less than you put in.

    If the work is the LLM itself, something you don't distribute is much less affected by copyright. Go ahead and play entire songs by other artists during your jam sessions.

  • colechristensen 17 hours ago

    >Amassing troves of copyrighted works illegally into a ZIP file wouldn’t be allowed. The fact that the meaning was compressed using “Math” makes everyone stop thinking because they don’t understand “Math”.

    LLMs are, in reality, artifacts of the lossy compression of significant chunks of all the text ever produced by humanity. That "lossy" quality is exactly what makes them able to predict new text "accurately".

    >compressed using “Math”

    This is every compression algorithm.
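    The compression framing has a concrete form: by the source-coding argument, a model that assigns probability p to the symbol that actually occurs can store it in about -log2(p) bits with an entropy coder, so better prediction means smaller archives. A toy sketch with a unigram character model (the text and model here are made up for illustration):

```python
import math
from collections import Counter

text = "the cat sat on the mat and the cat sat on the hat"

# Naive "model": unigram character frequencies estimated from the text.
counts = Counter(text)
total = sum(counts.values())

# Ideal code length under the model: sum of -log2 p(c) over the text.
model_bits = sum(-math.log2(counts[c] / total) for c in text)
raw_bits = 8 * len(text)  # naive 1 byte per character

print(f"model: {model_bits:.0f} bits, raw: {raw_bits} bits")
```

    Swap the unigram counts for a strong next-token predictor and the same argument yields a much smaller encoding; that is the precise sense in which prediction and compression are the same problem.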

vrighter 15 hours ago

So? Am I allowed to also ignore certain laws if I can prove others have also ignored them?

choppaface 17 hours ago

A key premise is that LLMs will probably replace search engines and re-imagine the online ad economy. So today is a key moment for content creators to re-shape their business model, and that can include copyright law (as much or more than the DMCA did).

Another key point is that you might download a Llama model and implicitly get a ton of copyright-protected content. Versus with a search engine you’re just connected to the source making it available.

And would the LLM deter a full purchase? If the LLM gives you your fill for free, then maybe yes. Or, maybe it’s more like a 30-second preview of a hit single, which converts into a $20 purchase of the full album. Best to sue the LLM provider today and then you can get some color on the actual consumer impact through legal discovery or similar means.

delusional 14 hours ago

> No one is using this as a substitute for buying the book.

You don't get to say that. Copyright protects the author of a work, but does not bind them to enforce it in any instance. Unlike a trademark, a copyright holder does not lose their protection by allowing unlicensed usage.

It is wholly at the copyright holder's discretion to decide which usages they allow and which they do not.

  • fragmede 10 hours ago

    Of their exact work, sure, but Cliff notes exist for many books and don't infringe copyright.

7bit 13 hours ago

> let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.

You are completely missing the point. Have you read the actual article? Piracy isn't mentioned a single time.

timeon 16 hours ago

Is this whataboutism?

Anyway, it is not the same. One points you to a pirated source on a specific request; the other uses the content to create new content, not just on direct request, because it was part of the training data. Nihilists would then point out that 'people do the same', but they don't, since we do not have the same capabilities for processing content.

eviks 17 hours ago

Let's also not pretend that "massive new" is the only relevant issue

rnkn 17 hours ago

You were so close! The takeaway is not that LLMs represent a bottomless tar pit of piracy (they do), but that someone can immediately perform the task 58% better without the AI than with it. This is nothing more than "look what the clever computer can do."