miltonlost 2 hours ago

All AI companies know they're breaking the law. They all have prompts effectively saying "Don't show that we broke the law!" That tech companies consistently break the law and nothing happens to them is an indictment of our current economy.

admaiora an hour ago

And it's a question of whether we accept breaking the law for the possibility of having the greatest technological advancement of the 21st century. In my opinion, the legal system has become a blocker for a lot of innovation, not only in AI but elsewhere as well.

  • rpdillon 42 minutes ago

    This is a point that I don't see discussed enough. I think Anthropic decided to purchase books in bulk, tear them apart to scan them, and then destroy those copies. And that's the only source of copyrighted material I've ever heard of that is actually legal to use for training LLMs.

    Most LLMs were trained on vast troves of pirated copyrighted material. Folks point this out, but they don't ever talk about what the alternative was. The content industries, like music, movies, and books, have done nothing to research or make their works available for analysis and innovation, and have in fact fought tooth and nail against those who seek to do so.

    Further, they push the narrative that people who pirate works are stealing from the artists, when in fact the vast majority of the money a customer pays for a piece of copyrighted content goes to the publishing industry. This is essentially the definition of rent seeking.

    Those industries essentially tried to stop innovation entirely, and they tried to use the law to do that (and still do). So, other companies innovated over the copyright holder's objections, and now we have to sort it out in the courts.

    • visarga 27 minutes ago

      > So, other companies innovated over the copyright holder's objections, and now we have to sort it out in the courts.

      I think they try to expand copyright from "protected expression" to "protected patterns and abstractions", or in other words "infringement without substantial similarity". Otherwise why would they sue AI companies? It makes no sense:

      1. If I wanted a specific author, I would get the original works; that is easy. Even if I am cheap, it is still much easier to pirate than to use generative models. In fact, AI is the worst infringement tool ever invented: it almost never reproduces faithfully, and it is slow and expensive to use, far more expensive than copying, which is free, instant, and makes perfect replicas.

      2. If I wanted AI, it means I did not want the original; I wanted something else. So why sue people who don't want the originals? The only reason to use AI is to steer the process toward generating something personalized. It is not to replace the original authors; if that were what I needed, no amount of AI would compare to the originals. If you look carefully, almost all AI output gets published in closed chat rooms, with only a small fraction shared online, and even then not in the same venues as the original authors. So the market-substitution logic is flimsy.

    • sidewndr46 19 minutes ago

      You're using the phrase "actually legal," but the ruling only found that it wasn't piracy after the change: training on the shredded books was not piracy; training on the books they downloaded was. That is where the damages come from.

      Nothing in the ruling says it is legal to start outputting and selling content based off the results of that training process.

      • rpdillon 4 minutes ago

        I think your first paragraph is entirely congruent with my first two paragraphs.

        Your second paragraph is not what I'm discussing right now, and it was not ruled on in the case you're referring to. I fully expect that, when it all gets sorted out, infringement liability will generally fall on the users of the AI rather than on the models themselves.

      • gruez 15 minutes ago

        >Nothing in the ruling says it is legal to start outputting and selling content based off the results of that training process.

        Nothing says it's illegal, either. If anything, the courts are leaning toward it being legal, assuming it's not trained on pirated materials.

        >A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

        https://www.npr.org/2025/09/05/g-s1-87367/anthropic-authors-...

    • Q6T46nT668w6i3m 40 minutes ago

      I don’t follow. You’re punishing the publishing industry by punishing authors?

      • rpdillon 28 minutes ago

        I'm saying that LLMs are worthwhile, useful tools, and that I'm glad we built them, and that the publishing industry, which holds the copyright on the material we would use to train the LLMs, has had no hand in developing them, has done no research, and has actively fought the process at every turn. I have no sympathy for them.

        The authors have been abused by the publishing industry for decades. I think they're just caught in the middle; they were never going to get a payday, whether from AI or from selling books. I think the percentage of authors who are commercially successful is under 1%.

  • Q6T46nT668w6i3m 41 minutes ago

    You’re willing to eliminate the entire concept of intellectual property for the possibility that something might be a technological advancement? If creators are the reason you believe this advancement can be achieved, are you willing to provide them the majority of the profits?

    • thedevilslawyer 32 minutes ago

      That's an absolutely good tradeoff. There's no longer any need for copyright. Patents should go next. Only trademarks can stay.

  • saghm an hour ago

    Without agreeing or disagreeing with your view, I feel like the issue with that paradigm is inconsistency. If an individual "pirates", they get fines and possible jail time, but if a large enough company does it, it gets rewarded by stockholders and at most a slap on the wrist from regulators. If as a society we've decided that the restrictions aren't beneficial, they should be lifted for everyone, not just ignored when convenient for large corporations. As it stands, the punishments scale inversely with the amount of damage that the lawbreaker is actually capable of doing.

lokar an hour ago

The whole industry is based on breaking the law. You don’t get to be Microsoft, Google, Amazon, Meta, etc. without large amounts of illegality.

And the VC ecosystem and valuations are built around this assumption.

Workaccount2 18 minutes ago

Training on copyrighted material is not illegal. Even in the lawsuit against Anthropic it was found to be fair use.

Pirating material is a violation of copyright, which some labs have done, but that has nothing to do with training AI and everything to do with piracy.

mock-possum 2 hours ago

I don’t read this as “don’t show we broke the law,” I read it as “don’t give the user the false impression that there’s any legal issue with this generated content.”

There’s nothing law-breaking about quoting publicly available information. Google isn’t breaking the law when it displays previews of indexed content returned by its search algorithm, and that’s clearly the approach being taken here.

  • Q6T46nT668w6i3m 39 minutes ago

    Masked token prediction is reconstruction. It goes far beyond “quoting.”
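
    A toy sketch of what that objective looks like (hypothetical tokens and a deliberately tiny stand-in model, not any lab's actual training code): the loss is computed against the original tokens, so the training signal literally rewards reconstructing the source text.

    ```python
    import torch
    import torch.nn as nn

    vocab_size, hidden = 100, 32
    emb = nn.Embedding(vocab_size, hidden)         # toy "model": embed + project
    head = nn.Linear(hidden, vocab_size)

    tokens = torch.randint(0, vocab_size, (1, 8))  # stand-in for a real sentence
    masked = tokens.clone()
    masked[0, 3] = 0                               # index 0 plays the role of [MASK]

    logits = head(emb(masked))
    # Cross-entropy against the ORIGINAL tokens: the objective is
    # reconstruction of the source, not paraphrase or "quoting".
    loss = nn.functional.cross_entropy(
        logits.view(-1, vocab_size), tokens.view(-1)
    )
    loss.backward()  # gradients push the model toward reproducing the text
    ```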

blibble an hour ago

and training on mountains of open source code with no attribution is exactly the same

the code models should also be banned, and all output they've generated subject to copyright infringement lawsuits

the sloppers (OpenAI, etc.) may get away with it in the US, but the rest of the developed world has far more stringent copyright laws

and the countries that have massive industries based on copyright aren't about to let them evaporate for the benefit of a handful of US tech-bros