Comment by noduerme 20 hours ago

20 replies

You're conflating trademark with copyright.

Regardless, it's not just copyright laws that are at issue here. This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.

So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him? I can imagine any court asking: "How is this not simply laundering someone's likeness through a third party that claims no image, filter, app, or artist reproduced my client's likeness?"

All seemingly complicated scams come down to a very basic, obvious, even primitive grift. Someone somewhere in a regulatory capacity is either fooled or paid into accepting that no crime was committed. It's just that simple. This, however, is so glaring that even a child could understand the illegality of it. I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley. I think there are legal grounds here to force all of these models to be taken offline.

Additionally, "guardrails" that prevent 1:1 copies of film stills from being reprinted are clearly not only insufficient, they are evidence that the pirates in this case seek to obscure the nature of their piracy. They are evidence that generative AI is not much more than a copyright laundering scheme, and the obsession with these guardrails is evidence of conspiracy, not of some kind of public good.

planb 20 hours ago

> So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him?

No, you can't! But it shouldn't be the tool that prohibits this. You are not allowed to use existing images of Harrison Ford for your commercial and you also will be sued into oblivion by Disney if you paint a picture of Mickey Mouse advertising your soap, so why should it be any different if an AI painted this for you?

  • noduerme 19 hours ago

    Well, precisely. What then is the AI company's justification for charging money to paint a picture of Harrison Ford to its users?

    The justification so far seems to have been loosely based on the idea that derivative artworks are protected as free expression. That argument loses currency if these are not considered derivative but more like highly compressed images in a novel, obfuscated compression format. Layers and layers of neurons holding a copy of Harrison Ford's face is novel, but it's hard to see why it's any different legally than running a JPEG of it through some filters and encoding it in base64. You can't just decode it and use it without attribution.
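
    To make the analogy concrete, here is a minimal sketch (the image bytes below are a hypothetical stand-in, not real photo data): a base64 re-encoding looks nothing like the original, yet it decodes back to a bit-for-bit copy, which is why a novel encoding alone doesn't make something a new work.

```python
import base64

# Hypothetical stand-in for real JPEG data (first bytes mimic a JPEG header).
original = bytes([0xFF, 0xD8, 0xFF, 0xE0]) + b"example image payload"

# The base64 text looks nothing like the image bytes...
encoded = base64.b64encode(original)

# ...but it decodes back to an exact, bit-for-bit copy of the original.
decoded = base64.b64decode(encoded)
print(decoded == original)  # True
```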

    • planb 19 hours ago

      > Well, precisely. What then is the AI company's justification for charging money to paint a picture of Harrison Ford to its users?

      Formulated this way, I see your point. I see the LLM as a tool, just like photoshop. From a legal standpoint, I even think you're right. But from a moral standpoint, my feeling is that it should even be okay for an artist to sell painted pictures of Harrison Ford. But not to sell the same image as posters on ebay. And now my argument falls apart. Thanks for leading my thoughts in this direction...

      • noduerme 19 hours ago

        You raise a really amazing point! One that should get more attention in these discussions on HN! I'm a painter in my spare time. I think it is okay to sit down and paint a picture of Harrison Ford (on velvet, maybe), and sell it on Etsy or something if you want to. Before you accuse me of hypocrisy, let me stipulate: Either way, it would not be ok for someone to buy that painting and use it in an ad campaign that insinuated that their soap had been endorsed by Harrison Ford. As an art director, it has obviously never been okay to ask someone to paint Harrison Ford and use that picture in a soap ad. I go through all kinds of hoops and do tons of checking on my artists' work to make sure that it doesn't violate anyone else's IP, let alone anyone's human likeness.

        But that's all known. My argument for why me selling that painting is okay, and why an AI company with a neural network doing the same thing and selling it would not be okay, is a lot more subtle and goes to a question that I think has not been addressed properly: What's the difference between my neurons seeing a picture of Harrison Ford, and painting it, and artificial neurons owned by a company doing the same thing? What if I traced a photo of Ford and painted it, versus doing his face from memory?

        (As a side note, my friend in art school had an obsession with Jewel, the singer. He painted her dozens of times from memory. He was not an AI, just a really sweet guy).

        To answer why I think it's ok to paint Jewel or Ford, and sell your painting, I kind of have to fall back on three ideas:

        (1) Interpretation: You are not selling a picture of them, you're selling your personal take on your experience of them. My experience of watching Indiana Jones movies as a kid and then making a painting is not the same thing as holding a compressed JPEG file in my head, to the degree that my own cognitive experience has significantly changed my perceptions in ways that will come out in the final artwork, enough to allow for whatever I paint to be based on some kind of personal evolution. The item for sale is not a picture of Harrison Ford, it's my feelings about Harrison Ford.

        (2) Human-centrism: That my neurons are not 1:1 copies of everything I've witnessed. Human brains aren't simply compression algorithms the way LLMs or diffusers are. AI doesn't bring cognitive experience to its replication of art, and if it seems to do so, we have to ask whether that isn't just a simulacrum of multiple styles it stole from other places laid over the art it's being asked to produce. There's an anti-human argument to be made that we do the exact same thing when we paint Indiana Jones after being exposed to Picasso. But here's a thought: we are not a model. Or rather, each of us is a model. Buying my picture of Indiana Jones is a lot like buying my model and a lot less like buying a platonic picture of Harrison Ford.

        (3) Tools, as you brought up. The more primitive the tools used, the harder it is to produce a true copy. It takes a year to make 4 seconds of animation, while an AI can copy it in no time at all... one can argue, by some function of time and effort, that an artwork is at least a product of one's own labor, if not completely original.

        I'm throwing these things out here as a bit of a challenge to the HN community, because I think these are attributes that have been under-discussed in terms of the difference between AI-generated artwork and human art (and possibly a starting point for a human-centric way of understanding the difference).

        I'm really glad you made me think about this and raised the point!

        [edit] Upon re-reading, I think points 1 and 2 are mostly congruent. Thanks for your patience.

    • riskable 13 hours ago

      Your argument is valid but it's mostly irrelevant from a copyright perspective.

      If ChatGPT generates an image of Indiana Jones and distributes it to an end user that is precisely one violation of copyright. A violation that no one but ChatGPT and that end user will know about. From a legal perspective, it's the equivalent of taking a screenshot of an Indiana Jones DVD and sending it to a friend.

      ChatGPT can hold within its memory every copyrighted thing that exists and that would not violate anyone's copyright. What does violate someone's copyright is when an exact replica or easily-identifiable derivative work is actually distributed to people.

      Realistically, OpenAI shouldn't be worried about someone generating an image of Indiana Jones using their tools. It's the end user that ultimately needs to be held responsible for how that image gets used after-the-fact.

      It is perfectly legitimate to capture or generate images of Indiana Jones for your own personal use. For example, if you wanted to generate a parody you would need those copyrighted images to do so (the copyright needs to exist before you can parody it).

      If I were Nintendo, Disney, etc., I wouldn't be bothered by ChatGPT generating things resembling my IP. At worst someone will use them commercially and can be sued for that. More likely, such generated images will only enhance the IP by keeping it active in the minds of people everywhere.

    • jdietrich 17 hours ago

      It's reasonably well established that large neural networks don't contain copies of the training data, therefore their outputs can't be considered copies of anything. The model might contain a conceptual representation of Harrison Ford's face, but that's very different to a verbatim representation of a particular copyrighted image of Harrison Ford. Model weights aren't copyrightable; it's plausible that model outputs aren't copyrightable, but there are some fairly complicated arguments around authorship. Training an AI model on copyrighted work is highly likely to be fair use under US law, but plausibly isn't fair dealing under British law or a permitted use under Article 5 of the EU Copyright and Information Society Directive.

      All of that is entirely separate from trademark law, which would prevent you from using any representation of a trademarked character unless e.g. you can reasonably argue that you are engaged in parody.

      • noduerme 2 hours ago

        From the standpoint of using a human likeness, I don't see the difference between encoding a "conceptual representation" of Ford's face into a model and encoding it into any other digital or analog format from which it can later be decoded into a reasonable facsimile of the original.

        I think that calling it a "conceptual representation" over-complicates the issue. At the very least, the model weights encode a process that can reproduce a copy of their training data. A 300x300 pixel image of Harrison Ford's face is one of roughly 2^2,160,000 possible 24-bit images. Obviously, only a tiny fraction of all possible images is encoded in the model. Is encoding those particular weights into a diffuser which can select that face by a process of refinement really much different than, say, encoding the image into a set of fractal algorithms, or a set of vectors?

        I'd argue that the largest models are akin to a compression method that has simply pre-encoded every word and image they've ingested, such that the "compressed file" is the prompt you give to the AI. Even with billions of weights trained on millions of texts and images, they've only encoded a vanishingly tiny fraction of the entire space. Semantically you could call it something other than a "copy", but functionally how is it any different?
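
        For scale, a quick back-of-the-envelope sketch (assuming standard 24-bit RGB color, and counting raw bitmaps rather than only plausible photographs) of how large that image space actually is:

```python
import math

pixels = 300 * 300                  # 90,000 pixels in a 300x300 image
bits_per_pixel = 24                 # assuming standard 24-bit RGB color
exponent = bits_per_pixel * pixels  # the space holds 2 ** 2,160,000 images

# Count the decimal digits of 2 ** 2,160,000 without building the huge int.
digits = math.floor(exponent * math.log10(2)) + 1
print(digits)  # 650225, i.e. a number over 650,000 digits long
```

        Whatever the exact figure, the point stands: a model's weights can single out only a vanishingly small subset of that space.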

    • stavros 17 hours ago

      Because I can pay a painter to paint me a picture of Harrison Ford, I just can't then use that to sell things.

  • adrianmsmith 19 hours ago

    > you also will be sued into oblivion by Disney if you paint a picture of Mickey Mouse advertising your soap, so why should it be any different if an AI painted this for you?

    If the AI prompt was "produce a picture of Mickey Mouse", I'd agree with you.

    The creators of AI claim their product produces computer-generated images, i.e. generated/created by the computer. Instead it's producing a picture of a real actual person.

    If I contract an artist to produce a picture of a person from their imagination, i.e. not a real person, and they produce a picture of Harrison Ford, then yeah I'd say that's on the artist.

  • Dwedit 10 hours ago

    Mickey Mouse is a bad example because he's now in the public domain, including color images.

AnthonyMouse 17 hours ago

> This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.

The thing is though, there is also a human requesting that. The prompt was chosen specifically to get that result on purpose.

The corporate systems are trying to prevent this, but if you use any of the local models, you don't even have to be coy. Ask it for "photo of Harrison Ford as Indiana Jones" and what do you expect? That's what it's supposed to do. It does what you tell it to do. If you turn your steering wheel to the left, the car goes to the left. It's just a machine. The driver is the one choosing where to go.

  • ikanreed 11 hours ago

    No, I think that's unfair. I, as a user, could very reasonably want a parody or knock-off of Indiana Jones. I could want the Spelunky protagonist. It's hard to argue that certain prompts the author put into this could be read any other way. But why does Nintendo get a monopoly on plumbers with red hats?

    The way AI is coded and trained pushes it constantly towards a bland-predictable mean, but it doesn't HAVE to be that way.

FeepingCreature 19 hours ago

Human appearance does not have enough dimensions to make likeness a viable thing to protect; I don't see how you could do that without say banning Elvis impersonators.

That said:

> I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley.

If you're framing the sides like that, it's pretty clear which I'm on. :)

  • noduerme 19 hours ago

    Interesting you should bring that up:

    https://www.calcalistech.com/ctechnews/article/1517ldjmv

    Loads of lawsuits have been filed by celebrities and their estates over the unauthorized use of their likeness. And in fact, in 2022, Las Vegas banned Elvis impersonators from performing weddings after a threat from the Presley estate's licensing company:

    https://www.dailymail.co.uk/news/article-10872855/Elvis-imag...

    But there are also a couple key differences between putting on a costume and acting like Elvis, and using a picture of Elvis to sell soap.

    One is that a personal artistic performance could be construed as pastiche or parody. But even more importantly, if there's a financial incentive involved in the performance, that incentive has to be aligned more with the parody than with drawing an association to the original. In other words, dressing up as Elvis as a joke, or even singing a song for pay at a wedding, is one thing if it's a prank, another if it's a profession, and yet another if it's a mass-marketing advertisement that intends for people to seriously believe that Elvis endorsed this soap.

    • hinkley 10 hours ago

      I can remember two ad campaigns with an Elvis impersonator, and they used multiple people in both of them. I think we can safely assume that if you represent multiple people as a specific public figure, then a reasonable person must assume that none of them is in fact that person.

      Now of course that leaves out concerns over how much of advertisement is making money off of unreasonable people, which is a concern Congress occasionally pays attention to.

IanCal 17 hours ago

> This, however, is so glaring that even a child could understand the illegality of it

If you have to explain "laundering someone's likeness" to them maybe not, I think it's a frankly bizarre phrase.