Comment by Lerc a day ago

I'm not sure this is a problem with overfitting. I'm OK with the model knowing what Indiana Jones or the Predator looks like, with well-remembered details; it just seems that it's generating images from that knowledge in cases where that isn't appropriate.

I wonder if it's a fine-tuning issue where people have provided too many archetypes of the thing they were training towards. That would be the fastest way for the model to learn the idea, but it may also mean the model has implicitly learned to provide not just an instance of a thing but a known archetype of it. I'm guessing that in most RLHF tests, archetypes (regardless of IP status) score quite highly.

masswerk a day ago

What I'm kind of concerned about is that these images will persist and be reinforced by positive feedback. Meaning, an adventurous archeologist will be the very same image, forever. We're entering the epitome of a dogmatic age. (And it will be the same corporate images and narratives, over and over again.)

  • duskwuff a day ago

    And it's worth considering that this issue isn't unique to image generation, either.

    • Lerc a day ago

      Santa didn't always wear red.

      • 52-6F-62 10 hours ago

        Granted, but it's not the best example: red and green are the emblematic colours elves wore in northern European cultures. Santa is somewhat syncretic with Robin Goodfellow or Robin Redbreast, Puck, Puca, etc. It wasn't really a cola invention.

    • masswerk 16 hours ago

      E.g., there are now, I think, entire generations who never played with anything as a child that wasn't tied in with corporate IP in one way or another.

  • baq 19 hours ago

    Welcome to the great age of slop feedback loops.

vkou a day ago

> I'm ok with the model knowing what Indiana Jones or the Predator looks like with well remembered details,

ClosedAI doesn't seem to be OK with it, because they are explicitly censoring characters of more popular IPs. Presumably as a fig leaf against accusations of theft.

  • red75prime 17 hours ago

    If you define feeding of copyrighted material into a non-human learning machine as theft, then sure. Anything that mitigates legal consequences will be a fig leaf.

    The question is "should we define it as such?"

    • reginald78 12 hours ago

      The fact that they have guardrails to try to prevent it means OpenAI itself thinks it is at least shady, or outright illegal in some way. Otherwise, why bother?

    • vkou 16 hours ago

      If a graphic design company were using human artists to do the same thing that OpenAI is doing, it would be sued out of existence.

      But because a computer, and not a human, does it, they get to launder their responsibility.

      • red75prime 15 hours ago

        Doing what? Telling their artists to create what they want regardless of copyright and then filtering the output?

        For humans it doesn't make sense because we have generation and filtering in a single package.

        • vkou 8 hours ago

          In this case the output wasn't filtered. They are just producing images of Harrison Ford, and I don't think they are allowed to use his likeness in that way.

  • Lerc 10 hours ago

    There is a difference between knowing what something looks like and generating an image of that thing.