Comment by jeroenhd
People like what they already know. When they prompt something and get a realistic-looking Indiana Jones, they're probably happy about it.
To me, this article is further proof that LLMs are a form of lossy storage. People attribute special quality to the loss (the image isn't wrong, it just has different "features" inserted), but at this point there's not much distinguishing a seed + prompt + model from a lossy archive of media, be it text or images, and in the future likely video as well.
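To make the "lossy archive" framing concrete: a tiny (prompt, seed) pair deterministically reproduces the same large output every time, which is exactly how a decompressor behaves. Here's a toy sketch with a hash-seeded PRNG standing in for the model; `generate` and its parameters are made up for illustration, not any real API:

```python
import hashlib
import random

def generate(prompt: str, seed: int, n: int = 16) -> bytes:
    # Toy stand-in for a model: output depends only on (prompt, seed),
    # so the tiny pair acts like an archive entry for the big output.
    key = hashlib.sha256(f"{prompt}:{seed}".encode()).hexdigest()
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(n))

a = generate("indiana jones, photorealistic", 42)
b = generate("indiana jones, photorealistic", 42)
assert a == b      # same (prompt, seed) "decompresses" to the same bytes
assert a != generate("indiana jones, photorealistic", 43)  # new seed, new output
```

The point isn't that models are literally zip files, just that storing (prompt, seed, model hash) and regenerating on demand is functionally a lossy encoding of the original media.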
The craziest thing is that AI seems to have gained a special status that earlier forms of digital reproduction never had (even though those 64 kbps MP3s from Napster were far from perfect reproductions), probably because it's now done by large corporations rather than individuals.
If we're accepting the AI-washing of copyright, we might as well accept pirated movies; those are re-encoded from high-resolution originals as well.
The year is 2030.
A new MCU movie is released and its 60-second trailer is posted on YouTube, but I don't feel like watching the movie because I got bored after Endgame.
YouTube has very strict anti-scraping measures now, so I use deep-scrapper to generate the whole trailer from the thumbnail and title.
I use deep-pirate to generate the whole 3-hour movie from the trailer.
I use deep-watcher to summarize the whole movie in a 60-second video.
I watch the video. It doesn't make any sense. I check the YouTube trailer. It's the same video.