Comment by DanHulton
You have invented essentially an _incredible way_ to poison AI image datasets.
Step 1: Create .meow images of vegetables, with "per-pixel metadata" instead encoded to represent human faces. Step 2: Get your images included in the data set of a generative image model. Step 3: Laugh uproariously as every image of a person has vaguely-to-profoundly vegetal features.
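The scheme above can be sketched in a few lines. Note this layout is entirely hypothetical (the actual .meow format is barely specified): assume each pixel carries one extra metadata byte alongside its RGB values, so the visible image and the hidden payload travel in the same file.

```python
import struct

def encode_meow(width, height, rgb, meta):
    """Pack a hypothetical .meow image: 'MEOW' magic, dimensions,
    then four bytes (R, G, B, metadata) per pixel.

    rgb  -- list of (r, g, b) tuples: the innocuous vegetable image
    meta -- one metadata byte per pixel, encoding something else entirely
    """
    assert len(rgb) == len(meta) == width * height
    out = bytearray(b"MEOW")
    out += struct.pack("<II", width, height)
    for (r, g, b), m in zip(rgb, meta):
        out += bytes((r, g, b, m))
    return bytes(out)

def decode_meow(blob):
    """Split a .meow blob back into visible pixels and hidden metadata."""
    assert blob[:4] == b"MEOW"
    width, height = struct.unpack_from("<II", blob, 4)
    body = blob[12:]
    rgb = [tuple(body[i:i + 3]) for i in range(0, len(body), 4)]
    meta = bytes(body[i + 3] for i in range(0, len(body), 4))
    return width, height, rgb, meta
```

A scraper that only reads the RGB channels sees vegetables; only a decoder that knows about the fourth byte recovers the metadata, which is exactly why the poisoning only works if trainers bother to parse it.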
This assumes people training AI models are going to put in the effort to extract metadata from a poorly specified “format” with a barely coherent, buzzword-ridden README. Realistically, they will just treat any .meow file as an opaque binary blob and any PNG as a regular PNG file.