Comment by bastawhiz
Modifying the image in any way (cropping, resizing, etc.) destroys the metadata (see the sketch below). This is necessary in basically every application that interacts with any kind of model that uses images, whether for token counts, file size, or model input limits. (Source: I work at a genai startup)
At inference time, you don't control the inputs, so this is moot. At training time, you already have lots of other metadata to store and preserve that almost certainly won't fit in a steganographic payload, and you often have to manipulate the image before feeding it into your training pipeline. Most pipelines don't simply take arbitrary images (nor do you want them to: plenty of images need to be modified, for instance to remove letterboxing).
The other consideration is that steganography actively introduces artifacts into your assets. If you're training on these images, you'll quickly find that your image generation model, for instance, can no longer produce pure black. If you're adding what's effectively visual noise to every image you train on, the model will learn to generate images with that noise.
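To make the first point concrete, here is a minimal sketch (my own illustration, assuming a naive LSB-in-the-red-channel scheme with Pillow): embed a payload in the pixels, resize the image the way a preprocessing step would, and the payload is unrecoverable.

```python
from PIL import Image

# Naive LSB steganography sketch: stash a payload in the least-significant
# bit of the red channel, then see what a simple resize does to it.

def embed_lsb(img, payload: bytes):
    img = img.convert("RGB")
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    px, w = img.load(), img.width
    for idx, bit in enumerate(bits):
        x, y = idx % w, idx // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)  # overwrite the red LSB with a payload bit
    return img

def extract_lsb(img, n_bytes: int) -> bytes:
    img = img.convert("RGB")
    px, w = img.load(), img.width
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            idx = i * 8 + j
            byte |= (px[idx % w, idx // w][0] & 1) << j  # reassemble bits, LSB first
        out.append(byte)
    return bytes(out)

payload = b"provenance-id-1234"  # hypothetical identifier
stego = embed_lsb(Image.new("RGB", (256, 256), (128, 128, 128)), payload)

print(extract_lsb(stego, len(payload)))    # intact after a lossless round trip
resized = stego.resize((224, 224))         # typical preprocessing for a vision model
print(extract_lsb(resized, len(payload)))  # resampling scrambles the LSBs; payload is gone
```

Robust watermarking schemes try to survive this kind of edit, but anything that literally lives in the pixel values is at the mercy of every resample, crop, and re-encode downstream.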
Was just coming here to say this. Most graphics editors easily preserve EXIF/IPTC data across edits, since that metadata sits in its own block rather than in the pixel data.
Without an entirely dedicated editor or a postprocessing plugin, steganography gets destroyed on modification.
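For contrast, a quick sketch of why EXIF survives the same edit (filenames are placeholders, using Pillow): the metadata block can simply be copied onto the edited file.

```python
from PIL import Image

# EXIF lives in its own metadata segment, so it can be re-attached
# to an edited image verbatim, even after a pixel-level edit.
img = Image.open("photo.jpg")
exif = img.getexif()                                    # read the existing EXIF block
edited = img.resize((img.width // 2, img.height // 2))  # the kind of edit that kills steganography
edited.save("photo_small.jpg", exif=exif)               # same EXIF rides along on save
```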