Comment by echelon
It's funny to see as a joke, but you can go the other way with this too. Image editing models and LoRAs for "previz-to-render upscaling" workflows are actually incredibly useful.
I was just writing about this (scroll about halfway down to the images of Sam Altman - though if you like that, do watch the second video):
https://getartcraft.com/news/world-models-for-film
The best model I've found for this, one that almost bakes in full ControlNet-style capability, is oddly gpt-image-1.5. It's absolutely OP at turning low-fidelity renders into final-draft upscales.
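For the curious, the workflow is roughly this shape in code. This is just a sketch, not my exact setup: the model id, file names, prompt, and size are placeholders, and gpt-image-1.5 may need to be swapped in for whatever image model your account exposes.

```python
# Minimal sketch: send a low-fidelity previz render to the OpenAI Images edit
# endpoint and ask for a photoreal "upscale" that preserves the composition.
# Model id, file names, and prompt are placeholder assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("previz_render.png", "rb") as previz:
    result = client.images.edit(
        model="gpt-image-1",  # swap in gpt-image-1.5 if it's available to you
        image=previz,
        prompt=(
            "Re-render this rough previz frame as a photorealistic final shot. "
            "Keep the camera angle, character placement, and composition exactly "
            "the same; upgrade lighting, materials, and detail."
        ),
        size="1536x1024",
    )

with open("final_frame.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```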
Here are some older experiments:
https://imgur.com/a/previz-to-image-gpt-image-1-5-3fq042U
https://imgur.com/gallery/previz-to-image-gpt-image-1-x8t1ij...
I just wish it didn't require invoking such heavyweight, slow, and expensive models to do this. I'm sure open models will do this work soon, though.
You've been able to do this with open models for 1-2 years now. I, for example, have a ComfyUI pipeline that achieves a similar setup. It's of course more work, and you have to dig into the details more. I also have to adjust the pipeline, tweak it, and use different models for each use case. But overall you can definitely achieve that level of control with open models already; it's just not that user friendly.
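Something along these lines, roughly sketched with diffusers rather than my actual ComfyUI graph (model ids, the strength value, and the prompt are placeholders, and the real pipeline differs per use case):

```python
# Rough open-model version of previz-to-render: img2img from the previz frame,
# with a Canny ControlNet locking the composition. Models and settings are
# placeholder assumptions, not a specific production setup.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

previz = Image.open("previz_render.png").convert("RGB")

# Canny edges of the previz act as the control signal that pins down layout.
edges = cv2.Canny(np.array(previz), 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="photorealistic final render, cinematic lighting, detailed materials",
    image=previz,           # img2img source keeps colors and rough layout
    control_image=control,  # edge map locks camera angle and composition
    strength=0.6,           # how far the model may move from the previz
    num_inference_steps=30,
).images[0]
result.save("final_frame_open.png")
```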