Comment by accurrent
Sounds a bit like https://github.com/mitsuba-renderer/mitsuba2
Shameless plug, we use Mitsuba 3/Dr.JIT for image optimization around volumetric 3D printing https://github.com/rgl-epfl/drtvam
Looks really cool! I look forward to reading your paper. Do you know if a recording of the talk is/will be posted somewhere?
We presented this work at SIGGRAPH Asia 2024, but I don't believe the talks were recorded.
We may also run an online workshop about it at some point.
Yes, exactly. I have not looked at Mitsuba 2, but Mitsuba 3 is absolutely along these lines. It is just starting to be picked up by parts of the nonimaging/illumination community; for example, there was a paper last year from Aurele Adam's group at TU Delft where they used it to optimize a "magic window" [1]. Some of the tradeoffs and constraints differ between optical design and (inverse) rendering, but it definitely shows what is possible.
[1] https://doi.org/10.1364/OE.515422