Comment by strogonoff 6 months ago
This was what I meant primarily.
A camera sensor can capture <1% of what we can see, and any display medium (whether paper or screen, SDR or HDR, etc.) can show <1% of what the sensor captured.
(That 1% figure is very rough and will vary with scene conditions, but it is not far off.)
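If you want a back-of-envelope check of that figure, stops (powers of two of luminance) make it easy. The stop counts below are my rough assumptions, not authoritative numbers: around 20 stops for the adapted eye, 13 for a good modern sensor, 7 for an SDR display. A gap of n stops is a 2^-n fraction of the range in linear terms:

```python
# Rough sanity check of the "<1%" claim. Stop counts are ballpark assumptions.
EYE_STOPS = 20      # adapted human vision
SENSOR_STOPS = 13   # good modern camera sensor
DISPLAY_STOPS = 7   # typical SDR display

def linear_fraction(smaller_stops: int, larger_stops: int) -> float:
    """Fraction of the larger linear luminance range that the smaller one spans."""
    return 2 ** (smaller_stops - larger_stops)

print(f"sensor vs eye:     {linear_fraction(SENSOR_STOPS, EYE_STOPS):.2%}")      # 0.78%
print(f"display vs sensor: {linear_fraction(DISPLAY_STOPS, SENSOR_STOPS):.2%}")  # 1.56%
```

With these assumptions both ratios land within shouting distance of 1%, which is all the figure was meant to convey.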
Add to that the fact that what each of us sees is always subjective, shaped by our prior experience as well as shared cultural baggage.
As a result, it is a creative task: we selectively amplify and suppress aspects of the raw data according to what fits the display space, what we think should be seen, and what our audience expects to see.
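To make "selectively amplify and suppress" concrete, here is a toy global tone curve in the spirit of Reinhard's operator (my sketch, not anyone's standard pipeline). The exposure multiplier alone is a creative choice, and it sends the same raw luminances to very different display values:

```python
import numpy as np

def tone_map(luminance: np.ndarray, exposure: float) -> np.ndarray:
    """Compress linear scene luminance into [0, 1) with a Reinhard-style curve.
    `exposure` is the creative knob: there is no "correct" value for it."""
    scaled = luminance * exposure
    return scaled / (1.0 + scaled)

scene = np.array([0.001, 0.1, 1.0, 50.0, 400.0])  # made-up linear luminances
print(tone_map(scene, exposure=0.18))  # darker render, more highlight separation
print(tone_map(scene, exposure=1.0))   # brighter render, midtones pushed up
```

Two photographers handed the same raw file will pick different knobs, and both results are "correct".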
People in this thread claiming there is some objective, standard reference process for compressing/discarding the extra data for display space completely miss this fundamental aspect of perception. There is no reference process even for the basic task of determining what counts as neutral grey.
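Even the textbook "automatic" answers to the neutral-grey question disagree with each other. A toy sketch with made-up data and two classic illuminant-estimation assumptions (grey-world and white-patch), neither of which is a reference:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up scene: a warm colour cast overall, plus one bright bluish patch.
img = rng.uniform(0.0, 1.0, size=(64, 64, 3)) * [1.0, 0.8, 0.6]
img[:8, :8] = [0.6, 0.7, 0.9]

pixels = img.reshape(-1, 3)
grey_world = pixels.mean(axis=0)   # assumption: the scene averages to grey
white_patch = pixels.max(axis=0)   # assumption: the brightest pixel is white

print("grey-world illuminant: ", grey_world / grey_world.max())
print("white-patch illuminant:", white_patch / white_patch.max())
# The two estimates differ, noticeably so in the blue channel; what counts as
# "neutral" depends on which assumption you buy, and neither is a standard.
```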
(As a bonus point, consider how, as more and more of our visual input from the youngest ages comes from looking at bland JPEGs on shining rectangles with tiny dynamic ranges, this shapes our common perception of reality, making it less subjective and more universal. Compare with how, before photography, we did not have any equivalent of a "standard" representation of reality that we must all adhere to; it is not really a standard, but we mistake it for one.)
OK, I get it, but I doubt photographers have full control over that 1%, so it's not purely a creative task; we're constrained by physics too.