Comment by ttoinou 6 months ago
90%, really? What color information gets ejected, exactly? For the sensor part, are you talking about the fact that the photosites don't cover the whole surface? Or that we only capture a short band of wavelengths? Or that the lens only focuses rays onto specific points, makes the rest blurry, and we lose the 3D information?
Cameras capture linear brightness data, proportional to the number of photons that hit each pixel. Human eyes (and film cameras) essentially process the logarithm of brightness. So one of the first things a digital camera can do to throw out a bunch of unneeded data is to take the log of the linear values it records and save that to disk. You lose a bunch of fine gradations of lightness in the brightest parts of the image, but humans can't tell.
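A minimal sketch of that idea in Python/NumPy. The specific curve, the offset, and the bit depths are illustrative assumptions, not any real camera's log format:

    import numpy as np

    # Linear sensor values normalized to [0, 1]: proportional to photon count.
    linear = np.linspace(0.0, 1.0, 2**14)  # e.g. 14-bit sensor data

    # Toy log encoding: spend code values on *ratios* of brightness rather
    # than absolute differences. The small offset avoids log(0).
    def log_encode(x, stops=14):
        return (np.log2(x + 2**-stops) + stops) / stops

    encoded = log_encode(linear)

    # Quantize to 10 bits for storage.
    stored = np.round(encoded * 1023).astype(np.uint16)

    # Near black, neighboring sensor values map to distinct stored codes;
    # near white, many sensor values collapse into the same code -- exactly
    # the fine gradations of highlight brightness that humans can't see.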
Gamma encoding, which has been around since the earliest CRTs, was a very basic solution to this fact. Nowadays it's silly for any high-dynamic-range image recording format not to encode data logarithmically, because that's so much more representative of human vision.
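For comparison, the sRGB transfer function that grew out of that CRT-era gamma does the same kind of perceptual compression, just with a power law instead of a log. A sketch continuing the NumPy example above:

    import numpy as np

    def srgb_encode(x):
        # Piecewise sRGB curve: a short linear toe near black,
        # then roughly a 1/2.4 power curve above it.
        return np.where(x <= 0.0031308,
                        12.92 * x,
                        1.055 * np.power(x, 1 / 2.4) - 0.055)

    # Like the log curve, this allocates more of the available code values
    # to the shadows, where human vision resolves smaller brightness steps.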