
Understanding and disambiguation of gamma correction (dear god help please)



XanderLust
03-17-2014, 02:12 PM
For some background:
I'm one of the owners of an aerial video company called CopterOptics. We fly anything from a GoPro up to a RED Dragon. Unlike camera work on the ground, however, we don't really get to frame our shots until we're in the air. What might be good light settings on the ground may not work so well up high; high enough, everything just looks like white building tops. We are increasingly working with REDs and with companies interested in motion tracking combined with VFX compositing.

Because of this I'm trying really hard to master light settings for our cameras. So this brings me to the real question:

WTF is up with the various explanations of gamma correction?

I have read about five articles on the subject, but I'm noticing a disconnect between two very different pieces of information that seem separate yet get used interchangeably:

1) A camera captures light in a linear fashion, but our eyes do not. Our eyes respond to light along a roughly logarithmic curve, effectively raising the blacks and compressing the whites. When some people talk about gamma correction, they are referring to how an image is adjusted to recreate the light in a way we perceive as correct.

2) There is a loss from source input to display output. This isn't technically true of flat panels, but it was physical reality for CRTs: an electron gun fed a linear voltage input produces a light output that follows roughly a 2.2 power curve, so the signal was pre-corrected with a .4545 (1/2.2) gamma curve to compensate.

So my question is this: how do these two things interact? Maybe other people don't find this confusing, but one seems to be about how an image is encoded, and the other seems to be an artificial recreation of a phenomenon from older technology: encoding JPEGs, PNGs, GIFs, etc. at .4545, then artificially recreating the old behavior in LCD/plasma displays by faking a 2.2 to arrive back at linear light output.
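
To pin down my possibly-wrong mental model, here is a quick Python sketch of how I think the two curves are supposed to cancel out. The 0.4545 and 2.2 are the usual round numbers, not anything pulled from a spec:

# My mental model: the encode curve and the display curve are inverses,
# so the light that finally reaches your eye is linear again.

ENCODE_GAMMA = 1 / 2.2   # ~0.4545, baked in when the image is encoded
DISPLAY_GAMMA = 2.2      # applied by the display (or faked by an LCD)

def encode(linear):
    """Gamma-encode a linear light value in [0, 1]."""
    return linear ** ENCODE_GAMMA

def display(encoded):
    """What the display does to an encoded value before it becomes light."""
    return encoded ** DISPLAY_GAMMA

for x in [0.0, 0.18, 0.5, 1.0]:
    print(x, "->", display(encode(x)))   # round-trips back to ~x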

Could someone maybe explain this a little better? When it comes to post, light, resolution, and compression are fundamentally important, and I would really like to have this knowledge so I can better understand the pipeline and get the guys in the computer cave what they need.

Finn Yarbrough
03-17-2014, 03:04 PM
CRTs (in this country, anyway) have already become a thing of the past.
If you get hold of some color-calibration charts and a test monitor that can be calibrated to Rec. 709, you're all set for broadcast.

You might be over-thinking it a little bit.

j1clark@ucsd.edu
03-18-2014, 09:02 AM
1) A camera captures light in a linear fashion, but our eyes do not. Our eyes respond to light along a roughly logarithmic curve, effectively raising the blacks and compressing the whites. When some people talk about gamma correction, they are referring to how an image is adjusted to recreate the light in a way we perceive as correct.

2) There is a loss from source input to display output. This isn't technically true of flat panels, but it was physical reality for CRTs: an electron gun fed a linear voltage input produces a light output that follows roughly a 2.2 power curve, so the signal was pre-corrected with a .4545 (1/2.2) gamma curve to compensate.

Don't know why you would be looking at CRTs these days...

But yes, the actual photo sensor used to capture the 'light' in the camera is a 'linear' device. In other words: double the light, double the 'charge' in the sensor, and so on until the sensor is 'filled up'... aka saturated.
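
In code terms, a toy model of that (the numbers here are made up, not any real sensor's specs):

FULL_WELL = 50_000  # electrons a photosite can hold before saturating (made-up figure)

def sensor_charge(photons, quantum_efficiency=0.5):
    """Toy linear sensor: charge is proportional to light until the well fills."""
    charge = photons * quantum_efficiency  # double the light, double the charge...
    return min(charge, FULL_WELL)          # ...until the sensor is 'filled up'

for p in [10_000, 20_000, 40_000, 80_000, 160_000]:
    print(p, "photons ->", sensor_charge(p), "electrons")  # the last one saturates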

Human vision and silver-based photographic film had a different response. Namely, as more light hit the 'film', at some point its response to a doubling of light was less than 'double'... until finally it had no further response... or in fact started having 'less response'... this is what 'solarization' was... in the olden days, pointing the camera directly at the sun could yield a 'black' hole where the sun's disk was... but I digress...

Some camera manufacturers compensate for the linear response of the photosensor by 'shaping' the voltage output to yield a better rendition of 'high values'. Some fancy/expensive cameras allow the user to plug in arbitrary LUTs (Look Up Tables), which take the 'linear' values of the sensor and translate them into output values according to a curve, such as 'S-Log' or the like.

The capture LUTs are often 'designed' to allow for gradation of values and to prevent 'saturation'. The result may look 'flat' when viewed on a display. Film negatives, when scanned, are likewise 'flat' and require some form of contrast enhancement to yield an acceptable image for display.
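
A 1D LUT is really just a table you interpolate into. A minimal Python sketch, with arbitrary numbers rather than a real S-Log curve, just to show the mechanics:

import numpy as np

# A tiny made-up 1D LUT: inputs are linear sensor values, outputs lift the
# shadows and roll off the highlights, giving the typical 'flat' look.
lut_in  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
lut_out = np.array([0.0, 0.45, 0.65, 0.82, 0.95])

def apply_lut(linear_values):
    """Map linear sensor values through the LUT by linear interpolation."""
    return np.interp(linear_values, lut_in, lut_out)

print(apply_lut(np.array([0.1, 0.5, 0.9])))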

The 'gamma' of 2.2 is to allow material that has been captured/processed at a gamma of .4545 to be presented in a 'linear' mode... stacked gamma curves multiply their exponents, so 2.2 * .4545 = 1.0 (ok, .9999..., but rounded to 1...).
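
A quick sanity check of that arithmetic in Python:

# Exponents of stacked gamma curves multiply: (x ** a) ** b == x ** (a * b)
print(0.4545 * 2.2)            # ~0.9999, i.e. effectively linear overall
print((0.5 ** 0.4545) ** 2.2)  # ~0.5: a mid value survives the round trip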

XanderLust
03-18-2014, 12:00 PM
I figured it out. The reason I brought up CRTs was that the original math was meant to compensate for the response of the electron gun; that's where the .4545 and 2.2 come from. You send in a linear signal, the gun's light output follows a 2.2 power curve, so you pre-encode with a .4545 curve to fix it. I couldn't understand why it still worked like that now, but I finally found some articles that explained it.

And that explanation is that the technologies overlapped. There were a lot of CRTs and a lot of flat panels on the market at the same time, and you didn't want to invent an entirely new system; that would mess things up. So our current generation of monitors was made to behave the same as CRTs, because people had to be able to do the same things regardless of which monitor they were on. So now it's just the way that it is, even though the actual physics no longer requires it.

What was screwing me up was that I thought a bitmap image had gamma hard-coded into it somewhere as metadata. What is actually happening is that the pixel values are stored already encoded with the ~.4545 sRGB curve, on the assumption that the display chain will apply the ~2.2 decode later. So Photoshop, a picture viewer, a website, or whatever doesn't add the .4545; it just hands the already-encoded values to a display that undoes them.
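
(For what it's worth, the real sRGB curve isn't exactly a .4545 power function; it has a small linear toe near black. The actual piecewise transfer function, in Python:)

def srgb_encode(linear):
    """Encode a linear value in [0, 1] with the piecewise sRGB function."""
    if linear <= 0.0031308:
        return 12.92 * linear                    # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055   # ~a 1/2.2 power overall

def srgb_decode(encoded):
    """Inverse: back to linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

print(srgb_encode(0.18))  # ~0.46, close to 0.18 ** 0.4545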

The other thing messing me up was linear workflow. Maya, C4D, Max: none of them assume sRGB. They pull in images at 1.0 gamma, display at 1.0 gamma, and create materials at 1.0 gamma. So when you render, strange things happen: super deep shadows, and blown-out mids and highs. You actually have to go in and tell the renderer that input textures are sRGB (so their ~.4545 encoding gets removed and the math runs on linear values) and set the output to 2.2/sRGB so the render is re-encoded for the display.
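
In pseudo-Python, the fix looks something like this. It's just a sketch of the idea using the pure-power approximation, not any particular renderer's API:

def monitor(v):        # what the display effectively does to pixel values
    return v ** 2.2

def decode_srgb(v):    # remove the ~.4545 encode baked into a texture
    return v ** 2.2

def encode_srgb(v):    # re-encode a linear render result for the display
    return v ** (1 / 2.2)

texel = 0.5   # an sRGB-encoded texture value
light = 0.8   # a light intensity, already linear

# Broken 1.0-gamma pipeline: texture treated as linear, render sent out raw.
broken = monitor(texel * light)   # ~0.13 of full brightness: crushed shadows

# Linear workflow: linearize the input, shade in linear, encode the output.
correct = monitor(encode_srgb(decode_srgb(texel) * light))  # ~0.17, as intended

print(broken, correct)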

Hopefully that helps explain where I got screwed up, and thanks for the replies, guys.