Comprehensive guide to HDR

Your footage looked great! I can see it was graded in Resolve (note: one clip was 24p, the other 23.976. Also the HDR metadata was not showing HDR 1000 - it was blank or 0).

I'd previously taken some stock shots comparing SGamut3 / SGamut3.cine / SCinetone using the Davinci Colour Managed Workflow. No grade was applied - this is straight out of the camera to render (with SCinetone set to Bypass for colour mgt, as it is meant to be a ready-to-go deliverable). This time I rendered it out to HDR1000 in 4:2:0 HEVC (so you may need to download it to play). https://behome.dyndns.info/index.php/s/FKzS9mAPimF2rAW . I left SCinetone in, but it really shows how poor it is if you want HDR (not that it is designed for anything other than SDR deliverables).

I then took the same SGamut3.cine clip, used the method you outlined in that post, and then rendered it using the same settings as above. https://behome.dyndns.info/index.php/s/AsZCoFDcWLsNZaq

I don't think the CST method outlined gives an accurate starting point for a grade (as an example, look at the purple square on the colour checker - it is way off). And you're right, checking OOTF on and off makes a big difference (see pic). Checked really pushes a lot of detail down to under 1 nit, but unchecked compresses it too much, with 1 nit looking almost like a floor.

Note: they are all UHD 50fps @ 80 Mb/s.
 

Attachments

scops.jpg (32.5 KB)
From what I understand, the idea of the CMWF is to bring all clips into a wide colour space with an accurate (but neutral) "technical" grade, ready for the "creative" to make it look great. I think it does this really well. Even just pushing up Saturation makes the clips pop, and with the HDR Wheels you can bring up/down shadows/highlights etc. without weird effects in the rest of the image. I find it the quickest/easiest/most accurate way of grading, especially with mixed camera clips.
 

Thanks so much for the feedback, Nathan!
 
No probs - what I liked about the SGamut3 vs SGamut3.cine test was that with a CMWF, it does not make grading any harder. Both look identical as a starting point, but you now have an original source that covers 2020 rather than the "just" P3 that SGamut3.cine does. So you may as well shoot the "best" your camera can do. Same with frame rate and resolution: you might as well capture the highest resolution, frame rate and colour space you can. You can always render out lower resolutions, frame rates and colour spaces, but it's not so great to go the other way. The only exception I have is that frame rate choice should be a clean multiple of what you want to deliver. Being in PAL land, I'd not shoot 60fps even though I could.
 
The whole S-Gamut3.Cine S-Gamut3 thing is kind of a mess, at least when it concerns ProRes RAW. If you shoot XAVC internal and import the clips into Resolve, it doesn’t recognize the color space at all, at least not with my a7s III. And when recording PRR externally with the Ninja V, Final Cut Pro stubbornly recognizes the footage as S-Gamut3 regardless of how you monitored with the Atomos. At least we know that gamut isn’t baked into the files, which is a good thing, I suppose. Then, after transcoding to ProRes with Apple Compressor and selecting S-Gamut3.Cine, the metadata still says S-Gamut3 in the Invisor app. And, to top it all off, when returning to the footage a year or two later, I can’t for the life of me recall which shots were captured in one or the other color space (except by toggling back and forth while looking at the color, which is quite different between the two). God only knows how Nikolaj Pognerebko's RAW Convertor is handling gamut; there’s no way to be sure except by asking the developer, I guess. As far as S-Log3/S-Gamut3 goes, I’m not sure that the a7s III sensor can even see the entire color space. I know that several of Sony’s other cinema cameras couldn’t, in which case it’s just wasted data. And like I said before, I like to use LUTs from time to time, and most are for S-Gamut3.Cine; and the only HDR-compatible ones I know of that were created specifically for DWG are those by Cullen Kelly.
 
Something must be different between the A7s III and the FX6 then. Both S-Gamut3 and S-Gamut3.cine shot internally on my FX6 in XAVC are automatically recognised in Resolve for me. So I don't find it a mess at all - it just works. FWIW, I don't shoot ProRes anything, so I cannot comment on that part or any of the hoops to get it to work with Resolve. I'm also not a fan of LUTs, but I get that if you are, then picking what is more popular makes sense.

I've no idea how the sensors in the FX6 and A7s III go for colour space capture, but the following is the intent of the two gamuts, taken from Sony Pro documentation for the F series, which states:

“S-Gamut3.Cine/S-Log3” is designed for more like pure log workflow. Color space is similar to negative film scan which is used for TV production, film out and digital cinema. Color reproduction is designed slightly wider than DCI-P3 to provide ample room of grading. Tone curve is more like pure log encoding characteristics preserving more tonal gradations in the blacks, and has good compatibility with Cineon workflow.

“S-Gamut3/S-Log3” is very close to camera native color, and is very good for archiving as a digital camera negative.

So I get S-Gamut3.Cine's advantage with older methods of colour grading, as it is similar to the Cineon workflow that many were used to. Now that we have colour managed workflows, however, that reason diminishes. As you saw in my SGamut3 vs SGamut3.cine test, they look exactly the same when rendered out to 1,000 nit P3. I wish I had a 10,000 nit 2020 monitor to see if they looked different, but that is some time and $$$ off for me. Thing is, tech will improve and get cheaper, so I figure I have absolutely nothing to lose shooting in S-Gamut3 (it takes no more disc space) and only upside down the track, when we all have 2020 displays, vs the P3-limited S-Gamut3.cine.
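As a rough sanity check on how these gamuts compare, the triangle each one spans on the CIE xy diagram can be computed from its primaries. This is only an illustrative sketch: the chromaticity values below are the published Sony / ITU / DCI figures as I understand them, xy area is not perceptually uniform, and a bigger raw triangle does not mean one gamut contains another.

```python
# Shoelace area of each gamut triangle on the CIE xy diagram.
# Primaries are the published chromaticities (assumed values here).
# Caveat: S-Gamut3.Cine's raw area exceeds Rec.2020's because its blue
# primary dips below the spectral locus, yet only S-Gamut3 is described
# by Sony as wider than Rec.2020 - area alone is not "coverage".

def tri_area(pts):
    """Shoelace area of a triangle given three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

GAMUTS = {
    "S-Gamut3":      [(0.730, 0.280), (0.140, 0.855), (0.100, -0.050)],
    "S-Gamut3.Cine": [(0.766, 0.275), (0.225, 0.800), (0.089, -0.087)],
    "Rec.2020":      [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
    "DCI-P3":        [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
}

for name, pts in GAMUTS.items():
    print(f"{name:14s} xy-area = {tri_area(pts):.4f}")
```

On these numbers both Sony gamuts come out well above DCI-P3, which at least squares with Sony's "wider than P3" description of S-Gamut3.Cine.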
 
Incidentally, the Colorchecker is only accurate for SDR rec.709. Have you ever tried aligning the chips using the vectorscope?
 
No. I don't use the Colorchecker for Resolve's colour matching setup (or whatever that is called), but any workflow should render the colours and shades accurately regardless of whether it's SDR, HDR or whatever. They are just colour chips at the end of the day. If something looks weird, then something is wrong.
 
Art Adams had this to say about choosing gamut:

“While the temptation to use full SGamut3 is probably overwhelming to some, it’s best to ask yourself (1) when will the extra color be displayable, (2) will your project still be marketable when that happens, (3) are you shooting anything that takes advantage of that color space, and (4) do you have the talents of a professional and expert colorist at your disposal. If not, SGamut3.cine is clearly the better choice. If your project has a long shelf life and would look great with rich saturated color then SGamut3 will protect all that, but you won’t be doing the Rec 709 or P3 grades yourself: you’ll need skilled professional help.”

And even a skilled professional like Tashi Trieu, who graded Cameron’s $250 million Avatar: The Way of Water, which was worked on in Dolby Vision from day one and recorded in Sony X-OCN, made a show LUT that mapped from S-Gamut3.Cine to P3-D65, which he said,

“left plenty of flexibility to push moments of the film more pastel or into an absolutely photorealistic rendition.”

Unfortunately, you're not going to be able to take advantage of displays with greater than 1,000 nits with your current grades just by changing the output color space. Your projects will all have to be re-graded. HDR10 caps out at 1,000 nits, and Dolby Vision requires a minimum of 12 bits. And brighter displays are arriving much more quickly than displays with a color gamut wider than P3. If you're really concerned about future-proofing, shooting 10-bit Y'CbCr rather than RGB 4:4:4 might not be the way to go. Like I’ve said many times before, everyone should do their own testing and decide for themselves what works best for them, but in the case of S-Gamut*, I would defer to the experts.
 
Thanks for the extra insight. I'm so far from being an expert it is not funny (so take any of my arguments with fistfuls of salt), and I have no reason to doubt them.

My counter point (from my own testing) is that the gamut choice in camera is simply what is used for capture, not the colour space that I'm using for grading. I'm grading in 1,000 nit DWG, and Resolve has done all the maths (the input colour transform, ICT) to convert the various gamuts into that DWG colour space (i.e. it completely abstracts the input footage into one colour space regardless of the gamut of each clip). I then render out in 1,000 nit P3 (or 709, or one day full 10,000 nit 2020). Again, Resolve does all the maths for the OCT (output colour transform) to go from DWG to the chosen colour space. From my testing it makes no difference what the input gamut of the clip is in relation to how hard it is to grade in DWG (at least to produce 1,000 nit P3). 10,000 nit 2020 may be another story, but I don't even have the equipment to attempt it.
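The abstraction described above can be sketched numerically. This is not Resolve's actual maths - CIE XYZ stands in for the working space where Resolve would use DWG, and the primaries are Sony's published chromaticities (assumed values) - but it shows why two encodings of the same scene colour converge to one starting point.

```python
# Sketch: why a colour-managed workflow abstracts away the capture gamut.
# Illustrative only; XYZ stands in for the working space (Resolve: DWG).

def xy_to_XYZ(x, y, Y=1.0):
    """CIE xyY chromaticity to XYZ tristimulus."""
    return (x * Y / y, Y, (1 - x - y) * Y / y)

def mat_vec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def rgb_to_xyz_matrix(primaries, white):
    """Build an RGB->XYZ matrix from xy primaries and a white point."""
    cols = [xy_to_XYZ(x, y) for x, y in primaries]
    P = [[cols[j][i] for j in range(3)] for i in range(3)]
    S = mat_vec(inv3(P), xy_to_XYZ(*white))  # scale so RGB(1,1,1) -> white
    return [[P[i][j] * S[j] for j in range(3)] for i in range(3)]

D65 = (0.3127, 0.3290)
SGAMUT3      = [(0.730, 0.280), (0.140, 0.855), (0.100, -0.050)]
SGAMUT3_CINE = [(0.766, 0.275), (0.225, 0.800), (0.089, -0.087)]

M_sg3  = rgb_to_xyz_matrix(SGAMUT3, D65)
M_cine = rgb_to_xyz_matrix(SGAMUT3_CINE, D65)

# The same scene colour gets different RGB encodings in each gamut...
scene_xyz = (0.30, 0.25, 0.20)
rgb_sg3  = mat_vec(inv3(M_sg3), scene_xyz)
rgb_cine = mat_vec(inv3(M_cine), scene_xyz)
# ...but transforming both into one working space recovers the identical
# colour, which is why the CMWF starting point looks the same either way.
back_sg3  = mat_vec(M_sg3, rgb_sg3)
back_cine = mat_vec(M_cine, rgb_cine)
match = all(abs(a - b) < 1e-9 for a, b in zip(back_sg3, back_cine))
print("round-trip match:", match)
```

The caveat, as the thread notes, is that this ideal maths says nothing about whether the sensor actually fills the wider gamut, or how gracefully out-of-gamut values are mapped at output.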

I think the bigger question with a CMW is how well a grade done in DWG holds up when rendering out to different formats, like HDR and then SDR. So far I've focused on 1,000 nit HDR P3 and it works really well. I then push out an SDR 709 "web" version for streaming without touching the grade, but I'm pretty sure that a "real" 709-specific grade could be made to look better. So far I've not cared enough to test a side-by-side on this.

Anyway, I'm off to India and Bhutan for a few weeks and will shoot in SGamut3 so wish me luck (I'll edit/grade when I get back).
 
What becomes of skin tones when a clip that is mastered to 1,000 nits is viewed on a 4,000-nit display?

a) Luminance levels will all remain unchanged
b) Luminance levels will all increase exponentially
c) Skin tones will remain unchanged, highlights and shadows will expand and fur will become more saturated
d) The picture will fry the viewer's eyeballs
e) None of the above

According to the Ultra HD Forum, skin tones should be rendered at the same absolute luminance:

"The PQ signal is “display-referred”, meaning that the pixel-encoded values represent specific values of luminance for displayed pixels. The intent is that only the luminance values near the minimum or maximum luminance capability of a display are necessarily adjusted to utilize the available dynamic range of the display. Some implementations may apply a “knee” at the compensation points in order to provide a smoother transition from the coded values to the display capabilities; e.g., to avoid “clipping”.

When default display settings are engaged, PQ enables pixel values in the mid-range, including skin tones, to be rendered on a display at the same (absolute) luminance level that was determined at production.

For example, if a scene was graded on a 1000 cd/m2 grading monitor and then displayed on a 4000 cd/m2 display, the skin tones can be rendered at the same luminance values on the 4000-nit display as on the 1000-nit monitor per the grader’s intent, while the speculars and darker tones can be smoothly extended to take full advantage of the 4000-nit display."
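The absolute-luminance behaviour the Ultra HD Forum describes falls straight out of the PQ curve. Here is a minimal sketch of the ST 2084 EOTF using the constants published in the spec (the printout is illustrative only):

```python
# SMPTE ST 2084 (PQ) EOTF: maps a normalised 0..1 signal to absolute
# luminance in cd/m2 (nits), using the published constants.
m1 = 2610 / 16384        # 0.1593017578125
m2 = 2523 / 4096 * 128   # 78.84375
c1 = 3424 / 4096         # 0.8359375
c2 = 2413 / 4096 * 32    # 18.8515625
c3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(signal):
    """Absolute luminance in nits for a normalised PQ signal in [0, 1]."""
    p = signal ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

# PQ is display-referred and absolute: signal 1.0 is 10,000 nits, and the
# same code value means the same nits on any compliant display, so
# mid-tones like skin should not shift between 1,000- and 4,000-nit panels.
# It also shows why sub-1-nit detail is easy to bury: roughly the bottom
# 15% of the signal range all sits below 1 nit.
for s in (0.0, 0.15, 0.5, 0.75, 1.0):
    print(f"signal {s:.2f} -> {pq_eotf(s):9.3f} nits")
```

This also puts numbers on the earlier OOTF observation: a signal value of about 0.15 already corresponds to roughly 1 nit, so any transform that pushes shadow detail below that point is squeezing it into a sliver of the code range.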

Regrettably, I don't happen to have a 4,000-nit Dolby Pulsar lying around, but I did manage to compare the brightness levels of a few videos on my MacBook Pro and iPhone 12 Pro Max. First, with True Tone and Auto Brightness disabled, I toggled between display presets Apple XDR Display (P3-1600 nits) and HDR Video (P3 ST2084) on the MacBook Pro (2021) while viewing YouTube HDR10 and Netflix Dolby Vision content. Skin tones and highlights both became brighter and shadows became crushed when switching from P3-ST 2084 to P3-1600 nits. Comparing the MacBook Pro with reference mode P3 ST2084 to the iPhone 12 Pro Max, highlights and skin tones on the 1,200-nit 6.7-inch OLED display were marginally brighter. It's possible that in all instances, highlights are being stretched out more than skin tones, but it's hard to quantify. That's one of the reasons I tend to grade on the dark side. I'm also pretty certain that many would consider the brighter, tone-mapped version on their Android or Apple phone more impactful.

Further on, the document reads:

"Note that future displays may become available with higher peak brightness capability compared with those available today. Content that is expected to be of interest to consumers for many years to come may benefit from retaining an archive copy coded in “absolute values” of light (e.g., PQ) or original camera capture format (e.g., RAW, log) so that future grades of higher luminance range can be produced and delivered to viewers.”

Meanwhile, many devices ignore the metadata we insert to preserve the creator’s intent altogether, so it's all a little confusing!
 
I agree with that last part about the future. As you posted previously, our own testing makes sense. Take two clips of the same scene in both SGamut3 and SGamut3.cine. Grade the .cine version on a DWG 1,000 nit timeline. Apply the same grade to the non-cine version and render out 1,000 nit HDR P3. Both should look identical (they do for me), are just as easy to grade for today's common HDR spec, but now we also have a better source if we ever need to re-render for some better future spec.

Can't comment on the first part regarding changes to skin tones etc.
 
I'm afraid you're mistaken. I never said a damn thing about grading S-Gamut3.Cine and S-Gamut3 in DWG or anything of the kind. You’re introducing all sorts of variables! Every schoolboy knows that. Certainly, everyone should be doing their own tests, but yours are faulty to the nth degree and contradict the findings of renowned colorists and experts in the field, including those of key mastering engineers at Sony Pictures in Culver City. None of us is going to be wasting our time re-grading any of our footage 10, 20 or 50 years from now anyhow. Most natural colors fall within rec.709, and your camera isn’t even capable of seeing the entire S-Log3/S-Gamut3 color space, so you’re just wasting data. You also appear to be suffering from the delusion that twenty years from now, you’ll just be able to change your output color space to rec.2020 and your footage will magically have better color, when in all probability, you’ll have to color everything over again. Even if the transform were mathematically accurate, the colors could be perceptually different because of something called the Hunt effect. Colorists grade in rec.709 and transform to P3 for SDR all the time, but going from P3 to rec.2020 at more than 1,000 nits could be a different story altogether. You’d get immediate, tangible benefits today by recording 12-bit RAW (68,719,476,736 colors!) rather than low bit rate 10-bit Y’CbCr that includes a desaturation function, but it’s obvious you've got no idea what you're doing, and as far as I'm concerned, this conversation is over.

From Sony Professional:

”There is a lot of confusion regarding S-Gamut3 and S-Gamut3.Cine. Essentially the difference is S-Gamut3 is the native (very wide) color space of the camera, and is wider than REC-2020. It is very good for archiving as a digital camera negative, as transcoded / debayered up to 16-bit code values. However, it is much more involved to grade footage than if shot using S-Gamut3.Cine. Whereas S-Gamut3.Cine is natural color reproduction with minimum grading needed in comparison to SGamut3, it is still a wide color space, beyond DCI-P3, and much wider than REC-709. We recommend shooting S-Gamut3.Cine most of the time. The Look Profiles, and 709 3DLUT’s provided in the camera, and in Sony's RAW Viewer are designed specifically for S-Gamut3.Cine, not for S-Gamut3. If you choose to shoot S-Gamut3 you need to convert to S-Gamut3.Cine with the 3DLUT provided below, “SLog3SG3toSG3Cine.cube”, then in addition apply Look Profiles or other type 3DLUT provided with the camera and in Sony's RAW Viewer. In other words you need to apply two 3DLUT’s. If you are experienced at Color Grading, you can indeed shoot S-Gamut3, but as mentioned above it is much more work than if shooting (and thus grading) S-Gamut3.Cine”.

https://us.community.sony.com/s/ques...language=en_US
 