White balance on the run

I paid $5,400 for a used Komodo back in May of '23. Then I picked up various add-ons, etc., to build out a working package that probably amounted to another couple of grand.

I've certainly got my use out of it in the years since, so I still consider it a good purchase. But I'm reluctant to turn around and sell everything now for just a couple of thousand dollars when I can still rent it out on the occasions that I need what it can offer.

I'm not usually one to hang on to "old" tech, but I've found that the gear I have in my possession right now can still get the job done at a high level. It's cliché to say, but almost everything you can buy nowadays is REALLY good, and more importantly, has been for years now.

Lately I've been spending more of my time researching and finding gear that can improve my workflows, or make life on set easier, faster, or lighter for that matter.

Sometimes that can indeed involve a new camera, so I'm always paying attention to what's out there and being released.
Yeah, I mean the Komodo has a great image and DR, so I think it will be usable for years to come.
 
All good observations. Most of today's cameras, and those from the past few years, can deliver good material. In many cases, the finished look is influenced far more in post these days than on set.

Chris Young
 
Sadly, the camera companies don't care about those things. They throw in features that will get attention or sales even if they don't make sense. They intentionally hamstring models, or won't improve them, to maintain the hierarchy they've set. Many years go by without the things that matter improving. Often the best option is to learn to be happy with what you have and improve the non-camera aspects of your production.
 
Interestingly, this just popped up. It went up yesterday.

This, the second CST workflow method, is exactly what I have been doing with the ZR footage I have here. I think this is the best way to work with the R3D footage from the ZR... so far!

In addition to what is demonstrated in this video, I add one or two nodes at the beginning of the node tree when I need temporal or spatial noise reduction, or both.

Chris Young

 
What's the difference between a CST and a LUT? Seems like BS.
The greater the dynamic range you are trying to manipulate, the wider the range of gradients you have to handle. Putting it simply, a 33-point 3D LUT is a look-up table of hue, saturation and brightness. It has the ability to handle 33 x 33 x 33 values. That's a total of 35,937 values. The 10-bit image we enjoy from our camera sensor captures 1024 x 1024 x 1024 = 1,073,741,824 color shade values. That is why a 33-point 3D LUT is a crude approximation of what is coming off your camera's sensor. It is far less accurate than a CST, which converts all 1,073,741,824 values to linear, then from linear to your chosen output color space and gamma, be that 709 or HDR. Not all LUTs are equal. The greater the dynamic range you capture, the more gradient values need to be processed. If using 3D LUTs in post on your timeline, use 65-point LUTs. At least they have 274,625 look-up values.

Bluntly, a beautiful, very wide dynamic range 10-bit image can contain many thousands more distinct shades than the 274,625 values a 65-point LUT offers for converting the image to your chosen color space. If there are not enough point values to faithfully reproduce the range of values captured by the sensor, the 3D LUT chooses the next-nearest value when converting to that color space. On a lower dynamic range image, far fewer values are needed for the conversion, so low dynamic range Rec 709 deliverables are the most forgiving of LUTs. But a true 15-stop dynamic range image going out to an HDR color space is severely impacted by the lack of translation values. This is not the case with color space transforms. They are not a limited set of approximations, guesses in other words. They are an accurate linear mathematical conversion using all 1,073,741,824 values from your sensor to get to your output color space and gamma.

To sum up: LUTs are NOT exact. LUTs are look-up tables that interpolate transforms of code values. Color space transforms, on the other hand, use mathematical formulas to transform code values as accurately as the given target bit depth allows, usually calculated in 32-bit floating point.

Maybe think of it this way: LUTs are JPEG images and CSTs are vector images. Most common LUTs only provide you with 33 or 65 points per axis of a cube. What happens between the points is not defined in itself; that's why there are different ways to interpolate a LUT. CSTs, on the other hand, are basically mathematical functions. Like vector images, the CST process is not destructive.
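To put a rough number on that precision gap, here's a minimal Python sketch. It samples an illustrative log-style curve (a stand-in, not any camera's real formula) at 33 points, interpolates between them the way a LUT processor does, and measures how far that drifts from the exact math in 10-bit code values:

```python
import numpy as np

# Illustrative shadow-heavy, log-style curve; a stand-in for a real
# camera transfer function, not any manufacturer's published formula.
def curve(x):
    return np.log1p(1000 * x) / np.log1p(1000)

# Sample it at only 33 points, the way a 33-point LUT would.
pts = np.linspace(0.0, 1.0, 33)
lut = curve(pts)

# Push a smooth 10-bit grey ramp through both paths.
ramp = np.arange(1024) / 1023.0
via_lut = np.interp(ramp, pts, lut)   # linear interpolation between points
exact = curve(ramp)                   # the "CST" path: exact math per value

# Error expressed in 10-bit code values; anything much over one code
# can show up as banding on a smooth gradient.
err_codes = np.abs(via_lut - exact) * 1023
print(f"worst error: {err_codes.max():.1f} code values, "
      f"near input code {err_codes.argmax()}")
```

The drift is worst in the shadows, where the curve bends hardest between sample points, and tiny in the flatter highlights. That shadow drift is where the banding comes from.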

Chris Young

For those technically inclined, a brief overview can be found here. Pay particular attention to the 'Banding' comments:

https://blog.frame.io/2020/04/27/luts-vs-transforms/

Another quick explanation. The important information is from 6:41 onward:

 
Out of curiosity, I thought I would ask ChatGPT:

"What is the difference between 3D LUTs and color space transforms?" Its reply! Interesting?

Chris Young

"Great question — this is a key distinction in color management and grading workflows.

Let’s break it down clearly:
3D LUTs (Look-Up Tables)

A 3D LUT (three-dimensional look-up table) is a data-driven mapping that takes an input color and outputs a transformed color. It’s essentially a precomputed color remapping cube.

What it does
Maps input RGB values to output RGB values through interpolation in a 3D color cube.
Can emulate creative looks (e.g. "film looks," show LUTs, camera-to-display transformations).
Not necessarily bound to specific color science rules — it's more of a "baked-in" transformation.

Pros
Fast and predictable — the look is exactly as designed.
Portable — can be used in many tools (Resolve, Nuke, Premiere, etc.).
Can include creative stylisation and tone mapping.

Cons
Limited precision: It only remaps the sampled cube points; results can band or clip.
Not aware of scene-linear math: It can distort if used in the wrong color space.
Not reversible: You can’t perfectly “undo” a 3D LUT.

Color Space Transforms (CSTs)
A Color Space Transform is a mathematical conversion from one color encoding system to another, based on known color science.

What it does
Converts between defined color spaces (e.g. S-Log3 → DaVinci Wide Gamut → Rec.709).
Handles gamma, gamut, and white point conversions using defined formulas.
Usually deterministic and reversible (within floating-point limits).

Pros
Mathematically precise and consistent.
Reversible (no baked look).
Keeps scene-linear data integrity.
Color-managed workflows (like ACES or DaVinci Color Management) rely on these.

Cons
Not creative — it only standardises color interpretation.
Requires accurate metadata (knowing source and target color spaces).

In Practice

             3D LUT                              CST
Type:        Data-driven look-up                 Math-based color model conversion
Used for:    Creative looks, display rendering   Technical color management
Editable?    Not easily
Reversible?  No                                  Yes
Examples:    "Kodak 2383 Film Look" LUT          S-Log3 → Rec.709 CST in Resolve
Common Workflow Example
In DaVinci Resolve, a typical color-managed pipeline might look like:
1. Input CST: Convert camera Log (e.g. S-Log3) → Working Space (e.g. DaVinci Wide Gamut).
2. Grade in Working Space: Apply corrections.
3. Creative LUT: Add film look or show LUT (3D LUT).
4. Output CST: Convert Working Space → Rec.709 for final display."
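The Input/Output CST steps in that quoted pipeline are pure per-pixel math. Here's a sketch of the general shape of such a transform; the power-law curves and the 3x3 matrix are illustrative placeholders, not the real S-Log3 or Rec.709 constants:

```python
import numpy as np

# Step 1: decode the source gamma to scene-linear light.
# (Real log curves such as S-Log3 use published piecewise formulas.)
def decode(rgb, gamma=2.4):
    return rgb ** gamma

# Step 2: convert gamut with a 3x3 matrix. Rows sum to 1.0 so neutral
# greys stay neutral; real matrices are derived from published primaries.
M = np.array([[ 1.02, -0.01, -0.01],
              [-0.01,  1.02, -0.01],
              [-0.01, -0.01,  1.02]])

# Step 3: re-encode with the target gamma.
def encode(rgb, gamma=2.4):
    return np.clip(rgb, 0.0, None) ** (1.0 / gamma)

def cst(rgb):
    lin = decode(np.asarray(rgb, dtype=float))
    return encode(lin @ M.T)

print(cst([0.5, 0.5, 0.5]))  # neutral grey passes through unchanged
```

Every one of the 1,073,741,824 possible 10-bit triplets goes through the same formula. Nothing is snapped to a nearest table entry, which is the "mathematically precise and reversible" point in the reply above.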
 
I don't know if this relates to your conversation (because it's over my head), but I was reading that some feel Nikon intentionally implemented the other codecs at a lower IQ to make the R3D codec shine, since the camera seems intended to get people into the RED ecosystem. Or are they line skipping or pixel binning the compressed codecs to prevent overheating? Would recording a raw codec take less processing power and generate less heat?
 
You are probably correct: less heat is generated recording RAW. Basically, RAW is just that, raw data; there is no color processor doing intense RGB conversion calculations. Though I would imagine that heat level would be pretty minor, and taken into account at design time? Interesting thoughts, though. There is a discussion going on here about the same topic, including a comment on an overheating shutdown in H.265, which as we know is very computationally intensive during encoding.

Chris Young

 
It's literally rinse and repeat. There are never any issues in the pre-release sponsored reviews; only once it's in the hands of paying customers do these types of issues get revealed. I don't trust these camera manufacturers to ever voluntarily disclose anything. I'm not really mad, because this is what I've come to expect. They spend years developing these cameras, so they obviously know the thermals without active cooling. Is it any wonder why professionals reach for the Sony FX3?

I can hear their excuse... this is a vlogging camera, it's not meant for long recording, the internal battery will run out before it overheats. But the funny thing is, the codec the average user is most likely to use with this camera is H.265. Another common use would be streaming, which commonly runs for a long time.
 
Indeed! Indeed!

Chris Young
 

I'm lost. The recording bucket of a camera (10-bit) is 0 to 1023 in each channel.

The only way a LUT could band (not clip) would be to translate two values to one value,

e.g. map 221 and 222 to 235.

Does this happen? If it doesn't happen, it is not destructive!

I can see that if the LUT is too 'small', especially with 12- or 14-bit footage, this could happen.
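That question is easy to test numerically. A sketch using a 1-D identity table for simplicity (real LUTs are 3-D cubes, but the mechanism is the same): with nearest-neighbour lookup, many-to-one mapping is guaranteed; with linear interpolation, it only happens where the real curve bends between sample points.

```python
import numpy as np

# A 33-entry table sampling the identity curve.
pts = np.linspace(0.0, 1.0, 33)
table = pts.copy()

x = np.arange(1024) / 1023.0  # all 10-bit input codes

# Nearest-neighbour lookup: 1024 inputs snap onto just 33 outputs,
# so codes like 221 and 222 certainly share an output value.
nearest = table[np.round(x * 32).astype(int)]
print(len(np.unique(np.round(nearest * 1023))))  # 33

# Linear interpolation between the same 33 points: for this identity
# curve it reconstructs every in-between code, so nothing collapses.
interp = np.interp(x, pts, table)
print(len(np.unique(np.round(interp * 1023))))   # 1024
```

Real LUT processors do interpolate (trilinearly or tetrahedrally in 3-D), so the collapse only bites where the true transform is strongly curved between the 33 sample points, which is exactly the 12/14-bit, high dynamic range case raised above.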

I don't really 'buy' colour space or gamut or any of that sh*t. Back when I used to send CMYK files to the printer, yes. But now, not so much.

Just make it look good on mum's phone?
 
In an attempt not to be a total pedant, I tried a CST on my timeline.

We can see that it significantly flattens a linear grey scale.

Not a bad thing, but something I prefer to do myself, and it would mess up many folk of lesser calibre than me (!) cst.JPG
 
I really can't comment on other people's workflows. I just know after years of editing what works for me. The resulting differences in image quality between using LUTs and CSTs can sometimes be very noticeable. If you want to work with LUTs, and you want to know whether they are going to deliver on the footage you are using them on, the best thing to do is stress test them first. This is quite easily done. Have a look at Cinema Tools' article on stress testing LUTs, especially if you are not sure of their cube point depth, how well they have been produced, or by whom for that matter. There are a ton of crap LUTs being sold out there in that good old wide world of the web.

The stress test chart can be downloaded here:

https://truecolor.us/downloads/lut-stress-test-image/

Cinema Tools' demo shows how to stress test your LUTs. Just check out how Nikon's original official N-Log LUT was a gradient killer! Once you have the stress test chart downloaded, drop it on a timeline and, if in Resolve, go to the Color Page and drop your favourite LUTs onto the stress chart. Now just watch the truncation of the gradients on the vertical color ramps and on the color hexagons: harsh gradient truncations, resulting in the loss of smooth, subtle gradient transitions.

Now do the same thing using CSTs. Create two serial nodes and drop a Color Space Transform onto each. For this exercise, set the two nodes as follows.

Node #1
Input Color Space = Sony S-Gamut3
Input Gamma = Sony S-Log3
Output Color Space = DaVinci Wide Gamut
Output Gamma = DaVinci Intermediate

Node #2
Input Color Space = DaVinci Wide Gamut
Input Gamma = DaVinci Intermediate
Output Color Space = Rec.709
Output Gamma = Gamma 2.2 (for CRTs this was always 2.4)

Now, on a known-good Rec.709 monitor, look at the gradients on your chart. Far smoother and less truncated: a far more accurate and nicer-looking result than even a 65-point LUT. Now if you want to grade, bang in extra LUTs between your CST nodes and manipulate.

https://www.cinematools.co/blog/how-to-test-quality-of-a-lut
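That two-node chain is just function composition. Here's a sketch of its shape in Python, with placeholder power/log curves standing in for the real S-Log3 and DaVinci Intermediate formulas (the actual constants are published by Sony and Blackmagic; these are not them):

```python
import numpy as np

# Placeholder curves only; NOT the published S-Log3 or
# DaVinci Intermediate formulas.
def slog_to_linear(x):
    return np.asarray(x, dtype=float) ** 2.6

def linear_to_di(x):
    return np.log1p(100 * x) / np.log1p(100)

def di_to_linear(x):
    return np.expm1(x * np.log1p(100)) / 100

def linear_to_gamma22(x):
    return np.clip(x, 0.0, None) ** (1 / 2.2)

def node1(x):   # CST #1: camera log -> working space
    return linear_to_di(slog_to_linear(x))

def grade(x):   # your corrections live between the two CSTs
    return np.clip(x * 1.05, 0.0, 1.0)

def node2(x):   # CST #2: working space -> Rec.709 / gamma 2.2
    return linear_to_gamma22(di_to_linear(x))

out = node2(grade(node1(np.array([0.1, 0.5, 0.9]))))
print(out)  # a smooth, monotonic result: no table quantisation anywhere
```

Every step is an exact formula, which is why the gradients on the chart survive this path so much better than the LUT path.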

Chris Young
 