Sony FS100 AVCHD holds better in color grading than anything else



Postmaster
08-09-2011, 03:38 AM
Okay, either I'm missing something here or this is a real bummer.
But from my first tests, it looks like the AVCHD codec of the Sony FS100 holds up better in color grading than anything else.

I couldn't believe what I saw. This turns a lot of what I know about codecs upside down.
Maybe David from Cineform wants to chime in here – I have no idea what's going on.

Here is my article: http://frankglencairn.wordpress.com/2011/08/09/sony-fs100-avchd-holds-better-in-color-grading-than-anything-else/

Frank

FelixGER
08-09-2011, 04:51 AM
Yep. I've also found that a conversion to Cineform (from the original MTS) introduces a lot of banding, which gets worse when grading.
When I got the camera, the first thing I did was test how it handles red tones. I converted my stuff with Cineform as always, before even watching the original files, and was shocked!
The worst banding I had ever seen! The original files were fine, almost perfect.

Using cineform to render a master file out of the NLE works fine though.

PabloOzzy
08-09-2011, 06:04 AM
Hey Frank - did you see someone posted a comment on your blog about how the Cineform conversion does not carry over the information recorded in the superwhites? Not sure how you would fix that, but yes, David may have a solution.

cuervo
08-09-2011, 06:57 AM
Frank,

I believe you use Sony Vegas for your NLE. Be aware that Vegas doesn't really play very well with Cineform. The guys in Madison put a restriction on the Cineform guys about conversions to RGB color space. Indeed, the superwhites and superblacks get truncated. Also, when processing in Vegas, did you turn on floating point? If not, you're certainly losing some shadow and highlight information. I changed over from Vegas to Media Composer a few years back for this very reason.

And what does this say about acquisition with third-party encoders, like the BMD Hyperdeck? I know that there are issues with data coming out of the HDMI port on cameras like the VG10. The moire out of the HDMI port is worse than what is acquired on the native recording media, because of in-camera processing.

Dermot
08-09-2011, 07:24 AM
It looks to me like you are really talking about your workflow and the issues it creates... it might be more correct to say native is better than BMD & Cineform. There are many choices beyond those two - not all codecs are going to show the issues you are seeing.

There are a few codecs that I stay far, far away from, and you are using two of them to compare to native, and you have run into the reasons I don't touch BMD & Cineform codecs.

Try Avid's DNxHD sometime, it's a free download.

I've been working with the native files re-wrapped into h.264 and then linking to them; temp renders in DNx, and final renders uncompressed. I keep all the information, always.

d/

FelixGER
08-09-2011, 08:02 AM
Indeed, the superwhites and superblacks get truncated

Nope, Vegas shows the whole 0-255 range, but cineform decodes to 16-235

Postmaster
08-09-2011, 08:47 AM
Frank,

I believe you use Sony Vegas for your NLE.

Nope, Premiere CS5.5




Try Avid's DNxHD sometime, it's a free download

Tried it a minute ago - just for the sake of it - looks exactly like Cineform and the BM results.

Frank

PDR
08-09-2011, 09:31 AM
How are you converting to the various other formats? I suspect the method you are using is causing the clipping.

They are all capable of retaining superbrights, but I suspect there is a problem with the workflow you are using.

Postmaster
08-09-2011, 09:48 AM
Maybe I wasn't clear enough.

The effects you see are after heavy-duty level tweaking.
I wanted to see when and if those codecs fall apart, and what the differences are.

In normal grading you would never go that far.

After the conversion, they all look fine and there is no visual difference to the original material.
The difference you see is the result of a stress test.

Workflow: I load the original AVCHD clip on the premiere timeline, and export it with different codecs.
I import those clips back into Premiere and apply the same (excessive) color correction.


Frank

PDR
08-09-2011, 09:58 AM
But what your pictures demonstrate is a Rec601/709 conversion (somewhere along the chain, Y'CbCr has been converted to RGB in limited range, thus you lose Y' <16 and Y' >235)

The reason they "look" the same, is you are viewing an RGB converted representation of the Y'CbCr data. You're not seeing the whole picture. The program monitor preview in PP is converting the Y'CbCr data to RGB in limited range so you can display it on your monitor.

Open the YC waveform (without any other filters) and you will see those formats which Premiere treats as Y'CbCr having values >100 IRE. Those are superbrights that can be salvaged with "YUV" labelled filters (in earlier PP versions, "fast color corrector" can salvage them). Those formats which Premiere treats as RGB, or which have incurred a limited range RGB conversion somewhere along the workflow, will show a clipped waveform - you cannot salvage that lost data.

Native AVCHD is treated as Y'CbCr in PP - thus you can use those YUV labelled filters to recover those superbrights. Theoretically v210 (or what you call 10-bit YUV) should give you similar results - but your screenshots do not show this - they suggest data has been clipped.
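
If you want to double-check outside of Premiere whether a file still carries anything above Y' 235, recent ffmpeg builds have a signalstats filter that prints the per-frame luma min/max. Something roughly like this (off the top of my head - option names may differ slightly between builds, and "clip.mts" is just a placeholder for your own file):

ffmpeg -i clip.mts -vf signalstats,metadata=print -an -f null -

Look at the lavfi.signalstats.YMIN / YMAX values it prints - the native AVCHD should show YMAX well above 235, while a clipped intermediate won't.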

Dermot
08-09-2011, 09:58 AM
And you also export into the FS100's native h.264 and re-import?

I suspect there's something going wrong in the export/re-import process.

d

nyvz
08-09-2011, 10:07 AM
This is partly why I always shoot with a Picture Profile that does not use superwhites or superblacks. Probably 50% of the time when I hand off MPEG-4-based footage (h.264 from 5D/7D, AVCCAM from AF100, etc.) to an editor, not knowing what workflow in particular they are using (usually FCP, though), the final edit ends up with blacks and whites clipped and sometimes gamma misinterpreted. Simple solution: set your knee up to clip white at 235 and black level at 16 or above. We are fortunate that the FS100 can do this, since the AF100 and 7D/5D cannot.

cuervo
08-09-2011, 10:09 AM
OK, I don't know Adobe PP much, but I do know After Effects 5.5. If you don't set up the project in 32-bit space, After Effects will definitely truncate superblacks and superwhites.

Postmaster
08-09-2011, 10:29 AM
Premiere handles all files in 32-bit space - as far as I know, not all effects use that, though.

I used the fast color corrector.

Frank

PDR
08-09-2011, 10:34 AM
Yes, CS5 uses 32-bit float precision, but that's not the problem here. Y'CbCr and RGB are color models that do not overlap completely, and you have gone into "RGB" space without doing a full range conversion, thus losing some data along the way.

Premiere handles v210 and native AVCHD as Y'CbCr, so you have access to all the data before you grade if you use "YUV" labelled filters - that's the difference you are seeing here. Those "YUV" labelled filters are applied before PP's 32-bit RGB conversion. Your 10-bit YUV results should be identical to the AVCHD results, but they are not, because you have incurred an intermediate RGB conversion somewhere. So when you re-import the file, you have already truncated some data even before you've applied any filters (you cannot recover that lost data).


Convert it using a workflow that preserves Y'CbCr and you will see the difference. For example, use ffmpeg to convert to v210.
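
Something along these lines should do it (a rough sketch from memory - double-check the option names against your ffmpeg build, and swap in your own file names):

ffmpeg -i 00001.MTS -vcodec v210 -acodec pcm_s16le 00001_v210.mov

ffmpeg goes straight from Y'CbCr to Y'CbCr here - no RGB step - so when you bring the MOV back into Premiere and look at the YC waveform, the superbrights should still be there.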

Postmaster
08-09-2011, 12:01 PM
I see what you are talking about, but would that also apply to the BM YUV 10-bit file?

PDR
08-09-2011, 12:11 PM
I see what you are talking about, but would that also apply to the BM YUV 10-bit file?

In my experience, BM codecs incur an intermediate limited range RGB conversion before the YUV export from a Y'CbCr source (I'm using "YUV" and "Y'CbCr" interchangeably here, but strictly speaking, everything digital should be called Y'CbCr).

There is no way to preserve the superbright and superdark data, even through other programs or workflows - at least I haven't found a way. I recommend not using them - someone else mentioned the same thing earlier.


It's important to note that it comes from the limited range RGB conversion. You can convert to RGB in full range (for 8-bit formats, Y' 0-255 gets "mapped" to RGB 0,0,0 - 255,255,255, instead of Y' 16-235 getting "mapped" to RGB 0,0,0 - 255,255,255; for 10-bit full range formats it would be 0-1023), but BM doesn't do this.
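
To put a number on it: take a superwhite pixel at Y' 250. A limited range conversion maps 16-235 onto RGB 0-255, so 250 becomes roughly (250-16) x 255/219 ≈ 272, which gets clipped to 255 - that data is gone for good. A full range conversion maps 0-255 onto 0-255, so the 250 simply stays 250 and you can still pull it down later in the grade.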

In my experience, Cineform gets clipped as well in Premiere (but not in Vegas). That is, it is treated as limited range RGB (it incurs the standard Rec601/709 conversion). In Vegas, Cineform gets converted using studio RGB, which is the equivalent of a full range conversion - superbrights and superdarks are preserved. I'm surprised someone had problems with it earlier; maybe he had his setup configured differently, or converted it with a different workflow.

Because most grading programs and filters actually work in RGB, in order to preserve the Y'CbCr data (and superbrights/darks) you have to either:

1) convert to RGB using full range

2) use a format that your software treats as Y'CbCr, or that has filters that can access Y'CbCr before RGB conversion (such as PP's YUV filters) - so it can access all the data. -> In this case, because you already incurred a limited range RGB conversion, you cannot salvage the data that had already been lost, even with YUV filters



Note this applies to the chroma channels as well. Cb and Cr can contain data in the 0-255 range, but "standard" range clips them to 16-240 (not 16-235). When you see some examples with chroma clipping, it's sometimes a workflow problem or a software problem.

cuervo
08-09-2011, 12:49 PM
In my experience, Cineform gets clipped as well in Premiere (but not in Vegas). That is, it is treated as limited range RGB (it incurs the standard Rec601/709 conversion). In Vegas, Cineform gets converted using studio RGB, which is the equivalent of a full range conversion - superbrights and superdarks are preserved. I'm surprised someone had problems with it earlier; maybe he had his setup configured differently, or converted it with a different workflow.

Because most grading programs and filters actually work in RGB, in order to preserve the Y'CbCr data (and superbrights/darks) you have to either:

1) convert to RGB using full range

2) use a format that your software treats as Y'CbCr, or that has filters that can access Y'CbCr before RGB conversion (such as PP's YUV filters) - so it can access all the data. -> In this case, because you already incurred a limited range RGB conversion, you cannot salvage the data that had already been lost, even with YUV filters


Thanx for explaining this. It all makes a lot of sense to me. One of the few good things found in Grass Valley's Edius was that it works in Y'CbCr space, preserving those extra bits of info. My understanding and experience say Avid's Media Composer, as well as After Effects (in 32-bit float mode), also perform the conversion preserving those bits. Have you experienced any other transcoding apps that preserve all the data? The reason I am asking is because my workflow requires that I transcode from my capture format to DNxHD prior to ingesting into Media Composer. I've been using AE in 32-bit float mode to do this. I was using Cineform as a good digital intermediate, but have abandoned CF in favor of DNxHD 10-bit.

PDR
08-09-2011, 01:03 PM
Thanx for explaining this. It all makes a lot of sense to me. One of the few good things found in Grass Valley's Edius was that it works in Y'CbCr space, preserving those extra bits of info. My understanding and experience say Avid's Media Composer, as well as After Effects (in 32-bit float mode), also perform the conversion preserving those bits. Have you experienced any other transcoding apps that preserve all the data? The reason I am asking is because my workflow requires that I transcode from my capture format to DNxHD prior to ingesting into Media Composer. I've been using AE in 32-bit float mode to do this. I was using Cineform as a good digital intermediate, but have abandoned CF in favor of DNxHD 10-bit.

There's the curveball...



Just because a certain format is Y'CbCr, or you use a workflow that supposedly preserves Y'CbCr, doesn't mean that other software "sees" it as Y'CbCr.

For example, Vegas won't treat properly converted v210 (uncompressed 10-bit 4:2:2) the same way Premiere treats it. Vegas actually clips v210 and treats it as RGB using a "computer RGB" or standard range conversion. Uncompressed 8-bit YUV "IYUV" is passed through Vegas (full range studio RGB), but gets clipped in Premiere. There are different types of uncompressed 8-bit YUV - the byte order can be slightly different; for example, fourcc "YV12" will get clipped by Vegas.

The point is, each piece of software has little "quirks" that you have to figure out for yourself. If there is an RGB conversion step somewhere in your workflow, most programs will do a limited range conversion via Rec601/709 and clip superwhites/darks.

I don't have Edius or Avid MC, but I suggest doing some tests on full range Y'CbCr material.

Can't Avid MC ingest AVCHD now? I thought it could? I thought it converts to DNxHD.

I use FFMPEG for a lot of batch work to v210, because you can do batch scripts, and it accepts avisynth.
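
The batch file itself is nothing fancy - something like this (just a sketch, adjust the extension and naming to your own setup):

rem convert every MTS file in the current folder to v210 in a MOV wrapper, keeping PCM audio
for %%f in (*.MTS) do ffmpeg -i "%%f" -vcodec v210 -acodec pcm_s16le "%%~nf_v210.mov"

Save it as something like convert.bat and run it from the folder with your clips.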

Dermot
08-09-2011, 01:52 PM
There is no way to preserve the superbright and superdark data, even through other programs or workflows - at least I haven't found a way. I recommend not using them - someone else mentioned the same thing earlier

Oh but there is - DS imports/captures/exports full range (0-1023) if you ask it to...

Currently I'm finishing a feature shot on 16mm, transferred Dmin/Dmax on a Spirit so the HDSR tape is full range, and after I have it finished (three reels done now, two more to go) I'm exporting full range DPX (0-1023) for filmout on an Arrilaser.

It's done all the time... really... and just about every DI tool set on the planet can do it.

d

cuervo
08-09-2011, 02:04 PM
Can't Avid MC ingest AVCHD now? I thought it could? I thought it converts to DNxHD.


Well, Avid claims MC will do AVCHD. It won't do it in AMA (fast import) mode, and there's a bug in the standard import code.
What front-end GUI are you using with FFMPEG?

PDR
08-09-2011, 02:10 PM
Well, Avid claims MC will do AVCHD. It won't do it in AMA (fast import) mode, and there's a bug in the standard import code.
What front-end GUI are you using with FFMPEG?

I don't use a GUI, I use it in a batch file.

BTW, FFMPEG can't do the 10-bit variety of DNxHD, only 8-bit.

So you're looking for an alternate batch method of converting to DNxHD?

PDR
08-09-2011, 02:13 PM
Oh but there is - DS imports/captures/exports full range (0-1023) if you ask it to...

Currently I'm finishing a feature shot on 16mm, transferred Dmin/Dmax on a Spirit so the HDSR tape is full range, and after I have it finished (three reels done now, two more to go) I'm exporting full range DPX (0-1023) for filmout on an Arrilaser.

It's done all the time... really... and just about every DI tool set on the planet can do it.

d


Sorry, what program is DS? We're talking about BM v210 here, correct? That's what my comments were specifically referring to, in response to Frank's question about his BM YUV 10-bit results.

Of course there are formats and workflows that can store full range 10-bit YUV...

cuervo
08-09-2011, 02:27 PM
So you're looking for an alternate batch method of converting to DNxHD?

Yep... so I don't tie up my Avid doing transcoding. I can batch the conversions on another machine.

PDR
08-09-2011, 02:37 PM
So you're looking for an alternate batch method of converting to DNxHD?

Yep... so I don't tie up my Avid doing transcoding. I can batch the conversions on another machine.


FFMPEG can only do the 8-bit variety of DNxHD. Not sure if you want/like v210 - the file sizes are huge compared to DNxHD.

I dislike DNxHD because the MOV container causes so many issues with different programs - gamma problems, level shifts.

If you are on a PC, DNxHD is usually routed through QuickTime. Big headache.
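
For what it's worth, if you do want to try the 8-bit DNxHD route through ffmpeg anyway, keep in mind the encoder is picky - it only accepts certain size/framerate/bitrate combinations. Something like this should work for 1080p25 4:2:2 material (again a sketch from memory - swap the bitrate for whatever matches your frame size and rate, e.g. 115M or 175M for 1080p24, and "clip.mov" is just a placeholder):

ffmpeg -i clip.mov -vcodec dnxhd -b:v 185M -pix_fmt yuv422p -acodec pcm_s16le clip_dnxhd.mov

The same for-loop trick from the v210 batch file works here too.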

Dermot
08-09-2011, 03:02 PM
Sorry, what program is DS? We're talking about BM v210 here, correct? That's what my comments were specifically referring to, in response to Frank's question about his BM YUV 10-bit results.

Of course there are formats and workflows that can store full range 10-bit YUV...
DS = Avid DS = Avid's 4K/2K/HD finishing system:
http://www.avid.com/US/products/dssoftware

I don't use BMD's codecs, and I don't use Cineform codecs either... neither of them is a match for my workflow.

I have no problems round-tripping with 10-bit DNxHD; it seems fine for broadcast use. I don't use any codecs for filmout or DCP projects - they stay uncompressed, even if shot on an FS100.

Back to Frank's issues... it seems more about his workflow than about the strength of the native h.264 codec the camera shoots in... and to say h.264 is better than ANYTHING is perhaps overstating it a bit?

d/




Dermot
08-09-2011, 03:05 PM
FFMPEG can only do the 8-bit variety of DNxHD. Not sure if you want/like v210 - the file sizes are huge compared to DNxHD.

I dislike DNxHD because the MOV container causes so many issues with different programs - gamma problems, level shifts.

If you are on a PC, DNxHD is usually routed through QuickTime. Big headache.
MetaFuze is free man... what more can you ask for?

re-wrap to h.264
use MetaFuze to make MXF media & ALE from the h.264's
copy MXF media into an indexed directory
open ALE in Avid
done like dirt, get to work....

d

cuervo
08-09-2011, 03:16 PM
MetaFuze? Sorry, but the last time I looked, MetaFuze could only do 24fps. Has it been updated?
Edit: OK, I checked MetaFuze out. It seems it will do a whole range of formats... totally cool!
Unfortunately, after a scan of my folders, MetaFuze won't recognize my Nanoflash XDCAM 422 files (MPEG-2 in an MXF wrapper).

Postmaster
08-09-2011, 04:11 PM
Thanks for your input, guys.

I was always under the impression that converting to a "higher" codec – especially an uncompressed one – would (of course) not make the image better or add information out of nothing, but would at least not lose information.

It looks like I was wrong, and the YUV-RGB conversions probably also have something to do with it.

Well, that only strengthens my belief in an uncompressed workflow from recording to delivery.

I'm gonna do some tests tomorrow - AVCHD in camera vs. uncompressed Hypershuttle - and see what I can get.

Frank

Ralph B
08-10-2011, 01:39 AM
Hey PDR - this is somewhat related to this thread.

When I create a pure computer graphic with a smooth gradient (in RGB space) and then render it to video, I always get very noticeable banding. It's clearly a color space conversion issue. It doesn't matter what codec I render to, it doesn't matter if it's 4:2:2 or 4:2:0, it's always there. Do you have any thoughts on how to deal with this?

cuervo
08-10-2011, 04:05 AM
Hey PDR - this is somewhat related to this thread.

When I create a pure computer graphic with a smooth gradient (in RGB space) and then render it to video, I always get very noticeable banding. It's clearly a color space conversion issue. It doesn't matter what codec I render to, it doesn't matter if it's 4:2:2 or 4:2:0, it's always there. Do you have any thoughts on how to deal with this?
If you're working in 8-bit color depth, chances are it's because of the round-off errors with 8-bit color. That's the reason for moving to a 10-bit workflow.

PDR
08-10-2011, 06:44 AM
Hey PDR - this is somewhat related to this thread.

When I create a pure computer graphic with a smooth gradient (in RGB space) and then render it to video, I always get very noticeable banding. It's clearly a color space conversion issue. It doesn't matter what codec I render to, it doesn't matter if it's 4:2:2 or 4:2:0, it's always there. Do you have any thoughts on how to deal with this?

If there is no banding even in an 8-bit RGB format, it's likely the conversion from RGB to "legal range" 8-bit YUV (Y' 16-235, and 16-240 for CbCr) that's causing it. Since there is less range of expression (black to white only goes from 16-235, instead of 0-255 in RGB), banding is worse. Working at a higher bit depth for intermediate calculations, as cuervo suggested above, will help, but ultimately going to an 8-bit YUV delivery format will cause banding in gradients, especially computer-generated ones.
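
Rough numbers: a full 0-255 RGB ramp has 256 distinct steps, but after a legal range conversion it has to fit into the 220 codes from 16 to 235 - so roughly one in every seven neighbouring levels ends up sharing a code with the one next to it, and those merged levels are exactly where the visible bands show up.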


Do you recall our discussion about full range YUV Flash vs. limited range, where, if you control the implementation, the banding greatly improves?

(i.e. you control the conversion back to RGB for display using full range Y'CbCr [0,255] => RGB [0,255] via the full range flag - something you cannot do with Vimeo or YouTube, but you can with your own website). Of course, this isn't something you can do with, say, Blu-ray or DVD, if those are your destination formats.

In a practical sense, for common delivery formats (DVD, Blu-ray), most people dither or add noise to the content. This is very hard on compression, but if you use ordered dithering, it's easier on the compression. There's also dithering through masks (it's applied selectively, only to areas that need it).

Ralph B
08-10-2011, 09:13 AM
but ultimately going to an 8-bit YUV delivery format will cause banding in gradients, especially computer-generated ones.
This is exactly the problem. Correct me if I'm wrong, but it doesn't appear that a 10-bit workflow or going RGB lossless through post will save you from this ultimate day of reckoning.



In a practical sense, for common delivery formats (DVD, Blu-ray), most people dither or add noise to the content. This is very hard on compression, but if you use ordered dithering, it's easier on the compression. There's also dithering through masks (it's applied selectively, only to areas that need it).
Ordered dithering? Dithering through masks? This is all new to me. Can you elaborate?

PDR
08-10-2011, 10:39 AM
This is exactly the problem. Correct me if I'm wrong, but it doesn't appear that a 10-bit workflow or going RGB lossless through post will save you from this ultimate day of reckoning.


It does help, by avoiding the introduction of new banding. Higher bit depth effects and higher precision mean fewer rounding errors. This is easily demonstrable - compare working at a higher bit depth vs. a lower bit depth with things like ramps and gradients.




Ordered dithering? Dithering through masks? This is all new to me. Can you elaborate?

Dithering is essentially noise that hides the banding.

http://en.wikipedia.org/wiki/Dither#Digital_photography_and_image_processing
http://en.wikipedia.org/wiki/Ordered_dithering

Ordered dithering is preferred (compression-wise) because of long GOP compression. In common delivery formats, long GOP compression is used to store differences between frames. Instead of "random" patterns, there is a more defined pattern, which ends up consuming less bitrate because there are fewer differences between frames with respect to the noise pattern (so more bitrate gets allocated to the image you're trying to show, rather than wasted on the dithering).

There are some 3rd-party plugins - e.g. GenArts Sapphire has one called Deband - and I'm sure there are a few others.

Of course, avisynth has quite a few: the "Gradfun" family (e.g. gradfun2dbmod, gradfun3), dither, flash3kyuu, and a few others.
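
ffmpeg also has a basic one built in (the gradfun filter) if you just want to try the idea quickly - something along these lines (a sketch, tune the strength/radius values to taste; I've dropped the audio since it's just a test, and "graded_master.mov" is just a placeholder):

ffmpeg -i graded_master.mov -vf gradfun=3.5:8 -vcodec libx264 -crf 18 -an banding_test.mp4

It's probably not as flexible as the avisynth scripts, but it's a quick way to see whether debanding helps your material before building it into the real workflow.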

cuervo
08-10-2011, 11:54 AM
This is a curious discussion. Things are rarely as they seem on the surface. Things, in this case, really depend on the logic of the compression algorithm. Take, for example, the Sony MPEG-2 HD codec. Once the bitrate is selected, it is the limiting factor. If the level of detail goes up in the image frame, the codec has to compress the data harder to keep the bitrate you selected. If there is motion in the frame, the codec does the same thing. Increasing detail and increasing full-frame motion stress the bandwidth limit of the codec. The more stress, the more quantization artifacts you're gonna see.

Adding noise is a curious phenomenon. The grain introduced with noise appears to the compression routine like increasing detail. So, at some point, adding noise begins to aggravate the blocking artifacts while reducing the stairstepping. It's all a curious game of balancing competing evils..... LOL

maarek
08-11-2011, 12:18 AM
And when you encode to h.264, it will increase the banding even more, as it compresses the 8 bits down even further. The only real way to combat this is to add noise. And then you upload it to Vimeo, which will encode it again, thus increasing the banding again! Yay.