Page 8 of 9
Results 71 to 80 of 83
  1. #71 · Senior Member
    Quote Originally Posted by combatentropy View Post
    Although I'm not sure about the numbers, jcs.

    First of all, I don't understand what you're doing with your simulation.

    But Roger N. Clark's test seems to say 3 times, not 1.5 times, the resolution. So DCI 4K would need a sensor with 12,288 pixels, for perfect, alias-free capture, against even the worst pinstripes, at any angle, without an optical low-pass filter.
    It's simple: consider line pairs alternating black and white. That's effectively Nyquist 2x sampling. If we add another white line between the black lines, so black, white, white, black, white, white, etc., that's now 3x sampling (and fewer total black lines). For my simulation, I started out with 64 black lines and ended up with 43, and 64/43 ≈ 1.5. ARRI knew what they were doing with 1920 x 3/2 = 2880.
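    That 2/3 factor can be sketched in a few lines of Python (a hypothetical reconstruction of the arithmetic, not the original simulation, which we haven't seen):

```python
black_lines_2x = 64        # alternating black/white: 2 samples per line pair
# Adding a second white line between black lines (black, white, white, ...)
# stretches each period from 2 to 3 samples, so over the same sensor width
# the black-line count drops by a factor of 2/3:
black_lines_3x = round(black_lines_2x * 2 / 3)
print(black_lines_3x)                   # 43
print(black_lines_2x / black_lines_3x)  # ~1.49, i.e. roughly 1.5x
print(1920 * 3 // 2)                    # 2880, ARRI's 3/2 factor applied to HD
```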



  2. #72 · Senior Member
    Quote Originally Posted by jcs View Post
    Current sensors are discrete; we don't have fractional sensels/pixels, so the next whole number after 2 is 3.
    We actually do have fractional sensels. A pixel that comes from a Bayer pattern mosaic is basically sampled from the middle of four sensels (where two green, one red, and one blue sensel meet). So, each pixel that comes from a Bayer mosaic actually has 1/2 green, 1/4 red, and 1/4 blue data. That is why a Bayer pattern sensor has extra sensels all around it - it needs that extra fractional information to form a full pixel.

    On top of that, different demosaicing will look at that fractional info differently in order to get better/faster/different results. There are some good visual examples at the bottom of the page here: https://rawpedia.rawtherapee.com/Demosaicing

    My point is that you don't need to step up to another whole number to deal with aliasing.
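    That fractional sharing can be illustrated with a toy bilinear demosaic (illustration only; real demosaicers like the ones on that RawTherapee page are far more sophisticated):

```python
# Toy 4x4 RGGB mosaic over a flat scene: R=0.8, G=0.5, B=0.2 everywhere.
# Each sensel records only one channel.
def sensel(row, col):
    if row % 2 == 0 and col % 2 == 0:
        return 0.8          # red sensel
    if row % 2 == 1 and col % 2 == 1:
        return 0.2          # blue sensel
    return 0.5              # green sensel

m = [[sensel(r, c) for c in range(4)] for r in range(4)]

# Bilinear estimate of green at the red sensel (2, 2): average the four
# neighboring green sensels -- the "fractional" information described above.
g_at_r = (m[1][2] + m[3][2] + m[2][1] + m[2][3]) / 4
print(g_at_r)   # 0.5, the true green value at that site
```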

    That 5.7k number comes from the fact that to double the amount of megapixels in an image, you only need to multiply each side by a factor of the square root of 2 (AKA √2 AKA 1.414).

    4096 x √2 ≈ 5793 | 2160 x √2 ≈ 3055

    Panasonic probably rounded down from that, because 5720 and 3016 are both divisible by eight, which probably makes processing easier.

    So, 4096 x 2160 = 8.847 megapixels, and 5720 x 3016 = 17.252 megapixels. Just about double the megapixels. And that 2x info avoids most aliasing (luminance aliasing).
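    A quick sketch in Python confirms those numbers:

```python
import math

w, h = 4096, 2160
print(round(w * math.sqrt(2)), round(h * math.sqrt(2)))  # 5793 3055
print(5720 % 8, 3016 % 8)      # 0 0: both divisible by eight
print(w * h)                   # 8847360  (~8.847 Mpx)
print(5720 * 3016)             # 17251520 (~17.25 Mpx, just about double)
```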

    [I will say that I am unsure if Nyquist technically requires 2x in each direction, but every camera manufacturer (Red, Arri, Panasonic) appears to have come to the conclusion that 2x megapixels is enough to have true 4k luminance resolution.]

    Also, that Clarkvision article is helpful but not fully accurate for a bunch of reasons. One is that he is using line pairs, so 100 line pairs per inch at bare minimum (not even getting into Nyquist) needs 200 pixels per inch to represent them. (One line pair = a black line plus the white space next to it, so you have to have at least two pixels, one for black and one for white.) So, Nyquist (2x that) would be 400 pixels per inch. What he arrived at, then, was technically 1.5x Nyquist.

    However, once he rotated the lines, those lines no longer represent a pure sine wave for the purposes of his example. I believe it would be considered infinite resolution at that point. So that extra resolution is what is contributing to the aliasing he is getting. On a sensor, though, the OLPF would be blurring out that extra resolution so that you would not be seeing that aliasing.
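    Following the post's own line-pair arithmetic, a minimal sketch:

```python
lp = 100                 # line pairs per inch
min_px = lp * 2          # one pixel per line: 200 px/in bare minimum
nyquist_px = 2 * min_px  # the post's "2x" figure: 400 px/in
clark_px = 3 * min_px    # Clark's ~3 pixels per line: 600 px/in
print(clark_px / nyquist_px)   # 1.5, i.e. "1.5x Nyquist"
```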

    Quote Originally Posted by Mitch Gross View Post
    Technically, to get full resolution 4K from Bayer you need 5.7K, not 6K
    Technically that is mostly right. I would say it is probably accurate if we were talking solely about luminance (black and white) resolution.

    However, color resolution is still under-sampled. At a 5.7k downsample, there is still only 1/2 worth of red pixels and 1/2 worth of blue pixels. We do have 1/1 worth of green pixels, but that is still not technically enough for Nyquist - we need at least double that (maybe even 4x?).

    So, if we wanted to technically get to 2x sampling on all colors (and just in overall pixels, not in each direction), we would need about 17 million red and 17 million blue pixels. That won't happen until we hit at least 11k resolution.
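    A back-of-envelope check of that 11k figure, assuming a DCI-ish 4096:2160 aspect ratio:

```python
import math

target_red = 17.25e6      # ~2x the red samples for an 8.85 Mpx 4K frame
total = target_red * 4    # red is 1/4 of a Bayer mosaic: ~69 M sensels
aspect = 4096 / 2160
width = math.sqrt(total * aspect)
print(round(width), round(width / aspect))  # ~11400 x ~6000: past 11k wide
```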

    However, all that being said, I think debayering has gotten good enough to where a 2x megapixel oversample is getting us almost all the way there, especially since the image is going to be seen in a compressed state by the end user and because we are not shooting pure color test charts. (And how many of us are recording uncompressed raw or 4:4:4 anyways? Color info is being thrown away at the source!)

    Also, watching Steve Yedlin's Resolution Demo made me realize how unimportant resolution can be once a certain quality level is reached.


    1 out of 1 members found this post helpful.

  3. #73 · Senior Member puredrifting
    Quote Originally Posted by DLD View Post
    Canon C200 shoots an internal Raw (6:1 compression, I believe)...
    Cinema RAW Light, so far, is a fixed 5:1 compression at 1Gbps. At least that's what the engineers at Canon USA told me in 2017.
    It's a business first and a creative outlet second.
    G.A.S. destroys lives. Stop buying gear that doesn't make you money.



  4. #74 · Senior Member
    Quote Originally Posted by puredrifting View Post
    Cinema RAW Light, so far, is a fixed 5:1 compression at 1Gbps. At least that's what the engineers at Canon USA told me in 2017.
    Well, I was off only slightly.

    Side note - F65 was a 17.6 MPX camera. 4K, 4:2:2.



  5. #75 · Senior Member
    Quote Originally Posted by Joshua Cadmium View Post
    We actually do have fractional sensels. A pixel that comes from a Bayer pattern mosaic is basically sampled from the middle of four sensels (where two green, one red, and one blue sensel meet). So, each pixel that comes from a Bayer mosaic actually has 1/2 green, 1/4 red, and 1/4 blue data. That is why a Bayer pattern sensor has extra sensels all around it - it needs that extra fractional information to form a full pixel.

    On top of that, different demosaicing will look at that fractional info differently in order to get better/faster/different results. There are some good visual examples at the bottom of the page here: https://rawpedia.rawtherapee.com/Demosaicing

    My point is that you don't need to step up to another whole number to deal with aliasing.

    That 5.7k number comes from the fact that to double the amount of megapixels in an image, you only need to multiply each side by a factor of the square root of 2 (AKA √2 AKA 1.414).

    4096 x √2 ≈ 5793 | 2160 x √2 ≈ 3055

    Panasonic probably rounded down from that, because 5720 and 3016 are both divisible by eight, which probably makes processing easier.

    So, 4096 x 2160 = 8.847 megapixels, and 5720 x 3016 = 17.252 megapixels. Just about double the megapixels. And that 2x info avoids most aliasing (luminance aliasing).

    [I will say that I am unsure if Nyquist technically requires 2x in each direction, but every camera manufacturer (Red, Arri, Panasonic) appears to have come to the conclusion that 2x megapixels is enough to have true 4k luminance resolution.]

    Also, that Clarkvision article is helpful but not fully accurate for a bunch of reasons. One is that he is using line pairs, so 100 line pairs per inch at bare minimum (not even getting into Nyquist) needs 200 pixels per inch to represent them. (One line pair = a black line plus the white space next to it, so you have to have at least two pixels, one for black and one for white.) So, Nyquist (2x that) would be 400 pixels per inch. What he arrived at, then, was technically 1.5x Nyquist.

    However, once he rotated the lines, those lines no longer represent a pure sine wave for the purposes of his example. I believe it would be considered infinite resolution at that point. So that extra resolution is what is contributing to the aliasing he is getting. On a sensor, though, the OLPF would be blurring out that extra resolution so that you would not be seeing that aliasing.



    Quote Originally Posted by Mitch Gross View Post
    Technically, to get full resolution 4K from Bayer you need 5.7K, not 6K
    Technically that is mostly right. I would say it is probably accurate if we were talking solely about luminance (black and white) resolution.

    However, color resolution is still under-sampled. At a 5.7k downsample, there is still only 1/2 worth of red pixels and 1/2 worth of blue pixels. We do have 1/1 worth of green pixels, but that is still not technically enough for Nyquist - we need at least double that (maybe even 4x?).

    So, if we wanted to technically get to 2x sampling on all colors (and just in overall pixels, not in each direction), we would need about 17 million red and 17 million blue pixels. That won't happen until we hit at least 11k resolution.

    However, all that being said, I think debayering has gotten good enough to where a 2x megapixel oversample is getting us almost all the way there, especially since the image is going to be seen in a compressed state by the end user and because we are not shooting pure color test charts. (And how many of us are recording uncompressed raw or 4:4:4 anyways? Color info is being thrown away at the source!)

    Also, watching Steve Yedlin's Resolution Demo made me realize how unimportant resolution can be once a certain quality level is reached.
    This is far more accurate math and thanks for taking the time to write it out (so I didn't have to!). Yes, I was referring to spatial resolution, not color resolution.

    The proof is in the pudding, kids: the EVA1 can clearly define 2000 line pairs on a proper test chart, while "regular 4K" cameras top out at around 1650 - 1800 lp before aliasing and/or mush sets in.
    Last edited by Mitch Gross; 08-25-2019 at 09:38 AM. Reason: Spelling
    Mitch Gross
    Cinema Product Manager
    Panasonic System Solutions Company



  6. #76 · Senior Member
    Quote Originally Posted by puredrifting View Post
    Cinema RAW Light, so far, is a fixed 5:1 compression at 1Gbps. At least that's what the engineers at Canon USA told me in 2017.
    Actually, Canon says CRL's compression ratio varies from 1/3 to 1/5, which makes sense given that the data rate is constant at 1Gbps even when the bit depth and frame rate vary. Uncompressed 12-bit 4K 30p has a data rate 25% greater than that of 12-bit 4K 24p, but with CRL the data rate is apparently held constant for both at 1Gbps, so it stands to reason that the compression ratio must vary.

    https://www.canon-europe.com/pro/sto...ema-raw-light/

    The Cinema RAW Light format creates files approximately 1/3 to 1/5 the size of a Cinema RAW file
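    A rough sketch of why a fixed recorded rate implies a varying ratio (assumed raw figures that ignore container overhead and the sensor's actual readout size, so they won't match Canon's 1/3 to 1/5 exactly):

```python
def uncompressed_gbps(w, h, bits, fps):
    # One sample per Bayer sensel (assumption), in gigabits per second.
    return w * h * bits * fps / 1e9

r24 = uncompressed_gbps(4096, 2160, 12, 24)   # ~2.55 Gbps
r30 = uncompressed_gbps(4096, 2160, 12, 30)   # ~3.19 Gbps
print(r30 / r24)   # 1.25: 30p carries 25% more data than 24p, so at a
                   # fixed 1Gbps recorded rate the ratio must rise 25%
```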
    Last edited by Gary T; 08-23-2019 at 12:47 PM.



  7. #77 · Senior Member
    Quote Originally Posted by Grug View Post
    One of the most frustrating angles of all of these new 6k-ish cameras, is that a good number of them offer 4k supersampled from the full 6k sensor (C700 FF, Venice, Red) but none of them actually use that supersampling to record a full 4:4:4 RGB image. Everything tops out at Prores422 HQ.

    As much as people go on about raw, I'm still struggling (in 2019) to get producers to pay for it. So most of what I'm shooting is either 12-bit 2k 4:4:4 or 10-bit 4k 4:2:2. Both of which I find are (ultimately) VERY comparable in overall image quality.

    If the companies would use their higher resolution sensors to pad out the limitations of Bayer sensors and give us 12-bit 4k 4:4:4, then at least we'd be seeing a genuine leap forward in image quality that I can actually put to use.
    When Panasonic updates their Varicam line with a 6K-ish sensor you may get your wish, since the Varicam 35 already supports 12-bit AVC-Intra4K444. I don't have experience with it, but it's good enough for studios to use for theatrical release.

    Since the EVA1 already delivers a supersampled 4K image from its 5.7K sensor, it seems reasonable to expect an updated Varicam would be able to do the same but with 12-bit AVC-Intra4K444 files. Mitch knows what's coming but of course he won't tell.

    [Attached image: img_avcultra.jpg]



  8. #78 · Senior Member Grug
    Quote Originally Posted by Gary T View Post
    When Panasonic updates their Varicam line with a 6K-ish sensor you may get your wish, since the Varicam 35 already supports 12-bit AVC-Intra4K444. I don't have experience with it, but it's good enough for studios to use for theatrical release.

    Since the EVA1 already delivers a supersampled 4K image from its 5.7K sensor, it seems reasonable to expect an updated Varicam would be able to do the same but with 12-bit AVC-Intra4K444 files. Mitch knows what's coming but of course he won't tell.
    Fingers firmly crossed.



  9. #79 · Senior Member Bern Caughey
    Quote Originally Posted by Greg_E View Post
    What happened to Foveon?
    Why Foveon X3 sensor is not popular
    https://www.cambridgeincolour.com/fo...hread64445.htm

    The end of the Bayer sensor could be upon us
    by Phil Rhodes
    https://www.redsharknews.com/product...uld-be-upon-us



  10. #80 · Senior Member puredrifting
    Quote Originally Posted by Gary T View Post
    Actually, Canon says CRL's compression ratio varies from 1/3 to 1/5, which makes sense given that the data rate is constant at 1Gbps even when the bit depth and frame rate vary. Uncompressed 12-bit 4K 30p has a data rate 25% greater than that of 12-bit 4K 24p, but with CRL the data rate is apparently held constant for both at 1Gbps, so it stands to reason that the compression ratio must vary.

    https://www.canon-europe.com/pro/sto...ema-raw-light/
    See, I wish I was as knowledgeable about this stuff. But really, I don't care enough to learn the finer points. What you say makes sense. But does "1/3 to 1/5" equate to 3:1 and 5:1? Once again, my terrible math skills can't translate fractions to ratios.
    It's a business first and a creative outlet second.
    G.A.S. destroys lives. Stop buying gear that doesn't make you money.



