  1. Joshua Cadmium
     Senior Member | Join Date: Apr 2011 | Posts: 267
    Quote Originally Posted by ahalpert
    Interesting analysis, Joshua. Perhaps the implication is that JB might typically use a fatter stop than the moire sufferers?

    Relying on diffraction to suppress moire would seem to defeat the purpose of having such a high-resolution sensor. Though perhaps an OLPF does as well.

    At any rate, the matter is probably best settled as most lens/camera issues are - by testing and seeing if you're happy with the outcome and the gestalt.
    Yes, testing is essential, especially because this sensor is not Bayer, so we can't really compare it to anything else when it comes to moire. It has more red and blue pixels than an equivalent Bayer sensor, but if the color sampling is not as evenly spaced as it is on a Bayer sensor, color moire might even be worse. But since John's not seeing it that much, it shouldn't be as big of a deal.

    I brought up using diffraction to suppress moire for two reasons. One, it seemed like a potential solution if it ever showed up. Because the pixels are so small, the Airy disks are going to blur across them more than on any other cinema camera.

    The second reason I posted so much info is that it has implications for capturing all of the resolution of the 12K. If you want to punch in to 4K, or even HD, you need to realize that diffraction is going to make those punch-ins less sharp than they could be at anything slower than maybe f2 (but people need to test).

    If you are just going to be downsampling 12K to 4K or 8K, it's not going to matter as much, but it is something to be aware of.
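    For anyone who wants to sanity-check these numbers before testing, the arithmetic is easy to script. Here's a back-of-envelope Python sketch of my own, assuming the usual 2.44 * wavelength * f-number Airy disk diameter, green light at 550nm, and the published 2.2um pitch:

[CODE]
# Airy disk diameter (to the first minimum): d = 2.44 * wavelength * N
WAVELENGTH_UM = 0.550    # green light, roughly the middle of the visible band
PIXEL_PITCH_UM = 2.2     # published Blackmagic 12K photosite pitch

def airy_disk_um(f_number):
    """Central Airy disk diameter in microns."""
    return 2.44 * WAVELENGTH_UM * f_number

for f in (1.4, 2.0, 2.8, 4.0, 5.6, 8.0):
    d = airy_disk_um(f)
    print(f"f/{f}: {d:.2f}um = {d / PIXEL_PITCH_UM:.2f} photosite widths")

# By f/2 the disk (~2.7um) is already wider than one 2.2um photosite,
# which is why punch-ins start to soften past roughly f/2.
[/CODE]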



  2. Joshua Cadmium
     Senior Member | Join Date: Apr 2011 | Posts: 267
    Quote Originally Posted by James0b57
    Do you think that since the 12K sensor can have a lower chroma subsample, it may actually moire more than a similarly spec'd Bayer (pronounced 'buyer') colour array? Not saying that it isn't easily mitigated though, and the extra pixel density will always have some unique properties. But adding enough dithering to prevent moire would lower the light sensitivity and resolution to the point that there is little advantage over an 8K Bayer sensor?
    We don't know the pattern of the sensor, so I don't think there is a way to definitively answer that - as I said in the previous post, it could moire more or less, but maybe less.

    Quote Originally Posted by James0b57
    Was the pixel density an important part of the 12K design, or could it be an advantage to have gone LF with the sensor?
    The small pixel size and the CFA pattern go hand in hand. Smaller photosites have a smaller bucket to hold the photons they capture, which can lead to less dynamic range, and that is why they are using white photosites to mitigate it. Arri is still using 8.25um photosites because they are huge buckets that hold on to lots of photons. Blackmagic could have gone with bigger pixels at a 12K LF size, which would have been better from a dynamic range standpoint, but then you wouldn't get the amount of lens options you get with this sensor - there are a lot of choices in S35. But they could also go another direction (which I bet they'll do) and do an LF size at 16K. It makes more sense from a business standpoint since they already developed the sensor tech at 2.2um.
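    To put rough numbers on the bucket analogy, here's a quick sketch of my own - photon capacity only scales with photosite area to a first approximation, so treat it as illustrative:

[CODE]
# Photosite area comparison: bigger area = bigger photon "bucket".
ALEXA_PITCH_UM = 8.25    # Arri's photosite pitch
BM12K_PITCH_UM = 2.2     # Blackmagic 12K photosite pitch

area_ratio = (ALEXA_PITCH_UM / BM12K_PITCH_UM) ** 2
print(f"One Alexa photosite covers ~{area_ratio:.0f}x the area of a 12K photosite")
# ~14x the area per site - which is why the 2.2um design leans on white
# photosites to claw back dynamic range.
[/CODE]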

    Quote Originally Posted by James0b57
    I don't mind if an OLPF reduces resolution, as the extra chroma and luma sampling will still take place, making for some lovely tones and gradations. However, if an OLPF dithers enough light, does that affect the sensitivity? Or is that a non-issue.
    If by sensitivity you mean dynamic range, it wouldn't - it's just spreading the point source of light out a bit, but it's doing so everywhere. Yes, even though resolution takes a hit, the extra sampling still takes place.



  3. John Brawley
     Senior Member | Join Date: Oct 2009 | Location: Los Angeles | Posts: 727
    Josh.

    Just as a circle of confusion isn't absolute when making decisions about depth of field calculations, there are generally accepted numbers that are de facto standard.

    2.5x is the de facto number for the Airy disk calculations.

    Also, BMD lay out the array on their patent. Also think about the difference between pixels and photo sites. And how these are changed and re-arranged per the BMD patent with BMD raw.

    https://patentimages.storage.googlea...90306472A1.pdf

    I typically shoot around T2-T4.

    Most of my demo material that was released that you all can download is shot around T2.8.

    JB
    John Brawley ACS
    Cinematographer
    Los Angeles
    www.johnbrawley.com
    I also have a blog



  4. James0b57
     Senior Member | Join Date: Dec 2008 | Posts: 6,059
    Quote Originally Posted by Joshua Cadmium
    If by sensitivity you mean dynamic range, it wouldn't - it's just spreading the point source of light out a bit, but it's doing so everywhere. Yes, even though resolution takes a hit, the extra sampling still takes place.
    I meant sensitivity as in ASA or ISO rating. Wondering how much the diffusion slows down the light.



  5. Joshua Cadmium
     Senior Member | Join Date: Apr 2011 | Posts: 267
    Quote Originally Posted by John Brawley
    It's a contender for sure, as an A camera option. The main things holding it back are those high end utility things. Not anything to do with IQ.
    That's great to hear.

    Quote Originally Posted by John Brawley
    The mini of course has internal switchable NDs and that's what I'm more referring to.
    Gotcha. I forgot about that. I was thinking of the internal filter module for the older Alexas.

    Quote Originally Posted by John Brawley
    This idea that IR is simple is just...wrong.

    Like with audio EQ. You can't really just totally cut below a certain frequency. You can HEAR the difference. Same with IR. You have to roll it off to some degree.

    In our early days of ND they were super aggressive. Everyone accepted very GREEN ND filters because there was no IR and that was the price you paid.
    What I meant was not IR, per se - that does have to be rolled off at the sensor level and is not a simple adjustment. I was actually referring to completely neutral NDs. It SEEMS like it would be easy to just cut a certain amount of light evenly across the whole visible spectrum, but it is not. Of course thinking it is easy is wrong - that's not what I was saying, if that's what you were thinking. What I was trying to communicate was that I find it ironic that one of the simplest things you have to do to control the image, and the filters you always have to have on hand - ND - has to be such a complicated affair. For photography, it is such a simple thing to just change the shutter speed, but our hands are mostly tied regarding that.

    Oh yeah, those green Tiffen IRNDs didn't look great. You could balance them back to white, just like the bluish Schneider Platinum NDs could be balanced back to white, but it was for sure a compromise. Of course you had, and still have, Pancro NDs, which have been around longer than any digital camera Arri or Red have put out. They were, and maybe still are, the best, but they work as a reflecting mirror and don't get into deep ND - you have to custom order anything beyond a 1.5. In this video that Nisi put out a few years ago, I thought the Pancro may have looked the best, especially with tungsten light: https://www.youtube.com/watch?v=3Hpx9X-ePow

    Quote Originally Posted by John Brawley
    The idea of vintage lenses and diffusion mitigating moire is a bit of a red herring. An OLPF is designed to reject detail that is too fine for the sensor to accurately reproduce. Diffusion filters are the opposite. They soften the detail that's within what the sensor CAN resolve. Same with vintage lenses.
    I do have to disagree with this as well. You can have a lens that is so poor in resolution that it will limit aliasing. There is a mathematical example here, with a Siemens star comparison at the end of the article: https://www.lensrentals.com/blog/201...-good-sensors/

    The same thing is true with some, but definitely not all, diffusion filters. Anything that blurs real resolution is also going to blur false resolution, otherwise an OLPF wouldn't work in the first place. To what extent you get that blurring would need to be tested, but I would bet something like a Tiffen Double Fog or a Mitchell Diffusion filter would cut down on some level of moire, almost certainly not all, but some. A Pearlescent or Radiant Soft filter? Maybe not so much at all.
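    The reason this works is that the system response is roughly the product of each component's MTF, so if the lens or filter passes almost nothing at the sensor's Nyquist frequency, there is little energy left to alias. A toy illustration with made-up MTF values of my own (not measurements of any real lens or filter):

[CODE]
# Toy model: system MTF at the sensor's Nyquist frequency is roughly the
# product of the component MTFs. A low product means little detail is left
# to alias into moire.
def system_mtf_at_nyquist(lens_mtf, filter_mtf=1.0):
    return lens_mtf * filter_mtf

SHARP_LENS = 0.45   # made-up: modern lens with lots of contrast at Nyquist
SOFT_LENS = 0.05    # made-up: very poor / vintage lens
DOUBLE_FOG = 0.30   # made-up: heavy diffusion filter

print(system_mtf_at_nyquist(SHARP_LENS))              # 0.45  -> can moire
print(system_mtf_at_nyquist(SOFT_LENS))               # 0.05  -> little left to alias
print(system_mtf_at_nyquist(SHARP_LENS, DOUBLE_FOG))  # 0.135 -> reduced, not gone
[/CODE]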

    Quote Originally Posted by John Brawley
    The only thing I can think of that really affects this in camera is the shooting stop (and thus dof).
    If you mean shooting wide open, then yes, there can be less resolution there, but some lenses are effectively diffraction limited even wide open and will still give enough resolution to show moire at lower f-stops. And the more the photosite size shrinks, the more this is going to happen.

    Quote Originally Posted by John Brawley
    That's exactly what's great about the machine I listed. It's very consistent. And that leaves the variables to bean...roast...batch etc....
    The complexity of coffee is lovely, isn't it? Species, terroir, and the Maillard reaction produce such stark differences in flavor.



  6. James0b57
     Senior Member | Join Date: Dec 2008 | Posts: 6,059
    Quote Originally Posted by Joshua Cadmium
    What I was trying to communicate was that I find it ironic that one of the simplest things you have to do to control the image, and the filters you always have to have on hand - ND - has to be such a complicated affair.
    +1

    It would seem that for professional cameras, adding a couple grand for the option of having the best filtration internal would be ideal.



  7. rizibo
    Quote Originally Posted by Teddy_Dem
    12K URSA Mini Pro VS 8K Canon R5 VS 6K C500 MKII:

    https://www.youtube.com/watch?v=3JGvfbJXZkk
    What is causing the yellow cast on the model's hair filmed on the URSA at 6:36? The Canon cameras don't have this.
    Last edited by rizibo; 10-20-2020 at 12:47 AM.



  8. Joshua Cadmium
     Senior Member | Join Date: Apr 2011 | Posts: 267
    Quote Originally Posted by John Brawley
    Just as a circle of confusion isn't absolute when making decisions about depth of field calculations, there are generally accepted numbers that are de facto standard.

    2.5x is the de facto number for the Airy disk calculations.

    As I mentioned, one can still use 2.5x photosite widths, just like you could use d/1500 as a circle of confusion number. However, knowing where that 2.5x number comes from is important, because diffraction starts before that 2.5x point - Airy Disks begin to blur together as soon as they grow past the width of an individual photosite.


    Even using that 2.5x number, it puts the 12K sensor into diffraction at a width of 2.5 * 2.2um, or 5.5um. That means the 12K sensor will be in visible diffraction at f4.1 (2.44 * 550nm * 4.1 = 5.5um), so at about f4 you are definitely going to see diffraction, especially if you punch in. That is a much wider stop than I think most people would expect.
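    If you want to run these numbers for any sensor, the formula just inverts to N = (k * pitch) / (2.44 * wavelength). A quick Python sketch of mine, with the same 550nm green-light assumption as above:

[CODE]
# f-number at which the Airy disk reaches k photosite widths:
# solve 2.44 * wavelength * N = k * pitch for N.
def f_number_for(k, pitch_um, wavelength_um=0.550):
    return (k * pitch_um) / (2.44 * wavelength_um)

# Blackmagic 12K, 2.2um photosites:
print(f"{f_number_for(1.0, 2.2):.2f}")   # ~f/1.64 - disk fills one photosite
print(f"{f_number_for(2.5, 2.2):.2f}")   # ~f/4.1  - the 2.5x "visible" threshold

# Sony A7RII, 4.5um photosites:
print(f"{f_number_for(2.5, 4.5):.2f}")   # ~f/8.4
[/CODE]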


    But that 2.5x ratio is misleading.


    Why? Let's look at some real numbers again, and for that I'll use that Zeiss Batis 85mm again, since it best proves my point: https://www.opticallimits.com/sonyal...s85f18?start=1 . On the A7RII, using 2.5x photosite widths, that de facto number says that diffraction will only be experienced at an Airy Disk width of 2.5 * 4.5um = 11.25um. That means the A7RII will be in visible diffraction at f8.4 (2.44 * 550nm * 8.4 = 11.25um).


    This online calculator, which is the first result when I search for "diffraction calculator", tells me that the A7RII at 42.4MP is diffraction limited at f9 (they rounded up): https://www.photopills.com/calculators/diffraction . If I choose f8, it specifically says that I am not diffraction limited.


    So you think: well, this de facto number is telling me that I will be in visible diffraction at f8.4 (according to the math, or f9 according to that online calculator). I'll just stop down to f8 - that's less than f8.4.


    What happens to resolution at f8 on that Zeiss Batis? The resolution in the center is 4773, which is 5.7% less than the maximum center resolution of 5063 at f2.8. At the near center, there is 6.0% less resolution. At the border there is 7.8% less and at the edges 6.3% less. Doing a rough estimate, let's just say there is 6% less resolution everywhere. Well, that scales in both directions, so total resolution loss would be 1 - (0.94 * 0.94), or 11.64% fewer pixels everywhere. That's a loss of about 5 megapixels on a 42.4 megapixel sensor.
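    Since those chart figures are measured in one direction, the pixel-count hit compounds across both axes. A quick sketch of that arithmetic (mine):

[CODE]
# A one-direction resolution loss compounds across both axes of the sensor.
def total_pixel_loss(loss_one_direction):
    keep = 1.0 - loss_one_direction
    return 1.0 - keep * keep

print(f"{total_pixel_loss(0.06):.4f}")                  # 0.1164 -> ~11.6% of pixels at f8
print(f"{42.4 * total_pixel_loss(0.06):.1f} MP lost")   # ~4.9 MP on the A7RII
print(f"{total_pixel_loss(0.04):.4f}")                  # ~0.078 -> the f5.6 case below
[/CODE]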


    [Also, we were comparing f8 to f2.8, when the sharpest this lens would get would most likely be where the Airy Disk meets the photosite size, which happens at f3.35 (2.44 * 550nm * 3.35 = 4.5um).]



    --------



    So, what if you looked at that f8.4 number and said: well, that's awfully close to f8, so I'll just avoid f8 altogether and live at an f5.6 limit. This calculator even tells me that for full frame 42.4 MP, I won't see diffraction until f6.8 (a 2.0x width): https://www.cambridgeincolour.com/tu...tography-2.htm


    At f5.6, you would lose a little over 4% of resolution in the center (in one direction - about 8% total) and about 2% at the edges (about 4% total), compared to the maximum resolution the lens can deliver at f2.8.


    Can you see how the 2.5x and even 2.0x ratios can be misleading? They are not really where diffraction starts. I want to know where I am maximizing resolution and then go from there.


    Looking at the 12K, if you took that 2.5x width as gospel and shot at f4, you may see somewhere between 10-15% less resolution than you would otherwise get. That's about 10 megapixels off the top. Resolution isn't everything, but if you want to maximize resolution, you may want to know where that actually happens. Or, the other side of the coin - you may want to know where you could stop down and maybe reduce moire because total resolution is dropping.



    --------



    Another way to illuminate the situation is by looking at a poorer lens that can't resolve as well. Here is the Sony FE 50mm f1.8: https://www.opticallimits.com/sonyal...e50f18?start=1 . Not a great looking resolution chart. This lens has its center peak at f4, since the resolution gained by correcting aberrations as you stop down outweighs the resolution lost to diffraction. It's the same with the edges - they peak at f5.6 because their aberrations are so strong that they need a lot of stopping down to correct, so stopping down pulls resolution up more than diffraction pushes it down.


    If we went by that 2.5x rule, it would say we should see diffraction at f8. And we do. But the diffraction started in the central portion earlier than that - at f5.6. If the lens had poorer central resolution, diffraction would not be limiting it at f5.6. But since it does have a relatively sharp center, diffraction kicks in earlier there than it does at the poorer edges. And if we compare the center portion between f8 and f4, it's a loss of about 20% of resolution in the center, with the edges nearly the same.


    So, even if a lens will not resolve the total sensor, you are still going to lose resolution at that 2.5x photosite width (except maybe if you are using really poor resolving lenses).


    This is also why I think the 2.5x used to be relevant - we were dealing with lenses that weren't nearly as sharp. If that Sony 50mm had a weak center, then f8 is where diffraction would show, and the 2.5x would hold. (But even if the 2.5x held, it does not say exactly where diffraction kicks in.)



    --------



    So, to put this all another way: the 2.5x ratio for sure tells you that you are in diffraction - but you hit that diffraction mark way before you think you did, and the online calculators are all misleading in that regard. It is certainly not the very edge of diffraction, which is what I was trying to determine. Once you are at 2.5x, you are automatically losing information, so if you are trying to preserve resolution, you should stay away from that ratio.


    To put this a second way - at what point do you know you are maximizing the total resolution of the system? That point is when the central portion of the Airy Disk equals one photosite width. When you stop down a lens, the Airy Disk gets bigger and spills over into other photosites, blurring the Airy Disk. You might have a little wiggle room, but not much.


    So, to sum up, on modern super sharp lenses, you are going to maximize total resolution on the 12K sensor somewhere between f1.64 and f2. With poor lenses, you are going to maximize total resolution somewhere after that, maybe around f2.8. Diffraction kicks in after that. This will all be relatively easy to test.


    But, you don't have to. This is all physics and can be seen on thousands of lens tests out there. We're not really breaking new ground here, just interpreting existing info.



    --------



    Quote Originally Posted by John Brawley
    Also, BMD lay out the array on their patent.

    They have two different CFAs on the patent, so I'm still not sure which one they're actually using.


    Quote Originally Posted by John Brawley
    Also think about the difference between pixels and photo sites. And how these are changed and re-arranged per the BMD patent with BMD raw.

    Diffraction doesn't go away just because there is a CFA. The photosite is the physical boundary for the Airy Disk. If you want to capture 12K worth of Airy Disk central points of light, you need 12K worth of photosites. If the Airy Disk central points of light get bigger, you can no longer capture as many of them. (And if the Airy Disk central points of light aberrate from a perfect point to something that is not a perfect point [AKA what all lens aberrations do], you also can no longer capture as many of them.) Constructing pixels from photosites isn't going to increase the physical boundary of the photosite.

    Now, you might be talking about color resolution, which is a different tangent, but my argument has been about total resolution. Color is subsampled and is going to have a lower resolution, but that's where the pattern of the CFA plays a role, which, again, I'm not completely sure about. I still don't even know if they are using the white pixel to interpolate color, and that's going to play a big role in determining overall color resolution.



  9. James0b57
     Senior Member | Join Date: Dec 2008 | Posts: 6,059
    Quote Originally Posted by Joshua Cadmium
    ...Now, you might be talking about color resolution, which is a different tangent, but my argument has been about total resolution. Color is subsampled and is going to have a lower resolution, but that's where the pattern of the CFA plays a role, which, again, I'm not completely sure about. I still don't even know if they are using the white pixel to interpolate color, and that's going to play a big role in determining overall color resolution.
    It was the second layout, I believe. They state it is similar to the Bayer array pattern, but for every one colour patch on Bayer, they have two luma (white) and two colour patches.

    A Bayer-type sensor might have one red photosite, but the BM12K replaces it with:
    R =
    r w
    w r

    That is only significant for the extra light sensitivity and resolution, but the actual chroma subsample is about quarter resolution, unless they are interpolating by adding and subtracting values to figure out "virtual" chroma samples.

    Bayer would be:
    R G
    G B

    And BM12k is (as seen in figure 4 in the patent doc):
    r - w - g - w
    w - r - w - g
    g - w - b - w
    w - b - w - g


    Those colour groupings mean that, without really smart interpolation, I'm not sure how the base chroma sampling is much more than that of a 6K Bayer sensor. Obviously there is additional tonal gradation, but as far as the groupings and crosstalk between photosites go, the potential for moire would be similar to a 6K sensor (quarter the resolution = half the K) that has an OLPF made for a 12K sensor.
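    To make that concrete, here's a quick count over the 4x4 tile above - my own sketch, and remember we're assuming figure 4 of the patent is what actually ships:

[CODE]
from collections import Counter

# 4x4 CFA tile from figure 4 of the BMD patent (if that's the shipping layout).
TILE = [
    "r w g w",
    "w r w g",
    "g w b w",
    "w b w g",
]
counts = Counter(site for row in TILE for site in row.split())
total = sum(counts.values())
for colour in sorted(counts):
    print(f"{colour}: {counts[colour]}/{total} = {counts[colour] / total:.1%}")
# b: 2/16, g: 4/16, r: 2/16, w: 8/16 - only half the sites carry chroma,
# versus every site on a Bayer sensor, which is where the "roughly 6K-Bayer
# chroma" intuition comes from.
[/CODE]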

    But that is misleading, as there are ways to interpolate the chroma values for additional resolution, though interpolation has its own anomalies or side effects, correct? I won't pretend that I can synthesize that algorithm in my mind.

    Forgive the oversimplification, as I am also trying to wrap my head around the pros and cons of this type of sensor design. My personal preference would typically be for more DR, but not every camera has to have the same design. I'm quite liking the BM12K overall, just trying to filter (pun intended) all the hyperbole.
    Last edited by James0b57; 10-20-2020 at 09:19 AM.



  10. John Brawley
      Senior Member | Join Date: Oct 2009 | Location: Los Angeles | Posts: 727
    Josh, I appreciate the detail and depth you're going into, but remember you've already contradicted yourself. Remember when I said ALL cinema cameras can moire even with an OLPF and you said...

    Quote Originally Posted by Joshua Cadmium
    I am almost certain that that is not entirely true.

    --------

    I just did a deep dive into the math and, unfortunately, all chroma subsampled sensors (Bayer, Blackmagic 12K) will produce moire, even with an OLPF...
    We originally were talking about using OLPFs. I suggested they are not a magic bullet. You also made reference to still users with Fuji cameras and whatnot.

    I get it. The idea of an OLPF is appealing. But it's a *theory* that rarely plays out that way in practice, because it's really really really hard to make them to the specification required. RED use a very aggressive OLPF, which also helps with the REDCODE encoding, because it's a hell of a lot easier to compress soft detail than fine detail.

    There's downsides to this. I mean, look at how poor a windowed image is on RED cameras. The resolution you shoot at falls away dramatically, visually - 2K looks like bad 720 to me. That's because you have to design an OLPF for a target resolution, and RED also use a very heavy-handed OLPF (likely because it helps the compression).



    In reference to overall optical performance....
    Quote Originally Posted by Joshua Cadmium
    No, that is diffraction and proves my point. If the resolution didn't drop off like that, it would mean that the Airy Disk could get bigger and not have as deleterious an effect on the resolution, but in fact it does. You can't escape the physics of it.
    Have you used a lens projector before, Josh? It's really easy to see differences in lens performance batch to batch, and with focus distance and focal length. Aperture affects the image too, before it ever hits the sensor. You're conflating optics with sensor resolution with a given lens.

    Before you even put a lens on a camera there's a lot of things going on with the design of the lens that will affect the image, including the way it renders sharpness.


    This isn't "only" diffraction. There's many factors at play here, and I think you're homing in on this because you can do the maths and "see" it, but it's not the whole story.

    Lenses don't really have "resolution". One company started marketing their lenses as 8K lenses. It's marketing BS. A 100 year old lens can work just fine on this 12K camera and "resolve" to 12K.

    Quote Originally Posted by Joshua Cadmium
    Why? Let's look at some real numbers again, and for that I'll use that Zeiss Batis 85mm again, since it best proves my point: https://www.opticallimits.com/sonyal...s85f18?start=1 . On the A7RII, using 2.5x photosite widths, that de facto number says that diffraction will only be experienced at an Airy Disk width of 2.5 * 4.5um = 11.25um. That means the A7RII will be in visible diffraction at f8.4 (2.44 * 550nm * 8.4 = 11.25um).
    Josh, did you read the article you yourself linked to? When talking about the MTF of that lens, they state...

    "It is somewhat odd that the resolution figures stay exceedingly high at and beyond f/8. Diffraction effects should have a more deteriorating effect here. We suspect that f/11, as displayed by the camera, is really more like f/8 in the real world (interestingly we have seen this in another Sony 85mm lens as well).
    "


    But you went on to argue that at even lesser stops there were diffraction issues?

    If you look at the patent, the CFA is clearly spelt out. Most aren't realising, though, that W pixels ALSO have ALL colour, and using colour subtraction techniques they can also act as variable colour pixels, depending on how they're summed and combined. W pixels aren't just luminance, in other words.
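    To illustrate the subtraction idea, here's a toy sketch only - it assumes an idealised W = R + G + B response, and the real sensor's spectral overlap and the patent's actual math will differ:

[CODE]
# Toy colour subtraction: if a white photosite integrates roughly R+G+B,
# a missing channel can be estimated from W and two nearby colour samples.
def estimate_blue(w, r, g):
    """Idealised model: W ~ R + G + B, so B ~ W - R - G."""
    return w - r - g

# Made-up normalised sample values for one neighbourhood:
w_sample, r_sample, g_sample = 0.9, 0.3, 0.4
print(estimate_blue(w_sample, r_sample, g_sample))   # ~0.2
# Real demosaicing has to weight for overlapping spectral responses and
# noise, but this is the sense in which W sites are more than luminance.
[/CODE]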

    I'm worried you're getting a bit lost in the woods, mate. It's really important to know this stuff and know what affects imaging IQ, but you can't drive every choice through this narrow (diffraction) thinking. There's lots of examples where the numbers don't really tell you the truth of what's happening.

    I saw this kind of thinking on photography forums as well, especially around equivalence arguments. There's a basis in truth, but people then draw simplistic conclusions like "a 135 full frame sensor is always going to be better in low light than an MFT sensor."

    You know it's standard practice for those shooting miniatures and models to work at f22 and beyond? Do you think Nolan was worrying about lens diffraction on The Dark Knight?

    Dean Semler shot all the battle sequences in "We Were Soldiers" on long lenses at F16 because he wanted to combine long lens focal length compression and have everything in focus as much as possible. That's a creative want.

    If you think always in terms of sharpness and diffraction, you're going to make some really sharp and probably pretty boring looking movies...

    JB
    John Brawley ACS
    Cinematographer
    Los Angeles
    www.johnbrawley.com
    I also have a blog


