  1. Senior Member ahalpert
     Join Date: Apr 2011
     Location: NYC
     Posts: 2,814
    Quote Originally Posted by John Brawley

    If you think always in terms of sharpness and diffraction, you're going to make some really sharp and probably pretty boring looking movies...

    JB
    Of course, but isn't this just an academic discussion? Didn't we get here from talking about moire? (Confession: I hadn't heard of the Airy disk before.) The science of optics and sensors is inherently interesting even if it's not inherently useful.

    But to be sure - some of the best cameramen have the least scientific knowledge. Trial and error rules, and ultimately making and evaluating imagery is about feeling, not thinking.



  2. Senior Member Joshua Cadmium
     Join Date: Apr 2011
     Posts: 267
    Quote Originally Posted by John Brawley
    Josh, I appreciate the detail and depth you're going into, but remember you've already contradicted yourself. Remember when I said ALL cinema cameras can moire even with an OLPF and you said...
    I did not contradict myself. You initially said "any camera can moire". Any camera includes ones with Foveon pixels. I first said there would be no moire IF we were dealing with Foveon pixels. I then said all chroma-subsampled sensors (Bayer, this 12K sensor) will have color moire. Those are two different things - two different types of sensors - one needs a CFA for color and one does not.

    The type that needs a CFA is going to have color moire even with an OLPF (though it can avoid luma moire with an OLPF). A Foveon-type sensor can avoid all moire with an OLPF, since its color and luma resolution are the same.
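    To make the CFA point concrete, here is a toy 1-D illustration in Python (a sketch under stated assumptions, not a sensor simulation): each colour on a Bayer-style grid is sampled on a coarser pitch than luma, so a stripe pattern the full grid can resolve folds back as a false, coarser pattern - colour moire - in chroma.

    ```python
    # Toy 1-D illustration of chroma aliasing on a CFA-style grid.
    import numpy as np

    n = 64          # photosites across the pattern
    f = 0.40        # stripe frequency in cycles/photosite (< 0.5, luma-safe)
    x = np.arange(n)
    pattern = np.sin(2 * np.pi * f * x)

    luma = pattern          # every photosite samples it: resolved fine
    chroma = pattern[::2]   # one colour on every 2nd site: now 0.80 cycles
                            # per sample, above Nyquist (0.5), so it aliases

    def peak(sig):
        """Frequency (cycles/sample) of the strongest non-DC component."""
        spectrum = np.abs(np.fft.rfft(sig))
        return (np.argmax(spectrum[1:]) + 1) / len(sig)

    print(peak(luma))    # ~0.41: the real stripe frequency
    print(peak(chroma))  # ~0.19: a false, coarser pattern - colour moire
    ```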



    Quote Originally Posted by John Brawley
    I get it. The idea of an OLPF is appealing. But it's a *theory* that rarely plays out that way in practice, because it's really, really, really hard to make them to the specification required. RED use a very aggressive OLPF, which also helps with the REDCODE encoding, because it's a hell of a lot easier to compress soft detail than fine detail.

    There are downsides to this. I mean, look at how poor a windowed image is on RED cameras. The resolution you shoot at falls away dramatically, visually. 2K looks like bad 720 to me. That's because you have to design an OLPF for a target resolution, and RED also use a very heavy-handed OLPF (likely because it helps the compression).
    Oh sure, if an OLPF is aggressive, you're throwing away too much resolution, especially at a pixel level. Plus, like I said, an OLPF is only filtering luma resolution, so you are still going to have chroma moire, unless you want to throw away a ton of total resolution.



    Quote Originally Posted by John Brawley
    Have you used a lens projector before, Josh? It's really easy to see differences in lens performance from batch to batch, and with focus distance and focal length. Aperture affects the image too, before it ever hits the sensor. You're conflating optics with sensor resolution with a given lens.
    No, I am definitely not conflating things.

    A lens projector is somewhat helpful for determining the resolution of the lens agnostic of the sensor you pair it with, but it is not going to tell us what we need to know about diffraction. For digital sensors, diffraction comes into play when a discrete, limited measurement (a photosite) tries to capture the continuous waves that a lens is seeing. That's why diffraction is measured in photosite (pixel) widths. Looking at a lens projector is like going one step backwards when trying to figure out diffraction. As soon as you introduce quantization - turning light waves into discrete, stair-stepped measurements - you limit how much of the light the system will see, especially as the quantization (photosite size) gets smaller and smaller and can only fully contain the width of smaller and smaller Airy Disks.

    The f-number of every lens always tells us what the diameter of the Airy Disk point of light will be (at any given wavelength of light): diameter = 2.44 × wavelength × f-number. And the Airy Disk gets bigger when you stop down the lens.

    So, as that Airy Disk point of light diameter gets bigger from stopping down the lens, it is taking up more and more photosite widths. If an Airy Disk - the point of light - is taking up the space of multiple photosites, you simply can't resolve one Airy Disk per one photosite anymore.

    To put it another way, only so many Airy Disks can fit in a given area or sensor size. They start getting so big that it takes multiple photosites to capture one Airy Disk. That's all diffraction is, and it's physics - you can't change that. Check out this short article here: https://www.edmundoptics.com/knowled...the-airy-disk/

    As that page says: "As focused Airy patterns from different details on the object come close together, they begin to overlap. ... When the overlapping patterns create enough constructive interference to reduce contrast, they eventually become indistinguishable from each other. ... As pixels continue to reduce in size, this effect becomes more of an issue and eventually is very difficult to overcome. ... The smallest achievable spot size can quickly exceed the size of small pixels."

    So, you have to look at the total system - lens AND sensor - if you are trying to figure out where diffraction starts.

    For instance, the Arri Alexa has those massive 8.25µm photosite widths. Using the standard math of the 2.5x pixel width, a modern lens on the Alexa will be in diffraction at f15.4. Sounds good. That exact same lens on the 12K, with its much smaller (roughly 2.2µm) photosites, will be in diffraction at f4.1. Same ratio, same lens, but the Airy Disk needs to reduce in size (which you do by opening up the lens) in order to fit into the smaller photosites.
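    If you want to sanity-check those numbers, here is a minimal Python sketch of the same math (assuming, per the calculators, that diffraction "starts" when the Airy Disk spans 2.5 pixel widths; the ~2.2µm 12K pitch is inferred from the f4.1 figure above):

    ```python
    # Minimal sketch of the "2.5x pixel width" diffraction math used above.
    # Airy Disk diameter = 2.44 * wavelength * f-number; solve for the
    # f-stop at which the disk spans `ratio` pixel widths.

    def diffraction_fstop(pixel_um, wavelength_nm=550, ratio=2.5):
        wavelength_um = wavelength_nm / 1000.0
        return (ratio * pixel_um) / (2.44 * wavelength_um)

    print(f"Alexa: f{diffraction_fstop(8.25):.1f}")  # 8.25um -> ~f15.4
    print(f"12K:   f{diffraction_fstop(2.2):.1f}")   # ~2.2um -> ~f4.1
    ```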



    Quote Originally Posted by John Brawley
    Josh, did you read the article you yourself linked to? When talking about the MTF of that lens, they state:

    "It is somewhat odd that the resolution figures stay exceedingly high at and beyond f/8. Diffraction effects should have a more deteriorating effect here. We suspect that f/11, as displayed by the camera, is really more like f/8 in the real world (interestingly we have seen this in another Sony 85mm lens as well)."


    But you went on to argue that at even lesser stops there were diffraction issues?
    If you go off of that resolution chart, the resolution peaks at f2.8. If a lens gets sharper as you stop down (with any aberrations basically going from blobs to proper points), why was it getting overall less sharp at f4? It's just diffraction - the points of light are now as perfect as they are going to be, so they just get bigger (again, Airy Disks get bigger as you stop down) and start taking up the space of more photosites. That's just the physics of it.

    The reason why resolution figures stayed high while stopping down was because this lens has few aberrations. If the Airy Disk is close to a perfect point and not a blob, diffraction will have less of an effect. The reviewer just didn't know how to fully describe what he was seeing in the resolution results, because a lot of lenses are not this good.

    But if you can't get over that, throw out that review and try to find any other lens test out there that disagrees with what I've said. M4/3 lens tests, for instance, show diffraction quite early, since the pixel sizes are smaller.



    Quote Originally Posted by John Brawley
    If you look at the patent, the CFA is clearly spelt out. Most aren't realising, though, that W pixels ALSO have ALL colour, and using colour subtraction techniques, they can also act as variable colour pixels, depending on how they're summed and combined. W pixels aren't just luminance, in other words.
    I was one of the first people to argue that W pixels could be contributing to color resolution - I definitely get that. I just don't know how much, if at all. It's too hard to guess without knowing the exact image kernel they are using to create a pixel based on the surrounding photosites.



    Quote Originally Posted by John Brawley
    I'm worried you're getting a bit lost in the woods, mate. It's really important to know this stuff and know what affects imaging IQ, but you can't drive every choice through this narrow (diffraction) thinking. There's lots of examples where the numbers don't really tell you the truth of what's happening.

    I saw this kind of thinking on photography forums as well, especially around equivalence arguments. There's a basis in truth, but people then draw simplistic conclusions like "a 135 full frame sensor is always going to be better in low light than an MFT sensor."
    I have been doing exactly that - figuring out why a single number - 2.5x - doesn't tell me the truth of what is actually happening. I was trying to figure out why there was a difference in moire, looked at a diffraction calculator, asked myself why, and tried to see if the actual data matched what the calculators were saying. It doesn't. Why? That ratio is too simple.



    Quote Originally Posted by John Brawley
    You know it's standard practice for those shooting miniatures and models to work at F22 and beyond? Do you think Nolan was worrying about lens diffraction on The Dark Knight?

    Dean Semler shot all the battle sequences in "We Were Soldiers" on long lenses at F16 because he wanted to combine long lens focal length compression and have everything in focus as much as possible. That's a creative want.

    If you think always in terms of sharpness and diffraction, you're going to make some really sharp and probably pretty boring looking movies...
    First, I am absolutely not arguing that you can't stop down past diffraction. Just that it is helpful to know where it is.

    I am not trying to say that everything needs to be interpreted through diffraction. If I were, I'd be arguing that everyone should never stop down beyond f2, because why shoot the 12K if you're not getting 12K worth of resolution? I am absolutely not saying that. I could also say you need to always stop down to f5.6 and beyond in order to limit chroma moire. I'm not saying that either.

    If one doesn't care about diffraction either way and it's more of a hindrance, just don't think about it. I find it helpful, but not everyone will. My goal was to determine why there were differences in moire and I think we are closer to understanding why. I'd rather know the reasons than have it remain a mystery.

    Also, I definitely agree with you that resolution is not the end-all, be-all. I keep saying that. I personally care way more about color reproduction than anything else, which is why I was drawn to this camera - not because it was 12K. I'd much rather be shooting a 2.8K Alexa than almost anything else out there. But for other people, resolution may be more important than any other criterion.

    Also, for your examples, film has a much lower resolution than what we are talking about, so you can stop down quite a bit more before hitting diffraction. (And to determine diffraction on film: find the resolution of a given piece of film, take the width of that film in mm, divide by the resolution to get an equivalent "photosite" width, multiply by 2.5x - or a smaller ratio - and then run the same Airy Disk math on that width.)
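    As a rough sketch of that film math (the frame width and resolving power below are illustrative assumptions, not measured figures):

    ```python
    # Rough illustration of the film version of the math. ASSUMED numbers:
    # a Super 35 frame ~24.9mm wide that resolves ~3K across its width.
    film_width_mm = 24.9
    resolved_px = 3000
    equiv_pitch_um = film_width_mm * 1000 / resolved_px  # ~8.3um "photosite"

    fstop = 2.5 * equiv_pitch_um / (2.44 * 0.55)  # 0.55um = 550nm green
    print(f"~f{fstop:.0f}")  # ~f15: much deeper than the 12K's f4.1
    ```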



    --------



    One last thing I wanted to mention, and it will really help with what I am arguing: you will start to see the effects of diffraction at a per-pixel level. If you're not going to be viewing things at a per-pixel level, then it doesn't matter as much. If one's plan is to take 12K worth of resolution and smoosh it into 4K worth of delivery pixels, you are already going to be blurring all of those pixels together anyway, so diffraction is not going to be seen unless you really stopped down the lens.

    For instance, let's say you stopped down and lost 20% of the total resolution you would otherwise get. You'd still be at about 10K (12K × 0.8 = 9.6K), which still gives you roughly a 20% oversample when viewing at 8K.
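    In numbers (a quick back-of-the-envelope check):

    ```python
    # Back-of-the-envelope check of the paragraph above.
    captured_k = 12.288 * 0.8        # lose 20% of 12K (12,288px) -> ~9.8K
    oversample = captured_k / 8.192  # viewing/delivering at 8K (8,192px)
    print(f"{captured_k:.1f}K captured, {oversample:.1f}x oversample")  # ~1.2x
    ```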

    However, if you are going to be punching in a lot, or you are going to be shooting with a smaller crop of the sensor, then diffraction will have much more of an effect. Does that help more with what I am saying?
    Last edited by Joshua Cadmium; 10-20-2020 at 05:49 PM.



  3. Senior Member John Brawley
     Join Date: Oct 2009
     Location: Los Angeles
     Posts: 733
    Quote Originally Posted by ahalpert

    But to be sure - some of the best cameramen have the least scientific knowledge. Trial and error rules, and ultimately making and evaluating imagery is about feeling, not thinking.
    For sure.

    I like to be aware of the imaging "science", but I also get nervous when I see absolutism, especially around numbers that, in my experience, get turned into "rules", and I think rules should really be avoided in creative processes.

    There’s plenty of other reasons Moire (to bring it back to topic) could have occurred as well on the footage in question.

    And for whatever reason, it's not been a showstopper on any of my 12K shooting so far. And I've been shooting right in the sweet spot of non-diffracted, super-high-performing lenses... so...

    JB
    John Brawley ACS
    Cinematographer
    Los Angeles
    www.johnbrawley.com
    I also have a blog



  4. Senior Member Joshua Cadmium
     Join Date: Apr 2011
     Posts: 267
    Quote Originally Posted by ahalpert
    But to be sure - some of the best cameramen have the least scientific knowledge. Trial and error rules, and ultimately making and evaluating imagery is about feeling, not thinking.
    Oh, sure, you don't need deep scientific knowledge to make aesthetically pleasing images or serve a story. But it can be helpful, and it can keep you from doing too much trial and error - especially the error part. You want the technical aspects to inform you up to a point and then fade away so you can focus on the creative side. But they go hand in hand.


    Quote Originally Posted by John Brawley
    I like to be aware of the imaging "science", but I also get nervous when I see absolutism, especially around numbers that, in my experience, get turned into "rules", and I think rules should really be avoided in creative processes.
    There is quite a bit of pseudoscience in the photography / cinematography world. I think part of the reason is that the physics and math start to get real wonky real fast, and it's easy to just make shortcuts or rules. But like any rules, you usually need to know why they are there before you can effectively ignore or break them.



  5. Senior Member Joshua Cadmium
     Join Date: Apr 2011
     Posts: 267
    One more thing I wanted to mention about using the 2.5x pixel width for diffraction. The online calculators, and what I have been using, are based on green light (usually 550nm), since that is right in the middle of the visible spectrum. But green alone doesn't tell the whole picture.

    Violet light (just at the edge of UV) is around 400nm, which means it actually creates a smaller Airy Disk at every f-stop. So, using that 2.5x number on the 12K (2.5 × its ~2.2µm photosite = a 5.5µm width) but with 400nm light, we get violet-light diffraction right at f5.6. That's a much deeper f-stop than the f4.1 you get when running the math at 550nm. [The math, once again: 5.5µm = 2.44 × 400nm × f-stop, OR f-stop = 5.5µm / 2.44 / 0.4µm.]

    Far red light is around 700nm, which means it creates a larger Airy Disk at every f stop. The 2.5x math puts far red light diffraction at f3.2.

    Warmer tones are going to experience diffraction before cooler tones do. That's just the physics of light.


    For comparison at 2.5x width:

    400nm = f5.64
    550nm = f4.10
    700nm = f3.22

    For comparison at 1.5x width:

    400nm = f3.38
    550nm = f2.46
    700nm = f1.93

    For comparison at 1x width:

    400nm = f2.25
    550nm = f1.64
    700nm = f1.29
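    For anyone who wants to re-run or extend these tables, here is a small Python sketch (same 2.44 × wavelength × f-number math; the 2.2µm photosite width is inferred from the 5.5µm / 2.5x figure above):

    ```python
    # Regenerates the comparison tables above for the 12K's ~2.2um pitch.
    PIXEL_UM = 2.2

    def fstop(wavelength_nm, ratio):
        return ratio * PIXEL_UM / (2.44 * wavelength_nm / 1000.0)

    for ratio in (2.5, 1.5, 1.0):
        print(f"At {ratio}x width:")
        for wl in (400, 550, 700):
            print(f"  {wl}nm = f{fstop(wl, ratio):.2f}")
    ```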


    Is this helpful? Maybe not to everyone, but I certainly find it absolutely fascinating. It also again shows why that 2.5x de facto standard for diffraction is just a simplistic number, like any other rule.
    Last edited by Joshua Cadmium; 10-20-2020 at 09:28 PM.



  6. Senior Member James0b57
     Join Date: Dec 2008
     Posts: 6,120
    Quote Originally Posted by Joshua Cadmium

    Warmer tones are going to experience diffraction before cooler tones do. That's just the physics of light.
    Well, they were getting moiré on the skateboarder's khakis. ;)



  7. Senior Member James0b57
     Join Date: Dec 2008
     Posts: 6,120
    Quote Originally Posted by Joshua Cadmium

    For comparison at 2.5x width:

    400nm = f5.64
    550nm = f4.10
    700nm = f3.22

    For comparison at 1.5x width:

    400nm = f3.38
    550nm = f2.46
    700nm = f1.93

    For comparison at 1x width:

    400nm = f2.25
    550nm = f1.64
    700nm = f1.29


    Is this helpful? Maybe not to everyone, but I certainly find it absolutely fascinating. It also again shows why that 2.5x de facto standard for diffraction is just a simplistic number, like any other rule.
    Thank you. Yes, I find it helpful.

    It could also be worth saying that, since most lenses aren't at their best across the whole image, framing subjects away from the center could be part of the reason why one person might see an issue and another person might not.


    I recall Bradford Young, in an interview, talking about shooting 'Solo' on those crappy Arri DNA lenses. He never used to center his subjects, much preferring the rule of thirds and placing a subject to the left or right. But the DNA lenses wide open were so soft, particularly at the edges, that he found himself switching up how he composed an image, placing the subject dead center - something he used to think was bad.


    Yes, we can fall in love with the flaws and the happy accidents, or improvise, or feel it, but then someone like Roger Deakins or Chivo uses lenses that a lot of my friends find unappealing. I showed a director friend some Arri Signatures, and he was so unimpressed. Yet he loved '1917', so, go figure. There are the rules, breaking the rules, and using the rules to make it look like you broke the rules... hahah. And so on.

    But thanks again, Joshua - those numbers were never really any of my concern, but I can use them if they come in handy in the future. If nothing else, it was an entertaining and informative read. Thanks.
    Last edited by James0b57; 10-20-2020 at 10:08 PM.



  8. Senior Member James0b57
     Join Date: Dec 2008
     Posts: 6,120
    Blackmagic, as far as camera companies go, seems like a good company. If they have any fault, it's that they are just too cheap, and that means they never really give a premium or particularly confidence-inspiring experience for pros. However, with how fast technology advances, it seems to be a slightly better business model to sacrifice quality and durability for better IQ and lower cost. In many ways, or at least for most creatives trying to make a living outside the corporate world of film and television, this is many times better than RED. RED, to me, has better cameras, but marginally... and really only subjectively. There is more to a camera than one spec here or there. Also, RED is not much better than Blackmagic, since they are just repackaging off-the-shelf technology, branding it as premium, and charging a butt load for it.

    I seriously don't care what any of these companies charge for whatever. They can do what they gotta do, but I sure do wish one company other than Arri would make a camera that makes sense and has the IQ. In the two-plus decades of high-end video, there have been so few cameras that combine sensibility, quality, and IQ. I think Sony and Canon have made a few sensible things here and there, and Panasonic definitely has, but those models generally suffered in IQ. Ever since the 1DC, I realized how amazingly easy it would have been for Canon to slap a 10-bit, higher-bitrate codec on their S35 HD crop, or hell, even give the 1DC a 2K crop as well! The rest of the camera was sensibly laid out, the weather sealing was great, and it was a tank. I dropped the thing and it landed on the weakest part and still had no issues. Canon service was amazing too.

    Now, cameras are getting better. 2020 has shown that even when these companies are still being very conservative and stratifying their products, they cannot help but make some amazing things. Why did they never give us 10-bit 2K? Dog knows. But that is in the past. They went from 17Mbps HD to 12K raw in the blink of an eye, and it was amazing how fast they skipped over good 2K in favour of passable 4K. And now 6K and 8K raw are everywhere, and it feels like no one in particular asked for it. haha.



    So, despite Blackmagic maybe not being "my" brand, one could say they've done right by their user base. I prefer to pay more and get some premium features; it's just that the gap between BM and Arri is waaayy too big. And Canon held back their stills division while their Cine EOS line floundered about finding its legs in the actual "cine" aspects, before embracing their "cine-like" aesthetic, which is admittedly still capable of enabling users to capture gorgeous images. And the original C300 felt like a toy, and the ergonomics were weird. Luckily it looked professional enough, and it provided some advantages over other systems at the time (larger sensor, faster rolling shutter, BNC, cheap media, XLR, many cheap lenses, built-in ND, etc., etc.).

    I say all that because, when I talk about this 12K sensor, I am trying to talk about just this sensor, right now, separate from its brand for a moment. It is an interesting sensor, because it pits the technology against itself. And it makes me wonder where we are going with cameras. Everything was so lame from 2004 to 2019. As soon as HD made the DVX100 obsolete, it was one awkward goose of a product after another. Yes, a few expensive gems, a few "bargains of IQ", and a few happy accidents in ergonomics came along. But by and large it felt like a mess of cameras made by marketers and focus groups. "Does this look like an HD camera to you?" "Would more K's make you upgrade if everyone demanded more K's?"

    So, this 12K thing:
    - Does it shoot 12K? Well, technically yes, but... you reeeeally don't want to crop in on all that noise.
    - Does it resolve 12K, or downsample to 8K nicely? Yes, it is sharper than RED's 8K, but it will moiré. The R5 might be equally sharp, though.
    - Does the oversampled resolution allow for better low-light performance? No. However, the CFA does allow for better-than-expected noise performance. (Insert Leonardo DiCaprio meme from Django.)
    - Does it have any advantages in dynamic range or latitude? No.
    - Are the colours better? That is subjective. Technically yes; practically, not in the shadows or the highlights. But in the midtones there is more tonal and hue information.

    So, what are the advantages? "It's only $10K, and it has a flippy screen."


    I can see this camera being amazing on a soundstage for VFX and fashion work. And even for available-light shooters out in the wild, it gives plenty of IQ and sensitivity. THERE ARE NO REAL DISADVANTAGES to the BM 12K that I can see, or have heard about. My question is just, "is it better?" We will have to see how the A7S III raw output and Arri's S35 4K sensor play out. I think BM has made a very interesting and unique camera that has some good things going for it. It certainly isn't particularly lacking. But I have seen the Monstro in raw, and I am not certain the BM 12K will be replacing either the Mini LF or the Monstro, and those cameras' origins predate the BM 12K by a few years. So the BM is still more of a bargain version, rather than a 1:1 replacement.

    Believe me, I wish someone would dethrone RED's overpriced toys. But I don't begrudge RED for being overpriced if no one is definitively challenging Arri. Sony landed the Venice, and that camera does some things that put RED in third place, for sure. But a Venice package can still cost near-Arri prices without being definitively better than Arri, people are getting RED packages for 20-30 grand cheaper than a Venice one, and Sony's next product down is the FX9... so...

    But seeing as the 12K sensor is pushing the limits of CMOS tech (not in all good or bad ways), where are we going in terms of image quality for the future? It seems we have plateaued, other than a few refinements and denoise tricks. We have been balancing the same tradeoffs since 2008. Basically, that means whatever camera you have now is going to be good enough for the foreseeable future, until either something better than CMOS comes out, or everyone starts doing good DGO. (Is it expensive, or is it just an Arri patent? I know RED wouldn't do it, because it isn't off-the-shelf hardware yet, lol.)

    So yeah, 2020 has brought some of the biggest advancements in the general quality of available cameras, while also being more of the same. Many of us have had access to similar IQ for a long while (albeit at higher price points). So that is good for those who already have a nice camera, as it means there's no need to upgrade, and it is also nice for those buying their first camera, as they don't have to wade through figuring out which camera is actually good... they all are now.


    It should be interesting to see "smart" features added to cameras in the future. Imagine having F35-level IQ, but with some very smart filters that work with AI to "fix" things in camera. Instead of face filters, the face-tracking AI implements more subtle enhancements. Other than that, it seems like CMOS has run its course. It gave us cheaper cameras that run longer on batteries and require less cooling than CCD, so you've gotta say thank you to CMOS. But other than a DGO camera introduced back in 2010, it never really fully delivered on what matters to cinematographers.


    To be honest, brand loyalty in 2020 shocks me. But I still have to give it up to all the engineers who made it possible to shoot really nice video. The fact that these cameras even exist as they are still amazes me, even if they never really got to where I had hoped (I was actually expecting the Arri 4K sensor to be the one, but who knows). I think I have bought my last CMOS camera, and I'm happy to let that one ride out. Idk, maybe if a lightweight, highly ergonomic masterpiece comes out, but the chances of that are highly unlikely. Good ergonomics typically take a back seat to technological advancements.
    Last edited by James0b57; 10-22-2020 at 09:43 AM. Reason: spelling, I tried to let it go, but I re-read the thing, and I don't even think I'll be able to fix all the errors on this pass



  9. Senior Member Tom Roper
     Join Date: Jun 2008
     Location: Denver, Colorado
     Posts: 1,234
    [Attached image: a 2K crop from the 12K - see the reply below.]

  10. Senior Member James0b57
      Join Date: Dec 2008
      Posts: 6,120
    Quote Originally Posted by Tom Roper
    That 2K crop looks really clean actually, and no moiré. Pretty good!



