  1. Senior Member
    Join Date: Jun 2006
    Location: West London, UK
    Posts: 951
    That's because it's outputting 18 MP, 14-bit RAW. Not downsampled or 4:2:2, but the whole wad from the sensor.



  2. Section Moderator Rick Burnett
    Join Date: Jan 2010
    Location: Raleigh, NC
    Posts: 4,395
    I am definitely reading around and seeing people who are quite confused about what Canon says the sensor is doing. First of all, the *filter pattern* is a Bayer pattern, per Canon, which they show in diagrams. It *IS* a 4K Bayer pattern. They also claim that the green pixels are offset from one another and overlap to help reduce aliasing/moiré.

    Now, pertaining to the sensor, if that is true, then the red/blue pixels would be shifted slightly differently as well, since to get what they claim, each row would be RGRGRGRG followed by GBGBGBGB. Canon also claims a gapless microlens design, which *IS* a good thing: it means more light is gathered and not lost to reflection or absorption by non-utilized surface area. Given that the photosite size I've seen quoted is 6.4 x 6.4 µm, I'll assume they are all square.

    One question is: what does that microlens cover? More than likely, it's just like the standard microlens layer (minus the gaps) that you see in a front-illuminated structure, as shown here:

    http://www.i-micronews.com/lectureArticle.asp?id=1607

    (That's a good example of all the wiring that blocks light, which is one of the major reasons global shutter isn't used: it requires more wires AND more circuitry, which takes away from the active area that can detect photons.)

    Because Canon JUST takes those 4 photosites to create one pixel, the green component of the light falling over, say, the red and blue photosites doesn't get recorded in the green channel. Further, the red and blue pixels get no contribution from the pixels next to them, which is something a debayering algorithm can provide.
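
    As a quick sketch of what that per-quad combination looks like (my own illustration with an assumed pipeline, not Canon's published math; I'm guessing the two greens just get averaged):

    Code:
    import numpy as np

    def quads_to_rgb(raw):
        """Collapse a 4K Bayer mosaic into a 1080p RGB image by treating each
        2x2 RGGB quad as one output pixel -- no interpolation from neighbours,
        which is the key difference from a real debayer."""
        r  = raw[0::2, 0::2]          # top-left of each quad: red
        g1 = raw[0::2, 1::2]          # top-right: green
        g2 = raw[1::2, 0::2]          # bottom-left: green
        b  = raw[1::2, 1::2]          # bottom-right: blue
        g  = (g1 + g2) / 2.0          # assumption: the two greens are averaged
        return np.dstack([r, g, b])   # shape (rows/2, cols/2, 3)

    # A fake 14-bit 3840x2160 mosaic becomes a 1920x1080 RGB frame.
    mosaic = np.random.randint(0, 2**14, size=(2160, 3840)).astype(np.float64)
    print(quads_to_rgb(mosaic).shape)   # (1080, 1920, 3)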

    Minus the offset of the green, for instance, the Scarlet-X footage could be processed the same way in post, but clearly, if it were an advantage, you'd see other people doing it. The advantage is not having to take a processor hit for debayering, which is VERY expensive power- and time-wise. (In truth, you don't have to for the Scarlet-X either, since it records RAW, but it does debayer for the display, which means there's a whole lot of parallel processing happening in the Scarlet-X, and that's probably why it eats batteries so much.)

    Further, it's not like a 3CCD image. If you take, say, the red and blue channels alone and look at what they see on the image sensor, each of them sees 25% of the area but at 100% of the light (let's assume no light loss). With a 3CCD setup, each color array sees 100% of the area but, if the light is split evenly, only about 33% of the light.
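
    To put rough numbers on that trade-off (my own idealized arithmetic, geometry only, ignoring fill factor and filter transmission):

    Code:
    # Per output pixel: fraction of area sampled times fraction of light received.
    bayer_area  = {"R": 0.25, "G": 0.50, "B": 0.25}   # share of the 2x2 quad's area
    bayer_light = 1.0                                  # full light on those photosites

    ccd3_area  = {"R": 1.0, "G": 1.0, "B": 1.0}        # each chip sees the whole area
    ccd3_light = 1.0 / 3.0                             # ~a third of the light per chip

    for c in "RGB":
        print(c, "Bayer:", bayer_area[c] * bayer_light,
                 "3CCD:", round(ccd3_area[c] * ccd3_light, 3))
    # R Bayer: 0.25  3CCD: 0.333 ... and the spatial coverage, not just the total,
    # is what differs: the Bayer red/blue samples sit on only a quarter of the area.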

    You might not think this is significant, but it is. It's the whole issue with a Bayer filter layer, and something the C300, as a sensor alone, will not avoid. The aliasing and moiré will definitely be on par with other cameras that really are 4K pixels, so the Scarlet-X would probably have the same aliasing/moiré when used at that window size.

    However, what changes this is something I haven't taken into account: the OLPF, which of course takes the image and, in simple terms, diffuses it slightly before it hits the sensor. You reduce the spatial frequency of the incoming image, spreading out, say, the red and blue content so it covers more area; for lack of a better description, it slightly blurs the image to reduce aliasing. Like other Bayer-pattern arrays, the C300 will need an OLPF whose cutoff probably sits right between the RED/BLUE pixel density and the GREEN pixel density. You don't want to set it at the RED/BLUE density, because then you lose the advantage of the higher resolution of the green pixels. I've seen aliasing and moiré in the Red One. It's REALLY HARD to make happen. I suspect the C300 will be the same. It's just the nature of a Bayer sensor and what resolution you want to get from it.
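
    As a toy numerical model of that (my own illustration, not Canon's actual filter spec), you can treat a birefringent four-spot OLPF as splitting each point of light into four copies one photosite pitch apart, i.e. a 2x2 box convolution:

    Code:
    import numpy as np

    def four_spot_olpf(image, pitch=1):
        """Toy 4-spot OLPF: every point is split into four copies offset by
        `pitch` photosites, which strongly attenuates detail near the
        photosite-level Nyquist frequency."""
        out = np.zeros_like(image, dtype=np.float64)
        for dy in (0, pitch):
            for dx in (0, pitch):
                out += 0.25 * np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        return out

    # A one-photosite-period checkerboard (worst case for aliasing) is flattened
    # to ~0.5 everywhere, while broad, low-frequency detail passes nearly untouched.
    y, x = np.indices((8, 8))
    print(four_spot_olpf(((x + y) % 2).astype(np.float64)))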

    Again, the C300 will clearly require the OLPF to make up for how little of each final pixel's area the red and blue photosites actually cover.

    If you look at what is amazing in what Canon has done, it's that they've created a 4K sensor that reads its R, G and B channels in parallel. It's still getting 3x 1920x1080 values out of the 4x 1920x1080 photosites it has to read off the sensor. That is still a lot of data, but it cuts the time to get it done to roughly a third, and because the channels are decoupled earlier, they can be read in parallel without creating artifacts.
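
    Rough back-of-the-envelope arithmetic on those numbers (my own illustration, not Canon's spec):

    Code:
    plane = 1920 * 1080
    photosites_read = 4 * plane   # the 4K mosaic: 8,294,400 values per frame
    channel_values  = 3 * plane   # one 1080p plane each of R, G and B: 6,220,800

    # If the three colour planes come off the sensor on parallel paths instead of
    # one after another, that stage takes the time of one plane, not three.
    print(photosites_read, channel_values, (1 * plane) / (3 * plane))   # -> ... 0.333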

    That's pretty cool.

    What I am most curious about: I can't wait to see someone take a challenging image off the Scarlet-X at 4K and the C300 at 1080p, downconvert the Scarlet-X footage to 1080p, and compare. Then take the 1080p footage, blow it up to 4K, and compare. I'm not saying either won't be awesome, as they will be, but I am just curious how different they look. With both at 1080p I think they will be so similar you won't really be able to tell a quality difference; it will be more a perceived difference in the color science. At 4K I think you will see a much bigger difference in what the Scarlet-X shows because of the post-processing debayer, which can be VERY high quality. (And by challenging, I don't mean low light, which the C300 looks like it would win, or dynamic range, which the Scarlet-X would most likely win; I mean in terms of detail.)

    THAT is the difference between the 4K of each sensor. Canon has EVERY right to call the C300 a 4K sensor, because it is; they just choose to use that 4K to construct a superior 1080p image compared to what a native 1080p sensor can do. (Someone correct me if I am wrong, but I think with a full debayer, a debayered image gets roughly 70% of the actual resolution? I forget.) The Scarlet-X shooting a 4K image will not deliver 4K of resolution, period. Sensor pixel count is what it is, and both the Scarlet shooting video and the C300 are 4K sensors.

    Canon has created a very cool sensor. It does have a rolling shutter (it DOES NOT have a global shutter), but the readout is MUCH MUCH faster. From the whip pans I have seen, I really think it would probably never be a problem for most people, and I think the vehicle-mounted camera shots pretty much confirm that. I'll just have to rent one and test, because I am guessing at 6 to 7 ms myself.

    I cannot wait to see this technology trickle down into other Canon products, because it is a SIGNIFICANT upgrade to their old sensors, which I, and most of you, know VERY well. I also say this because I think some people get the impression that people here don't think the C300 is an awesome camera, and that's just not true. I really think it's a good camera and I really like what I see, regardless of my still-held opinions on the price. I think some of the design decisions were poor, like 8-bit out, but I also recognize that had they wanted to do 10-bit, it could have taken them a lot longer to get to market. At least they have something out sooner that they can continue to flesh out. Many people had these SAME EXACT complaints about the AF100 and FS100, and Panasonic and Sony are no different. It's not always about holding back technology; it's about knowing you can create a reliable product that actually works.

    Let me put it in perspective as a CMOS designer. When you are on the cutting edge, like this sensor clearly is, what you end up producing is the BEST design (BEST being subject to cost, performance, etc.), but that doesn't mean it is necessarily the ONLY design you did. Canon may have designed 2, 3, 4, 5 different imagers in parallel while they created this one. That's a lot of engineering resources. Also, don't forget, they have other product lines they work on as well, for instance the 1D. It's not like you can go out and get more CMOS designers to work in parallel, because CMOS designers are in HIGH demand; believe me, I know.

    Given how much risk a new sensor carries, and the amount of R&D that has to be focused on it, I can CLEARLY see why Canon picked a back end for the camera that wasn't a moving target and another source of delays while those kinks were worked out. Two years in a design cycle for a complicated product is definitely a short amount of time. They had to take their design risks where they made the most sense. I go through this EVERY day in the design work I do. It's not a MONEY risk, it's a TIME risk. TIME is pretty much THE most critical enemy in product design.
    formerly known as grimepoch.



  3. Senior Member Hidef1080
    Join Date: Sep 2008
    Location: Atlanta, Ga.
    Posts: 683
    Rick, you are the man.

    I always learn something from your posts.
    An UNPURE D800, Canon 7D | Dell M6500 - MSI GS70 | Windows 7 Pro 64bit - 8 Pro 64bit



  4. Senior Member starcentral
    Join Date: Nov 2004
    Location: Toronto, Canada
    Posts: 3,883
    Quote Originally Posted by starcentral:
    Do you know the F3 has a built-in auto-exposure tool called TLCS that allows a user-specified amount of gain or shutter speed to be introduced to maintain exposure? Do you know how many uses that has, from ENG to time-lapse shooting, etc.? The C300 does not have this.

    The F3 can be unlocked to 444 RGB output, albeit with an off-board device and battery - but it's an option. This makes it possible to go through a serious grading and post-production process with extreme pushing or pulling over 14.5 stops of dynamic range. This exceeds film in latitude and gives you the full post-processing options you would expect if working with film. The C300 does not have this.

    The F3 can shoot at variable frame rates from 1 to 60p, and above 30p the resolution drops to 1440x1080. The C300 can only shoot 60p in 720p mode.

    Lens mount: the F3 has E-mount for a variety of lens mount options. With the C300 you must choose PL or EF. You buy the C300 because you own a set of EF lenses, but wait: someone wants to hire you for a cine shoot and you can't even rent industry-standard Cookes or ARRIs to use.

    White balance: the F3 has presets via picture profiles and the ability to hold two custom white balance settings using AWB. The C300 does not have auto white balance, period.

    In the end you can't beat the C300's 422, its form factor, or the ability to use any EF glass you already own - but you can beat its price. EF glass owners definitely have a tough decision to make, no doubt.

    Quote Originally Posted by KahL:
    Um... this mount comparison you made is terribly biased.
    The F3 comes w/ a PL mount, does it not? Not all glass is PL-based, as far as I recall. So, on the opposite end of the spectrum, why would it be so hard to simply purchase a PL adapter for the C300? And aren't there just as many ___-to-EF adapters as well? If not more than for the PL base, considering when you factor in still lenses.
    Sorry, yes, it's true you can get a PL-to-EF adapter, but my guess (not based on fact) is that the C300 EF version will only be able to talk to EOS lenses and not support the Cooke/i and ARRI LDS protocols, whereas the PL-mount C300 version probably would, but not support any EF protocols. You're right though, it's not a big deal.

    I also want to add and clarify that:

    - The F3 can shoot full 1080p/60p through SDI if needed. (There is a guy overseas who made a video dragging around an Apple with dual SDI and a generator to shoot it for cheap - lol.)

    - 10-bit 422 on the F3 is a possibility, again through SDI if needed.

    - The F3 has a rocker zoom switch for compatibility with zoom lenses already shipping.

    You already know that 8-bit is 8-bit, and yes, 4:2:2 is better chroma sampling than 4:2:0, but people want 10-bit more than anything.
    Dennis Hingsberg



  5.
    Thanks for the excellent post, Rick.

    I think the response to the C300 has been divided into two groups: one that appreciates what Canon has done and the other that only focuses on what technology was reused.

    We have to realize this sensor was a clean-sheet design. It's never been used in any other product. Who knows how much they spent in R&D - it could be $50 million for all we know. But get this: the sensor is capable of 60p. Larry Thorpe even said they can reuse this chip if they decide to develop new processors and codecs to do 60p or 10-bit.

    I've been saying all along that Canon made conscious design decisions to reduce engineering risk and schedule slip. They have to uphold a reputation of reliability in their products.

    Quote Originally Posted by starcentral:
    Sorry, yes, it's true you can get a PL-to-EF adapter, but my guess (not based on fact) is that the C300 EF version will only be able to talk to EOS lenses and not support the Cooke/i and ARRI LDS protocols, whereas the PL-mount C300 version probably would, but not support any EF protocols.
    This is true. Neither the PL nor the EF version of the C300 will support Cooke/i or ARRI LDS. There are no electrical contacts in the PL version.

    The F3 is a fine camera and definitely has some advantages over the C300. A potential buyer or user should thoroughly research each before deciding.
    Last edited by Piolet; 11-08-2011 at 08:01 AM.


  7.
    Rick, so it's 4K 'photosites', not pixels, right? Or are there multiple photosites per each 4K 'pixel'? Not sure how 6.4 µm equates to mm.

    -cp



  8. Senior Member
    Join Date: Jun 2006
    Location: West London, UK
    Posts: 951
    As I understand it, 4 photosites (1 red, 1 blue and 2 green) are combined into 1 pixel.

    6.4 µm is 6.4 thousandths of a millimetre, i.e. 0.0064 mm.

    cheers,
    Dave



  9.
    Yeah, just saw that: about 8.3 million photosites, i.e. 3840 x 2160.

    -cp



  10. Section Moderator Rick Burnett
    Join Date: Jan 2010
    Location: Raleigh, NC
    Posts: 4,395
    Quote Originally Posted by frantick:
    Rick, so it's 4K 'photosites', not pixels, right? Or are there multiple photosites per each 4K 'pixel'? Not sure how 6.4 µm equates to mm.

    -cp
    4K photosites, with each red, green and blue photosite counted as one. That's the same way RED counts its sensors in terms of size.
    formerly known as grimepoch.


