Page 100 of 118 FirstFirst ... 509096979899100101102103104110 ... LastLast
Results 991 to 1,000 of 1174
  1. Refresh Rate (Member, New York City)
    Quote Originally Posted by AGMedia View Post
    I assume this is in the "wish list" somewhere in these 99 pages, and I'm sorry if I'm beating a dead horse -- but this is important.

    DEAR PANASONIC: As an almost certain purchaser of this AF-100 camera, I request the fastest possible refresh rate on the CMOS sensor. The faster the refresh on the CMOS sensor, the less the skew/jello, and the more viable this camera becomes for use in the next generation of digital cinema.

    Please -- push that refresh rate to the max.
    I second that. The refresh rate needs to be at least comparable to an EX1's. I know that may be a tall order due to the huge sensor, but it will definitely have an impact on whether or not I buy the camera.


     

  2. Rick Burnett (Section Moderator, Raleigh, NC)
    Quote Originally Posted by Barry_Green View Post
    The AF100 has a 4K chip, but it doesn't process it internally as 4K. All the internal processing is almost certainly going to be done at 1080p, just like all their other video cameras. I doubt the chip will even be read at 4K; it will probably be pixel-binned to take maximum advantage of the sensitivity gains possible, and to keep the total number of individual data pixels read to a manageable level.
    With regard to pixel binning: if you are using a Bayer pattern, how exactly does this work when any two adjacent pixels are different colors? Do they bin on the diagonal?

    I've wondered this for a while. My reading on pixel binning shows there are certain non-Bayer filter patterns that some companies use to make binning easier, but I cannot find much info on how pixel binning is done in hardware on a Bayer pattern, or what the specific gains and disadvantages are.

    I imagine there are a host of different ways to do it. Thoughts?


     

  3. Graeme_Nattress (Red Team, Ottawa, Canada)
    Quote Originally Posted by grimepoch View Post
    With regard to pixel binning: if you are using a Bayer pattern, how exactly does this work when any two adjacent pixels are different colors? Do they bin on the diagonal?

    I've wondered this for a while. My reading on pixel binning shows there are certain non-Bayer filter patterns that some companies use to make binning easier, but I cannot find much info on how pixel binning is done in hardware on a Bayer pattern, or what the specific gains and disadvantages are.

    I imagine there are a host of different ways to do it. Thoughts?
    In binning, you'd only bin pixels of the same colour - essentially, think of it as making bigger pixels by joining together smaller ones. Now that you've effectively got bigger pixels, what was the right amount of optical low-pass filtering for the smaller native pixels is no longer the right amount for the larger binned pixels. Binning itself is therefore a very poor downsampling filter.

    Usually the advantage of binning is that it's done on-chip, so there's a lot less data to read out, and a bottleneck is therefore reduced. If you're reading out the whole sensor and binning afterwards, you're doing it because it's the second-cheapest downsampling filter there is after nearest neighbour.
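    Graeme's "join smaller pixels into bigger ones" description can be sketched in a few lines of Python. This is purely my own toy illustration, not any camera's actual on-chip readout; the GRBG layout and the 2x2 binning factor are assumptions. The key point it demonstrates is that binning only ever combines same-colour photosites, which on a Bayer mosaic sit two positions apart.

```python
# Toy sketch of same-colour 2x2 binning on a Bayer mosaic (GRBG layout
# assumed). Same-colour photosites sit two positions apart, so each
# binned photosite sums a 2x2 group of its own colour taken at stride 2.

def bin_bayer_2x2(raw):
    """Half-resolution Bayer mosaic from same-colour 2x2 binning.

    Output pixel (r, c) keeps the colour of its parity (r % 2, c % 2)
    and sums the four input photosites of that same colour:
    rows 4*(r//2) + r%2 + {0, 2}, cols 4*(c//2) + c%2 + {0, 2}.
    """
    h, w = len(raw), len(raw[0])
    out = [[0] * (w // 2) for _ in range(h // 2)]
    for r in range(h // 2):
        for c in range(w // 2):
            out[r][c] = sum(
                raw[4 * (r // 2) + r % 2 + dr][4 * (c // 2) + c % 2 + dc]
                for dr in (0, 2)
                for dc in (0, 2)
            )
    return out

# Encode each photosite by its position in the 2x2 Bayer tile:
# value 10*(row parity) + (column parity). After binning, every output
# pixel is exactly 4x its own colour code - showing that only
# same-colour photosites were ever combined.
raw = [[10 * (i % 2) + (j % 2) for j in range(8)] for i in range(8)]
binned = bin_bayer_2x2(raw)
assert all(
    binned[r][c] == 4 * (10 * (r % 2) + (c % 2))
    for r in range(4)
    for c in range(4)
)
```

    Note that the sum is just a box filter over the colour plane - which is exactly why binning on its own makes such a poor downsampling filter.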

    Graeme
    www.nattress.com - Film Effects and Standards Conversion for FCP
    www.red.com - RED - 4k Digital Cinema Camera


     

  4. Graeme_Nattress (Red Team, Ottawa, Canada)
    Quote Originally Posted by Duke M. View Post
    @Graeme

    I've seen a couple of people recently quoting the 3k (3072) Scarlet as having true chroma resolution of 2.5k. (Of course we don't know yet.)

    Originally on Scarletuser people were speculating about a 20-30% loss in resolution due to the debayer process on a single-chip camera, because that's where it is on most cameras of that type (and generally closer to a 30% loss). At the time they were saying 2.2-2.3k resolution.

    This is just for curiosity, because the expectation will be a 2k output, but at a minimal 20% loss we are at 2457 in true resolution. Any oversampling is usually good, but... how much oversampling?

    Has something changed to make people now believe an 18.5% resolution loss is to be expected? Or are people just playing what-if again?
    It's not so much resolution loss, though - it's resolution you shouldn't have anyway, because acquiring it leads to excessive aliasing, which is ugly. All cameras are in much the same position. Look at the F35, which is soft horizontally and over-sharp vertically: it measures a full 1080 lines vertically, but at the expense of aliasing that can show up on broadcast when you shoot people who have fine detail on their clothing.

    Empirically, based on our RED One experience, you can expect ~80% of the rated pixel resolution, with negligible aliasing, which will make for a very nice image. Even on a 3 chip system, if you push your resolution beyond that 80% figure, you'll get into the land of objectionable aliasing due to optical filters not being "fast" - something to do with the lack of negative photons (darkons?).
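    The ~80% rule of thumb Graeme quotes is easy to sanity-check against the numbers in this thread. This is pure arithmetic on rated pixel counts (my own back-of-envelope sketch, not measured resolution data):

```python
# Back-of-envelope check of the ~80% usable-resolution rule of thumb.
# Pure arithmetic on rated pixel counts, not measured data.

def usable_resolution(rated_pixels, factor=0.8):
    """Rated pixel count times the rule-of-thumb usable fraction."""
    return round(rated_pixels * factor)

assert usable_resolution(3072) == 2458  # matches the ~2.5k figure quoted for a 3k Scarlet
assert usable_resolution(4096) == 3277  # a 4K-class sensor
assert usable_resolution(1920) == 1536  # a native 1920-wide readout
```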

    Graeme
    www.nattress.com - Film Effects and Standards Conversion for FCP
    www.red.com - RED - 4k Digital Cinema Camera


     

  5.
    Quote Originally Posted by Graeme_Nattress View Post
    In binning, you'd only bin pixels of the same colour - essentially, think of it as making bigger pixels by joining together smaller ones. Now that you've effectively got bigger pixels, what was the right amount of optical low-pass filtering for the smaller native pixels is no longer the right amount for the larger binned pixels. Binning itself is therefore a very poor downsampling filter.
    Yes, BUT -- the difference here is that this camera isn't going to be used at its native resolution for stills. So an OLPF designed for a binned sensor could result in very lovely footage after all. You won't get 4K res, but you weren't going to get that anyway, because it's a 1080p system.

    Even if they only binned adjacent green pixels, they'd still cut the amount of sensor data dramatically, while keeping it very high-res.


     

  6.
    Quote Originally Posted by grimepoch View Post
    With regard to pixel binning: if you are using a Bayer pattern, how exactly does this work when any two adjacent pixels are different colors? Do they bin on the diagonal?
    As Graeme said, you only bin those of the same color, so that would mean skipping across the gaps and binning. Which, in general, would be a pretty awful thing to do. However, in this particular case I think it could work, specifically because the sensor has 6x as many pixels as the destination frame size. Those pixels will go to waste unless they can find a way to use them, and binning might be that way.

    The gain for binning is increased sensitivity and lower noise, by creating effectively larger "super-pixels". There's been some discussion of Panasonic implementing something they called a 4K2K sensor... that might be exactly what this is: a 4K sensor that delivers 2K-resolution images. If they binned two green pixels together, and then used an appropriately strong OLPF to blur the detail just enough, they could end up with an effectively 4:4:4 2K+ sensor, because each Bayer line pair would have 2176 red and 2176 blue photosites, plus 2176 green super-pixels created out of the 4352 greens in that pair. With similar performance in the vertical.
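    The counting behind that hypothetical scheme works out as follows (a sketch of the arithmetic only, assuming a 4352-photosite-wide Bayer sensor; nobody outside Panasonic knows the real readout):

```python
# Sample counts for the hypothetical "bin the two greens" scheme,
# assuming a 4352-wide Bayer sensor (GR line over BG line).
SENSOR_WIDTH = 4352  # photosites per line, assumed

# One Bayer row pair: a G/R line plus a B/G line. Half of each line
# is green; the other halves are red and blue respectively.
greens_per_pair = SENSOR_WIDTH // 2 + SENSOR_WIDTH // 2
reds_per_pair = SENSOR_WIDTH // 2
blues_per_pair = SENSOR_WIDTH // 2

# Bin each pair of greens into one super-pixel.
green_superpixels = greens_per_pair // 2

# Equal counts of R, G, B per output line -> effectively 4:4:4 at 2176.
assert green_superpixels == reds_per_pair == blues_per_pair == 2176
```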


     

  7. Graeme_Nattress (Red Team, Ottawa, Canada)
    The complexity, as we now see, is that bigger pixels call for a different OLPF. In a design aimed only at that lower resolution, the OLPF is tuned for that resolution, but you'll still have the very same issue: you can't get your rated pixel resolution unless you accept aliasing, and how much to accept is a complex thing to get right. If you're going to use an over-sampled sensor only for an over-sampled image, the very best thing to do is a full adaptive demosaic followed by a high-quality downsampling filter. That can lead to superb results.

    Just binning both greens in the Bayer macro pixel gives you a 1/4 bandwidth saving, which is nice, but not the significant 2/3 saving that basic line skipping gives you. Binning probably doesn't help skew, whereas line skipping helps it a lot (again by 2/3).
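    The two savings Graeme quotes fall straight out of counting values read per Bayer macro pixel (my illustration of the arithmetic only):

```python
# Bandwidth savings quoted above, as straight fractions.
from fractions import Fraction

# A Bayer macro pixel holds G, R, B, G. Binning the two greens means
# reading out 3 values instead of 4.
bin_saving = 1 - Fraction(3, 4)
assert bin_saving == Fraction(1, 4)

# Reading 1 line in 3 (as DSLR-style line skipping is presumed to do)
# drops 2 lines in 3 - and shortens the readout, hence the skew win.
skip_saving = 1 - Fraction(1, 3)
assert skip_saving == Fraction(2, 3)
```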

    But when it comes down to it - almost everything you do is a compromise among image quality, size, weight, power consumption, heat and especially price.

    One thing I'm getting increasingly unhappy with is when image quality differs horizontally and vertically - I keep noticing it in images and it bugs me :-) I really like to keep everything symmetrical!

    Graeme
    www.nattress.com - Film Effects and Standards Conversion for FCP
    www.red.com - RED - 4k Digital Cinema Camera


     

  8. Rick Burnett (Section Moderator, Raleigh, NC)
    I see - I guess it also depends on the sensor and how much binning you are doing. For instance, with the 7D sensor you'd have to skip two of every three lines; do they bin multiple pixels? Like:

    GRG
    BGB
    GRG

    In that pattern, if line skipping picked just the center G, a binning approach could use all 5 greens - or, if you were looking at R and B, two of each.

    One question I would have: does the binning process take as long as reading 3 lines? I know line skipping can get you to 1/3 of the time in this approach, but depending on how the binning occurs, I could see it taking 2/3 the time of a full read (leaving at least one read's worth of time to combine the results).
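    The 3x3 neighbourhood described above, and which photosites a binning readout could draw from, can be spelled out directly (again just a sketch of the counting - how the actual silicon does this is exactly the open question):

```python
# Rick's 3x3 Bayer neighbourhood, and the same-colour photosites a
# binning readout could combine within it.
tile = [
    ["G", "R", "G"],
    ["B", "G", "B"],
    ["G", "R", "G"],
]

def sites(colour):
    """All (row, col) positions of the given colour in the tile."""
    return [(r, c) for r in range(3) for c in range(3) if tile[r][c] == colour]

assert len(sites("G")) == 5  # the centre plus the four corners
assert len(sites("R")) == 2  # above and below the centre
assert len(sites("B")) == 2  # left and right of the centre
```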

    I'd really like to see some circuitry that does this! My EE side is frothing for knowledge.

    Given that the AF100 will have a stronger OLPF, it will be interesting to see how much softer the image looks. I say this because anything that aliases in the pictures you are taking now is going to have to get softer. People have commented that if a strong enough OLPF were used on, say, the 7D, the image would be so soft that people wouldn't like it.

    Given the density of R and B pixels in a given 9-pixel area, I think you'd still need a pretty strong OLPF, which would really soften the image (on the 7D).

    On the GH1, how many lines are pixel binned into 1?


     

  9.
    Well, again, this is a specific and unusual scenario. RED has a different design priority, in that you want stills and video at the same resolution. DSLRs have a different design priority, which is a $799 price tag. But the AF100 is looking to get the best 1080p image it can out of a 4.3k sensor.

    Skipping rows is one way to go, and is what we all presume the DSLRs are doing. But they *have* to do that, because they have to preserve the full resolution for stills mode. The AF100 doesn't; all it needs is 1080p. So if it starts with an oversampled 2160 and downsamples to 1080p, it might end up with a fantastic 4:4:4 1080p image for internal processing.

    Furthermore, as far as savings go, there's not really much reason for them to even go 4:4:4, because the recording format is 4:2:0. So they could bin a couple of reds together, and a couple of blues together, and end up with 1080 chroma samples per line and 2160 luma samples - slashing the data rate in half while giving up *nothing* in terms of ultimate image quality as per the recorded format.

    If you then slice off a quarter of the chip's lines for the 16:9 shape, instead of its native 4:3 shape, you remove the need to read a whole bunch of lines that weren't doing anything for you anyway. I think it's possible, even practical, that the combination of properly engineered binning and ignoring unused areas of the chip could result in a fast enough read off the chip that they won't have to resort to pixel skipping.
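    The sample-count arithmetic behind those two savings checks out as follows (illustration only; real data rates also depend on bit depth and compression, and the frame sizes are the thread's assumptions):

```python
# Sample-count arithmetic: 4:4:4 vs 4:2:0, and the 4:3 -> 16:9 crop.
from fractions import Fraction

def samples_per_pixel(chroma_h, chroma_v):
    """One luma sample per pixel plus two chroma planes, each
    subsampled by the given horizontal and vertical factors."""
    return 1 + 2 * Fraction(1, chroma_h * chroma_v)

full_444 = samples_per_pixel(1, 1)  # 3 samples per pixel
rec_420 = samples_per_pixel(2, 2)   # 1.5 samples per pixel
assert rec_420 / full_444 == Fraction(1, 2)  # "slashed the data rate in half"

# Cropping a same-width 16:9 window out of a native 4:3 chip skips a
# quarter of the lines (height 9/16 of the width vs 12/16 of the width).
assert 1 - Fraction(9, 16) / Fraction(12, 16) == Fraction(1, 4)
```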

    Granted, they won't get 120fps or 4K out of it, but then again they weren't going to get that anyway. As you said, it's all a compromise of heat vs. price vs. imager size vs. resolution vs. aliasing vs. everything, all together. You could get a RED One with an MX and get 4.5k and all that, but at $23,000. Or you could get a Scarlet and get 3K and all that, but with a 2/3" chip and less-than-16mm DOF. Or you could get an AF100 and get 35mm-style DOF and a hopefully pristine, alias-free 1080p image that should rival the EX1/HPX370 for sharpness - but you don't get 120fps or 4K res.

    For me, I'm plenty happy with that: EX1/HPX370 sharpness, 35mm cine DOF, interchangeable lenses, PL-mount options, pro audio, variable frame rates, all for less than $5995. I'm not arguing!


     

  10.
    Quote Originally Posted by grimepoch View Post
    Given that the AF100 will have a stronger OLPF, it will be interesting to see how much softer the image looks. I say this because anything that aliases in the pictures you are taking now is going to have to get softer. People have commented that if a strong enough OLPF were used on, say, the 7D, the image would be so soft that people wouldn't like it.
    That's because the 7D is designed for stills, with a video mode grafted on. Its stills resolution is the priority. The video mode delivers barely 720p of resolution; if you used an appropriate OLPF on the 7D, you would end up with a 720p camera that has to record in 1080p mode to deliver that 720p.

    With the AF100, we're talking about something that's been taken into the design shed by the video engineers, starting from the perspective that stills are irrelevant. This unit will be tuned to deliver the best video image they can make. I fully expect HPX370/EX1 sharpness out of it.

    On the GH1, how many lines are pixel binned into 1?
    Nobody knows, outside of the engineers inside Panasonic Japan. You'll get people making guesses, even educated guesses, but that's all. Nobody actually knows.


     

