  1.
    HDR motion picture acquisition by lowering resolution/framerates
    High dynamic range acquisition would be the biggest revolution in digital cinema now that resolution has evolved to the level of 35mm (I feel 4K is that level).

    I just have a very basic idea: why not shoot continuously, 'bracketing' frame after frame (*explained below)? In still photography, bracketing means that to find the correct exposure you take three images: one at -1 EV, one at the suggested setting, and one at +1 EV.

    Today bracketing has another use: if you take 5 pictures at -3, -1.5, 0, +1.5 and +3 EV, you can combine them into one photo with 16/32-bit dynamic range. With HDRI photographs you can do image-based lighting and reflections, and post-production motion blur really needs HDR too. That opens new perspectives in CG, since you can do things never seen before, for example total lighting control in post.
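    The merge step can be sketched numerically. Below is a minimal exposure-merge sketch (not any particular camera's or tool's pipeline): frames are assumed normalized to [0, 1], each shot at a known EV offset, and a simple hat weighting trusts mid-tones and ignores clipped values.

    ```python
    import numpy as np

    def merge_exposures(frames, ev_offsets):
        """Merge bracketed frames (values in [0, 1]) into one linear HDR
        radiance estimate, using a simple hat weighting (a sketch only)."""
        acc = np.zeros_like(frames[0], dtype=np.float64)
        wsum = np.zeros_like(acc)
        for frame, ev in zip(frames, ev_offsets):
            exposure = 2.0 ** ev                    # +1 EV = twice the light
            w = 1.0 - np.abs(frame - 0.5) * 2.0     # mid-tones weighted most,
            acc += w * frame / exposure             # clipped pixels get w = 0
            wsum += w
        return acc / np.maximum(wsum, 1e-6)
    ```

    Scene values beyond what any single frame can hold (e.g. a highlight at 2.0 on a 0-1 sensor) survive because the underexposed frame still records them unclipped.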

    (*) So, technically, if you can capture 120 fps @ 720p, you just need a very fast 4-way or 5-way aperture-changing mechanism (like a spinning wheel) that changes the aperture continuously, like:
    0 +2 -1 +1 -2 | 0 +2 -1 +1 -2 | 0 +2 -1 +1 -2

    Or, if you can change resolution quickly, you can make every 0 EV frame e.g. 2K and the additional 3-4 frames e.g. 720p for DR expansion. That would need strong post-processing algorithms, an adaptive deinterlace-like algorithm to correct for the small changes between frames, and a custom file format (although it could be implemented in RED RAW). But in HDR today the big question is the hardware bottleneck, so a perfect software solution would follow if hardware capable of this existed.
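    The capture schedule described above can be sketched as follows. The 5-frame EV cycle and the 120 fps rate are the ones proposed; the grouping of each cycle into one HDR output frame (giving 24 fps) is my assumption about how the post-processing would consume the stream.

    ```python
    EV_CYCLE = [0, +2, -1, +1, -2]   # the aperture/EV pattern proposed above

    def bracket_schedule(n_frames, cycle=EV_CYCLE):
        """Assign an EV offset to each captured frame, repeating the cycle."""
        return [cycle[i % len(cycle)] for i in range(n_frames)]

    def group_into_hdr_frames(frames_with_ev, cycle_len=len(EV_CYCLE)):
        """Group consecutive bracketed captures into one HDR output frame
        each: 120 fps capture with a 5-frame cycle -> 24 fps HDR output."""
        return [frames_with_ev[i:i + cycle_len]
                for i in range(0, len(frames_with_ev) - cycle_len + 1, cycle_len)]
    ```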

    For additional info on HDRI:


  2.
    Yeah, but there'd be MORE than small changes between frames, and you'd be giving each frame less light, producing poorer frames.

    In theory this is a nice idea, but just doesn't carry over from still photography to motion cinematography.


  3.
    Nonononononono. Stop trying to apply still HDR acquisition to motion pictures.

    You still have the same problem of each frame being different from the last.

    This approach is actually worse than trying to use different exposure lengths, say 1/240th and 1/480th in this case. Now you're shooting two 1/240th-of-a-second exposures (or more), but you're also varying the aperture. What'll happen is you'll have two images that are not only misaligned with each other but also have different DOFs.

    What that'll look like is the really bad post DOF processing you see in programs like AE, Shake or Combustion, with the blurry fringe around a sharp edge.

    The only way to properly get a quality HDR image from a motion picture sensor is to build a sensor with something like 20 stops of latitude that outputs a native float or half-float image.

    This'll happen eventually; it's only a matter of time. It might not happen with a CMOS chip, but it'll happen. Another technique in the meantime, until we develop new sensors with unheard-of SNRs, would be a system sort of like the Viper sensor, where you embed multiple sensors onto the same chip.

    If each pixel had a small diffusion grid over it and was composed of two sub-pixels, one of which had a 1/4 ND filter on it, you could theoretically get closer to an HDR image in a single capture. Now if you had five sub-pixels, each masked at a different level (unmasked, 1/2, 1/4, 1/8 and 1/16), you would be pretty close. But by this time you've not only divided your sensor into 5 wells, you've also significantly masked most of them. So you'd better have an extremely sensitive sensor to start with, in order to give the underexposed wells a chance to pick up anything.
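    The masked-well idea above can be simulated quickly. This is a sketch only: five wells with transmissions 1, 1/2, 1/4, 1/8 and 1/16, each clipping at full well, reconstructed by taking the brightest well that hasn't clipped (the selection rule is my assumption; it also ignores the per-well noise the post warns about).

    ```python
    import numpy as np

    TRANSMISSIONS = [1.0, 0.5, 0.25, 0.125, 0.0625]  # unmasked, 1/2 ... 1/16

    def capture_wells(radiance, full_well=1.0):
        """Each sub-pixel sees the scene through its ND mask, clipping at
        full well."""
        return [np.clip(radiance * t, 0.0, full_well) for t in TRANSMISSIONS]

    def combine_wells(wells, full_well=1.0):
        """Reconstruct linear radiance: take the least-masked well that has
        not clipped, rescaled by its transmission."""
        out = wells[-1] / TRANSMISSIONS[-1]          # darkest well as fallback
        # iterate darkest -> brightest so the last unclipped write wins
        for well, t in reversed(list(zip(wells, TRANSMISSIONS))):
            out = np.where(well < full_well, well / t, out)
        return out
    ```

    With a 1/16 mask on the darkest well, scene values up to 16x the single-well clip point come back intact, which is the 4-stop headroom extension the masking buys.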
    Last edited by im.thatoneguy; 11-04-2006 at 02:20 PM.
    - Gavin Greenwalt


  4.
    I believe Fuji(?) has a pixel layout with a mixture of pixel sizes, which gives a wider dynamic range than having all pixels the same size. I have no idea how well it works, and apparently it causes a loss of resolution compared with other sensors. Interesting nonetheless.
    Do not seek to follow in the footsteps of the men of old; seek what they sought.

    -Matsuo Basho


  5.
    Okay, but how about THIS:

    Make a three-sensor RED camera. Use a prism to split the light to the three sensors. But instead of using each sensor for a different color, you'd use each sensor for a different exposure...

    Meh. Probably wouldn't work; different exposure times all blended together would probably look weird. But if they all used the same exposure time, and one sensor had a three-stop ND filter over it and another had a six-stop ND filter, then you might get extreme highlight protection without sacrificing anything (other than paying more for a 3-sensor version, plus losing some light in the prism, plus running three times as much hardware in the camera...)
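    The headroom gain from this arrangement is simple arithmetic. A small sketch, assuming the per-sensor latitude (11 stops here is an invented figure) exceeds the largest ND offset so the three exposure windows overlap:

    ```python
    def total_latitude(base_stops, nd_offsets):
        """Combined latitude in stops of identical sensors behind different
        ND filters: one shared window widened by the spread of the offsets.
        Assumes the offsets don't exceed base_stops, so the windows overlap."""
        return base_stops + max(nd_offsets) - min(nd_offsets)

    # e.g. hypothetical 11-stop sensors with clear / 3-stop / 6-stop NDs
    combined = total_latitude(11, [0, 3, 6])
    ```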


  6.
    Yes, a 3CCD method splitting not by color but by DR would be great. It could be done easily thanks to existing prism technology, but the camera could cost 2 or 3 times more because of the additional CCD chips (although it would have a huge market in high-end cinema).

    You just need to be sure that the sensor can handle a 2-stop underexposure (that means 4 times less light) with controllable noise, which I think is almost impossible in available-light scenes.
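    The noise concern can be quantified in the shot-noise-limited case (real sensors add read noise on top of this, so this is a best-case sketch): signal is the photon count N, noise is sqrt(N), so cutting the light 4x halves the SNR.

    ```python
    import math

    def shot_noise_snr(photons):
        """Photon-shot-noise-limited SNR: signal N over noise sqrt(N)."""
        return photons / math.sqrt(photons)   # equals sqrt(photons)

    # 2 stops less light = 1/4 the photons -> SNR drops to half
    ratio = shot_noise_snr(2500) / shot_noise_snr(10000)
    ```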

