My latest HDR still

Nitsuj

New member
There is a lot of video on this forum, but I have been doing both video and stills with my GH1 and thought I would throw up a still. Shooting both is one of my favorite things about these HDSLRs. Here is a photograph I shot just yesterday, and I love how it turned out. It shows that the GH1 is a great little tool to get creative with for both video and photography. This is an HDR image, tone mapped in post. You can see the large version here, along with a description: http://exposureroom.com/0cd66b10e40a450284b71b84f307ebe6/

 
HDR stills can be fun, but I'm really curious about the possibility/practicality of HDR live action video, not stills or timelapse video based on stills.

I started a thread about shooting HDR live action video using 2 identical cameras & lenses shooting through a beamsplitter mirror over on Cinematography.com, and there are some good responses there:
http://www.cinematography.com/index.php?showtopic=42332

... and there's another related thread, too:
http://www.cinematography.com/index.php?showtopic=41162

But it would be great to see a GH1-centric discussion about HDR live action video here on DVXuser!
 
I've been in pursuit of an HDR video solution for a while. I considered a splitter solution myself, but with a two-camera system you are limited to highs and lows and leave out the mids. What I think the real solution would be is a three-layered sensor. Of course, as far as I know this hasn't been invented yet, but it was the only way I could think of to allow true HDR. Basically it would capture light at different levels of the sensor. I imagine it would look like some sort of grid: three plates with tiny holes. One plate by itself would just have tiny holes in it, but if you looked straight down through all three you would see a solid piece, because the other two plates are positioned to catch the remaining light passing through the first set of holes, each slightly off-center from the other. Not sure if you can remotely tell what I am getting at here, haha. I'm not an engineer or anything, but that's how I imagine a triple-layered sensor would work as far as the hardware goes.
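Just to make the merging half of that idea concrete, here is a rough Python/NumPy sketch (entirely my own illustration; the weighting scheme and all the names are made up, since no such sensor exists) of blending three captures of the same scene, one exposed for the shadows, one for the mids, and one for the highlights, with each capture counting most where it is best exposed:

import numpy as np

def merge_exposures(under, normal, over):
    """Blend three captures of the same scene (float arrays in [0, 1]):
    underexposed (holds the highlights), normal (mids), and overexposed
    (holds the shadows). Purely a software sketch of the merging idea."""
    stack = np.stack([under, normal, over])
    # Weight each capture where its pixels sit closest to mid-grey, so each
    # layer contributes the part of the tonal range it exposed best.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights * stack).sum(axis=0)

# Quick test with a synthetic gradient "scene":
scene = np.tile(np.linspace(0.0, 1.0, 640), (480, 1))
under = np.clip(scene * 0.4, 0, 1)    # protects the highlights
normal = np.clip(scene * 1.0, 0, 1)
over = np.clip(scene * 2.5, 0, 1)     # lifts the shadows
merged = merge_exposures(under, normal, over)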
 
I'm really confused about HDR video. I mean I get the concept, but why do you have to use beam splitters?

If you used a configuration that put the lenses as close together as possible (my guess is stacked one on top of the other, or side by side with one upside down and one right-side up), then all you would have to do is match the frames in post and crop what doesn't match.

You lose some resolution, but if you're delivering 1080p you can simply Instant HD the few pixels that are missing, and if you're delivering 720p it's a non-issue since you're downrezzing from 1080 anyway.

I suspect the MkII would be the best camera for this method, but I don't see why it couldn't be done by simply matching frames, combining them with opacity and difference layers in post, and then cropping what doesn't line up?

Am I just way off base?
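Just to sketch what that "match the frames and crop" step might look like with off-the-shelf tools, here is a minimal OpenCV example (my own rough illustration, not a tested pipeline; the translation-only alignment and the fixed crop border are simplifying assumptions):

import cv2
import numpy as np

def align_and_crop(ref_frame, other_frame, border=32):
    """Align other_frame to ref_frame (translation-only ECC, as a rough
    sketch), then crop a safety border from both so only the shared
    picture area remains. Frames are expected as 8-bit BGR images."""
    ref_gray = cv2.cvtColor(ref_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    other_gray = cv2.cvtColor(other_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-5)
    _, warp = cv2.findTransformECC(ref_gray, other_gray, warp,
                                   cv2.MOTION_TRANSLATION, criteria)

    h, w = ref_frame.shape[:2]
    aligned = cv2.warpAffine(other_frame, warp, (w, h),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # Trim the same border from both frames so the mismatched edges go away.
    ref_crop = ref_frame[border:h - border, border:w - border]
    aligned_crop = aligned[border:h - border, border:w - border]
    return ref_crop, aligned_crop

From there you could upscale the cropped result back to your delivery size, which is roughly the "Instant HD the few missing pixels" step mentioned above.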
 
This has been discussed before, back when RED was asking people what they would do to improve the RED camera. It's just impossible for it to work in real time. However, it can theoretically work in post, using a single frame to adjust your highs, mids, and lows and pasting them together.
 

Yeah, that side-by-side two-camera approach could possibly work, but if only we could also get highs, mids, and lows, that would be even better. A three-camera system might be even trickier, though. One of the problems with that setup would be the slight perspective change between the lenses; I imagine it would look like double vision.
 
Hi Kholi: I believe the disadvantage of using 2 video cams shooting the subject directly _without_ a beamsplitter is that you'd get a slightly different angle/view of the subject in each cam, which is good for 3D video, but bad for 2D video (the latter is what I'm interested in).

My (probably not original) HDR live action video idea is that the 2 video cams be mounted 90 degrees relative to each other and both shoot through a single partially-silvered mirror. Similar to how a teleprompter & its half-silvered mirror can be mounted on a video cam, except the teleprompter's CRT or LCD monitor is replaced by the 2nd camera, so both cams view the scene identically thru the mirror.

As for exposure, I think it would be OK if one video cam was set to expose from the shadows through to the mid-tones, and the other cam was set to expose from the upper mid-tones up through the highlights -- with some overlap of course. Although you'd almost certainly want to set each lens' iris with the same f-stop so each cam's video has the same DOF (use an ND filter on 1 lens to adjust exposure), it would also be fun to experiment with "dual" DOF in the composited final video as a weird effect. :)

The resulting 2 video recordings would be composited in post using software techniques similar/identical to what's used for HDR digital still photography, but here with motion video instead (I'm guessing using FCP or AfterEffects, etc.).
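I'm not sure what the exact FCP/After Effects steps would be, but as a rough illustration of that per-frame compositing, here is a minimal sketch using OpenCV's Mertens exposure fusion (just one possible merge method; the file names are made up, and it assumes the two recordings are already aligned and trimmed to matching frames):

import cv2

def fuse_frame_pair(dark_frame, bright_frame, merger):
    """Merge a frame from the highlight-protecting cam with the matching
    frame from the shadow-protecting cam using exposure fusion."""
    fused = merger.process([dark_frame, bright_frame])   # float result, roughly [0, 1]
    return (fused * 255).clip(0, 255).astype("uint8")

# Hypothetical file names for the two recordings, already trimmed so the
# same frame number shows the same moment in both clips.
cap_a = cv2.VideoCapture("cam_highlights.mov")
cap_b = cv2.VideoCapture("cam_shadows.mov")
merger = cv2.createMergeMertens()
writer = None
while True:
    ok_a, frame_a = cap_a.read()
    ok_b, frame_b = cap_b.read()
    if not (ok_a and ok_b):
        break
    fused = fuse_frame_pair(frame_a, frame_b, merger)
    if writer is None:
        h, w = fused.shape[:2]
        writer = cv2.VideoWriter("fused.avi",
                                 cv2.VideoWriter_fourcc(*"MJPG"), 24, (w, h))
    writer.write(fused)
cap_a.release()
cap_b.release()
if writer is not None:
    writer.release()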

Seems like it could work, and the folks who replied to my post over on Cinematography.com kinda agree, although they caution there are some technical challenges to watch out for.

The reason I'm proposing doing it with 2 identical video cams plus 2 identical lenses & 1 beamsplitter (instead of 2 video cams, 1 beamsplitter & 1 lens) is that I'm pretty sure the latter requires some fairly fancy custom optics to make up the beamsplitter assembly, whereas my approach is built out of relatively inexpensive, off-the-shelf, readily available "parts".

Now that video-capable DSLRs are readily available, their fully-manual lenses would seem helpful for a project like this, and DSLRs are small, light, battery operable and relatively inexpensive.

As mentioned in my posts on Cinematography.com, this company sells inexpensive front-surface & rear-surface beamsplitter mirrors:
http://www.frontsurfacemirror.org/
http://www.stereoscopicmirrors.com/3d.htm

Question: Since the 2 video recordings are composited in post, do the cams need to be genlocked when shooting the original footage?

Anyway, just brainstorming ... comments welcome.
 
Attachment: HDR_video_diagram.jpg
I don't ever sync cameras, but for something like this, wouldn't you need them to be shutter-synced as well? If not, they would be grabbing slightly different moments in time, and I imagine combining that in motion would come out very ugly.
 
Back when astronomy was a big hobby of mine (I still love astronomy), I remember wanting to get a beam splitter so I could take photographs of the stars. There might be some sort of splitter out there that could be modified to work for what you're describing, instead of building the whole thing from scratch; might be worth checking out. Now I want to get a telescope again and use it with my GH1... bah.

The method Car3o is talking about is something I thought about trying a while back, but I didn't have the camera for it. It isn't true HDR, and I have seen some very bad attempts, but I might just give it a whirl and see what I can do now.
 

Nitsuj, the photograph is very nice. Actually, you could already get HDR films out of the GH1 if you do photo timelapses with the "bracket" feature. I think I'll give it a try in the next few days.

I think HDR from a camera manufacturer's perspective shouldn't be too difficult - you already have the capability to shoot 60 frames per second in the GH1, and an easy step would be to change the exposure time for every second frame and then merge each pair of frames back together - this would give 30fps HDR.
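If a camera ever did expose alternate frames differently like that, the pairing-up step in post might look roughly like this (a sketch only; it assumes the 60p footage decodes cleanly frame by frame, the file names are made up, and the simple well-exposedness weighting is my own stand-in for a proper merge; note that the 1/60 s of motion between the two frames in each pair is exactly the catch discussed further down):

import cv2
import numpy as np

def pair_and_merge_60p(in_path="clip_60p.mov", out_path="clip_30p.avi"):
    """Read 60p footage where every second frame has a different exposure,
    pair consecutive frames, and merge each pair into one 30p frame using
    a simple well-exposedness weighting (a stand-in for a real merge)."""
    cap = cv2.VideoCapture(in_path)
    writer = None
    while True:
        ok_a, bright = cap.read()   # frame exposed for the shadows
        ok_b, dark = cap.read()     # the next frame, exposed for the highlights
        if not (ok_a and ok_b):
            break
        stack = np.stack([bright, dark]).astype(np.float32) / 255.0
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
        weights /= weights.sum(axis=0, keepdims=True) + 1e-8
        merged = ((weights * stack).sum(axis=0) * 255).astype(np.uint8)
        if writer is None:
            h, w = merged.shape[:2]
            writer = cv2.VideoWriter(out_path,
                                     cv2.VideoWriter_fourcc(*"MJPG"), 30, (w, h))
        writer.write(merged)
    cap.release()
    if writer is not None:
        writer.release()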

That's what I love about this forum - it is always very inspiring!
 

Changing the exposure on every second frame will not work with long-GOP compression unless the two exposure settings are recorded as two different streams.
 

Thanks for the comment about the photo!

Here is the problem with the first approach you described. Timelapse HDR has been done with success, but only with wide shots of scenery; it doesn't work well when there are any moving objects. An HDR frame is a stack of photos whose dynamic ranges are combined, so if there is any movement between the photos you get a lot of blur, and I have often found artifacts that are tough or impossible to remove. That's with a single frame, so you can imagine a minute's worth of video.
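For what it's worth, that kind of ghosting can at least be flagged in software before the merge. Here is a rough sketch (my own illustration, not something anyone in this thread proposes) that exposure-matches two brackets and masks the regions that moved, so a merge could fall back to a single bracket there:

import cv2
import numpy as np

def motion_mask(bright_bracket, dark_bracket, gain=2.0, thresh=25):
    """Rough ghost detection between two brackets of the same scene.
    The darker bracket is brought up by `gain` so static areas roughly
    match the brighter one; whatever still differs is probably a moving
    subject and gets masked out before (or instead of) merging."""
    a = cv2.cvtColor(bright_bracket, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(dark_bracket, cv2.COLOR_BGR2GRAY)
    b_matched = np.clip(b.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    diff = cv2.absdiff(a, b_matched)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Grow the mask so it fully covers the edges of moving subjects.
    return cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)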

The problem with the second approach, the 60fps idea, is that it would still leave a blur effect: by changing the exposure time you would capture blur on anything moving, and then you're back to the same problem as above. I think it would take a completely different idea of how sensors work. Basically a sensor so sensitive that it can capture the dark, low areas without blur, and a camera capable of keeping just the shadow detail on that particular plate, getting rid of everything else and replacing it with the highlight detail from another plate. Basically a sensor that can hold two or more different dynamic ranges simultaneously. That's why I think it would need to be a three-layered sensor that can capture a different range on each layer.
 
I think the beam splitter idea needs to be tried to see if it is on the right track. I was going to try it a while ago, but like I said, I didn't have the equipment to even attempt it. I still have just one GH1, so I'm not able to try it myself. If there were a way to get a third camera in there, I think it would be perfect. And just imagine that idea miniaturized onto a circuit inside a future camera.
 