Panasonic produces a 3D lens for their Lumix DMC-GH2 stills camera. Since the lens is a native Micro Four Thirds mount lens, people have been naturally curious – could it work on the AF100? During the Osaka trip, we were taken on a tour of Panasonic Osaka Center (a huge store in a shopping mall that serves as kind of a store/museum/showcase/demo center of everything Panasonic makes) and when we got to the consumer camera section, several of us clamored to see the 3D lens. We put it on the AF100, and … nothing. No video at all, whether we told it to “Check Lens” or not. It just refused to work. And, on the GH2, it works for stills, but not for video; the camera tells you that it cannot shoot video with the current lens attached.
That was disappointing, but really, not all that surprising; the lens is a very inexpensive, fixed-focus, fixed-iris, fixed-focal-length lens that was designed for one task – taking 3D stills. However, an enterprising YouTube poster named "540iayt" showed that you actually could defeat the lens's lockout so that it could be made to work on the GH2 in video mode, if you just covered up the lens electrical contacts with a strip of paper!
Well, heck... that got my curiosity going, so I ordered one. It was on backorder for a while, but it showed up a couple of days ago, and since then I've been experimenting with it. Can you, really, use just this little $249 lens to turn your AF100 or GH2 into a 3D moviemaking machine?
The short answer: yes. The longer answer: but you're probably not gonna bother. There are many good reasons why they disabled video recording with this lens!
Okay, first things first – the lens itself. It's tiny. Ridiculously tiny and featherweight. How tiny? Let's just say that the Lumix 20mm f/1.7 “pancake” lens looks big next to it! The 3D lens seems like it's about half the weight and even thinner than the 20mm.
3D-and-20mm-profile.jpg
3D-and-20mm-front.jpg
And, it's entirely and completely “fixed” in all ways – fixed iris (f/12), fixed focal length (12.5mm), fixed focus (anything from two feet to infinity is in focus). It's a very minimalist lens. Don't let the 12.5mm focal length fool you, though – this is not a wide-angle lens! While its focal length is technically 12.5mm, you have to take into account that it's two lenses recording to (less than) half the frame each – so it's as if it's 12.5mm on a 2/3” sensor, or actually probably more like a 1/2” sensor. The actual field of view is more on par with a 35mm lens on an m43 camera – meaning it's a bit of a telephoto; that'd be about equivalent to 70mm on a full-frame 35mm stills camera.
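If you want to sanity-check that equivalence yourself, the thin-lens horizontal field-of-view formula gets you there. Here's a quick Python sketch – the per-eye sensor width is my own rough estimate (assuming a ~740-pixel eye window on the GH2's ~18.8mm-wide active area, per the measurements later in this article), so treat the numbers as ballpark:

```python
import math

def hfov_deg(sensor_width_mm, focal_mm):
    """Horizontal field of view, in degrees, for a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# Assumed numbers: GH2 active sensor width ~18.8mm across 1920px,
# so a ~740px-wide eye window covers roughly 7.2mm of sensor.
eye_window_mm = 18.8 * 740 / 1920            # ~7.24mm per eye

fov_3d_eye = hfov_deg(eye_window_mm, 12.5)   # one eye of the 3D lens
fov_35mm   = hfov_deg(18.8, 35.0)            # a 35mm lens on the full m43 width

print(round(fov_3d_eye, 1))  # ~32.3 degrees
print(round(fov_35mm, 1))    # ~30.1 degrees -- same ballpark, as claimed
```

So the per-eye view really does land near the field of view of a 35mm lens on Micro Four Thirds, at least with these assumed window sizes.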
The GH2 and the AF100 both use the Micro Four Thirds lens mount, which provides for 11 electrical contact points, which allow the camera to communicate with the lens. When these contacts are used properly they can tell the lens what iris to go to, they can power the lens' autofocus motor, they can read the lens' iris and focus and zoom positions, they can power up the lens' optical image stabilizer, and they can allow the camera body to pass data to the lens such that the body can actually update the firmware inside the lens. And apparently, there is some protocol in there that allows the lens to tell the camera “I am a 3D lens”.
3D-lens-with-contacts.jpg
Now, looking at the contacts, I thought – what practical purpose do they serve, on this lens? There is no iris (the lens is fixed at f/12, you cannot open up or stop down the iris), so obviously there's no iris control going on. There is no focus ring (the lens is a fixed-focus lens, with everything from about two feet to infinity in sharp focus), so there's no autofocusing going to happen. And there's no zoom, as it's a fixed-focal-length lens, at 12.5mm. So really, if you disabled all the electrical contacts, what would you lose? Nothing, except that the lens can no longer tell the camera that it's a 3D lens, and therefore the camera won't know to shut down video recording. And therein is the secret – if you don't tell the camera there's a 3D lens attached, the camera will just record anyway – and what it records will be a side-by-side image that you could then take into post and mux into a true 3D image, right?
Eh, not so much. Sort of, but … not quite. Okay, well, let's start with what the lens is, so this can all make some sense. The 3D lens is actually two lenses, side by side. It records a stereoscopic image, sort of like your eyes do, with there being a left lens and a right lens, to simulate how you have a left eye and a right eye. Each lens sees a very slightly different perspective of whatever you're pointing the camera at, and when those two perspectives are shown simultaneously (on a 3D-capable device) you get a 3D image.
How does a 3D image get recorded? In stills mode, it uses a .MPO file, and also takes a .JPG at the same time, so you can view the image in 2D mode (by looking at the .JPG) or in 3D mode (when connected to a 3D-capable display device, such as a 3D HDTV, it will use the .MPO file instead of the .JPG). The GH2 has the capability to represent the image from two lenses as one single 2D image on the LCD; there's obviously quite a bit of processing going on in order to do that.
What about video? Well, 3D video is recorded and stored in a number of ways; when using two cameras you'll often record each eye separately and merge them in post into a 3D file, but a very common technique for 3D distribution is to anamorphically squeeze each lens's image onto one frame, resulting in both lens views being stored on one frame, side by side (this is called the “side-by-side” format). Yes, this does mean you lose half your horizontal resolution, but … hey, you have to fit two frames' worth of information in one frame's worth of space somehow, and side-by-side is a very common 3D video recording format.
Antelope-2D.jpg
Antelope-3D.jpg
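If the side-by-side squeeze sounds abstract, here's a toy Python sketch of the idea – a crude nearest-neighbor squeeze on a single scanline (real encoders filter properly; this is just to show the packing):

```python
def squeeze_half_width(row):
    """Nearest-neighbor 2:1 horizontal squeeze: keep every other pixel."""
    return row[::2]

def side_by_side(left_row, right_row):
    """Pack a left-eye and a right-eye scanline into one frame-width scanline."""
    assert len(left_row) == len(right_row)
    return squeeze_half_width(left_row) + squeeze_half_width(right_row)

left  = ["L"] * 1920   # stand-in for one scanline of the left-eye frame
right = ["R"] * 1920   # stand-in for the matching right-eye scanline
packed = side_by_side(left, right)

print(len(packed))               # 1920 -- two frames' worth in one frame's width
print(packed[959], packed[960])  # L R -- the seam between the two eyes
```

Each eye loses half its horizontal samples, but both views fit in one ordinary 1920-wide frame, which is exactly what a side-by-side-aware display un-squeezes on playback.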
When uploading a 3D video to YouTube, YouTube requires you to upload in “side by side” format, and then YouTube's playback capability can convert the image into whatever 3D display method you need (which could be side-by-side, or checkerboard, or red/blue anaglyph, or any other number of 3D display methods). But the important thing to understand here is that YouTube is expecting a side-by-side recording, and to some degree, that's sort of what the Lumix lens does. It has two lenses side-by-side, it projects each eye's image onto each side of the frame, and the resulting image on the sensor is a sort of a side-by-side image.
Can you record video in side-by-side mode on a GH2 or AF100, and upload it to YouTube, and get 3D video? Um, not really. Sort of. Not exactly. Kind of. Because the lens doesn't do a true, proper side-by-side, the two images end up off-center for video purposes; things are “off”. If you really, really cross your eyes you can sort of make it happen, but it's not designed for that and it's got problems. Probably the worst problem is that everything is really fat and stretched out, but … we'll get to that. For now, let's leave it at that: it's got problems.
So – instead of making the lens fit the workflow that we want, how about just examining what the lens actually does? I dug into it to try to find out exactly what this lens is doing, and what we would have to do to make it work for shooting real 3D. To find out, I took off the contacts mask so the lens was working in regular 3D mode, and framed up and shot a DSC Labs Chroma Du Monde chart. The reason I chose a chart is because I could precisely square up the camera to the chart (in terms of left-to-right) and keep the framing consistent (using the arrowheads). Plus, it's exactly 16x9-shaped. I didn't do the greatest job framing it up, but hey – it got me where I was going, so I'm gonna roll with it. The LCD showed a full-frame chart image, so I firmly locked the tripod off and took a shot of the chart, in 3D mode. First, here's what the chart looks like, from the 3D framing. This is what it looks like in the viewfinder when you frame the chart up, when the 3D lens is attached to the GH2 and the GH2 is doing its internal processing to turn its two-lens image into one viewfinder image:
Chart-Full-Frame.jpg
Just what you'd expect, right? And, in a side-by-side 3D recording, here's a mockup of what I expected the 3D lens to produce when I covered the contacts, so that the camera would record exactly what the lens sees:
Chart-Sbs-Mockup.jpg
That's what I would have expected the lens to produce – a proper anamorphic side-by-side image, ready for 3D viewing. But is that what it really does?
I pulled off the lens, re-installed the paper shim covering the lens contacts, and reattached the lens. That let the viewfinder actually display what the camera was really seeing, and let me take a still shot of it. And here's where the surprises began.
Chart-actual-shot.jpg
Wow. Not what I was expecting at all. What's happening here? There are two side-by-side images, yes, but why are they so tiny? Well, it turns out that the lenses in the 3D lens aren't anamorphic after all. They're spherical. The small charts on the recording are recorded as 16x9 images, so that's proof right there that it's not an anamorphic squeeze. Which explains all the wasted room above and below the chart – that stuff is just cropped off to make a proper 3D image. And there's a lot of space on the sides that's just cropped off too – what's with that? Well, I think there's allowance for vignetting (look at the middle of the frame and in the corners, you can see how there's vignetting there). So what we're seeing here is not a case of two 960x1080 images being stuck together to make one 1920x1080 image... instead, we're seeing two much-smaller images that are being pieced back together by the camera into one full image. And they're not anamorphic, they're normal (spherical). This is not what you want as a direct 3D recording, for one thing – you want two side-by-side full-height 960x1080 squeezed images. That's what works best, and that's what YouTube 3D is expecting (or any 3DTV or monitor that's expecting to receive a side-by-side signal). That's not what this lens delivers.
Secondly, the realization settles in when you go into Photoshop and measure the size of the little charts: the GH2's charts measure about 740 x 417. That's lower than standard-def (16x9 standard-def video is 864x480 in terms of square pixels, or 720x480 in anamorphic pixels). The AF100 fares a little better; because its sensor is ever so slightly smaller, the image from the lens is projected onto more pixels so you get about 785 x 442 of usable image area. So what you really end up with, what you really have to work with, is two less-than-SD images. Bah. But, on the other hand, SD or not, they're genuine 3D, genuine stereoscopic images, so... can you use them to make a 3D video?
Yes, that you can. You can certainly (if you want to go through the effort) use a GH2 or AF100 to shoot standard-def 3D video. Framing is a problem, but once you solve that, you can end up with two video images. Go into post and stretch each side of the video to become anamorphically squeezed to 960x1080, and you can then render out an uprezzed HD file in side-by-side that will be understood by any 3DTV, by YouTube 3D, or playable on a 3D monitor. It won't be legitimate HD, it'll be uprezzed SD, but it can be done.
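Here's a quick Python sketch of the arithmetic involved in that stretch, using the eye-window measurements from above (they're approximate, so the scale factors are too):

```python
# Measured eye-window sizes (approximate) and targets from the article
gh2_eye   = (740, 417)
af100_eye = (785, 442)
sd_square = (864, 480)    # 16x9 SD in square pixels
target    = (960, 1080)   # half of a 1920x1080 side-by-side frame

def stretch_factors(eye, target=target):
    """Per-axis scale factors to turn one spherical eye image into a 960x1080 anamorphic half-frame."""
    return (target[0] / eye[0], target[1] / eye[1])

# How much of an SD frame's pixel count do we actually start with?
for name, (w, h) in (("GH2", gh2_eye), ("AF100", af100_eye)):
    frac = (w * h) / (sd_square[0] * sd_square[1])
    sx, sy = stretch_factors((w, h))
    print(f"{name}: {frac:.0%} of SD, stretch x{sx:.2f} horizontally, x{sy:.2f} vertically")
```

Note that the vertical stretch is roughly twice the horizontal one – that difference is what turns the spherical eye image into the 2:1 anamorphic squeeze a side-by-side frame needs.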
Here's an example of a video that I shot on the AF100, in 3D mode, showing what kind of results you can get from the Lumix 3D lens in video mode.
http://www.youtube.com/watch?v=jUbZmrF8vws
(video isn't embedded because the embedded player isn't letting it show in 3D. Go to YouTube to see it in 3D; of course you'll need 3D glasses to see it properly).
So what do you need to do, to use the Lumix 3D lens on an AF100 or GH2 to shoot 3D-SD video? Well, there are two things you need to construct:
- a paper shim to cover up the electrical contacts in the lens, and
- an overlay for the LCD panel, to show you what the true framing is.
3D-lens-with-paper-shim.jpg
The monitor overlay is a bit more work. What I did is use a side-by-side charts image to create a template, which blacks out everything other than what will be used in the final combined video frames. I then cropped it to reflect the image size difference between the AF100's 17.8mm x 10mm sensor and the GH2's 18.8mm x 10.6mm sensor. I then scaled that to the exact size of the LCD (3” x 1.688” on the AF100) and printed it out. (You can download the template for the AF100 here, or the template for the GH2 here; print them out at 300 DPI.) Then, I took some leftover screen protectors from my old BlackBerry Storm cell phone, which has a screen that's exactly 3” wide, but a little too tall. I taped the screen protector over my printed sheet so that the template lined up exactly, and then colored in the screen protector with a Sharpie permanent marker to cover all the unused area and leave the imaging windows clear. Then, I sliced off the excess vertical size of the screen protector and voila – a perfect overlay for the AF100's LCD screen, with two windows that should identically show the framing for a 3D stereoscopic image. Here's what the AF100's LCD looks like, with the 3D lens attached and the framing overlay installed on the LCD:
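The print-size math, if you want to roll your own template, is just inches times DPI. A quick Python sketch, assuming the AF100's 3” x 1.688” LCD and a 300 DPI printout:

```python
def print_pixels(width_in, height_in, dpi=300):
    """Pixel dimensions needed to print a template at the given physical size and DPI."""
    return (round(width_in * dpi), round(height_in * dpi))

af100_lcd = print_pixels(3.0, 1.688)
print(af100_lcd)  # (900, 506) -- template canvas size for the AF100's LCD
```

The GH2's smaller screen would need its own dimensions plugged in; the point is just that the template canvas must be built at exactly screen-size-times-DPI, or the overlay windows won't line up with the sensor's 3D windows.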
LCD-Overlay-In-Use.jpg
(note – there are probably better screen protectors to use than the BlackBerry Storm ones, those are just ones that I happened to have lying around and the width was ideal, so I used 'em.) And, if I had to do it over again, I wouldn't try to make the area outside the 3D windows opaque, as it's really annoying not being able to see the screen displays underneath. You can see them in the viewfinder, of course, but in retrospect I think a darker translucent screen would work out better. Maybe some window-tint film that you cut windows out of.
The BlackBerry screen protectors are too big for the GH2's smaller screen, so you'd want to find some transparent overlay material or a screen protector for a different phone that would work for that display. You cannot use the same overlay for the GH2 as you use for the AF100. Not only are the screen sizes different, but the size and positioning of the 3D windows are different too, due to the difference in sensor size between them.
Oh – does this mean the 3D lens would work on a GH1 in video mode? Sure, I believe it would. I don't have a GH1 to test with, but I can't see why it would work any differently from the GH2, as long as you covered up the contacts.
Now, the big question – why did Panasonic disable this lens for video recording? Is it some massive conspiracy or “deliberate crippling,” as is so often claimed? I'd say no – I'd say that they disabled it for video recording because it does a fairly lousy job of 3D for video! When the best you can get out of it is standard-def, and you have to jump through hoops to get that, would you really want that? The 3D lens works fine for stills because the stills are starting out at 5K resolution! The still frame is 4976 pixels wide, so once you extract out the 3D windows from that frame, guess what size they are? 1920 x 1080. You can take stills with 1920x1080 resolution, which is decent enough to be worthwhile. But in video, you don't have 4976 pixels wide, you only have 1920x1080 to start with, and by the time you extract out each eye's frame, you're left with a less-than-SD 740x417 on the GH2, or 785x442 on the AF100. Hardly worth it, is it? As a lark, for fun, sure. But think of the customer-service complaints they'd receive if they tried to market this as a complete 3D video solution!
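One neat bit of arithmetic here: the eye windows occupy almost exactly the same fraction of the frame width in stills mode and in video mode – it's the same optics either way, just far fewer pixels to project onto in video. A quick Python check, using the numbers from this article:

```python
stills_width = 4976   # GH2 3D still frame width, per the article
video_width  = 1920   # HD video frame width

stills_eye = 1920     # extracted eye window from a still
video_eye  = 740      # measured eye window from GH2 video

print(round(stills_eye / stills_width, 3))  # ~0.386 of the frame width
print(round(video_eye / video_width, 3))    # ~0.385 -- same window, fewer pixels
```

Same ~38.5% slice of the sensor per eye in both modes – which is why stills come out a usable 1920x1080 per eye while video comes out sub-SD.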
It's a bit of work and hassle to even get that standard-def 3D video out of it. To minimize that they would have had to add even more hardware, which would have driven up the price. The GH2 has the hardware image-processing capability to output appropriate 3D image information to a TV or monitor in order to display that 3D image. But the GH2 likely doesn't have that same hardware for video! It'd take realtime scaling of two images, upconverting and anamorphosizing them and combining them into side-by-side mode... it would be expected to be able to deliver that on the LCD as well as on the outputs, and it just doesn't have the hardware to do that. And the AF100 doesn't have any 3D hardware in it; it's not a stills camera, it doesn't do 3D stills, and it doesn't do 3D video. So it makes perfect sense to me why they disabled video recording when the 3D lens is attached.
In summation – it is kind of fun to play with. And if you only need standard-def, you could build a workflow around it – as long as you're willing to accept some severe limitations, such as no zoom, no focusing, no shallow DOF, and a fixed iris of a very, very dark f/12 which means that it's really only suitable for broad daylight use. But if that's what you're shooting, and you're willing to make a shim and an overlay and to appropriately squeeze and extract in post, yes you can make 3D with a GH1, GH2, or AF100 and the Lumix 3D lens.
The fixed iris is a problem, but I did my testing on the AF100 because its built-in neutral density filters made that pretty much effortless. I set the low gain to 200 ISO, and the mid gain to 400 ISO. That way I could “stop down” by one stop by going to low gain, or I could leave the gain in the middle and stop down by two stops by using the ND filter wheel, or three stops by combining the ND filter wheel and the low gain, etc. Using that combination I was able to keep the shutter at the appropriate speed for video, work with the limited fixed iris, and still be able to adjust exposure to suit just about any circumstance. Obviously it would not be so easy on the GH1 or GH2 as they lack built-in ND filters and the 3D lens won't let you attach filters. You'd probably have to rely entirely on ISO and shutter speed to have any measure of exposure control, and that would mean some potentially significant shifts in the graininess and motion rendition of your video. If you have a mattebox and slide-in ND filters, that would overcome any exposure difficulties.
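If you like, you can reduce all that exposure juggling to counting stops. Here's a Python sketch – the ND wheel values are my assumption (a typical 1/4, 1/16, 1/64 wheel giving 2, 4, and 6 stops; check your camera's actual markings):

```python
import math
import itertools

# AF100 controls as used in the article: gain presets as ISO values,
# ND wheel positions as assumed light-transmission fractions.
iso_presets  = [200, 400]             # low / mid gain
nd_fractions = [1, 1/4, 1/16, 1/64]   # clear + assumed 2/4/6-stop ND positions

def stops_below_base(iso, nd, base_iso=400):
    """Total stops of light reduction relative to mid gain with no ND."""
    return math.log2(base_iso / iso) + math.log2(1 / nd)

# Every available exposure offset, in whole stops
offsets = sorted({round(stops_below_base(iso, nd))
                  for iso, nd in itertools.product(iso_presets, nd_fractions)})
print(offsets)  # [0, 1, 2, 3, 4, 5, 6, 7] -- stop-by-stop control despite the fixed f/12 iris
```

With those assumed values, the ISO presets and ND wheel combine into a continuous ladder of one-stop steps over a seven-stop range, which is why the fixed iris ends up being much less painful on the AF100 than on the GH-series bodies.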
For those who want to experiment further, I calculated the filters necessary to convert the AF100's raw 3D-lens video to a properly-stretched 3D side-by-side 1920x1080 image in Premiere Pro CS5. The process is really simple, once you know the appropriate effects and numbers to use. Establish a new AVCHD project, presumably 1080/24p; then...
5. On the right video stream, apply the Distort->Transform filter, and then the Transform->Crop filter, with the following settings:
Right-Eye-CS5-Filter-Settings.jpg
6. Render your footage out to a 1920x1080 file, which you can then upload to YouTube 3D or use on a 3D Blu-ray or play out to a 3D monitor.
Note – because so much of the space above and below the usable frame is, well, wasted … that does give you substantial opportunities for reframing in post, at least on the vertical. Just change the “Position” parameters in the Distort->Transform filter. There's little to no reframing that can be done on the horizontal, but on the vertical you do have most of the frame height to work with. And, further note -- these settings are for the AF100. The numbers for the GH2 would be slightly different since its sensor is a slightly different size, and that means the position of the 3D windows, and the size of the 3D windows, would be slightly different from the AF100's.
What's it like editing 3D on a 2D system? Well, the phrase “not that much fun” comes to mind, but hey, since we're talking about a way to cheat, here's the rest of what I did. Premiere lets you scale the playback window to various magnification sizes, and you can drag the borders of the preview monitor to increase or decrease the size of the monitor. I put up the original shot of the chart and then worked the combination of magnification and playback monitor size until the playback/preview monitor was showing the chart at full size. That meant a magnification of 150% on my 1920x1080 monitor, and some dragging of the edges to get it to fit. At that point, I just used the left-side chart as my framing, and then proceeded to edit as if it was regular 2D. Couldn't watch it in 3D, but … meh, it was test footage, not some epic, so I cut it as 2D, and rendered it out as 3D afterwards, and uploaded it to YouTube so I could view it in its 3D glory using my $1.68 DealExtreme 3D glasses (and hey, that's TWO pairs, for $1.68, free shipping included!) Did I mention this was a no-budget project?
In summary – it's a neat toy. Quite well matched to the GH2 for 3D photography, but not really well suited to 3D cinematography at all. However, if you want to jump through hoops, you can have one of the simplest, easiest-to-use 3D rigs on the market. Just don't expect much more than standard-def footage from it, and be prepared to make compromises in how you monitor the footage and how you work with it in post.
That was disappointing, but really, not all that surprising; the lens is a very inexpensive, fixed-focus, fixed-iris, fixed-focal-length lens that was designed for one task – taking 3D stills. However, an enterprising YouTube poster named "540iayt" showed that you actually could defeat the lens's lockout so that it could be made to work on the GH2 in video mode, if you just covered up the lens electrical contacts with a strip of paper!
Well, heck... that got my curiosity going, so I ordered one. It was on backorder for a while, but it showed up a couple of days ago, and since then I've been experimenting with it. Can you, really, use just this little $249 lens to turn your AF100 or GH2 into a 3D moviemaking machine?
The short answer: yes. The longer answer: but you're probably not gonna bother. There are many good reasons why they disabled video recording with this lens!
Okay, first things first – the lens itself. It's tiny. Ridiculously tiny and featherweight. How tiny? Let's just say that the Lumix 20mm f/1.7 “pancake” lens looks big next to it! The 3D lens seems like it's about half the weight and even thinner than the 20mm.
3D-and-20mm-profile.jpg
3D-and-20mm-front.jpg
And, it's entirely and completely “fixed” in all ways – fixed iris (f/12), fixed focal length (12.5mm), fixed focus (anything from two feet to infinity is in focus). It's a very minimalist lens. Don't let the 12.5mm focal length fool you though, this is not a wide-angle lens! While its focal length is technically 12.5mm, you have to take into account that it's two lenses recording to (less than) half the frame each –so it's as if it's 12.5mm on a 2/3” sensor, or actually probably more like a 1/2” sensor. The actual field of view is more on par with a 35mm lens on an m43 camera – meaning it's a bit of a telephoto; that'd be about equivalent to 70mm on a full-frame 35mm stills camera.
The GH2 and the AF100 both use the Micro Four Thirds lens mount, which provides for 11 electrical contact points, which allow the camera to communicate with the lens. When these contacts are used properly they can tell the lens what iris to go to, they can power the lens' autofocus motor, they can read the lens' iris and focus and zoom positions, they can power up the lens' optical image stabilizer, and they can allow the camera body to pass data to the lens such that the body can actually update the firmware inside the lens. And apparently, there is some protocol in there that allows the lens to tell the camera “I am a 3D lens”.
3D-lens-with-contacts.jpg
Now, looking at the contacts, I thought – what practical purpose do they serve, on this lens? There is no iris (the lens is fixed at f/12, you cannot open up or stop down the iris), so obviously there's no iris control going on. There is no focus ring (the lens is a fixed-focus lens, with everything from about two feet to infinity in sharp focus), so there's no autofocusing going to happen. And there's no zoom, as it's a fixed-focal-length lens, at 12.5mm. So really, if you disabled all the electrical contacts, what would you lose? Nothing, except that the lens can no longer tell the camera that it's a 3D lens, and therefore the camera won't know to shut down video recording. And therein is the secret – if you don't tell the camera there's a 3D lens attached, the camera will just record anyway – and what it records will be a side-by-side image that you could then take into post and mux into a true 3D image, right?
Eh, not so much. Sort of, but … not quite. Okay, well, let's start with what the lens is, so this can all make some sense. The 3D lens is actually two lenses, side by side. It records a stereoscopic image, sort of like your eyes do, with there being a left lens and a right lens, to simulate how you have a left eye and a right eye. Each lens sees a very slightly different perspective of whatever you're pointing the camera at, and when those two perspectives are shown simultaneously (on a 3D-capable device) you get a 3D image.
How does a 3D image get recorded? In stills mode, it uses a .MPO file, and also takes a JPG at the same time, so you can view the image in 2D mode (by looking at the .JPG) or in 3D mode (when connected to a 3D-capable display device, such as a 3D HDTV, it will use the .MPO file instead of the .JPG). The GH2 has the capability to represent the image from two lenses as one single 2D image on the LCD; there's obviously quite a bit of processing going on in order to do that.
What about video? Well, 3D video is recorded and stored in a number of ways; when using two cameras you'll often record each eye separately and merge them in post into a 3D file, but a very common technique for 3D distribution is to anamorphically squeeze each lens's image onto one frame, resulting in both lens views being stored on one frame, side by side (this is called the “side-by-side” format). Yes, this does mean you lose half your horizontal resolution, but … hey, you have to fit two frames' worth of information in one frame's worth of space somehow, and side-by-side is a very common 3D video recording format.
Antelope-2D.jpg
Antelope-3D.jpg
When uploading a 3D video to YouTube, YouTube requires you to upload in “side by side” format, and then YouTube's playback capability can convert the image into whatever 3D display method you need (which could be side-by-side, or checkerboard, or red/blue anaglyph, or any other number of 3D display methods). But the important thing to understand here is that YouTube is expecting a side-by-side recording, and to some degree, that's sort of what the Lumix lens does. It has two lenses side-by-side, it projects each eye's image onto each side of the frame, and the resulting image on the sensor is a sort of a side-by-side image.
Can you record video in side-by-side mode on a GH2 or AF100, and upload it to YouTube, and get 3D video? Um, not really. Sort of. Not exactly. Kind of. Because the lens doesn't do a true, proper side by side, you end up with the images being not centered for true video purposes; things are “off”. If you really, really cross your eyes you can sort of make it happen, but it's not designed for that and it's got problems. Probably the worst problem is that everything is really fat and stretched out, but … we'll get to that. For now, let's leave it at that it's got problems.
So – instead of making the lens fit the workflow that we want, how about just examining what the lens actually does? I dug into it to try to find out exactly what this lens is doing, and what we would have to do to make it work for shooting real 3D. To find out, I took off the contacts mask so the lens was working in regular 3D mode, and framed up and shot a DSC Labs Chroma Du Monde chart. The reason I chose a chart is because I could precisely square up the camera to the chart (in terms of left-to-right) and keep the framing consistent (using the arrowheads). Plus, it's exactly 16x9-shaped. I didn't do the greatest job framing it up, but hey – it got me where I was going, so I'm gonna roll with it. The LCD showed a full-frame chart image, so I firmly locked the tripod off and took a shot of the chart, in 3D mode. First, here's what the chart looks like, from the 3D framing. This is what it looks like in the viewfinder when you frame the chart up, when the 3D lens is attached to the GH2 and the GH2 is doing its internal processing to turn its two-lens image into one viewfinder image:
Chart-Full-Frame.jpg
Just what you'd expect, right? And, in a side-by-side 3D recording, here's a mockup of what I expected the 3D lens to produce when I covered the contacts, so that the camera would record exactly what the lens sees:
Chart-Sbs-Mockup.jpg
That's what I would have expected the lens to produce – a proper anamorphic side-by-side image, ready for 3D viewing. But is that what it really does?
I pulled off the lens, re-installed the paper shim covering the lens contacts, and reattached the lens. That let the viewfinder actually display what the camera was really seeing, and let me take a still shot of it. And here's where the surprises began.
Chart-actual-shot.jpg
Wow. Not what I was expecting at all. What's happening here? There are two side-by-side images, yes, but why are they so tiny? Well, it turns out that the lenses in the 3D lens aren't anamorphic after all. They're spherical. The small charts on the recording are recorded as 16x9 images, so that's proof right there that it's not an anamorphic squeeze. Which explains all the wasted room above and below the chart – that stuff is just cropped off to make a proper 3D image. And there's a lot of space on the sides that's just cropped off too – what's with that? Well, I think there's allowance for vignetting (look at the middle of the frame and in the corners, you can see how there's vignetting there). So what we're seeing here is not a case of two 960x1080 images being stuck together to make one 1920x1080 image... instead, we're seeing two much-smaller images that are being pieced back together by the camera into one full image. And they're not anamorphic, they're normal (spherical). This is not what you want as a direct 3D recording, for one thing – you want two side-by-side full-height 960x1080 squeezed images. That's what works best, and that's what YouTube 3D is expecting (or any 3DTV or monitor that's expecting to receive a side-by-side signal). That's not what this lens delivers.
Secondly, the realization settles in when you go into PhotoShop and measure the size of the little charts: the GH2's charts measure about 740 x 417. That's lower than standard-def (16x9 standard-def video is 864x480 in terms of square pixels, or 720x480 in anamorphic pixels). The AF100 fares a little better; because its sensor is ever so slightly smaller, the image from the lens is projected onto more pixels so you get about 785 x 442 of usable image area. So what you really end up with, what you really have to work with, is two less-than-SD images. Bah. But, on the other hand, SD or not, they're genuine 3D, genuine stereoscopic images, so... can you use them to make a 3D video?
Yes, that you can. You can certainly (if you want to go through the effort) use a GH2 or AF100 to shoot standard-def 3D video. Framing is a problem, but once you solve that, you can end up with two video images. Go into post and stretch each side of the video to become anamorphically squeezed to 960x1080, and you can then render out an uprezzed HD file in side-by-side that will be understood by any 3DTV, by YouTube 3D, or playable on a 3D monitor. It won't be legitimate HD, it'll be uprezzed SD, but it can be done.
Here's an example of a video that I shot on the AF100, in 3D mode, showing what kind of results you can get from the Lumix 3D lens in video mode.
http://www.youtube.com/watch?v=jUbZmrF8vws
(video isn't embedded because the embedded player isn't letting it show in 3D. Go to YouTube to see it in 3D; of course you'll need 3D glasses to see it properly).
So what do you need to do, to use the Lumix 3D lens on an AF100 or GH2 to shoot 3D-SD video? Well, there are two things you need to construct:
- a paper shim to cover up the electrical contacts in the lens, and
- an overlay for the LCD panel, to show you what the true framing is.
3D-lens-with-paper-shim.jpg
The monitor overlay is a bit more work. What I did was use a side-by-side charts image to create a template, which blacks out everything other than what will be used in the final combined video frames. I then cropped it to reflect the image size difference between the AF100's 17.8mm x 10mm sensor and the GH2's 18.8mm x 10.6mm sensor. I then scaled that to the exact size of the LCD (3” x 1.688” on the AF100) and printed it out. (You can download the template for the AF100 here, or the template for the GH2 here; print them out at 300 DPI.) Then, I took some leftover screen protectors from my old Blackberry Storm cell phone, which has a screen that's exactly 3” wide, but a little too tall. I taped the screen protector over my printed sheet so that the template lined up exactly, and then colored in the screen protector with a Sharpie permanent marker to cover all the unused area and leave the imaging windows clear. Then, I sliced off the excess vertical size of the screen protector and voila – a perfect overlay for the AF100's LCD screen, with two windows that exactly show the framing for a 3D stereoscopic image. Here's what the AF100's LCD looks like, with the 3D lens attached and the framing overlay installed on the LCD:
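If you're making your own template rather than downloading mine, the scaling arithmetic is simple. Here's the calculation for the AF100's LCD using the dimensions and print resolution mentioned above:

```python
# Pixel size of the printed overlay template: the LCD's physical
# dimensions multiplied by the print resolution.
LCD_W_IN, LCD_H_IN = 3.0, 1.688  # AF100 LCD size in inches
DPI = 300                        # print resolution for the template

template_px = (round(LCD_W_IN * DPI), round(LCD_H_IN * DPI))
print(template_px)  # (900, 506)
```

So the AF100 template should come out to roughly 900 x 506 pixels when printed at 300 DPI; the GH2's smaller screen would need its own numbers.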
LCD-Overlay-In-Use.jpg
(note – there are probably better screen protectors to use than the Blackberry Storm ones, those are just ones that I happened to have lying around and the width was ideal, so I used 'em.) And, if I had to do it over again, I wouldn't try to make the area outside the 3D windows opaque, as it's really annoying not being able to see the screen displays underneath. You can see them in the viewfinder, of course, but in retrospect I think a darker translucent screen would work out better. Maybe some window-tint film that you cut windows out of.
The Blackberry screen protectors are too big for the GH2's smaller screen, so you'd want to find some transparent overlay material or a screen protector for a different phone that would work for that display. You cannot use the same overlay for the GH2 as you use for the AF100. Not only are the screen sizes different, but the size and positioning of the 3D windows are different too, due to the difference in sensor size between them.
Oh – does this mean the 3D lens would work on a GH1 in video mode? Sure, I believe it would. I don't have a GH1 to test with, but I can't see why it would work any differently from the GH2, as long as you covered up the contacts.
Now, the big question – why did Panasonic disable this lens for video recording? Is it some massive conspiracy or “deliberate crippling,” as is so often claimed? I'd say no – I'd say that they disabled it for video recording because it does a fairly lousy job of 3D for video! When the best you can get out of it is standard-def, and you have to jump through hoops to get that, would you really want that? The 3D lens works fine for stills because the stills are starting out at 5K resolution! The still frame is 4976 pixels wide, so once you extract out the 3D windows from that frame, guess what size they are? 1920 x 1080. You can take stills with 1920x1080 resolution, which is decent enough to be worthwhile. But in video, you don't have 4976 pixels wide, you only have 1920x1080 to start with, and by the time you extract out each eye's frame, you're left with a less-than-SD 740x417 on the GH2, or 785x442 on the AF100. Hardly worth it, is it? As a lark, for fun, sure. But think of the customer-service complaints they'd receive if they tried to market this as a complete 3D video solution!
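The resolution argument above is easy to sanity-check with a little arithmetic. The 4976-pixel still width is from the camera's spec; the 2800-pixel still height for the 16:9 frame is my assumption, since the article only states the width:

```python
# The per-eye windows that yield 1920x1080 from a 16:9 still shrink
# proportionally when the source is only a 1920x1080 video frame.
STILL_W, STILL_H = 4976, 2800           # assumed GH2 16:9 still frame
EYE_STILL_W, EYE_STILL_H = 1920, 1080   # per-eye window in a still

video_eye_w = round(EYE_STILL_W * 1920 / STILL_W)
video_eye_h = round(EYE_STILL_H * 1080 / STILL_H)
print(video_eye_w, video_eye_h)  # 741 417 -- matching the ~740 x 417 measured
```

The result lines up almost exactly with the ~740 x 417 I measured off the GH2's video frame, which supports the idea that the windows scale straight down with the recording resolution.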
It's a bit of work and hassle to even get that standard-def 3D video out of it. To minimize that hassle, they would have had to add even more hardware, which would have driven up the price. The GH2 has the hardware image-processing capability to output appropriate 3D image information to a TV or monitor in order to display a 3D still. But the GH2 likely doesn't have that same hardware for video! It'd take realtime scaling of two images, upconverting and anamorphosizing them and combining them into side-by-side mode... it would be expected to deliver that on the LCD as well as on the outputs, and it just doesn't have the hardware to do that. And the AF100 doesn't have any 3D hardware in it; it's not a stills camera, it doesn't do 3D stills, and it doesn't do 3D video. So it makes perfect sense to me why they disabled video recording when the 3D lens is attached.
In summation – it is kind of fun to play with. And if you only need standard-def, you could build a workflow around it – as long as you're willing to accept some severe limitations, such as no zoom, no focusing, no shallow DOF, and a fixed iris of a very, very dark f/12, which means that it's really only suitable for broad daylight use. But if that's what you're shooting, and you're willing to make a shim and an overlay and to appropriately squeeze and extract in post, yes you can make 3D with a GH1, GH2, or AF100 and the Lumix 3D lens.
The fixed iris is a problem, but I did my testing on the AF100 because its built-in neutral density filters made that pretty much effortless. I set the low gain to 200 ISO, and the mid gain to 400 ISO. That way I could “stop down” by one stop by going to low gain, or I could leave the gain in the middle and stop down by two stops by using the ND filter wheel, or three stops by combining the ND filter wheel and the low gain, etc. Using that combination I was able to keep the shutter at the appropriate speed for video, work with the limited fixed iris, and still be able to adjust exposure to suit just about any circumstance. Obviously it would not be so easy on the GH1 or GH2 as they lack built-in ND filters and the 3D lens won't let you attach filters. You'd probably have to rely entirely on ISO and shutter speed to have any measure of exposure control, and that would mean some potentially significant shifts in the graininess and motion rendition of your video. If you have a mattebox and slide-in ND filters, that would overcome any exposure difficulties.
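The gain/ND bookkeeping above is just stop arithmetic. A small sketch of it, where the 200/400 ISO presets are the ones I set, and the 1/4 and 1/16 ND strengths are my assumption about what the filter wheel provides:

```python
# Exposure offsets with a fixed f/12 iris and a fixed video shutter:
# only gain and ND can move exposure, and each contribution is a log2.
import math

MID_ISO = 400  # the mid-gain preset used as the exposure baseline

def stops_down(iso: int, nd_fraction: float = 1.0) -> float:
    """Total light reduction, in stops, relative to mid gain with no ND."""
    return math.log2(MID_ISO / iso) + math.log2(1 / nd_fraction)

print(stops_down(200))        # 1.0 -- low gain alone
print(stops_down(400, 1/4))   # 2.0 -- assumed 1/4 ND alone
print(stops_down(200, 1/4))   # 3.0 -- low gain plus 1/4 ND
```

Chaining the combinations that way gives you one-stop steps from zero on down, which is what made exposure workable despite the fixed iris.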
For those who want to experiment further, I calculated the filters necessary to convert the AF100's raw 3D-lens video to a properly-stretched 3D side-by-side 1920x1080 image in Premiere Pro CS5. The process is really simple, once you know the appropriate effects and numbers to use. Establish a new AVCHD project, presumably 1080/24p; then...
1. Import your video into the bin.
2. Copy it to the timeline twice – left eye on one track, right eye on the track below it. In my example, I put the left eye on Video 2, and the right eye on Video 1.
3. On the left video stream, apply the Distort->Transform filter, and then the Transform->Crop filter, with the following settings:
4. On the right video stream, apply the Distort->Transform filter, and then the Transform->Crop filter, with the following settings:
Right-Eye-CS5-Filter-Settings.jpg
5. Render your footage out to a 1920x1080 file, which you can then upload to YouTube 3D or use on a 3D Blu-ray or play out to a 3D monitor.
Note – because so much of the space above and below the usable frame is, well, wasted … that does give you substantial opportunities for reframing in post, at least on the vertical. Just change the “Position” parameters in the Distort->Transform filter. There's little to no reframing that can be done on the horizontal, but on the vertical you do have most of the frame height to work with. And, further note – these settings are for the AF100. The numbers for the GH2 would be slightly different since its sensor is a slightly different size, and that means the position of the 3D windows, and the size of the 3D windows, would be slightly different from the AF100's.
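To put a number on that vertical headroom: with the AF100's per-eye window only 442 pixels tall inside a 1080-pixel frame, the crop can slide a long way before it runs out of image. (This ignores the few pixels of vignetting allowance at the window edges.)

```python
# Vertical travel available to the per-eye crop window when reframing
# in post on the AF100.
FRAME_H = 1080  # full recorded frame height
EYE_H = 442     # AF100 per-eye window height, measured earlier

headroom = FRAME_H - EYE_H
print(headroom)  # 638 pixels of total vertical travel for the crop window
```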
What's it like editing 3D on a 2D system? Well, the phrase “not that much fun” comes to mind, but hey, since we're talking about a way to cheat, here's the rest of what I did. Premiere lets you scale the playback window to various magnification sizes, and you can drag the borders of the preview monitor to increase or decrease the size of the monitor. I put up the original shot of the chart and then worked the combination of magnification and playback monitor size until the playback/preview monitor was showing the chart at full size. That meant a magnification of 150% on my 1920x1080 monitor, and some dragging of the edges to get it to fit. At that point, I just used the left-side chart as my framing, and then proceeded to edit as if it was regular 2D. Couldn't watch it in 3D, but … meh, it was test footage, not some epic, so I cut it as 2D, and rendered it out as 3D afterwards, and uploaded it to YouTube so I could view it in its 3D glory using my $1.68 DealExtreme 3D glasses (and hey, that's TWO pairs, for $1.68, free shipping included!) Did I mention this was a no-budget project?
In summary – it's a neat toy. Quite well matched to the GH2 for 3D photography, but not really well suited to 3D cinematography at all. However, if you want to jump through hoops, you can have one of the simplest, easiest-to-use 3D rigs on the market. Just don't expect much more than standard-def footage from it, and be prepared to make compromises in how you monitor the footage and how you work with it in post.