Aliasing

How about a hypothetical camera with interchangeable OLPFs, so you could choose the one appropriate for the job? Or even, and this is way out, interchangeable sensors? Keep all the computational gubbins in the camera body and swap sensors as needed. So you could have a FF 24 MP sensor for stills, a DX 16 MP (say) one for sports/wildlife, and a DX/APS-C/S35 3+ MP one optimised for cine.
Everyone wins: the marketing chaps can sell you alternative sensors or upgraded bodies, and you (the shooter) only need to buy the sensor you need.
Bet Canikon don't see it that way, though.
Dave

Nikon already does something somewhat similar... on their high end you can set the camera to use either the full size of the sensor or a DX-sized crop.

Could be nice to see FF and APS-H Canon cameras use an APS-C (S35) crop when in video mode.
 
It's not so much the crop as the pixel pitch, which remains the same. Also, the sensor itself can be optimised (in terms of read/reset times, etc.) for the job it has to do.
Dave
 
Interchanging the OLPF could get rid of the aliasing in these DSLRs' video, but the end result would be to turn your camera into barely more than a standard-def camera.

They need to re-engineer the hardware to support full frame rate reading of the sensor, and proper downscaling. Without that, changing the OLPF won't really accomplish what you want at all. Because of the way these cameras read their chips, you're simply not going to get more than about 550 lines of resolution out of them.
 
Hence my second point about interchangeable sensors. Use a sensor appropriate to the job rather than trying to bodge it with a compromise.
Dave
 

The point to having a FF35 sensor and using a S35 crop of it for video would be two-fold:
1) the OLPF effectively becomes lower resolution, and thus closer to the intended output resolution;
2) the effective chip bandwidth becomes much lower, possibly to the point that the on-board electronics could process it efficiently.
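The bandwidth point can be sketched with some back-of-envelope arithmetic. All the numbers here (24 MP sensor, 1.6x crop factor, 30 fps) are illustrative assumptions, not any camera's actual specs:

```python
# Rough, illustrative pixel-throughput comparison. All numbers are
# assumptions for the sake of the sketch, not measured camera specs.
FF_PIXELS = 24_000_000      # hypothetical full-frame stills sensor
CROP_FACTOR = 1.6           # assumed linear FF -> S35/APS-C crop factor
FPS = 30                    # video frame rate

s35_pixels = FF_PIXELS / CROP_FACTOR**2   # sensor area shrinks by the square
ff_rate = FF_PIXELS * FPS                 # pixels/second, full sensor readout
s35_rate = s35_pixels * FPS               # pixels/second, cropped readout

print(f"Full-frame readout: {ff_rate / 1e6:.0f} Mpix/s")
print(f"S35-crop readout:   {s35_rate / 1e6:.0f} Mpix/s")
```

Under these assumptions, the cropped readout is roughly 2.5x less data per second, which is the sense in which a crop could bring the stream within reach of the on-board electronics.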
 
Would lowering megapixels/photosites help?
In terms of aliasing? Well, that's a complex question. Oversampling provides the ultimate way to combat aliasing, but there's a limit to how fast these chips can be read. The reason we have the problem is that the still-camera chips cannot be read out as fast as video rates demand, so shortcuts are taken (binning pixels and/or skipping lines, etc.). And that's where the aliasing is coming from.
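The skipping problem can be shown with a toy 1-D sketch (pure illustration, not a model of any camera's readout): fine detail beyond the output Nyquist limit folds back as a phantom lower frequency when samples are simply skipped, but vanishes when the signal is low-pass filtered before decimation.

```python
import numpy as np

N, DECIM = 900, 3                    # fine "sensor" samples, skip factor
x = np.arange(N) / N
fine = np.sin(2 * np.pi * 200 * x)   # detail at 200 cycles: beyond the
                                     # output Nyquist of 300 / 2 = 150

# "Line skipping": keep every 3rd sample, no filtering.
skipped = fine[::DECIM]

# Proper path: brick-wall low-pass below the new Nyquist, then decimate.
spec = np.fft.rfft(fine)
spec[N // (2 * DECIM):] = 0          # kill everything >= 150 cycles
filtered = np.fft.irfft(spec, N)[::DECIM]

# The skipped version shows a strong phantom at 100 cycles (300 - 200);
# the filtered version is essentially silent.
alias_bin = int(np.argmax(np.abs(np.fft.rfft(skipped))))
residual = float(np.abs(np.fft.rfft(filtered)).max())
print("skipped alias lands at", alias_bin, "cycles")
print("filtered residual energy:", residual)
```

The 200-cycle detail reappears at 100 cycles in the skipped version: a pattern that wasn't in the scene, which is exactly the false detail (moire) these cameras exhibit.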

So if they had a 3-megapixel chip, that should help enormously with read rates, sensitivity, dynamic range, and low noise (but it'd make for lousy still photos). So yes, that would help, but it would hurt these cameras' primary mission, which is still photos.

Ultimately a chip designed and optimized to produce video is what will combat the aliasing -- or, alternatively, when they get the still-camera burst rate fast enough to support video frame rates.
 
That's what I would consider the ideal. But not 2 MP; you'd want at least 3 MP. You have to factor in the resolution loss due to the Bayer pattern and demosaic process, so you need about 3 megapixels to deliver a truly sharp 1080p image. But yes, that would be the best of all possible worlds: incredible dynamic range, incredible sensitivity, tiny noise, razor-sharp images, and an optimally-tuned anti-alias filter. The only problem is that it would be lousy for shooting still photos, which is why it's not likely to happen on an HDSLR.
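The "about 3 megapixels" figure follows from simple arithmetic, if you assume a Bayer sensor resolves only about 80% of its pixel count linearly after demosaic (the 0.8 factor here is an illustrative assumption; real figures vary with the demosaic algorithm and OLPF):

```python
# Back-of-envelope for "why ~3 MP for sharp 1080p". The 0.8 effective
# linear resolution factor for Bayer demosaic is an assumption.
target_w, target_h = 1920, 1080
bayer_linear_eff = 0.8

need_w = target_w / bayer_linear_eff     # photosite columns needed
need_h = target_h / bayer_linear_eff     # photosite rows needed
megapixels = need_w * need_h / 1e6
print(f"~{megapixels:.1f} MP Bayer sensor for a truly sharp 1080p image")
```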
Amen to that!
All that we need/want is a good 1080p APS-C/s35mm video sensor.
Can it be so difficult?
Give us the D90/7D core with PROPER in-camera down-sampling to 1080p, that's all I ask :)
I don't care for true 2k, 3k, or 4k - I can't edit it, and I only output to SD DVD right now anyway!
Blu-Ray is my end-goal, nothing fancier.
It's simply a matter of 12-24 months of camera evolution and it will be in our hands! :)
 
Barry, so you're saying that if we blew up some of Philip Bloom's footage, it wouldn't look so good? He says that when he films he never zooms or pans, so that may have something to do with the quality and lack of aliasing. Most of these glamor shoots for the 5D Mk II / 7D have been shots of people and close-ups of faces, haven't they?

I want to believe that the 7D is rubbish but the footage I see looks lovely to me. I also realize that in the hands of a pro like Mr B, even a little standard def camera can look good.

So what about an HMC40 for the run and gun stuff and then for anything with faces, interviews, tripod mount stuff, a 7D? Seems like the perfect pieces of kit to me?
 
Why would you "want to believe a 7D is rubbish"? Why would you wish ill against any product?

If you blew up Bloom's footage, it'd look just like it does now, only bigger.

I've shot some (what I consider) really, really good looking stuff on a 7D. It's capable of great results. And I've shot some trash on it too, and found it very frustrating for anything wide/deep focus. But it's $1700! You've got to cut it a lot of slack for that!

All I'm doing is pointing out exactly how these things work. It's up to you to decide whether your scenarios would work within their limitations. If you're shooting faces, they can excel. The more that you can keep out of focus, the better they'll do. The more that's in sharp focus, the more potential for negative complications from aliasing.

They are not a magic bullet. They are not Red-killers. They're not sharper than conventional video cameras. Keep that all in perspective, and use them for what they're good for, and they can do astonishingly good things at an unprecedented low price point.
 
I'm viewing them more now as 35mm adapter killers, since those adapters don't work well in low light. The 7D and 5D excel in low light when you want DOF control.

Nothing really matches them there. Is it any wonder that Philip Bloom's videos emphasized those assets?
 
It's just that all the footage I've seen from these cameras looks fantastic. Problem is, I was all ready to grab the HMC40, and then I stumbled on the whole 7D discussion, so this got me a little confused and conflicted. Most of us don't have the lolly to go out and buy both cameras, so we just have to read all the reviews and figure out what's best based on what someone else says. Not easy.
 
I think Duke M. got it right. They are 35mm adapter killers, not Red killers. This is a huge advance, in a very light package. The cumbersomeness of those adapters is a thing of the past. The real issue now is panning with rolling shutter and aliasing. I believe RED knows this, and their redesigns of the Scarlet are probably related.
 
Another great article, Barry. It's good to know all of this before making a purchase. And then if we still buy, we at least know the camera's limitations and can try to work around them. I hope you'll scrutinize the 1D Mark IV when it hits the shelves as well. With its price tag of $4999, I'd hope they've made some improvements.
 
Kudos, Barry. It's usually said that a picture is worth a thousand words; in this case, I find your words are just as strong as the pictures. I've known about aliasing since my first S-VHS and Hi8 shots of a football field, and at the time I just didn't understand why a better (sharper) image looked worse than my VHS footage.

This also explains why footage I shot with my HVX200 of a plant in my front yard blowing in the wind looked very soft compared to the same shot taken with my 7D. When I put them on a timeline, showed them to several people, and asked them to pick the best one, they picked the 7D because it "appeared to be sharper".

I think this article should be required reading for everyone, whether they pick up a HD SLR or just continue to shoot with their regular video cameras.

Again, Barry, thanks from all of us for your wonderful and thoughtful explanation.

eric
 
HDSLRs are all the rage right now, offering unprecedented imaging at an amazing price point, but if there's an Achilles' heel, it's usually mentioned as "aliasing." So: aliasing. We keep talking about it, but what is it? And how will it affect you?

How do these aliasing issues compare with other HD cameras like the Sony EX3, Red One, etc.? Is it just a matter of more processing power to define the actual detail in the image?
 
The EX3 and Red One largely avoid all aliasing complications. They're engineered to provide video images and use an appropriate anti-alias filter.

The differences also extend to the chips; video and digital cinema cameras use sensors that are engineered and designed from the ground up to sustain readout at high enough frame rates. The HDSLRs use still-camera sensors that weren't designed for video usage. Accordingly, they have to "cheat" to get the information off the sensor fast enough, and that "cheating" is accomplished by only reading part of the sensor (either through pixel binning or through line skipping).

If you want alias-free imagery, you're going to want to look into a product that uses a sensor that's fast enough to be read out full-resolution at video frame rates.

(of course, video cameras can exhibit aliasing as well, depending on the strength of their anti-alias filters... it's just that the HDSLRs don't really have a choice, the aliasing comes about largely because they're trying to live in two worlds, stills and video, and they're doing so with a stills-oriented chip).
 
Barry, on a one hour documentary project targeted to TV distribution with a razor thin budget, and you had to pick one of your DSLR cameras right now, would you bring your GH1 or your 7D?
 
Neither, I'd use my HMC150. Right tool for the job, and the HMC150 excels at that type of work, way better than a DSLR.

But for sit-down interview shots, choosing between those two? No question I'd go with the GH1. It has longer record times (no 12-minute time limit), better 720/60 for the "reality" look if that's what the show calls for (and if it's 24p, then the sit-down interview is where the codec differences are completely eliminated), less color-fringing moire, and it's more video-friendly. Any perceived advantage of the 7D due to sensitivity/noise would be nullified by proper lighting, and the codec differences would be completely nullified, so there's pretty much no drawback and plenty of advantages to choosing the GH1 for that usage.

Unless Magic Lantern was available for the 7D; that might make the 7D the preferred choice.

But this really isn't the right thread to be discussing choices like that, this is about aliasing.
 
Yes, the issue with the Foveon is that if you look at a raw image, the colors are very under-saturated (compared to raw, un-matrixed video from a 3-chip camera or even a Bayer-pattern CFA camera). This means that overall colorimetry suffers, along with noise performance, because of the large matrix coefficients needed to get a reasonably saturated image. See p. 127 of Alan Roberts' Circles of Confusion for more information.

Also, the Foveon sensor does need an OLPF if aliasing is to be avoided. The Sigma cameras omitted the OLPF, which leads people to think it's unnecessary, but if you read the Foveon literature you will find references to the necessity of adequate optical filtering. Now, because the colours are co-sited, you don't get false colours from the moire and aliasing; instead you just get typically ugly luma moire and aliasing. Even with a Bayer-pattern CFA sensor used as it is in a DSLR (in stills mode), chroma moire is hardly ever an issue and can usually be removed effectively without ruining the look of the image, but with all sensor types, luma aliasing is a practically un-removable artifact.

Fact is - all sensors of all types will produce aliasing artifacts if they are not adequately optically filtered. There's no magic way around it. All you can do is eliminate the practical possibility of seeing them and do so without drastically reducing resolution.

Pixel binning is effectively a very poor downsampling filter, so binning, whether on sensor or further down the image chain, can only make aliasing worse, not better. Of course, nearest-neighbour sampling, which is what line skipping effectively is, is worse still. Neither should be seen as a proper alternative to a well-designed (and usually expensive to implement properly) anti-alias downsampling filter.
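The "binning is a poor filter" point can be made concrete: binning N pixels is a length-N box (moving average) filter, and its frequency response leaks badly above the new Nyquist, whereas a windowed-sinc (a stand-in here for a properly designed anti-alias filter; the length and window are illustrative choices, not any camera's actual design) attenuates that region strongly.

```python
import numpy as np

# Frequency-response sketch: 3-pixel binning vs. a windowed-sinc
# low-pass, both for 3:1 downsampling. Illustrative only.
DECIM = 3
taps_box = np.ones(DECIM) / DECIM            # binning = 3-tap box filter

M = 33                                       # assumed windowed-sinc length
n = np.arange(M) - (M - 1) / 2
sinc = np.sinc(n / DECIM) / DECIM            # ideal cutoff at new Nyquist (1/6)
taps_sinc = sinc * np.hamming(M)
taps_sinc /= taps_sinc.sum()                 # unity DC gain

def gain(taps, f):
    """|H(f)| at normalized frequency f (cycles/sample)."""
    k = np.arange(len(taps))
    return abs(np.sum(taps * np.exp(-2j * np.pi * f * k)))

f_alias = 0.25                               # well above the new Nyquist of 1/6
print("box gain at f=0.25:  ", gain(taps_box, f_alias))   # ~0.33: leaks through
print("sinc gain at f=0.25: ", gain(taps_sinc, f_alias))  # near zero
```

At a frequency that will fold back after 3:1 decimation, the box filter still passes about a third of the energy; the windowed-sinc suppresses it by orders of magnitude. That surviving third is exactly the energy that becomes aliasing.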

Great article Barry!

Graeme

The Foveon is effectively a three-chip system, it's just that all three chips are sandwiched into one physical place instead of being in three separate spots and having light directed at them with a prism. With all three chips cosited like that, it acts to the lens as a one-chip system.

Foveon inspires loyalty and hatred like no other technology I've seen. Graeme Nattress doesn't seem too big of a fan, with his main gripe seeming to be that silicon is a lousy optical filter, making the longer wavelengths noisier to gain up.

But inherently, using a Foveon could turn this whole situation upside down, depending on how they read the chip out, and what native res they put on the chip. If they read it as three separate chips, reading each color separately, then you could do proper pixel binning without skipping gaps, and therefore probably get much cleaner performance. But if they read the chip as one chip, maybe not so possible.
 