Aliasing
by Barry Green

HDSLRs are all the rage right now, offering unprecedented imaging at an amazing price point, but if there's an Achilles' heel, it's usually said to be "aliasing." So: aliasing. We keep talking about it, but what is it? And how will it affect you?

Aliasing, by way of a definition, is when a sampling system fails to accurately reproduce what it's attempting to sample. Instead of an accurate representation of what you're trying to capture, you get an inaccurate "alias" of it. Aliasing happens in audio recording, in motion, and in still frames; basically, it can happen in any sampling system. It happens when inaccurate or "false" data gets through and is captured by the system as if it were accurate information.

An excellent description of how aliasing happens in audio recording is on The Audio Fool's blog. In that example, you can see how sampling too high a frequency at too slow a sampling rate results in an artificial wave being created. It may look like an audio waveform, it may even sound like an audio waveform, but it's wrong. It's in no way an accurate representation of the sound it was attempting to capture. And an aliased waveform can sound awful – wordiq's definition of aliasing includes a playable waveform that demonstrates properly-sampled and aliased frequencies. Give it a listen here. (Note: the Audio Fool's blog is really an excellent description of aliasing, its effects, and how to defeat it, and all of its recommendations and techniques apply directly to video aliasing as well; it's a good read.)
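Here's a minimal numpy sketch of the same idea (the tone and sample rate are arbitrary numbers chosen for illustration, not anything from the blog post above): sample a 7 kHz tone at 8 kHz, far too slow to capture it, and the numbers you record are mathematically identical to the samples of a phase-inverted 1 kHz tone. The recording doesn't contain a degraded 7 kHz tone; it contains a 1 kHz tone that was never played.

```python
import numpy as np

fs = 8_000           # sampling rate: 8 kHz (an arbitrary choice for illustration)
f_in = 7_000         # input tone: 7 kHz, far above the Nyquist limit of fs/2 = 4 kHz
n = np.arange(32)    # sample indices

# Samples the recorder actually captures from the 7 kHz tone:
sampled = np.sin(2 * np.pi * f_in * n / fs)

# Samples of a phase-inverted 1 kHz tone. For every integer n,
# sin(2*pi*7000*n/8000) == -sin(2*pi*1000*n/8000).
alias = -np.sin(2 * np.pi * 1_000 * n / fs)

print(np.allclose(sampled, alias))   # True: the two are indistinguishable
```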

Aliasing of course isn't restricted to audio; it also affects motion. One superb example of aliasing, and the dangers it can cause in a totally different way, is the iPhone "rolling shutter" video of an airplane propeller.


Here we have an example of aliasing in motion that clearly shows the camera capturing "something," but what it's capturing is not, in any way, representative of the actual reality. It's sampled the propeller image, but the result is a completely inaccurate representation of what was actually happening. And, I think most people would agree, this is bad; they'd much rather have a proper representation of a spinning propeller. How would a proper imaging system have captured this? Probably with a blur for the propeller: it's spinning way too fast for any video system to capture accurately, so with no accurate detail to be had, the whole thing would render as nothing but a smooth blur.

Which raises an interesting point: I have heard people argue that capturing some detail is better than none, even if it's aliased detail, but – is that really true? Do folks think that this propeller looks better than it would have if it had just been a typical blurred propeller shot?

And actually, that's the counter-argument – some folks might genuinely prefer this propeller shot. Not because it's "accurate" but because they think it looks cool. And against that, there really can be no argument; everyone's individual taste may vary. I submit that in a quest for accuracy, this propeller shot would never pass muster, and neither would any other type of alias artifact. However, when it comes to video, and particularly the HDSLR phenomenon, people seem generally very happy to accept false detail and aliased artifacts instead of the blur that would otherwise occur if the HDSLR cameras were prevented from showing alias "detail." More on that later.

For further clarification, let's go with another example of aliasing in motion, the famous “wagon wheels” effect, where Hollywood western films would occasionally show a wagon travelling and the wheels would look like they were actually spinning backwards. This effect is caused by aliasing; the relatively low sampling rate of the 24fps film wasn't high enough to capture the actual speed of the wheels, and the resulting filmed imagery just looks “wrong.” To see how this could happen, imagine that the wagon wheel was moving so fast that it was able to complete one entire rotation every 1/24th of a second. And the film takes an image every 1/24th of a second. To keep track of it, let's assume that we've spray-painted a marker on the wheel at the top of one particular spoke, and we start our sequence when that spray-painted marker is at the absolute top dead center of the wheel's rotation. So now, as the wagon drives on and the wheel rotates, we happen to take our first film frame when the wheel is in the position so that the marker is at the absolute top dead center, and the film snaps that picture. Then, over the course of 1/24th of a second, the wheel completes a full rotation, so that the marker is again at TDC, and the film takes another image. Another rotation, another image, and on and on. What would that film look like when projected? Well, it'd look like the wagon wheel didn't move at all! Even though the wagon wheel was turning at a furiously fast rate of 1,440 RPM, the film's low temporal sampling would have captured samples of it at the exact same position each time, and therefore the projected film would show a wagon traveling across the desert with stationary wheels. It would look odd, it would look wrong.

So here's where the question comes in: wouldn't it look more appropriate to see the wagon wheels portrayed as a whirling blur, rather than seeing clearly-defined spokes that aren't moving at all? Is, in fact, seeing aliased detail better than seeing none? Okay, now let's take it a step further – let's take a scenario where the wheel doesn't quite complete a full rotation between frames. Let's use the example of a clock face, so the marked spoke at top-dead-center position would be referred to as pointing to "12 o'clock", and if it was pointing at 90 degrees to the right, we'd say it was at "3 o'clock." So in our example, let's take that fast-moving wagon wheel, and we start filming at the point where the spray-painted marker is exactly at 12 o'clock. Then, over the course of the next 1/24th of a second, the wagon wheel completes *almost* an entire rotation, so the spray-painted marker is now pointing at 11 o'clock, and the film takes a frame. Then the wheel continues on, completing almost an entire rotation, so that the marker is now pointing at 10 o'clock, and the film takes a frame, and on and on. What will that film look like when it's projected? It'll look like the wheels are actually spinning backwards!
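Here's a minimal Python sketch of that thought experiment, using the numbers from the examples above: the camera records the wheel's angle once per frame, so all it can ever report is the true per-frame rotation folded into half a turn in either direction.

```python
def apparent_step(revs_per_sec: float, fps: float = 24.0) -> float:
    """Apparent wheel rotation per frame, in degrees (negative = looks backwards)."""
    true_step = 360.0 * revs_per_sec / fps           # real rotation between frames
    return ((true_step + 180.0) % 360.0) - 180.0     # folded into what a viewer sees

print(apparent_step(24.0))   #   0.0 -> exactly one revolution per frame: wheel looks frozen
print(apparent_step(22.0))   # -30.0 -> just under one revolution per frame: the marker
                             #          lands at 11 o'clock, then 10 o'clock... "backwards"
print(apparent_step(2.0))    #  30.0 -> slow enough to be sampled correctly
```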

I YouTube'd for the wagon-wheel footage, and couldn't find it. But I did find this excellent example of the exact same effect, with (another) propeller.

Fast-forward to 1:00 in, and you can see how, when the propeller starts spinning, it starts out quickly, then appears to reverse, slow down, actually stop, reverse direction, and speed up again.

http://www.youtube.com/v/XiD_JYy72Nk


That propeller action is, of course, completely wrong. In reality the propeller went in one direction, and increased in speed, and that's all that happened. Yet due to temporal aliasing, we have a totally inaccurate representation here on video!

Clearly, that's just plain wrong. Again, it'd probably look better, or at least more accurate, if the wagon wheel spokes were just blurred, wouldn't it? And wouldn't you rather see that propeller simply get faster until it turns to a blur, rather than this weird funky slowing-down/reversing/stopping action? Would you rather see wrong information, or no information at all? How could we solve this so that the wagon wheels or propeller looked "right"? The only way would be to speed up the frame rate of the film so that the frames come fast enough to accurately catch the wagon wheels as they turn. That means increasing the resolution of our sampling system: more frames per second gives us more opportunities to sample the image, and in sampling theory the requirement is that you capture more than two samples per cycle of whatever repeating motion or detail you're trying to represent.
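As a rough sketch of what that rule implies for the wagon wheel above (the 12-spoke count is purely a hypothetical number for illustration):

```python
# A rough Nyquist-style calculation for the 1,440 RPM wagon wheel above.
# The 12-spoke count is an assumption, purely for illustration.

revs_per_sec = 1440 / 60            # 24 revolutions per second
spokes = 12                         # hypothetical spoke count

# To the camera, a 12-spoke wheel looks identical 12 times per revolution,
# so the repeating "signal" it must capture runs at:
cycles_per_sec = revs_per_sec * spokes    # 288 cycles per second

# Sampling theory says you need more than two samples per cycle:
min_fps = 2 * cycles_per_sec
print(min_fps)                      # 576.0 -- film's 24 fps isn't remotely close
```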

OKAY, BUT HOW DOES THIS APPLY TO THE HDSLR?

It actually applies quite directly. Let's see how the same concept we saw in time-based ("temporal") aliasing plays out in still images. We're going to be talking about the ability to resolve fine detail in terms of actual space, so we'll refer to this as "spatial" aliasing. Really, though, it's the exact same thing, and the exact same issues arise.

In spatial sampling, we have a sensor that's capable of a certain spatial resolution (dots per inch, for example), just like in temporal sampling where we had a system capable of a certain temporal resolution (frames per second). The spatial sampling system won't be taking snapshots in increments of time; instead it'll be taking snapshots in increments of distance. The same type of issues apply, though – if the underlying detail is too fine, the snapshots will be compromised and we'll get patterns or errors in the image that are the spatial equivalent of wagon wheels turning backwards!
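Here's a hedged, one-dimensional sketch of exactly that (the pixel counts and stripe spacing are arbitrary): a stripe pattern finer than the sensor's sample spacing, read out two ways. Naive point-sampling manufactures a coarse stripe pattern that isn't in the scene; averaging each block first, which is roughly what an optical low-pass filter does, yields an honest blur.

```python
import numpy as np

x = np.arange(512)
scene = np.where((x % 7) < 3.5, 1.0, -1.0)    # fine stripes: one light/dark pair every 7 px

naive = scene[::8]          # sample every 8th pixel, no low-pass filtering
# 'naive' is not a compressed copy of the stripes. It is a NEW, much coarser
# stripe pattern (one pair every 56 px in scene terms) that isn't in the scene.

blurred = scene.reshape(-1, 8).mean(axis=1)   # average each 8 px block first
# 'blurred' hovers near uniform gray: detail too fine to resolve is rendered
# as a smooth blur instead of as a fake pattern.

print(naive[:14])
print(np.round(blurred[:14], 2))
```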

Here's an example, both in video and in still form. Look at the bendy propeller again – that propeller should consist of vertical lines, yet the slow sampling from the progressively-scanning CMOS sensor has actually rendered some of the propeller lines horizontally! Now let's look at a still example; in this case it's an extraction from a Canon EOS 7D resolution chart:



Now, if you don't know how to read a res chart, you might look at this and say “wow, great, the 7D is resolving detail all the way past 1500 lines, that's super-high-def!” But you'd be very wrong. The 7D is actually resolving only about 600 lines here; everything after that is false detail and aliasing. It may look like all those vertical lines between 1000 and 1500 are actual detail, but they're not at all. Look at the actual raw image of the resolution chart to see what it's supposed to look like:



Now do you see the problem? The chart is supposed to be showing nothing but straight lines, but the HDSLR actually "invented" vertical lines! It's really the same situation as the bent propeller... one is caused by a low temporal sampling rate, the other by a low spatial sampling rate, but both yield similar results because of the same underlying cause – the image being sampled contains more detail than the system can handle. So the imaging system catches false detail and grossly misrepresents what's supposed to be there.

Where in the world did the camera get the idea that there should be vertical lines there? It's completely inaccurate! The image that has been presented to us here is fake, it's wrong. Now let's look at how a conventional video camera handles the chart:



Doesn't that look more like the original chart? The lines are clearly separated, and stay separated as far as the image sampling system can track them, until the point where it just can't track anymore (in this case, at about 950 lines), at which point it blends them into a nice smooth blur. Think again about the propeller – wouldn't you rather see it rendered accurately, and when it gets going too fast, see it turn to a blur instead of separating into bizarre horizontal lines? Or the wagon wheel – as it turns faster and faster, wouldn't you want to see it turn to a blur, rather than watching it bizarrely start turning backwards or freezing in place?

That's the thing with aliasing. It makes you see things that aren't actually there, or that shouldn't be there. It puts false images up. It invents or creates patterns that simply aren't in the original image, but it renders them and presents them to you such that it looks like detail. It isn't. It's fake detail. It's wrong. It's a misrepresentation of what the image actually looks like. It's not detail, it's contamination.

Which brings us back to the HDSLRs – the GH1, the 7D, and the 5D Mark II all look like they're rendering incredibly sharp, highly detailed images, but they're not. In reality, according to the resolution charts, they're rendering images that fall somewhere between a standard-definition camera and, at best, a 720p camera. Any additional "sharpness" you see in the image is fake – it's aliasing, it's smoke and mirrors, it's image contamination.

Rarely will you see such obvious image contamination as on a resolution chart. In a real-world image, it may be much harder to spot what's “real” and what's aliased “fake” detail. To an untrained eye, a heavily aliased image might even look good. Before you had the chart explained to you, which image looked “sharper” -- the one with the blur, or the one with the vertical lines? Granted, neither of them is a perfect representation of what they're supposed to look like, but which of these images looks more like an accurate representation of the third chart?



Overall, in image sampling theory, aliasing is considered just plain bad, no matter how you slice it. We can easily understand how aliasing ruins audio waveforms, and how it can ruin portrayals of motion. For some reason, though, people want to give aliasing a "pass" when it comes to the new still cameras. In fact, aliasing is at the core of the new HDSLRs! Aliasing is why they look so very sharp (and, admittedly, they do look very sharp). Just be aware that their images are chock full of fake, aliased "detail"; there's not a lot of genuine resolution there. And aliasing happens all the time, not just when you notice it: it's present in any scene where the fine detail is too small for the sampling system to accurately represent. I would go so far as to say that a large percentage of the "sharpness" of HDSLR video is not sharpness at all, but aliasing.

So – is it a good thing, or a bad thing? In the end, it's all about what you find pleasing to the eye. And, truth be told, many people actually LIKE the aliasing artifacts; they think the artifacts make the images look sharper. And they do make them look sharper, even if the sharpness is fake. But like all aliasing, it can cause problems in the image. The most well-known are moire and jaggy lines (stairstepping). Moire is the still-image equivalent of the backwards wagon wheel or the misbehaving propeller – it happens when a repeating pattern of detail is too fine for the image sensor to handle, so the sensor creates a fake pattern. Just like a wagon wheel going backwards, moire creates sweeping curve patterns where there aren't any. Look at this example, taken from a picture by Barlow Elton with his Canon 7D, showing a building with what looks like aluminum siding – a real-world example of nearly-parallel lines (not all that unlike a resolution chart.)



Image © 2009 Barlow Elton, used by permission

When shot at a certain distance, it looks fine – but when shot from just a little further away, the building turns into a mass of green and purple circular bands, called "moire." What happened? In the closer shot, the lines appear further apart to the camera; the fine detail is spaced widely enough that the imager can cleanly resolve it (just like a slow-moving wagon wheel). But as the camera moved further from the building, the lines appeared closer together, crossing the threshold of what the camera could cleanly resolve, and aliasing took over. The result is a fake pattern (just like a wagon wheel going backwards), and the shot is ruined by a huge blob of green and purple moire all over it. And, in addition to moire, the aliasing has had another effect – look at the thin lines of the electricity wires and the pole. What's rendered cleanly in one shot becomes a mess of stair-stepping colored blocks in the other. This is exactly what happens in aliasing, and both the moire and the stairstepping are examples of false detail.
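Here's a sketch, with made-up numbers, of why distance is the trigger: backing away raises the lines' spatial frequency on the sensor, and any frequency above half the sensor's sampling rate "folds back" as a false, coarser pattern.

```python
def apparent_frequency(f: float, fs: float) -> float:
    """Frequency the sensor actually records, after folding (aliasing)."""
    return abs(f - round(f / fs) * fs)

fs = 100.0   # hypothetical: 100 samples per millimeter on the sensor
for f in [20, 40, 49, 60, 80, 95]:            # siding lines per mm as we back away
    print(f, "->", apparent_frequency(f, fs))
# 20->20, 40->40, 49->49 : resolved cleanly (below fs/2 = 50)
# 60->40, 80->20, 95->5  : folded back as coarse moire bands
```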

On the HDSLRs the images are always full of aliasing and false detail, but (and here's the key distinction): if it's not a repeating pattern of false detail, the viewer typically doesn't notice it! They simply don't know what's real detail and what's fake... and largely it doesn't really matter, does it? So the frame is full of detail (much of it fake), but the viewer looks at the image and says "wow, what a highly detailed picture." Just like our wedge of resolution chart, with the weirdly rendered vertical lines – if you didn't know any better, you might think that was a highly detailed shot.

Aliasing, then, can actually make images look a lot sharper than they really are. It is, in effect, a cheat. And the question you have to ask yourself is: do you care? As in, “who cares how it does it, as long as the end result looks good?” I submit to you that the question you face is similar to the question of, say, looking at two lovely young women: one is a natural beauty, and the other is equally pretty but got there through the use of botox, a face lift, a tummy tuck, liposuction, breast augmentation, dermabrasion, a chemical peel, dentures, eyelash implants, laser hair removal, a hair transplant, rhinoplasty, and lots and lots of makeup. Does it matter to you how they got that way, as long as they're equally pretty? That's for you to decide, of course, but it seems obvious that the natural beauty is going to be subject to a lot fewer complications overall. Or, another way to look at it is choosing a genuine location to shoot on, versus building a fake set using flats. The end result, on video, may look the same – but one is a real house, and the other is a few hunks of lumber and plywood. Obviously you can do far more with the real house than you ever could with the fake set, but as long as you shoot the fake set from certain angles, in certain lighting, it can look just as good. And if the end result is all about having a good-looking shot, the set can serve just as well as the house, for that purpose.

So it is with aliasing. Some people actually prefer the look. But it would be wise to be aware of what's happening, and how it can potentially affect your images.

  • If you're shooting scenes with fine lines (such as street scenes with overhead telephone wires or electrical wires) you might see jaggy stairstepping instead of cleanly-rendered fine lines – that's aliasing.
  • If you're shooting scenes with repeating detail, such as a shingled roof or vinyl siding or a chain-link fence, that all might turn into a blob of moire – and that's aliasing.
  • If you're shooting a pattern or fabric, such as a knit shirt or tweed suit, you might get moire all over your image – and that's aliasing.
  • If you're shooting an interview and the subject is wearing thin metal-framed glasses, and the frames seem to ripple or jiggle as the subject nods or turns their head – that's aliasing.
  • If you're shooting a scene with lots of fine detail, and the scene seems to shimmer or sparkle or ripple as you pan across it – that's aliasing.

Aliasing is a cheat; it's not real detail, it's false detail. Better video and digital cinema cameras have filters to eliminate most (if not all) aliasing; these Optical Low Pass Filters (OLPFs) are placed in front of the sensor and blur any detail too tiny to resolve, rather than letting it through to contaminate the image with aliasing. Sensors are tuned to perform at a certain resolution, and any detail finer than the sensor's maximum resolution can result in aliasing, so these systems include an OLPF to blur away the detail that's too fine and would otherwise contaminate the image. That's why you see such clean renderings on the resolution charts from properly-engineered video cameras, and why those charts render cleanly until they blend into a nice mush when the detail gets too fine.

In the HDSLR world, however, that's not possible, because the HDSLR is tasked with two jobs: taking still pictures, and taking video. If the OLPF (or anti-aliasing filter) were tuned to deliver great video, you'd find that your still pictures were just awful! An OLPF is, basically, a blur filter – any detail too fine for the video system to resolve gets turned to blur. But video is, at most, a 2.2-megapixel frame, while HDSLRs are designed to handle 14-megapixel stills, or 18-megapixel, or 21-megapixel or beyond! You can't put an optical low pass filter that blurs at the 2-megapixel level in front of a 21-megapixel sensor and expect anything good to come of it; at best you'd get still frames with 2 megapixels worth of resolution.

So the still cameras have anti-alias filters tuned for still photos, which means they let in far too much fine detail for the video portion to render properly. You can't have a single system designed to render 21-megapixel stills and 2-megapixel video simultaneously – you have to prioritize one or the other, and all the HDSLR manufacturers are (rightly) choosing to prioritize the still-camera capability (these are, after all, primarily still cameras, with video "grafted on").
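The arithmetic of that conflict, as a hedged sketch (the sensor dimensions below are typical published specs for a 21-megapixel full-frame chip; the skip readout is an assumed, simplified model for illustration, not a description of any manufacturer's actual pipeline):

```python
still_w, still_h = 5616, 3744          # 21-megapixel still frame
video_w, video_h = 1920, 1080          # 1080p video frame

print(still_w * still_h / 1e6)         # ~21.03 megapixels for stills
print(video_w * video_h / 1e6)         # ~2.07 megapixels for video

# Reading only every 3rd photosite row and column gets close to video size:
skip = 3
print(still_w // skip, still_h // skip)   # 1872 x 1248
# Each video pixel would then come from an isolated photosite, with the
# detail of the skipped neighbors simply discarded: exactly the kind of
# unfiltered point-sampling that produces aliasing, unless an OLPF blurred
# away everything finer than the skip spacing (which would wreck the
# 21-megapixel stills).
```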

So that brings us to an unusual situation – here we have cameras that were designed to shoot stills, which have massive aliasing issues in their video, and there's nothing the manufacturer can do about it. So what does the manufacturer do? Release it as-is, targeted at web video. That's how Canon describes the 5D Mark II – it was designed for photojournalists, to produce great stills and also take video suitable for posting on the internet (which is why its video runs at 30.000 fps instead of the video-friendly 23.976 or 29.97). The true (non-aliased) resolution of the system is suitable for web video, and since web video is frequently scaled down in size (and scaling down helps to reduce aliasing artifacts), they released the 5D Mark II as-is.

And then a funny thing happened – video shooters looked at the alias-filled video and thought it looked fantastic. Because it does. It looks incredibly detailed (even though much of that detail is "false detail"). And just like the plastic-surgery beauty, people don't seem to care how it got the way it is; they like it as it is. The detail in the GH1, 7D, and 5D Mark II is a "happy accident" caused by the presence of fake detail. It's not true high-definition imagery. It is compromised. But even though it's a cheat, it looks great, and the price tag simply cannot be ignored – a GH1, body-only, runs about $800, and even the 5D Mark II, body-only, is half the price of a typical prosumer video camera.

And the inherent shallow depth of field of these large-sensor cameras helps to hide the aliasing in the out-of-focus background of many shots. These cameras suffer the negative effects mostly in wide and/or deep-focus shots (where the fine detail, and therefore the aliased detail, is all in focus), but they excel at close-ups where the background is totally out of focus and a face occupies much of the shot. Faces don't typically show aliasing, because faces rarely contain repeating patterns of fine detail, so if you're shooting closeups or medium shots of people (depending on the fabrics they're wearing), the HDSLRs perform really well and don't reveal any of the trickery that got them there. Does it matter, when shooting a face, which particular freckle is accurate and which is an aliased "false detail" freckle? Likely not. And truth be told, portraiture and interviews are the types of shots where these HDSLRs can look astonishingly good. Really, really good. Amazingly good, considering the incredibly affordable price tags!

Some people seem to think that still shots of aliasing exaggerate the issue, because what really counts is how it looks in motion, right? Well, see, that's the problem – motion is where aliasing really reveals itself. In a still frame, most people would be hard pressed to tell the difference between real detail and aliased fake detail. But the thing about aliasing is that it moves – and it moves in the opposite direction to the camera. So if your image is contaminated with aliasing and you move the camera, the aliasing will draw attention to itself. Look at these two examples of a res chart – one from a video camera, and the other from an HDSLR plagued with aliasing. Look what happens when some movement is introduced: the video camera's chart looks just as you would expect, but the HDSLR's chart has a serious case of aliasing motion issues; as the chart moves up, the aliased false detail moves down, and becomes very apparent. Nothing should be moving except the chart itself, but the aliasing makes it impossible to see anything but the very obvious problems in the image.


Video Camera moving res chart:

HDSLR moving res chart:
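Why does the false detail move against the chart? Here's a hedged one-dimensional sketch (the numbers are arbitrary) of a too-fine stripe pattern sliding one way while its alias marches the other:

```python
import numpy as np

# Stripes with a 9 px period, point-sampled every 8 px, while the scene
# slides to the RIGHT by 1 px per frame.

x = np.arange(256)

def frame(shift):
    scene = np.where(((x - shift) % 9) < 4.5, 1, 0)   # pattern shifted right
    return scene[::8]                                  # unfiltered point-sampling

for shift in range(4):
    print(frame(shift)[:12])
# The real stripes travel right 1 px per frame, yet the coarse aliased
# stripe pattern in the printout marches steadily LEFT, sample by sample.
```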

Obviously the aliased version is far more distracting and a much less accurate portrayal of reality. And that's on a res chart, and sometimes people want to dismiss res charts (even though they're a perfectly appropriate diagnostic tool that lets us see what's really happening in the image). People say, "I don't shoot res charts, I shoot real footage." So can aliasing affect your real footage? Of course – it contaminates all your footage, whether it's of res charts or not. The question is whether you notice it. Sometimes the artifacts can be perfectly acceptable, sometimes they might be mild, but sometimes they can be disastrous. Look at these simple examples to see how aliasing can ruin deep-focus footage with unwanted artifacts. I shot some footage of a house at various focal lengths, and every single shot was heavily compromised. This was simple deep-focus photography, not a setup contrived to provoke aliasing. Look at the roof, look at the bricks, look at the windows. These are the kinds of complications you can expect with a heavily-aliasing camera.



Here's an example of a "real world" situation that was ruined by aliasing (shot on a Canon 5D Mark II) – watch especially towards the end of the clip.



Here's a different example of a "real world" situation where aliasing ruined the shot. In this one, the propensity of the Canon 5D to introduce color artifacting on aliased edges causes the waves to become riddled with color contamination.



Can you combat aliasing? Is there anything you can do to prevent it? Well, it basically comes down to focus – since only the finest detail causes aliasing, throwing that detail out of focus will prevent it. Soft focus eliminates fine detail. You can try a couple of things, such as defocusing slightly on shots where aliasing is showing itself. By defocusing, you reduce the amount of fine detail in the shot and add a natural blur, which means you're, in effect, inventing your own video-appropriate OLPF. You can also try a softening filter on the front of your lens, to see if that will take the false detail away and leave only the detail the sensor can genuinely sample. But be aware: if you take all the false detail out, you'll be left with a camera capable of barely better than standard-definition imaging! The true resolving capability of these cameras is what's shown on the resolution charts, and the only reason the footage looks high-def at all is the presence of false detail, so you don't necessarily want to filter it all out; you really just want to get rid of the artifacts that are obviously wrong. Frame the offending detail out, or open the iris to throw the background further out of focus, or consider something like a Caprock anti-moire filter or a diffusion filter on the front of your lens. I'll update this article with the effectiveness of some of these methods as I have a chance to test them with various HDSLRs.
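Here's a final hedged sketch of why "soften before you sample" works, with a Gaussian blur standing in for defocus or a diffusion filter (all the numbers are arbitrary, and it assumes scipy is available):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(4096)
scene = np.where((x % 7) < 3.5, 1.0, 0.0)     # stripes far too fine for the sampling

for sigma in [0.0, 1.0, 2.0, 4.0]:
    softened = gaussian_filter1d(scene, sigma) if sigma > 0 else scene
    samples = softened[::8]
    # Peak-to-peak swing of the samples: big = strong fake stripes, ~0 = clean blur.
    print(sigma, round(float(np.ptp(samples)), 3))
```

The trend is the point: with no pre-blur the samples swing hard between light and dark (strong false stripes), and as the blur strength rises, the swing collapses toward zero, which is the honest gray blur a proper OLPF would have given you in the first place.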

Discuss this in the Forum