Aliasing



Barry_Green
10-19-2009, 06:45 PM
Click here to read the full article (http://www.dvxuser.com/articles/article.php/20)

noirist
10-19-2009, 07:56 PM
Thanks for the great article!

From what I've seen the aliasing on the Canon 7D appears worse than on the Panasonic GH1. So is it really fair to lump all the HDSLRs together?

Barry_Green
10-19-2009, 09:01 PM
Thanks for the great article!

From what I've seen the aliasing on the Canon 7D appears worse than on the Panasonic GH1. So is it really fair to lump all the HDSLRs together?
The ones in the price bracket I've been following, which is the 7D, 5D, GH1, and K-x, all do the same basic thing, to one degree or another. The GH1 doesn't appear to do the severe color contamination that the 7D and 5D do, but the amount of aliasing is still quite substantial.

Jack Daniel Stanley
10-19-2009, 09:12 PM
Shallow depth of field look pretty.
Low light be good.
Price tag am nice.
That's a moire.

*Jack puts on dunce hat and sits in corner*
Does Barry make anyone else feel like they rode to DVXuser on the short bus?

Wow, wow, wow. Thanks for the great article Barry. Obviously it's important to know what's what. And this amazing article will undoubtedly help many, many of us get the best out of our cameras. But as for the questions you pose, I'd say, IMO, filmmaking is smoke and mirrors anyway. I'm looking for truth in content and truth in performance, but as for the rest I want an illusion with the appearance of authenticity that's either aesthetically pleasing, rough, or life-like yet heightened as appropriate to the story, that feels real whether it is or not.

The plastic surgery analogy works to illustrate the potential complications/problems that can arise, but doesn't work as well (for me) in the picking-a-date vs. picking-a-camera comparison. Most of us want authenticity in a mate, but all of art is artifice - hopefully with the illusion of truth - and that seems to be what the DSLRs offer 99% of the time.

visugeek
10-20-2009, 12:36 AM
Great article Barry, thanks!

philiplipetz
10-21-2009, 03:41 AM
When people say that they see aliasing only occasionally in their HDSLR videos, they are usually confusing aliasing with moire, one of the many artifacts caused by aliasing. Moire is more obviously wrong than ordinary aliasing. Most shots I have seen of men with beard stubble dramatically show how aliasing introduces false detail. The beard looks much stronger than it really is. This false detail is what Barry, thank you, is talking about.

John Froton
10-21-2009, 05:45 AM
As always, your explanations are very enlightening Barry. Thank you for the article.

I wonder what steps the manufacturers of HDSLRs would need to implement in order to reduce the amount of aliasing in these or next gen cameras? Where specifically in the hardware is so much aliasing happening? In the h.264 encoding chip?

It makes me wonder if there are other h.264 encoding chips out there, in some video camcorders, that are producing images with considerably less aliasing.

Barry_Green
10-21-2009, 09:39 AM
It's not in the encoder, I believe it's in the way the sensor is being read and subsequently downconverted. These sensors are not designed for video, they're designed for still camera use, and they have a limited speed at which they can be read out. The still-frame rate on a 7D is 8fps, on a GH1 it's 3.5fps. You cannot read the entire sensor at 60fps or even 24fps. So, they use pixel binning (or, as some insist, line-skipping) to reduce the amount of data read from the chip, and it is that process that is introducing all the aliasing. Then, there's the matter of converting that down to a 1920x1080 frame; according to Alan Roberts' examination of the 5D, he thinks that they're simply employing very poor downconversion hardware.

Until we get a camera specifically designed for video purposes, we will probably continue to see these issues. Red custom-designed their own sensor to get away from this, and we're probably going to have to see such an evolution in DSLRs to get away from it too. In the meantime, we have aliased performance, but we also have incredibly low price tags, so one could even look at the aliasing as an enabling technology -- it enables us to get sharper-looking images at unheard-of price tags. It's just that the images aren't accurate and can lead to serious image problems too, but ... then the old "you get what you pay for" adage comes into play.
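[To make the skipping-vs-filtering point concrete, here is a minimal sketch in plain Python with hypothetical numbers: decimating a finely detailed signal creates false low-frequency detail (aliasing), while averaging each group first, as real binning does, suppresses most of it.]

```python
import math

# A row of 480 "photosites" carrying detail at 0.45 cycles/photosite --
# too fine for a 4x-downsampled output to represent honestly.
n = 480
fine = [math.sin(2 * math.pi * 0.45 * i) for i in range(n)]

# "Line-skipping": keep every 4th photosite. The fine detail folds
# down to a strong false pattern at 0.2 cycles per output pixel.
skipped = fine[::4]

# "Binning": average each group of 4 first (a crude low-pass filter),
# then output one value per group. The false detail is attenuated.
binned = [sum(fine[i:i + 4]) / 4 for i in range(0, n, 4)]

def amp(signal):
    """Peak magnitude of the signal -- a rough measure of false detail."""
    return max(abs(v) for v in signal)

print(f"skipped amplitude: {amp(skipped):.2f}")   # ~0.95: alias survives
print(f"binned amplitude:  {amp(binned):.2f}")    # much smaller
```

A boxcar average is a weak low-pass filter, so some aliased energy still survives binning, which is consistent with the point above that even binned readouts alias; a proper OLPF or multi-tap filter would suppress it further.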

Windjammer
10-21-2009, 09:52 AM
Great article

Barry_Green
10-21-2009, 10:06 AM
That's what I would consider the ideal. But not 2mp, you'd want at least 3mp. You have to factor in the resolution loss due to the Bayer pattern and demosaic process, so you need about 3 megapixels to deliver a truly sharp 1080p image. But yes, that would be the best of all possible worlds: incredible dynamic range, incredible sensitivity, tiny noise, razor sharp images, and it could have an optimally-tuned anti-alias filter. The only problem is that it would be lousy for shooting still photos, which is why it's not likely to happen on an HDSLR.
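[The arithmetic behind that ~3-megapixel figure, sketched out. The ~70% Bayer luma-recovery factor used here is a common rule of thumb, not a measured spec.]

```python
# 1080p needs 1920x1080 = ~2.07 MP of truly resolved detail. A Bayer
# sensor only resolves roughly 70% of its photosite count after
# demosaicking (rule-of-thumb assumption), so:
target = 1920 * 1080                  # 2,073,600 pixels
bayer_recovery = 0.7                  # assumed demosaic efficiency
needed = target / bayer_recovery
print(f"~{needed / 1e6:.1f} MP needed")   # ~3.0 MP
```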

Windjammer
10-21-2009, 11:47 AM
Good point

10s
10-21-2009, 11:58 AM
Nice work Barry, ... once again!

adkimery
10-21-2009, 01:36 PM
Very interesting article Barry (as usual). Just goes to show that, again, there's no such thing as a free lunch.


-Andrew

Cranky
10-21-2009, 01:44 PM
That's what I would consider the ideal. But not 2mp, you'd want at least 3mp. You have to factor in the resolution loss due to the Bayer pattern and demosaic process, so you need about 3 megapixels to deliver a truly sharp 1080p image.
Incidentally, this is what the HMC40/TM300 has, 3MP per chip. But these are three-chip cams, do they use a Bayer pattern? Do they NEED to be 3MP each?

Barry_Green
10-21-2009, 03:49 PM
A three-chip camera would not use a Bayer pattern. The Bayer pattern is necessary because CCD and CMOS chips are monochrome; therefore to get a single chip to see in color you need a Bayer pattern. Three-chip systems don't need or use the Bayer, they use a prism to split the light and direct the red, green, and blue to individual chips.

Those cams probably have denser chips to improve the still-camera functionality...

ravencr
10-21-2009, 06:03 PM
Can I reduce the sharpening in the scene file section of my HMC40 to reduce this from happening?

Chris

Seamus McFlannel
10-21-2009, 06:11 PM
Barry,
This is a great article. I have a question in this regard that concerns the Scarlet, or rather the speculative information that has been disseminated about it. It seems that they are calling it a DSMC, and they are saying it is both a motion picture and a still camera. So, would the same problems apply in this case? It does seem that it is primarily a motion camera though. The reason I'm asking is that I have almost made the decision to buy an HPX300 along with the P+S Mini35C. But then I started reading about the Scarlet S35. From what I've been reading, it looks as though the Scarlet with the necessary features would cost about the same as the HPX300 with the P+S Mini35C. So I'm wondering if I should wait for the Scarlet, as it will take the same lenses I would use for the Mini35 but be less cumbersome, and it would have a much larger imager. The rolling shutter would not be an issue as the HPX300 and the Scarlet both use CMOS chips. BUT, if the Scarlet would have aliasing issues, I wouldn't be interested.
With all that said, and with what we currently know, which do you think would be a better buy for the money?
By the way, I am strictly a cinema style shooter.
Thanks again for the article.
Seamus

adkimery
10-21-2009, 06:38 PM
W/regards to Red, the Red One has been used for high-end still photography (for example Megan Fox's spread in Esquire) so whatever balancing act they've come up with seems to be working (or at least 'good enough').


-A

strangways
10-21-2009, 07:56 PM
W/regards to Red, the Red One has been used for high-end still photography (for example Megan Fox's spread in Esquire) so whatever balancing act they've come up with seems to be working (or at least 'good enough').


The thing with HDSLRs is, as Barry says, "You can't have a single system designed to render 21-megapixel stills, and 2-megapixel video, simultaneously"

The RED ONE is (various cropping choices notwithstanding) a 12-megapixel sensor, doing 12-megapixel stills, and 12-megapixel video, simultaneously. Hence, no aliasing problems because the OLPF is designed for 12-megapixels, and that's all it ever records.

Of course, to be able to record 12-megapixel video, they are recording at a data rate that is 10 times higher than the HDSLRs!

Al MacLeod
10-22-2009, 07:10 AM
Writing this article on aliasing must have taken some time!
After reading it I don't think rolling shutter meets the test for aliasing. The depiction of the propeller blades a CMOS sensor renders is accurate. It just represents motion differently than a global shutter would. There is no extraneous or additional information added, nor is any information deleted.
I guess it's a definition thing.

smelni
10-22-2009, 02:04 PM
Why isn't there a way to create a real OLPF to put in front of the lens or behind the lens? If an OLPF is a physical thing and not a digital thing, why isn't this possible?

Barry_Green
10-22-2009, 02:15 PM
It is possible. CommanderSpike's already done a bit of a homebrew version.

Problem is, if you eliminate all the aliasing, you're going to be left with only the true resolved detail that these cameras can provide, and that's not much more than standard-def. It is the presence of aliasing that makes these cameras look sharp at all.

These cameras are trying to use a still-camera sensor to do high-speed motion photography. It's beyond what they were ever designed to do. So the pixel-binning (or line skipping, whichever/whomever you believe) is necessary to get the frame rate up to video speeds. There are compromises involved. If you believe the "six-pixel bin" approach as stated by Canon's Tim Smith, each binned pixel is made up of reading a combination of six sensor pixels. But because it's a Bayer pattern sensor, those six pixels cannot be side by side -- it's not like a 3x2 block! It's six pixels extracted out of a matrix of at least 24 (in red and blue), which means there are gaps between the pixels. Which is why spurious aliased detail "gets through". So if you put on an OLPF that's strong enough to blur those little details out, something that blurs the entire six-pixel matrix into one big pixel, you'll be left with imagery that's way way way lower than HD. But it'd be accurate.

Which is, of course, why these cameras only resolve a medium-def image -- because, in terms of true resolving power, they can only resolve detail as small as that six-pixel bin. But if we examine that six-pixel bin as being out of a 24-pixel matrix, then that really means the OLPF would drop the resolution from the stated 18-megapixels, to 1/24 of that (which would mean a 750,000-pixel frame).

And, guess what -- let's do the math, an SD frame has about 350,000 pixels in it, and a 720p frame has about 920,000 pixels in it. 750,000 fits right between there, doesn't it? Which is a reaffirmation of what I've been saying -- the true resolved detail on these systems is somewhere between standard-def and 720p. At best.

Any additional "sharpness" being detected is aliasing that's leaking through and contaminating the image. (and yes, I'm well aware that people like that contamination and think it adds to the image).
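[The arithmetic in that post, written out. All figures are the post's stated assumptions (18 MP nominal count, one output pixel per 24-photosite neighbourhood), not measured values.]

```python
# One output pixel per 6-photosite bin, drawn from a ~24-photosite
# neighbourhood, applied to the stated 18 MP sensor count.
sensor_photosites = 18_000_000
true_pixels = sensor_photosites / 24      # ~750,000 resolved pixels

sd = 720 * 480                            # ~345,600 pixels (NTSC SD)
p720 = 1280 * 720                         # 921,600 pixels

assert sd < true_pixels < p720            # lands between SD and 720p
print(int(true_pixels))                   # 750000
```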

TimurCivan
10-22-2009, 09:27 PM
I think the bendy propeller looks AMAZING. I know it's inaccurate... but God it's beautiful!

Postmaster
10-23-2009, 03:27 AM
Actually I don't get it.

Why do they use those Bayer sensors? (yeah they are cheaper) :violin:
Why not 3 traditional chips in full size/full HD matrix and do away with all that debayer, rolling shutter, Jello, moire Voodoo Shmoodoo?

In my book they should have phenomenal low light sensitivity and great DOF.

Too complicated? Too expensive?


Frank

Al MacLeod
10-23-2009, 04:27 AM
Manufacturers have long known that to soak the market for all it's worth, you have to appeal
to every niche you can. If you accommodate everybody with one camera, you have wasted a lot of market potential.
I read once that color tv availability was delayed for a time because the manufacturers wanted to fully saturate the market for black and white sets. Waste not want not!

adkimery
10-23-2009, 09:25 AM
Actually I don't get it.

Why do they use those Bayer sensors? (yeah they are cheaper) :violin:
Why not 3 traditional chips in full size/full HD matrix and do away with all that debayer, rolling shutter, Jello, moire Voodoo Shmoodoo?

In my book they should have phenomenal low light sensitivity and great DOF.

Too complicated? Too expensive?


Frank
Well, like Barry said these are still cameras first and foremost w/bolted on video functionality because many still photojournalists are now being asked to shoot video for the web. The fact that video people are lusting for them and trying to use them for more than what the cameras are designed for is the fault of the video people, not the camera makers.

W/regards to 3xCCD vs 1xCMOS, AFAIK, 1xCMOS doesn't require as much space, doesn't require as much power, doesn't produce as much heat, and is overall a less expensive technology to go with. I think you also run into lens compatibility problems between camera systems that use 3xCCDs and camera systems that use 1xCMOS.


-A

Barry_Green
10-23-2009, 10:00 AM
Why do they use those Bayer sensors?
Because sensors are monochrome. The only way to get color is to use three chips and a prism, or stick a Bayer CFA (color filter array) in front of the chip.


Why not 3 traditional chips in full size/full HD matrix and do away with all that debayer, rolling shutter, Jello, moire Voodoo Shmoodoo?
Using three chips will get you away from debayering, but you'll still be subject to rolling shutter, jello, and moire.

Rolling shutter is integral to today's CMOS video chips, and it happens whether you have one chip or three. And jello is one of the effects of rolling shutter.

Moire is one of the most objectionable ways that aliasing manifests itself, but it's not tied inherently to Bayer filters, it's endemic to an undersampling video system.

The reason they use one chip in these SLRs is because of the simulation of the way a film camera works. They can use the same lenses, with the same flange focal distance, with one chip. That's not possible with three chips because you have to build an optical prism assembly to split the beam of light across three chips, and that takes up space, which blows the relationship of the lens-to-chip distance, meaning those lenses won't focus properly. It's more important to them to maintain the lens relationship, than it is to avoid Bayer de-mosaic'ing.

Mattbatt
10-24-2009, 07:30 PM
Barry great article, thanks. Too bad my 5D is such a compromise! Couldn't one bypass the compression with some software config and get the entire uncompressed sensor readout via HDMI? I was reading about some thoughts on that from cinema5d. Anyway I enjoyed learning and reading what you said. I guess now I can see and agree with why the BBC won't use the 5D.

Maheel
10-26-2009, 05:31 AM
Barry,

What will happen if we downsample the material to SD? Will it be comparable to material shot using a 35mm adapter with an HD prosumer camera (EX1) and downsampled to SD?

ROne
10-26-2009, 07:37 AM
Brilliant article.

And explains to me why the GH1 doesn't look so good blown up on my 1080p projector, as the information just isn't there. Especially on a wide shot.

I really didn't think it was this bad though!

Lenilenapi
10-26-2009, 11:05 AM
Barry, what camera did you use for the DSLR res chart images? Some of these cameras have more aliasing than others, don't they?

Also a friend has been pointing out artifacting in out-of-focus areas of Canon 5D footage that looks kind of like soft rectangular patches. Do you know what that is caused by?

Lenny Levy

Barry_Green
10-26-2009, 06:24 PM
What will happen if we downsample the material to SD? Will it be comparable to material shot using a 35mm adapter with an HD prosumer camera (EX1) and downsampled to SD?
I haven't tried, but I would imagine the SD downconversion should be pretty good. The EX1 makes cleaner HD than DSLRs do, so downconverted EX1 footage would look at least as good, and probably better, than downconverted DSLR footage.

Barry_Green
10-26-2009, 06:26 PM
Barry, what camera did you use for the DSLR res chart images?
Those were from my 7D.


Some of these cameras have more aliasing than others, don't they?
From what I've seen, they all seem pretty comparable in terms of total amount; maybe the GH1 is a little less than the Canons but it's not a big difference. What is a huge difference is the way the Canons turn to colored mush, the GH1 doesn't do that. The Pentax K-7 does something about halfway between, it's not as colorfully mushy as the Canons but it's not as monochrome as the GH1.


Also a friend has been pointing out artifacting in out-of-focus areas of Canon 5D footage that looks kind of like soft rectangular patches. Do you know what that is caused by?
Not off the top of my head but I would expect some type of fixed-pattern noise? Does it happen in underexposed out-of-focus patches?

Duke M.
10-27-2009, 01:13 PM
Barry, any chance you could put a linear polarizing filter in front and rotate it while shooting the chart to see if that makes any difference? My guess is it would create interference patterns that would tone down the moire and aliasing at certain angles.

Thanks,
Duke

Barry_Green
10-27-2009, 04:27 PM
I don't have an LP, only circular...

yoclay
10-27-2009, 09:03 PM
Just out of curiosity, what would happen if a Sigma Foveon X3 sensor were used? It definitely has a sensor that works in a completely different way and doesn't use a Bayer pattern. The results for stills from the Sigma cameras are often quite beautiful as well. It could additionally get rid of a lot of artifacts:

http://en.wikipedia.org/wiki/Foveon_X3_sensor

Barry_Green
10-27-2009, 09:09 PM
The Foveon is effectively a three-chip system, it's just that all three chips are sandwiched into one physical place instead of being in three separate spots and having light directed at them with a prism. With all three chips cosited like that, it acts to the lens as a one-chip system.

Foveon inspires loyalty and hatred like no other technology I've seen. Graeme Nattress doesn't seem too big of a fan, with his main gripe seeming to be that silicon is a lousy optical filter, making the longer wavelengths noisier to gain up.

But inherently, using a Foveon could turn this whole situation upside down, depending on how they read the chip out, and what native res they put on the chip. If they read it as three separate chips, reading each color separately, then you could do proper pixel binning without skipping gaps, and therefore probably get much cleaner performance. But if they read the chip as one chip, maybe not so possible.

strangways
10-28-2009, 12:02 PM
As the owner of a Foveon-sensored camera which I both love and hate, I can say for certain that it does have its weaknesses, such as plenty of crosstalk between colors, but it does have a few advantages that could help avoid aliasing.

First, it does not really require an OLPF, and the first Sigma SLRs to use the sensor omitted them.

Second, the sensor itself can do pixel binning and thereby increase the readout speed. Foveon calls this "Variable Pixel Size" (http://www.foveon.com/article.php?a=71)
Current Bayer HDSLRs need to do pixel binning on another chip, and have to do a full readout of the sensor first. Because they cannot do that fast enough, they resort to line skipping instead.

The problem is that VPS does not provide very high resolution. The Fx17-78-F13D sensor used in the current Sigma cameras can only do a full un-binned sensor readout of 2652x1768 pixels at 5 fps, and to get it up to 30 fps, you have to bin it to 640x480.
It can also do 700×525 at 20fps, which leads me to believe 25 (and 24 fps) would fall somewhere around 610x457, which is less than standard def.
That would, however, be a true resolution, in that it would not have aliasing, and no blur from an OLPF, so it would be actually resolving the same as a 3-CCD or 3-MOS video camera with the same pixel resolution (as long as that video camera was not using pixel shifting, and was recording 4:4:4 color.)
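[A rough cross-check of that 24/25 fps interpolation, assuming the sensor sustains the same pixel throughput as its documented 700x525 @ 20 fps mode. That is only a loose assumption -- the listed modes do not all imply an identical pixel rate -- but it lands in the same near-VGA ballpark as the 610x457 estimate above.]

```python
# Scale the documented 700x525 @ 20 fps VPS mode to other frame rates
# under a constant pixel-throughput assumption (a rough simplification).
throughput = 700 * 525 * 20               # ~7.35 million pixels/second

def frame_size(fps, aspect=4 / 3):
    """Largest 4:3 frame readable at `fps` for this pixel throughput."""
    px = throughput / fps                 # pixels available per frame
    w = (px * aspect) ** 0.5
    return round(w), round(w / aspect)

print(frame_size(24))                     # -> (639, 479)
print(frame_size(25))                     # -> (626, 470)
```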

sumitagarwal
10-28-2009, 01:42 PM
It is possible. CommanderSpike's already done a bit of a homebrew version.

Problem is, if you eliminate all the aliasing, you're going to be left with only the true resolved detail that these cameras can provide, and that's not much more than standard-def. It is the presence of aliasing that makes these cameras look sharp at all.

These cameras are trying to use a still-camera sensor to do high-speed motion photography. It's beyond what they were ever designed to do. So the pixel-binning (or line skipping, whichever/whomever you believe) is necessary to get the frame rate up to video speeds. There are compromises involved. If you believe the "six-pixel bin" approach as stated by Canon's Tim Smith, each binned pixel is made up of reading a combination of six sensor pixels. But because it's a Bayer pattern sensor, those six pixels cannot be side by side -- it's not like a 3x2 block! It's six pixels extracted out of a matrix of at least 24 (in red and blue), which means there are gaps between the pixels. Which is why spurious aliased detail "gets through". So if you put on an OLPF that's strong enough to blur those little details out, something that blurs the entire six-pixel matrix into one big pixel, you'll be left with imagery that's way way way lower than HD. But it'd be accurate.

Which is, of course, why these cameras only resolve a medium-def image -- because, in terms of true resolving power, they can only resolve detail as small as that six-pixel bin. But if we examine that six-pixel bin as being out of a 24-pixel matrix, then that really means the OLPF would drop the resolution from the stated 18-megapixels, to 1/24 of that (which would mean a 750,000-pixel frame).

And, guess what -- let's do the math, an SD frame has about 350,000 pixels in it, and a 720p frame has about 920,000 pixels in it. 750,000 fits right between there, doesn't it? Which is a reaffirmation of what I've been saying -- the true resolved detail on these systems is somewhere between standard-def and 720p. At best.

Any additional "sharpness" being detected is aliasing that's leaking through and contaminating the image. (and yes, I'm well aware that people like that contamination and think it adds to the image).

Barry, thanks for trying to tackle this in a logical and mathematical fashion; it is a huge asset to the community. I've been following your advice since I got the DVX100P years ago (recently sold to my company as I bought the 7D!).

Your numbers put the 7D at about 1155 x 650 when in 1080P mode.

I was going to try to do an equivalent set of calculations for other Canon cameras, but then realized I couldn't fully wrap my head around how you arrived at the 24 photosites/pixel, so I couldn't appropriately determine what that number would be for the other cameras.

Would you mind walking us through a numerical assumption on how the 1DM4 would perform and what the 5DM2 and the T1i should be like?

My guess is that the lower the native resolution of the sensor, the better the resulting binned image because fewer pixels need to be binned and thus you are not spacing out each data point as far. I would expect the 7D to perform better than the 5DM2, but then would expect the 1DM4 and T1i to perform better than the 7D.

To save you a couple minutes (should you agree to help) I've calculated the 16:9 image areas for each:
7D: 5184 x 2916 = 15,116,544
5DM2: 5616 x 3159 = 17,740,944
1DM4: 4896 x 2754 = 13,483,584
T1i: 4752 x 2673 = 12,702,096
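[A quick sketch applying the one-output-pixel-per-24-photosites figure directly to those 16:9 crop areas. Note this yields smaller numbers than the 1155x650 estimate above, which divided the full 18 MP count rather than the 16:9 crop; the 24:1 factor is carried over unchanged as an assumption.]

```python
# Estimated truly-resolved 16:9 frame per camera, assuming one output
# pixel per 24 photosites (the figure from the earlier post).
areas = {
    "7D":   5184 * 2916,
    "5DM2": 5616 * 3159,
    "1DM4": 4896 * 2754,
    "T1i":  4752 * 2673,
}
for cam, photosites in areas.items():
    resolved = photosites / 24
    w = (resolved * 16 / 9) ** 0.5        # width of a 16:9 frame
    print(f"{cam}: ~{w:.0f} x {w * 9 / 16:.0f}")   # 7D comes out ~1058 x 595
```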

daveswan
10-29-2009, 09:23 AM
How about a hypothetical camera with interchangeable OLPFs, so you could choose the one appropriate for the job? Or even, and this is way out, interchangeable sensors? Keep all the computational gubbins in the camera body, and swap sensors as needed? So you could have a FF 24Mp sensor for stills, a DX 16 (Say) Mp one for sports / wildlife and a DX / APS-C / s35 3+ Mp one optimised for cine?
Everyone wins, the marketing chaps can sell you alternative sensors, or upgraded bodies and you (The shooter) only need to buy the sensor you need.
Bet Canikon don't see it that way though.
Dave

sumitagarwal
10-29-2009, 02:31 PM
How about a hypothetical camera with interchangeable OLPFs, so you could choose the one appropriate for the job? Or even, and this is way out, interchangeable sensors? Keep all the computational gubbins in the camera body, and swap sensors as needed? So you could have a FF 24Mp sensor for stills, a DX 16 (Say) Mp one for sports / wildlife and a DX / APS-C / s35 3+ Mp one optimised for cine?
Everyone wins, the marketing chaps can sell you alternative sensors, or upgraded bodies and you (The shooter) only need to buy the sensor you need.
Bet Canikon don't see it that way though.
Dave

Nikon already does something somewhat similar... on their high end you can set the camera to use either the full size of the sensor or a DX-sized crop.

Could be nice to see FF and APS-H Canon cameras use an APS-C (S35) crop when in video mode.

daveswan
10-29-2009, 03:28 PM
It's not so much the crop as the pixel pitch, which remains the same. Also the sensor itself can be optimised (In terms of read-reset times etc) for the job it has to do.
Dave

Barry_Green
10-29-2009, 05:03 PM
Interchanging the OLPF could get rid of the aliasing in these DSLRs' video, but the end result will be to turn your camera into barely more than a standard-def camera.

They need to re-engineer the hardware to support full frame rate reading of the sensor, and proper downscaling. Without that, changing the OLPF won't really accomplish what you want at all. Because of the way these cameras read their chips, you're simply not going to get more than about 550 lines of resolution out of them.

dcloud
10-29-2009, 08:04 PM
Would lowering megapixels/photosites help?

daveswan
10-30-2009, 02:17 AM
Hence my second point about interchangeable sensors. Use a sensor appropriate to the job rather than trying to bodge it with a compromise.
Dave

sumitagarwal
10-30-2009, 08:32 AM
Hence my second point about interchangeable sensors. Use a sensor appropriate to the job rather than trying to bodge it with a compromise.
Dave

The point of having a FF35 sensor and using a S35 crop of it for video would be two-fold:
1) the OLPF effectively becomes lower resolution, thus closer to the intended output resolution
2) the effective chip bandwidth becomes much lower, possibly to the point that the on-board electronics could efficiently process it.
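[A rough illustration of the bandwidth point, using nominal sensor dimensions in mm (an assumption; actual active areas vary by model):]

```python
# Fraction of a full-frame sensor actually read when cropping to an
# APS-C/"S35-ish" window -- nominal dimensions, assumed for illustration.
full_frame = 36.0 * 24.0      # 864 mm^2
aps_c = 22.3 * 14.9           # ~332 mm^2 (Canon-style APS-C)
ratio = aps_c / full_frame
print(f"readout drops to ~{ratio:.0%} of the full sensor")   # ~38%
```

At the same photosite pitch, that is roughly a 2.6x reduction in pixels to read per frame, which is the sense in which the crop eases the readout bottleneck.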

Barry_Green
10-30-2009, 08:57 AM
Would lowering megapixels/photosites help?
In terms of aliasing? Well, that's a complex question. Oversampling provides the ultimate way to combat aliasing, but there's a limit to how much these chips can read. The reason we have the problem as we do, is because the still-camera chips cannot be read out as fast as the video rates demand, and so there are shortcuts taken (binning pixels and/or skipping lines, etc). And that's where the aliasing is coming from.

So if they had a 3-megapixel chip, that should help enormously with read rates, sensitivity, dynamic range, and low noise (but it'd make for lousy still photos). So yes, that would help, but it would hurt these cameras' primary mission, which is still photos.

Ultimately a chip designed and optimized to produce video is what will combat the aliasing -- or, alternatively, when they get the still-camera burst rate fast enough to support video frame rates.

Paul V Doherty
10-30-2009, 02:31 PM
That's what I would consider the ideal. But not 2mp, you'd want at least 3mp. You have to factor in the resolution loss due to the Bayer pattern and demosaic process, so you need about 3 megapixels to deliver a truly sharp 1080p image. But yes, that would be the best of all possible worlds: incredible dynamic range, incredible sensitivity, tiny noise, razor sharp images, and it could have an optimally-tuned anti-alias filter. The only problem is that it would be lousy for shooting still photos, which is why it's not likely to happen on an HDSLR.
Amen to that!
All that we need/want is a good 1080p APS-C/s35mm video sensor.
Can it be so difficult?
Give us the D90/7D core with PROPER in-camera down-sampling to 1080p, that's all I ask :)
I don't care for true 2k, 3k, or 4k - I can't edit it, and I only output to SD DVD right now anyway!
Blu-Ray is my end-goal, nothing fancier.
It's simply a matter of 12-24 months of camera evolution and it will be in our hands! :)

thabo
10-30-2009, 06:23 PM
Barry, so you're saying that if we blew up some of Philip Bloom's footage it wouldn't look so good. He says when he films he never zooms or pans, so that may have something to do with the quality or lack of aliasing. Most of these glamor shoots for the 5DmkII / 7D have been shots of people and close-ups of faces, haven't they?

I want to believe that the 7D is rubbish but the footage I see looks lovely to me. I also realize that in the hands of a pro like Mr B, even a little standard def camera can look good.

So what about an HMC40 for the run and gun stuff and then for anything with faces, interviews, tripod mount stuff, a 7D? Seems like the perfect pieces of kit to me?

Barry_Green
10-30-2009, 07:03 PM
Why would you "want to believe a 7D is rubbish"? Why would you wish ill against any product?

If you blew up Bloom's footage, it'd look just like it does now, only bigger.

I've shot some (what I consider) really, really good looking stuff on a 7D. It's capable of great results. And I've shot some trash on it too, and found it very frustrating for anything wide/deep focus. But it's $1700! You've got to cut it a lot of slack for that!

All I'm doing is pointing out exactly how these things work. It's up to you to decide whether your scenarios would work within their limitations. If you're shooting faces, they can excel. The more that you can keep out of focus, the better they'll do. The more that's in sharp focus, the more potential for negative complications from aliasing.

They are not a magic bullet. They are not Red-killers. They're not sharper than conventional video cameras. Keep that all in perspective, and use them for what they're good for, and they can do astonishingly good things at an unprecedented low price point.

Duke M.
10-30-2009, 08:52 PM
I'm viewing them more now as 35mm adapter killers, which don't work well in low light. The 7D and 5D excel in low light when you want DOF control.

Nothing really matches them there. Is it any wonder that Philip Bloom's videos emphasized those assets?

thabo
10-30-2009, 10:00 PM
It's just all the footage I've seen with these cameras looks fantastic. Problem is I was all ready to grab the HMC40 and then I stumbled on the whole 7D discussion so this got me a little confused and conflicted. Problem is, most of us don't have the lolly to go out and buy both cameras so we just have to read all the reviews and figure out what's best based on what someone else says, not easy.

yoclay
10-31-2009, 02:39 AM
I think Duke M. got it right. They are 35mm adapter killers, not Red killers. This is a huge advance. In a very light package. The cumbersome-ness of those adapters is a thing of the past. The real issue now is panning for rolling shutter and aliasing. I believe RED knows this and their redesigns of the Scarlet are probably related.

Phil H.
11-01-2009, 11:53 AM
Another great article, Barry. It's good to know all of this before making a purchase. And then if we still buy, we at least know the camera's limitations and can try to work around them. I hope you'll scrutinize the 1D Mark IV when it hits the shelves as well. With its price tag of $4999, I'd hope they'd made some improvements.

ericcosh
11-03-2009, 12:48 PM
Kudos Barry. It's usually said that a picture is worth a thousand words; in this case, I find your words are just as strong as the pictures. I've known about true aliasing since my first S-VHS and Hi8 shots of a football field, and at the time I just didn't understand why a better (sharper) image looked worse than my VHS footage.

This also explains why footage I shot with my HVX200 of a plant in my front yard blowing in the wind looked very soft compared to the same shot taken with my 7D. When I put them on a timeline, showed them to several people, and asked them to pick the best one, they picked the 7D because it "appeared to be sharper".

I think this article should be required reading for everyone, whether they pick up an HDSLR or just continue to shoot with their regular video cameras.

Again Barry, thanks from all of us for your wonderful and thoughtful explanation.

eric

ROCKMORE
11-03-2009, 02:57 PM
HDSLRs are all the rage right now, offering unprecedented imaging at an amazing price point, but if there's an achilles' heel, it's usually mentioned as "aliasing." So -- aliasing – we keep talking about it, but what is it? And how will it affect you?

How do these aliasing issues compare with other HD cameras like the Sony EX3, Red One, etc.? Is it just a matter of more processing power to define the actual detail in the image?

Barry_Green
11-03-2009, 03:03 PM
The EX3 and Red One largely avoid all aliasing complications. They're engineered to provide video images and use an appropriate anti-alias filter.

The differences also extend to the chips; video and digital cinema cameras use sensors that are engineered from the ground up to sustain full readout at video frame rates. The HDSLRs use still-camera sensors that weren't designed for video usage. Accordingly, they have to "cheat" to get the information off the sensor fast enough, and that "cheating" is accomplished by only reading part of the sensor (either through pixel-binning or through line-skipping).

If you want alias-free imagery, you're going to want to look into a product that uses a sensor that's fast enough to be read out full-resolution at video frame rates.

(of course, video cameras can exhibit aliasing as well, depending on the strength of their anti-alias filters... it's just that the HDSLRs don't really have a choice, the aliasing comes about largely because they're trying to live in two worlds, stills and video, and they're doing so with a stills-oriented chip).
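[Editor's note: the line-skipping point can be made concrete with a minimal NumPy sketch. This is illustrative only, not any camera's actual pipeline; the frequencies and skip factor are made up for the demo.]

```python
# Illustrative only -- not any camera's real readout. A fine stripe
# pattern, read out by skipping every other row, comes back as coarse
# false stripes (moire) at a frequency that was never in the scene.
import numpy as np

rows = 200
f_real = 0.45  # cycles per row: fine real detail near the sampling limit
pattern = np.cos(2 * np.pi * f_real * np.arange(rows))

# "Line skipping": keep every 2nd row. The effective sample rate halves,
# so 0.45 cycles/row folds over to |0.45 - 0.5| = 0.05 cycles/row.
skipped = pattern[::2]

# Find the dominant frequency of what actually got recorded.
spectrum = np.abs(np.fft.rfft(skipped))
peak_bin = int(np.argmax(spectrum[1:])) + 1
recorded_freq = peak_bin / len(skipped) / 2  # back in cycles per original row
print(recorded_freq)  # 0.05 -- coarse false detail, not the real 0.45
```

The fine detail isn't blurred away or rendered as gray; it's recorded as confident-looking detail at the wrong frequency, which is why aliasing can't be removed after the fact.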

ROCKMORE
11-04-2009, 01:43 AM
Barry, on a one-hour documentary project targeted at TV distribution with a razor-thin budget, if you had to pick one of your DSLR cameras right now, would you bring your GH1 or your 7D?

Barry_Green
11-04-2009, 08:58 AM
Neither, I'd use my HMC150. Right tool for the job, and the HMC150 excels at that type of work, way better than a DSLR.

But for sit-down interview shots, choosing between those two? No question I'd go with the GH1. It has longer record times (no 12-minute time limit), better 720/60 for the "reality" look if that's what the show calls for (and if it's 24p, then the sit-down interview is where the codec differences are completely eliminated), less color-fringing moire, and it's more video-friendly. Any perceived advantage of the 7D due to sensitivity/noise would be nullified by proper lighting, and the codec differences would be completely nullified, so there's pretty much no drawback and plenty of advantages to choosing the GH1 for that usage.

Unless Magic Lantern was available for the 7D; that might make the 7D the preferred choice.

But this really isn't the right thread to be discussing choices like that, this is about aliasing.

Graeme_Nattress
11-04-2009, 12:40 PM
Yes, the issue with the Foveon is that if you look at a raw image, the colors are very under-saturated (compared to a raw, un-matrixed video from a 3-chip camera or even a Bayer Pattern CFA camera). This means that overall colorimetry suffers, along with noise performance, because of the large matrix coefficients needed to get a reasonably saturated image. See pg 127 of Alan Roberts' Circles of Confusion book for more information.

Also, the Foveon sensor does need an OLPF if aliasing is to be avoided. The Sigma cameras omitted the OLPF, which leads people to think it's unnecessary, but if you read the Foveon literature you will find references to the necessity of adequate optical filtering. Now, because the colours are co-sited, you don't get false colors from the moire and aliasing - instead you just get typically ugly luma moire and aliasing. Even with a Bayer pattern CFA sensor used as it is in a DSLR (in stills mode), chroma moire is hardly ever an issue and can usually be effectively removed without ruining the look of the image, but in all sensor types, luma aliasing is a practically un-removable artifact.

Fact is - all sensors of all types will produce aliasing artifacts if they are not adequately optically filtered. There's no magic way around it. All you can do is eliminate the practical possibility of seeing them and do so without drastically reducing resolution.

Pixel binning is effectively a very poor downsampling filter, so binning, whether on the sensor or further down the image chain, can only make aliasing worse, not better. Of course, nearest neighbour sampling, which is what line skipping effectively is, is worse still. Neither should be seen as a proper alternative to a well designed (and usually expensive to implement properly) anti-alias downsampling filter.
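[Editor's note: the ranking above - skipping worst, binning poor, a real anti-alias filter best - can be sketched with a one-dimensional test tone. Numbers are made up for illustration; the "proper" filter here is an FFT brick-wall, which stands in for a well-designed downsampling filter.]

```python
# A 1-D sketch of the three downsampling options. The test tone sits
# above the post-decimation Nyquist limit (0.25 cycles/sample for 2x
# downsampling), so any of it that survives is pure aliasing.
import numpy as np

n = 1000
tone = np.sin(2 * np.pi * 0.4 * np.arange(n))  # 0.4 cycles/sample

def alias_amplitude(y):
    # Peak spectral amplitude of the decimated signal, excluding DC.
    return (np.abs(np.fft.rfft(y)) / (len(y) / 2))[1:].max()

skipped = tone[::2]                      # nearest neighbour / line skipping
binned = (tone[0::2] + tone[1::2]) / 2   # 2-sample binning (box filter)

# Proper anti-alias filtering: low-pass first, then decimate.
spectrum = np.fft.rfft(tone)
spectrum[n // 4:] = 0                    # remove everything above 0.25 c/s
filtered = np.fft.irfft(spectrum, n)[::2]

print(round(alias_amplitude(skipped), 2))   # 1.0  -- alias at full strength
print(round(alias_amplitude(binned), 2))    # 0.31 -- attenuated, not removed
print(round(alias_amplitude(filtered), 2))  # 0.0  -- alias gone
```

Binning does attenuate the folded-over detail (the box filter's response at this frequency is about 0.31), but only a filter applied before decimation can actually remove it.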

Great article Barry!

Graeme


The Foveon is effectively a three-chip system, it's just that all three chips are sandwiched into one physical place instead of being in three separate spots and having light directed at them with a prism. With all three chips cosited like that, it acts to the lens as a one-chip system.

Foveon inspires loyalty and hatred like no other technology I've seen. Graeme Nattress doesn't seem to be much of a fan, with his main gripe being that silicon is a lousy optical filter, making the longer wavelengths noisier to gain up.

But inherently, using a Foveon could turn this whole situation upside down, depending on how they read the chip out, and what native res they put on the chip. If they read it as three separate chips, reading each color separately, then you could do proper pixel binning without skipping gaps, and therefore probably get much cleaner performance. But if they read the chip as one chip, maybe not so possible.

Tom Warner
11-04-2009, 05:11 PM
Really excellent article. As a 5d owner, this is exactly what I really didn't want to believe, so thanks for laying out the incontrovertible evidence and forcing us all to face facts. It does make me want to find a good res chart and test it on the 5d, though I suspect its real resolution is only slightly better than the 7d.

Barry_Green
11-04-2009, 05:22 PM
The only chart I've seen on a 5D is that by Alan Roberts, during his assessment of the 5D for BBC HD use.
http://thebrownings.name/WHP034/pdf/WHP034-ADD39_Canon_5D_DSLR.pdf

Scott Lovejoy
11-05-2009, 05:42 AM
Barry, I think this article does a lot to quell the hatred you sometimes get (from fanboys, etc.).

The key for me is to point out that it may not matter to the user. For some, the picture looking good is all that matters, regardless of how it got there. It seems that some people need your "permission" to think that way. The usual conversation goes:

"This is what I'm seeing with tests: 500 lines of resolution"
"No! It looks so much nicer than my EX1, way more sharp!"
"Yes, it's tricking you, the EX1 resolves more lines"
"But...I like the way it looks"
"Okay, it's still not really sharper"
"Just tell me it's okay!"


I like that in this article you put more information about what the facts mean for the user.

Barry_Green
11-05-2009, 10:10 AM
Thanks. My intent with these articles is to point out what happens, so that folks can be prepared to combat or avoid troublesome scenarios, and will know the limitations that these new technologies provide.

Article has been updated with a couple of "real world" examples where aliasing contaminated and ruined shots, even in natural settings.

Tom Warner
11-07-2009, 05:01 PM
Actually, the 5d aliasing examples that you posted are quite mild. You should see some of the shots that don't get posted to vimeo. Twilight beach scenes are notorious.

On the other hand, the 5d can take twilight beach scenes that will knock your socks off, if you have the time to hit some shots and miss others. To my eye the 5d's rich color in low light is head and shoulders above any other prosumer camera (<$10k). So is its depth of field, of course. I don't agree that other cameras are equivalent "with proper lighting". Being able to shoot with less light opens up creative possibilities. What you call "proper lighting" is likely to be in my mind "the best compromise between the lighting I would prefer and the lighting the camera needs". Some cameras give you a wider range of lighting possibilities than others, and that's a definite advantage in terms of artistic range, and not just convenience.

You're absolutely right that all DSLRs have unprofessional sound, but that doesn't exclude them from professional use. It just means you need an external recorder. The internal audio is fine for home and most web video, and quite handy for manual syncing with external audio.

I also agree that it would be foolish to use a DSLR as primary video camera at a live event for a paying client. Unplanned situations force you to use a wide depth of field, which hugely increases the likelihood of moire, and nullifies one of the DSLR's main advantages.

However, you can do a very professional job with the 5d as primary camera for independent TV and cinematic work. It requires more pre-shooting and botches more shots than a pro crew and pro actors would tolerate, but for self-produced, no-budget projects, it's one of the best options. Still, it's probably best used in tandem with a standard videocam.

My wife and I have been shooting with the 5d for almost a year now, and I think we've run into just about every flaw it has. Until I read this article, I wasn't sure if some of these flaws were the result of aliasing or the codec, so thanks for solving that question and laying out the case in favor of aliasing being the primary culprit. But there's one more thing that I'm still unsure about.

To my eye, compared to conventional video, the 5d often looks flatter. By that I mean the 5d's image is somehow less convincing in producing the illusion of 3d. This is a very subjective judgment, and maybe others will disagree. But I'm wondering if that also could be a result of aliasing. Could aliased fake details be stomping on the subtle real details that give the illusion of depth?

Graeme_Nattress
11-07-2009, 05:26 PM
I think that perhaps what you're seeing is that actually, even when sharply focussed, the 5D2 is quite soft. Resolution measures very low for a "1080p" camera, and aggressive edge sharpening is applied as standard. That means there's very little in the way of real micro-detail, and what is left gets squashed by the codec. I reckon that could quite easily account for the lack of "3d effect".

joe 1008
11-15-2009, 11:42 AM
Thank you Barry for that great article. I'm a bit late to this discussion, so I'll probably repeat things that have already been said. I own a 5D and I'm conscious of its limitations, but I've always said I'm not too keen on sharpness. I sympathized for a long while with the HPX500, and though we are talking about totally different cameras, I would say that if you can live with a resolution of 720 lines (or less), the 500 and these video DSLRs are great tools. I really love their low-light, DOF and color capabilities; the only really ugly and distracting thing is aliasing, and when one uses the now commonly known tricks one gets a pleasant-looking 720p image. Within one or two years we will have full HD resolution, but for me that's not the quantum leap this first generation of video DSLRs represented; it's evolution. Welcome, of course, but not a game changer anymore. Consider your video DSLR a 720p camera and you will be happy. I am ;)

ProLost
11-15-2009, 10:05 PM
Problem is, if you eliminate all the aliasing, you're going to be left with only the true resolved detail that these cameras can provide, and that's not much more than standard-def. It is the presence of aliasing that makes these cameras look sharp at all.

This is a great article Barry, and important info for folks to have if they're planning on working with these cameras.

You categorize the rolling shutter effect as being a kind of aliasing. That's the first time I've heard it described this way. The classic wagon-wheel example makes sense as aliasing — the sample rate is too low to accurately depict what's going on, rendering the impression of false information. But I'm not quite seeing how rolling shutter adheres to this definition.

So you threw me there at the beginning, but then you grabbed me right back again. I think I often confuse people when I write about the low effective resolution of the 5D and 7D; now I have a reference to point them to. Thanks!

-Stu

Barry_Green
11-15-2009, 10:27 PM
The idea with the "rolling shutter as aliasing" is that the slow sampling rate of the rolling shutter is not letting the system always accurately represent the information it's attempting to. Look at the "bendy propeller" and you might see how it's kind of the same situation as the wagon wheel, but driven by the slow progression of the rolling shutter down the frame. It misrepresents the propeller as actually being horizontal instead of vertical. It's a "motion" version of the res chart extraction I pulled out, where mainly horizontal lines are actually represented as primarily vertical lines. Might be a little bit of a stretch, but the main goal I was going for is that aliasing results in image (or sound or motion) representations that are not accurate. And the rolling shutter bendy propeller certainly did that.
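[Editor's note: a toy sketch of the rolling-shutter skew being described. All numbers here are made up; the point is only that each row is sampled a little later than the one above it, so moving geometry is recorded distorted.]

```python
# Toy rolling-shutter model: a perfectly vertical edge moving sideways
# is recorded as a slanted edge, because row y is sampled at a slightly
# later time than row y-1.
rows = 1080
readout_s = 1 / 30      # assumed time to scan the full frame, top to bottom
speed_px_s = 600        # assumed horizontal speed of the moving edge
x_start = 100.0         # edge position when row 0 is sampled

recorded_x = [x_start + speed_px_s * readout_s * (y / rows) for y in range(rows)]

lean = recorded_x[-1] - recorded_x[0]
print(round(lean, 1))  # ~20 px: a vertical edge recorded as a slant
```

The recorded image confidently shows a slanted edge that never existed in the scene, which is the sense in which it's the same family of misrepresentation as the wagon wheel.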

yoclay
11-16-2009, 12:10 AM
I see rolling shutter as a separate issue from aliasing. There is currently a passionate (perhaps even overly-passionate) discussion about LCD shutters over at Cinema5D with a member claiming that one is on the market in Paris for the 5D. There is also a vimeo of this kind of solution as proof-of-concept here:

http://www.vimeo.com/5976527

Aliasing remains for me an artifact of accentuation, whereas rolling shutter is the outcome of the slow read-out of the sensor. Might be good to have a separate article about rolling shutter if there isn't one already. (Come to think of it, I think there already is one somewhere here...)

ProLost
11-16-2009, 12:43 AM
I see rolling shutter as a separate issue from aliasing.

I think you're right. It's the only kink in an otherwise perfect article. I just linked to it on Twitter and have seen about fifty retweets.

Barry, I would humbly suggest removing the rolling shutter reference. The wagon wheel example is a better on-ramp into the discussion.

-Stu

yoclay
11-16-2009, 03:57 AM
How's this for moiré/aliasing?

http://www.vimeo.com/7590690

see my post here about it: http://www.dvxuser.com/V6/showthread.php?p=1813563#post1813563

joe 1008
11-16-2009, 04:41 AM
I think you're right. It's the only kink in an otherwise perfect article. I just linked to it on Twitter and have seen about fifty retweets.

Barry, I would humbly suggest removing the rolling shutter reference. The wagon wheel example is a better on-ramp into the discussion.

-Stu

Though it is clear that the limited read-out speed of the sensor is the (main) cause of both problems.

divergent
11-16-2009, 12:21 PM
from the article:


For some reason, though, people want to give aliasing a “pass” when it comes to the new still cameras.

This brings us back to the wagon wheel - for "some reason", people have given aliasing a "pass" when it comes to 24fps for quite a while now. Actually, lately, not just given a pass but actively demanded 24fps despite the temporal aliasing. I would suspect it's the same reason the HDSLRs get a pass on their spatial aliasing - the overall aesthetic they produce is worth the trouble of shooting around (or simply accepting) the aliasing. The audience has been watching wagon wheels go backward for a lot of years without complaint - I suspect the same will be true for the spatial aliasing of these cameras as long as your story is engaging.

Barry_Green
11-16-2009, 01:10 PM
I agree. 24fps is clearly not able to accurately resolve motion as well as 60fps, and the reason is the same -- lower sampling accuracy. 24 is less than 60, and 60 samples result in much more accurate motion rendition.

BUT -- we (the moviemaking/moviewatching public) LIKE the look of 24fps (in most things; it can get headache-inducing if movement isn't controlled). I don't think anyone actually likes the wagon wheel effect, but we put up with it for the rest of the benefits.

The higher the sampling accuracy, the more accurate the representation of the thing being sampled. But in film, that's pretty much what we don't want -- we like the "surreal" or "larger than life" or "dream state" look of 24fps motion, instead of the "hyper-reality" look of 60fps.
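[Editor's note: the wagon-wheel effect is just sampled-frequency fold-over, and the arithmetic is short enough to show. The wheel here (12 spokes, 1.9 rev/s) is a made-up example.]

```python
# Wagon-wheel aliasing: the camera only sees where each spoke lands
# frame to frame, so apparent spoke motion folds into the range
# -0.5..0.5 spoke-spacings per frame. Negative = looks like it spins
# backwards.
def apparent_step(spokes, revs_per_s, fps):
    step = (spokes * revs_per_s / fps) % 1.0
    return step - 1.0 if step > 0.5 else step

# A 12-spoke wheel at 1.9 rev/s (22.8 spokes/s passing any point):
print(round(apparent_step(12, 1.9, 24), 2))  # -0.05 -> slow backwards roll
print(round(apparent_step(12, 1.9, 60), 2))  # 0.38  -> forward motion
```

At 24fps the true motion (0.95 spoke-spacings per frame) is indistinguishable from a slow backwards roll; at 60fps the same wheel reads correctly, which is the "higher sampling accuracy" point in numbers.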

The DSLRs certainly use aliasing to "punch above their weight class", giving images that, in the right circumstances, look like they came from far more expensive cameras. Without the aliasing, they'd never create objectionable image artifacts, but they'd also be substantially lower "sharpness".*

It is a tradeoff that you need to know you're making when you get into the game. The aliased DSLR look is sometimes incredible, and sometimes it can cause distracting or even shot-ruining artifacts. But at the price point, I think it's probably the right compromise, as long as people know what the compromise is, know how it manifests itself, and are on the lookout for it.

*the "resolution" would stay exactly the same without the aliasing; aliasing isn't resolution, it's an artifact. What would disappear is the "false sharpness". Which is what gives the DSLRs their "punch".

stip
12-07-2009, 05:24 AM
Barry, your article is making its way through the internet; ProLost, slashcam.de, canonrumors.com... it's becoming a reference :)

There are some thoughts running round my head, I'd be curious about your opinion.

There are common modifications for astronomy and infra-red photography, where the original anti-aliasing glass in front of the DSLR sensor is removed and replaced by other (special) glasses for astronomy or IR, or just normal glass or UV-filter glass. All replacements result in much sharper images and higher light sensitivity of the sensor due to the removal of the anti-aliasing/lowpass-filter glass.

http://www.maxmax.com/hot_rod_visible.htm

http://www.baader-planetarium.de/sektion/s45/s45.htm

http://www.optik-makario.de/
(sorry, last 2 are german)


Now, the Caprock anti-moire filter seems to work nicely on the 7D,

http://www.dvxuser.com/V6/showthread.php?t=189857&highlight=caprock

so I'm wondering if it would do the job on a modified 7D in video mode as well? I'm very unsure about the whole issue, so I could imagine any result, from the camera becoming absolutely useless for video to being "really" sharp with less moire, if the Caprock filter can deal with the new amount of aliasing. The modification seems to work great for stills photography, but I understand video and the meaning of anti-aliasing for video is a whole different thing, and I'm not sure what role the DSLR's software plays in this.

EDIT: I'm very sorry if this has been discussed here already, I didn't find anything yet though

Barry_Green
12-07-2009, 08:39 AM
All replacements result in much sharper images and higher light sensitivity of the sensor due to the removal of the anti-aliasing/lowpass-filter glass.

But "much sharper" is due to the exact same problems we've been discussing in this article. Removing the anti-aliasing filter means that you're letting all the aliasing through, which means you'll have all the moire and jaggy line problems and false detail that we've been talking about.

So while it may appear to be sharper, it's due to cheating and fake detail. Whether you (or any viewer) prefers that look is, of course, a matter of personal preference. But it's not actually sharper and it's not actually recording any more accurate of a picture, it's actually recording spurious inaccurate detail and presenting it as if it's accurate.


Now, the caprock anti-moire filter seems to work nice on the 7D,
The Caprock filters can certainly overcome all of the aliasing issues, turning the DSLRs into properly anti-aliased video cameras. But, as I said in the article, you'll also be giving up all the fake detail that makes the images look so "sharp" in the first place! You'll be left with a camera that delivers somewhere around a reasonable 720p image, because that's all that the current crop of DSLRs (5D/GH1/7D) can deliver. Anything beyond that, is aliased false detail. The caprock eliminates the aliasing, which means that the false detail goes, leaving only the actual resolved detail. On these SLRs, that means about 600 lines.

Now, that's not bad, and I'm actually looking at picking up some of those to really experiment with. It would remove any uncertainty about the SLR's image, any worries about moire or rainbow patterns, etc... and still deliver sharpness around HVX200 level. With shallow DOF. For a grand total of about $2500 (camera plus filters). That's not bad. The Caprock filters need to be tested in combination with the SLR and the lenses, because each focal length lens will need a different strength of anti-moire filter.

Of course, a much less expensive way to go, that would work with every focal length lens, would be to just de-focus very slightly. :)

stip
12-07-2009, 10:49 AM
Removing the anti-aliasing filter means that you're letting all the aliasing through, which means you'll have all the moire and jaggy line problems and false detail that we've been talking about.

Which I thought would be good when using the Caprock filter because:



The Caprock filters can certainly overcome all of the aliasing issues, turning the DSLRs into properly anti-aliased video cameras.

By removing/replacing the A-A glass and using the Caprock filter instead, the image should stay quite "sharp" (as there is no more additional blurring from the A-A glass, only softening from the Caprock filter) but be less "faked", with fewer moire issues:



But, as I said in the article, you'll also be giving up all the fake detail that makes the images look so "sharp" in the first place!

I might be really confused here though :)

Barry_Green
12-07-2009, 10:58 AM
Well... I can see where you're coming from, but I doubt you're going to see any benefit to removing the AA filter from the 7D. The 7D's AA filter is tuned for the stills, and its video resolution is already so low that I bet the 7D's AA filter really has little to no effect on the video footage at all. The Caprock would probably filter out any and all too-fine detail, leaving nothing for the 7D's AA filter to catch. So if you like the look of footage from a filterless camera for stills, but want AA for video, then that's a case where removing the 7D's filter might make sense (and compensating with the caprocks).

stip
12-07-2009, 11:05 AM
Well... I can see where you're coming from, but I doubt you're going to see any benefit to removing the AA filter from the 7D. The 7D's AA filter is tuned for the stills, and its video resolution is already so low that I bet the 7D's AA filter really has little to no effect on the video footage at all. The Caprock would probably filter out any and all too-fine detail, leaving nothing for the 7D's AA filter to catch. So if you like the look of footage from a filterless camera for stills, but want AA for video, then that's a case where removing the 7D's filter might make sense (and compensating with the caprocks).


Thank you Barry!

Ian-T
12-07-2009, 11:07 AM
I guess for those who are just interested in using this camera for its cinematic capabilities (not for stills) that could be a great compromise. I hope someday someone could test this out.

Paul Hudson
12-18-2009, 09:31 AM
Barry, the question begs to be answered: why do manufacturers not make an optical low-pass filter that can be engaged for video and disengaged for stills? I am sure this is much more difficult than it seems, but will someone someday solve this riddle?

Robert Ruffo
04-04-2010, 06:27 PM
Never discussed on these forums are the problems 5D footage creates, due to its aliasing, when heavily re-compressed for digital broadcast. If you think it looks soft on your timeline, you are in for an extremely unpleasant surprise if you ever see your work on TV.

yoclay
04-04-2010, 10:35 PM
Never discussed on these forums are the problems 5D footage creates, due to its aliasing, when heavily re-compressed for digital broadcast. If you think it looks soft on your timeline, you are in for an extremely unpleasant surprise if you ever see your work on TV.

Not specifically sure what you are referring to. Aliasing is a separate issue from re-compression. Most of the time aliased footage looks too sharp on the timeline. If it looks too soft, aliasing is generally not the issue. Are you speaking about the compensation people make to avoid aliasing by softening the image?

Chris Light
04-05-2010, 02:34 AM
I agree with yoclay...
I saw the music video for "Kings and Queens" by 30 Seconds to Mars a few times on Palladia (HD music channel, for those not familiar), and while it was shot primarily on the 7D (not the 5D, which is what this conversation is about now), I saw nothing but superb imagery. Take it for what it's worth: it looked fantastic on TV. I'd think the 5D would hold up nicely. I know, only one example though. I also don't ever watch "TV" on the internet (I'm not assuming that any of you do); it just never looks the same.

noirist
04-05-2010, 01:25 PM
Barry,

How does the aliasing on HD-SLRs compare to high-end consumer camcorders like the Panasonic HDC-TM700?

Barry_Green
04-05-2010, 02:37 PM
Haven't used a TM700 so I don't know for sure. But typically the aliasing on a camcorder is much less, because the OLPF is tuned for the sensor appropriately.

paulcurtis
04-06-2010, 01:31 AM
Just to add my euro cents

I think the aliasing issue is more to do with the line skipping the Canons use rather than the OLPF. Whilst the OLPF does have an effect, it's nowhere near as aggressive as the skipping. Line skipping is like looking through venetian blinds: giant horizontal chunks of the image simply aren't being recorded.

I have an LX3 next to me, 720p, and I don't see anywhere near the artefacts the Canons show. Yes, I can see moire in places if I look for it, and that *is* down to the OLPF, but if the Canons performed like the LX3 in this regard I think most people would find them much more acceptable.

I'm looking forward to the new Sony Alphas to see what they do in that regard.

cheers
paul

nsphoto
11-07-2010, 06:28 AM
Hi Barry or anyone else who can help,

Just joined this forum today. I have a question regarding the Panasonic TM700 (I'm a beginner with a camcorder).

I take the video in default (IA) mode...the camera does everything.
I download with Panasonic's HD Writer software to my hard drive.
I view the video and see shimmering edges (marching ants) on things like tree limbs, car door edges, etc.
I also see the same edge effects in the latest PowerDirector software.
I don't know if this is aliasing of some sort or called something else.
Not all things in video have shimmering...just certain edges.

Any suggestions as to what they are and how to fix them would be greatly appreciated.

Thanks,

Nick

bwhitz
11-07-2010, 08:19 PM
Just to add my euro cents

I think the aliasing issue is more to do with the line skipping the Canons use rather than the OLPF. Whilst the OLPF does have an effect, it's nowhere near as aggressive as the skipping. Line skipping is like looking through venetian blinds: giant horizontal chunks of the image simply aren't being recorded.

I have an LX3 next to me, 720p, and I don't see anywhere near the artefacts the Canons show. Yes, I can see moire in places if I look for it, and that *is* down to the OLPF, but if the Canons performed like the LX3 in this regard I think most people would find them much more acceptable.

I'm looking forward to the new Sony Alphas to see what they do in that regard.

cheers
paul

The Canons do not line skip. They use pixel binning. Every pixel is used.

strangways
11-09-2010, 08:07 AM
The Canon DSLRs bin horizontally and line-skip vertically. It's not FUD or suspicion, it's an easy fact to prove with a resolution chart.
http://www.dvxuser.com/V6/showpost.php?p=2098573&postcount=10
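[Editor's note: a sketch of that "bin horizontally, skip vertically" readout pattern, with made-up dimensions (an 18x18 grid and a 3x factor); this is not Canon's documented pipeline, just the shape of readout the res-chart tests suggest.]

```python
# Hypothetical readout: average groups of 3 photosites horizontally,
# but keep only every 3rd row vertically.
import numpy as np

sensor = np.random.default_rng(0).random((18, 18))  # stand-in sensor data

kept_rows = sensor[::3]                          # vertical: pure line skipping
video = kept_rows.reshape(6, 6, 3).mean(axis=2)  # horizontal: 3-site binning

# Same 3x scale factor on both axes, but only the horizontal axis got
# any filtering at all -- the vertical axis threw away 2 of every 3 rows.
print(video.shape)  # (6, 6)
```

This is consistent with charts showing worse vertical aliasing than horizontal on these cameras: the horizontally binned axis at least gets a (poor) box filter, while the skipped axis gets none.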

ives
11-14-2010, 06:45 AM
^ As above - this is why the GH1 looks a little better on the res charts. Some more blurb to back this up, from the EOSHD wiki:

For video, the 7D sensor has a further curtailing of advantage, because despite being larger the resolution of the sensor is 18MP compared to 12MP on the GH1's, so pixel pitch is actually similar. Furthermore, whilst the GH1 uses pixel binning to downscale the image, the 5D Mark II and 7D line-skip over the sensor to produce an image containing more aliasing and moire than if a pixel binning method was used. The ideal method would be similar to that used in Photoshop, whereby bicubic algorithms are used and pixel binning occurs in a less crude way. But current generations of image processing chips in portable electronic devices are not yet high-performance / power-economical enough at once, to handle such intensive work at over 24fps.
(Source: EOSHD.com)
(There's also a DXOMark comparison of the GH1 versus the 7D.)

The GH2's "teleconverter" mode sounds very interesting and could potentially end these aliasing/moire nasties: here (http://www.eoshd.com/content/420-Hold-onto-your-super-fast-c-mount-zooms-GH2-Sensor-1-1-Mode-sneak-peak)

Sorry to sound like a Panasonic fan-boy, just stuff I found doing my research!

Seamus Warren
11-27-2010, 10:41 PM
Well written piece.

Made sense even to me.

Will scan the thread in search of some solutions or compromises in the form of filters or throwing the frame a little out of focus.

Thank you. :)

Stab
01-13-2011, 03:01 AM
Can someone explain to me how it's possible that the DSLR's image has a lot of detail and looks sharp, but is actually rendered wrong? In most shots you don't see aliasing, but according to this article it's still there, since it's responsible for a lot of the detail. How does that work?
And is it really so bad if you avoid certain situations?

kulitam
06-14-2011, 07:19 AM
I'd like to ask about getting rid of moire. One of the suggestions in the article is that defocusing the area showing the effect can help. I'm wondering how effective that solution is (has anyone seen tests somewhere on the internet, or does anyone have personal experience with solving this problem?).
Or are there any other "methods" (ones not included in the article) that can lead to a clean image without distracting artifacts? And if so, how effective/expensive are they?

MarkB1
07-01-2011, 01:14 AM
Great article, may actually be of some use.