It was back in the 90s when Hi-Color video cards came out. They were a major improvement over VGA cards, which were limited to just 256 colors. The 16-bit Hi-Color cards had 32 shades of red, 32 shades of blue, and 64 shades of green, for a total of 65,536 possible colors. As good as most photos looked on a computer monitor, there were some photos that demonstrated a basic problem with limited bit resolution.
When the image contained lots of little details, Hi-Color images looked very good. But as soon as we showed an image with a smooth gradient of a single color, like a sky that was deep blue at one end and washed-out blue at the other, the problem with the limited bit resolution showed up.
What should have looked like a nice blue sky ended up looking like a set of distinct bands of blue across the image. The problem is that the human eye can detect differences as small as 1% from one shade to the next. The 5 bits used for blue and red resulted in changes of about 3% from one shade to the next, more than enough to see. Green used 6 bits, which comes to about a 1.5% difference per step, so those changes could be seen too.
In order to create a series of steps where the eye cannot see the difference between them, at least 7 bits per color are required. With computers, we prefer to avoid odd numbers, so 8 bits per color was chosen. True Color on a PC means 8 bits for red, 8 bits for green, and 8 bits for blue, for 16,777,216 possible colors.
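The step sizes above are easy to check for yourself. This little sketch computes the percentage change between adjacent shades for 5-, 6-, and 8-bit channels, plus the True Color total:

```python
# Percentage change per step between adjacent shades at each bit depth.
# The text's visibility threshold is about 1% per step.
for bits in (5, 6, 8):
    levels = 2 ** bits
    step_pct = 100 / (levels - 1)  # % of full range per step
    print(f"{bits} bits: {levels} shades, ~{step_pct:.2f}% per step")

# Total colors for True Color: 8 bits each for red, green, and blue.
print(2 ** 8 * 2 ** 8 * 2 ** 8)  # 16777216
```

The 5-bit channels come out at about 3.2% per step and the 6-bit channel at about 1.6%, both above the ~1% visibility threshold; 8 bits drops the step to roughly 0.4%, safely below it.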
Given that 8 bits per color produces twice as many shades per color as the eye can see, why does the Panasonic AG-HVX200 say on page 4:

New DSP with 14-bit A/D Conversion and 19-bit Inner Processing
The AG-HVX200's newly developed digital signal processor for 1080/60p video signals uses 14-bit A/D conversion and 19-bit inner processing...

This may seem like overkill, but there is a very good reason for it. If the camera used just 8 bits for the A/D (Analog-to-Digital) converter and the DSP (Digital Signal Processor), then as long as we didn't make any major changes to the data, we wouldn't see any differences in the steps in the digital image.
The problem is that the data coming out of the camera is like a set of numbers written on a rubber band, from zero up to the maximum number of steps. If the rubber band is stretched from the floor to the ceiling, the spacing between the numbers is uniform. Now grab the rubber band in the center and pull it partway down toward the floor. The numbers below your fingers are pushed closer together, and the numbers above your fingers are moved farther apart.
With 8 bits as the source, if I pull the rubber band too far one way or the other, there will be places on the rubber band where the distance between adjacent numbers exceeds the 1% threshold for visible shade differences, and people will see bands, or banding, in the image.
With 14 bits per color, Panasonic has lots of room to stretch or compress the output of the camera without showing any banding in the image. But if 14 bits for the A/D (Analog-to-Digital) conversion is enough, why do we need 19 bits in the DSP (Digital Signal Processor)?
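Here is a hypothetical sketch of the rubber-band effect. It quantizes a signal to either 8 or 14 bits, applies a strong "stretch" (a gamma curve here, standing in for any big tonal adjustment), maps the result back to an 8-bit display, and then measures the largest gap between reachable output shades. The gamma value and the mapping are illustrative choices, not anything Panasonic publishes:

```python
def reachable_shades(source_bits, gamma=2.2):
    """8-bit output shades still reachable after a gamma 'stretch' is
    applied to a signal that was quantized to source_bits beforehand.
    The gamma curve stands in for any strong tonal adjustment."""
    levels = 2 ** source_bits
    return sorted({round(255 * (i / (levels - 1)) ** (1 / gamma))
                   for i in range(levels)})

for bits in (8, 14):
    shades = reachable_shades(bits)
    largest_gap = max(b - a for a, b in zip(shades, shades[1:]))
    print(f"{bits}-bit source: {len(shades)} distinct shades, "
          f"largest gap {largest_gap} out of 255")
```

With an 8-bit source, the largest gap between adjacent reachable shades in the dark end far exceeds the roughly 1%-of-range visibility threshold (about 2.5 levels out of 255), so the stretch produces visible bands; starting from 14 bits, the gaps shrink dramatically and many more distinct shades survive.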
With a DSP, high calculation speed is very important. Integer calculations are fast, and floating-point calculations (calculations with a decimal point) are slow. Think of it this way: with integer math, anything after the decimal point is dropped. 5/2 = 2.5 (now drop the .5) = 2.
If we take a sequence like 1, 2, 3, 4, and 5; divide each number by 2, drop anything after the decimal point, and multiply by 2, the result is 0, 2, 2, 4, and 4. Notice how the numbers 1, 3, and 5 are now lost.
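That divide-then-multiply sequence is two lines of code. The second line shows what the extra DSP bits buy: scaling by 16 first (4 hypothetical "guard" bits) lets the intermediate values carry the fraction, so nothing is lost in this particular example:

```python
# Divide by 2, truncate, multiply by 2 (// is integer division in Python,
# which drops everything after the decimal point).
seq = [1, 2, 3, 4, 5]
print([(n // 2) * 2 for n in seq])  # [0, 2, 2, 4, 4] -- 1, 3, 5 are lost

# Same operation with 4 extra "guard" bits (scale by 16 first):
# the intermediates hold the fraction, so the originals survive.
print([((n * 16) // 2) * 2 // 16 for n in seq])  # [1, 2, 3, 4, 5]
```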
While this example may seem a bit contrived, it's kept simple to show how information is lost to integer calculations and rounding error. If we add extra bits to our numbers in the DSP, the rounding error still occurs, but it becomes a very small amount.
The important thing to remember is that you should make the major adjustments to the image in the camera, before you record it. You can make some minor adjustments in post, but be careful: if you make too radical a change, you may see banding appear in the image.
In the future, P2 recording could move to 12 or even 16 bits for the video. This would increase the data rate to 150 Mb/second and 200 Mb/second, respectively. In fact, if we took the highest data rate, 200 Mb/second, and used 4:4:4 color, the data rate would be 300 Mb/second; high, but not impossible with P2. For now, memory size keeps us from such high data rates, and editing software would need to be modified to take advantage of the extra bits. This kind of high-data-rate recording would let us shoot without any real in-camera adjustments and do all the fancy fine-tuning in post. It will be a minimum of 5 years before memory cards are big enough to support this feature.
If we record audio using just 6 bits, the step size from one number to the next is a bit too large, and the played-back sound is a little raspy. If we add another bit (7 bits), the step size is cut in half and the sound is reasonable. Using 8 bits for audio is the minimum I would use for a clear signal. While voice might be OK, music will not have a wide dynamic range when limited to just 8 bits; for music, 12 bits is the minimum I would recommend. (The minimum number of bits for good-quality sound is not a hard number and can be debated.)
The camera records audio using 16 bits. If 8 to 12 bits provides good audio, then the 16 bits the camera records leaves enough room to process and sweeten the audio in post without running into distortion.
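A common rule of thumb ties these bit depths to dynamic range: each bit of linear PCM audio adds about 6.02 dB (20 × log10(2) per bit). A quick sketch of that relationship for the depths mentioned above:

```python
import math

# Each bit of linear PCM adds about 6.02 dB of dynamic range
# (20 * log10(2) per bit, a standard rule of thumb).
for bits in (6, 8, 12, 16):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit audio: ~{dr:.0f} dB of dynamic range")
```

By this measure, 8-bit audio gives roughly 48 dB, 12-bit roughly 72 dB, and the camera's 16-bit recording roughly 96 dB, which is where the extra headroom for post-processing comes from.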
Thread: Tech Talk: Why Every Bit Counts
01-18-2006 03:32 PM