cbrandin

06-23-2010, 12:01 PM

I think a short tutorial on how to test electronic devices might be in order.

First of all, you can’t really test for success. It amounts to trying to prove a negative – that no errors exist. If you wanted to be confident that you will not get errors in 1000 hours of shooting, you would have to shoot 1000 hours and hope for the best. This is not viable. Failures almost never go away completely; they just become more improbable – hopefully so improbable that they don’t matter. This is not the problem you might suppose it is, because once a particular component’s reliability exceeds that of the rest of the system, you are essentially there.

There is a way to get around this empirical testing problem, though. Failures usually follow some kind of Gaussian distribution (half of a bell curve). Let’s say you have a test that fails after 5 seconds. You back off on a parameter and the failure only happens after 10 seconds. Back off a little more and it only fails after 30 seconds. After you have gathered several data points you can start getting an idea of how these failures fit on a distribution curve. Then you can extrapolate that out to whatever reliability level you want.
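The extrapolation step can be sketched numerically. This is just an illustration, not the exact math any manufacturer uses: the stress values and failure times below are invented, echoing the 5/10/30-second example, and the model assumed here is that time-to-failure grows exponentially as the stressed parameter is backed off, so a straight line is fit to log(time-to-failure) and extended to the nominal operating point.

```python
import math

# Hypothetical stress-test data (invented numbers): a stressed
# parameter value vs. the observed time until a failure appeared.
stress = [1.5, 1.4, 1.3]       # stressed parameter settings
ttf    = [5.0, 10.0, 30.0]     # seconds to first failure at each setting

# Assume time-to-failure grows exponentially as stress backs off,
# so fit log(ttf) = a + b*stress with ordinary least squares.
n = len(stress)
y = [math.log(t) for t in ttf]
mx, my = sum(stress) / n, sum(y) / n
b = (sum((x - mx) * (v - my) for x, v in zip(stress, y))
     / sum((x - mx) ** 2 for x in stress))
a = my - b * mx

# Extrapolate to the nominal (unstressed) operating point.
nominal = 1.0
predicted_ttf = math.exp(a + b * nominal)
print(f"predicted time between failures at nominal stress: {predicted_ttf:.0f} s")
```

With these made-up points the fit predicts a time between failures of a few hundred seconds at the nominal setting – far longer than anything that was actually observed in test, which is the whole point of the technique.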

Now, this is a bit of an oversimplification, because you have to integrate the failure characterizations of all components (or parameters, as the case may be) and factor in interactions between them to develop an overall reliability statistic. If you’ve ever wondered how a manufacturer can claim a mean time between failures (MTBF) of years without testing for years, this is basically how it is done. That, and stressing environmental factors.

Chris
