Accuracy: The Good, Bad, And Ugly

The subjective nature of accuracy allows for a wide variety of claims that may technically be true, but misleading at the same time.

Measurement accuracy is a subjective concept applied to all kinds of measurements. Generally speaking, accuracy refers to how close a measurement is to a true value. This definition is usually good enough, but not always, and that subjectivity leaves room for claims that are technically true yet misleading.

Let's consider this in the context of measuring relative humidity (RH). RH is expressed as a percentage, so the possible range is 0 to 100 percent. Manufacturers of RH measurement equipment typically specify accuracy as plus or minus some number of percent RH. For example, one may say a certain device is accurate to ±3 percent RH. Due to the competitive nature of this business, the maker of a different device may claim to be more accurate at ±2 percent RH. Superficially, with all other things being equal, the ±2 percent device would appear to be better. Beware, though: that is not always the case, and the claim can be misleading until you dig deeper.

Measurement Characteristics

Accuracy specifications come in many forms. For example, an accuracy figure may or may not include other measurement characteristics. One such characteristic is hysteresis: the difference in the measured value when the true value is approached from above versus from below. If a device has a lot of hysteresis, it can be left out of the accuracy specification and justified by reporting accuracy based on measurements that always approach from one direction. The resulting figure may be technically accurate, but it is misleading because it ignores a significant element of the measurement's performance.
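
To see why direction matters, here is a minimal, purely hypothetical sketch (the readings are invented) of how hysteresis might show up in practice:

```python
# Hypothetical readings from one RH sensor at a true value of 50 %RH,
# approached from below (rising humidity) and from above (falling humidity).
reading_rising = 49.0    # %RH, true value approached from a lower value
reading_falling = 51.5   # %RH, true value approached from a higher value

hysteresis = reading_falling - reading_rising
print(f"Hysteresis: {hysteresis:.1f} %RH")   # 2.5 %RH

# A spec quoted only from rising-direction tests could claim roughly +/-1 %RH,
# even though the same sensor reads 1.5 %RH high when approached from above.
```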

The Concept of True Value

Another issue with measurement accuracy is the concept of true value. When a device is calibrated, it is compared to a reference standard that can be considered to be the true value. However, all reference standards embody some imperfection. There is always some variation from the true value that we hope to achieve.

What if the variation in one reference standard is different from another standard? In this case, a measurement device calibrated and adjusted to one standard may achieve its stated accuracy, but when compared to a different standard, it could be out of tolerance. This is where the concept of measurement uncertainty becomes helpful.

Measurement Uncertainty

A simple (and incomplete) explanation of measurement uncertainty is that multiple measurements made in the same way with one device are never precisely the same. As a result, the measurement device is likely to provide a range of values centered around the true value or offset from it. Similarly, all reference standards vary from the true value in some way. Because the reference standard is never precisely the true value, its variation has to be considered when specifying the overall performance of any given measurement device.
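
As a purely illustrative sketch (the readings are invented), the spread described above could be summarized with the mean and standard deviation of repeated readings:

```python
import statistics

# Hypothetical repeated readings from one device held at a constant, true 50.0 %RH.
readings = [50.3, 49.8, 50.6, 50.1, 49.9, 50.4, 50.2, 50.0]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)   # sample standard deviation

print(f"Mean reading:       {mean:.2f} %RH")    # ~50.16 %RH, offset from the true value
print(f"Standard deviation: {spread:.2f} %RH")  # ~0.27 %RH, the spread of repeated readings
```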

When using measurement uncertainty, it is possible to say that the uncertainty (variability) of the reference standard and the process of calibration is a specific value, such as ±0.5 percent RH. This can be statistically combined with the instrument accuracy to arrive at a range of measurement performance that is likely to be correct 95 percent of the time. This value is always bigger than the accuracy of the measurement device, regardless of how accuracy is defined. (Keep in mind that this is in the calibration laboratory, not in the real world.)
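
A real uncertainty budget follows documents such as the GUM (Guide to the Expression of Uncertainty in Measurement) and works with probability distributions and coverage factors, but as a simplified, hypothetical sketch, one common approach is to combine contributions as a root sum of squares:

```python
import math

# Hypothetical figures, both treated as expanded (roughly 95 percent) values
# for simplicity -- a real budget would work with standard uncertainties.
instrument_accuracy = 2.0       # %RH, the manufacturer's stated accuracy
calibration_uncertainty = 0.5   # %RH, reference standard plus calibration process

combined = math.sqrt(instrument_accuracy**2 + calibration_uncertainty**2)
print(f"Combined performance: +/-{combined:.2f} %RH")   # ~2.06 %RH, larger than the 2.0 %RH spec
```

Even in this simplified form, the combined value comes out larger than the accuracy specification alone, which is exactly the point made above.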

Additional Uncertainties

Measurement uncertainty actually applies to individual measurements. In the real world, calibration uncertainty and device accuracy are not the only influences on a specific measurement. Additional factors may include environmental conditions (different from those in the calibration laboratory), operator error, inconsistent methodology between operators and unknown additional variables. These and other real-world uncertainties can, if they are known, be statistically factored into the overall measurement performance. Again, the total value of uncertainty increases.
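
Continuing the same hypothetical sketch, additional real-world contributions (all values invented) can be folded into the root sum of squares, and the total grows again:

```python
import math

# Hypothetical real-world uncertainty budget (all values invented, in %RH).
contributions = {
    "instrument accuracy":          2.0,
    "calibration uncertainty":      0.5,
    "environment vs. cal lab":      0.8,
    "operator / method variation":  0.6,
}

total = math.sqrt(sum(u**2 for u in contributions.values()))
print(f"Total: +/-{total:.2f} %RH")   # ~2.29 %RH, larger again than the accuracy spec alone
```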

There is more. Returning to the concept of the combined calibration uncertainty and device accuracy, consider that this may vary when the value of the reference standard is adjusted to achieve multiple calibration points. For example, a generated RH value of 20 percent RH at 25°C may have less uncertainty than a generated value of 80 percent RH at 40°C. Similarly, performance of the measurement device may change at the extreme ends of its measurement range. If known, this goes into the uncertainty budget. Total measurement uncertainty almost always increases when devices are used at the extremes of their operating range.
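
As a purely illustrative sketch (values invented), an uncertainty budget might carry different calibration uncertainties for different calibration points and apply the one closest to the conditions of use:

```python
# Hypothetical calibration uncertainties keyed by (generated %RH, temperature in deg C).
uncertainty_by_point = {
    (20.0, 25.0): 0.5,   # +/- %RH
    (80.0, 40.0): 1.0,   # +/- %RH, larger at the hotter, more humid point
}

# A budget for readings taken near 80 %RH at 40 deg C should use the larger value,
# so total measurement uncertainty grows toward the extremes of the range.
point_in_use = (80.0, 40.0)
print(f"Applicable calibration uncertainty: +/-{uncertainty_by_point[point_in_use]} %RH")
```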

Let's add an additional complication. Measurement uncertainty, as described above, provides a statistical probability as to how often the measurement is within specification. If this value is 95 percent, what about the other 5 percent of measurements? It's possible to use a different statistical model to achieve 99 percent probability, but once again, the total value of uncertainty increases even more. In fact, this value is likely to be substantially greater than the accuracy that we started with, perhaps by multiples.
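
As a hedged illustration (assuming an approximately normal distribution and an invented combined value), moving from roughly 95 percent to roughly 99 percent coverage means multiplying the combined standard uncertainty by a larger coverage factor, so the stated range widens:

```python
# Assuming an approximately normal distribution, standard coverage factors are
# about k = 2 for ~95 percent confidence and k = 2.58 for ~99 percent.
combined_standard = 1.15   # %RH, hypothetical combined standard uncertainty

print(f"~95 percent: +/-{combined_standard * 2.0:.2f} %RH")    # +/-2.30 %RH
print(f"~99 percent: +/-{combined_standard * 2.58:.2f} %RH")   # +/-2.97 %RH
```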

The takeaway here is that accuracy never tells the entire story about measurement performance. If measurement performance is critical, scrutinize the device and manufacturer's specifications, and ask questions about anything that is unclear or seems inadequately defined.

___

For more information, please visit www.vaisala.com, www.ncsli.org and www.bipm.org (PDF).