One thing we notice when supplying and calibrating instruments is that there is often confusion about accuracy specifications. This article seeks to clarify some key points.
The key points are as follows:
No instrument is totally accurate. Accuracy specifications aim to define how close to the true value each instrument can be expected to read.
Error is defined as the difference between the ‘true value’ and the measured value. If we take 5 measurements of (for example) the length of a piece of string, we’d probably get 5 slightly different figures, hopefully centered around the true value.
The more measurements we take and average, the smaller the random error becomes. Hence the old expression, “measure twice, cut once”. If plotted on a graph, the errors will form a ‘Bell curve’ in which about 68% of the results fall within one standard deviation of the mean. In actuality there are several types of error distribution, but going deeper into this is beyond the scope of this paper.
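To make this concrete, here is a minimal Python sketch (our own illustration, not from the article’s data) that simulates repeated measurements with normally distributed random error. The true length and the standard deviation are hypothetical; the point is that the mean of many readings settles near the true value, and roughly 68% of individual readings fall within one standard deviation.

```python
import random

TRUE_LENGTH_MM = 250.0  # hypothetical true length of the piece of string
SIGMA_MM = 1.5          # hypothetical standard deviation of a single reading

def measure():
    # One measurement = true value + normally distributed random error.
    return random.gauss(TRUE_LENGTH_MM, SIGMA_MM)

readings = [measure() for _ in range(1000)]
mean = sum(readings) / len(readings)

# For a normal ("Bell curve") distribution, about 68% of readings
# fall within one standard deviation of the true value.
within_one_sigma = sum(abs(r - TRUE_LENGTH_MM) <= SIGMA_MM for r in readings)

print(f"mean of {len(readings)} readings: {mean:.2f} mm "
      f"(true value {TRUE_LENGTH_MM} mm)")
print(f"within one standard deviation: {within_one_sigma / len(readings):.0%}")
```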
One of the limits on any measurement is resolution. For example, if 1 mm is the smallest graduation on a ruler, we have to decide how finely we can subdivide that graduation to make an informed reading: can we read to half or a quarter of a graduation to get a closer reflection of the actual item we wish to measure? When measuring, we want to be able to distinguish the smallest meaningful change in the measured value.
Another example is a watch with no second hand: the smallest graduation is 1 minute, so if the time needs to be read to better than ± 1 minute, we have to increase the resolution by reading between the graduations.
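As an illustration, this sketch (our own, with hypothetical numbers) quantises a true value to an instrument’s resolution: the displayed value is the true value rounded to the nearest graduation, so the worst-case quantisation error is half a graduation.

```python
def read_instrument(true_value, resolution):
    # The scale can only show multiples of its resolution, so the
    # reading is the true value rounded to the nearest graduation.
    return round(true_value / resolution) * resolution

true_length_mm = 72.3  # hypothetical true dimension
for resolution in (1.0, 0.5, 0.25):  # ruler read to 1, 1/2, 1/4 graduation
    reading = read_instrument(true_length_mm, resolution)
    print(f"resolution {resolution} mm -> reading {reading} mm, "
          f"worst-case quantisation error +/- {resolution / 2} mm")
```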
Accuracy describes the error between the true value and the measured value.
Precision describes the random spread of measured values around the average measured value.
It is often easier to manage a precise instrument than an imprecise one, because applying a correction factor lets you improve the accuracy of a precise instrument. For example, the speedo in one of the Homersham company cars consistently reads 5 km/h faster than actual at 100 km/h (using a GPS as the reference). The key is that this is ALWAYS so. So we can say that the precision is very good, but the accuracy is poor. A minimal sketch of such a correction follows below.
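Because the offset is consistent, it can be corrected in a single line. This sketch is our own illustration; the 5 km/h figure comes from the example above, and a real correction might vary with speed.

```python
# The speedo consistently over-reads by 5 km/h (the figure from the
# example above), so subtracting a fixed offset recovers the actual
# speed. A real correction would be derived by comparison against a
# reference such as GPS, and might not be constant across the range.
SPEEDO_OFFSET_KMH = 5.0

def corrected_speed(indicated_kmh):
    return indicated_kmh - SPEEDO_OFFSET_KMH

print(corrected_speed(105.0))  # indicated 105 km/h -> roughly 100 km/h actual
```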
This is the specification sheet for a Center 300 thermometer. We’ve highlighted some of the relevant data.
And then for comparison, also attached is the specification sheet for another thermometer, the Center 370.
Because the Center 370 is supplied with a probe, note that there is a separate specification sheet for the probe, however….
In BOTH cases, the specification sheet for the instrument refers ONLY to the instrument and not the probe attached. Hence if accuracy is important to your process, it’s important to allow for the accuracy of BOTH the instrument AND the probe.
So let’s look at a live example. Our customer has a requirement to measure temperature to within ± 0.5 °C at 50 °C. Is either of these thermometers suitable? A rough check against the specifications is sketched below.
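One way to answer on paper is to combine the instrument and probe specifications. The figures in this sketch are hypothetical placeholders (the real values must come from the two specification sheets); a conservative approach simply adds the two accuracy figures.

```python
# Hypothetical placeholder figures -- substitute the real values from
# the Center 300 / Center 370 specification sheets and the probe sheet.
INSTRUMENT_ACCURACY_C = 0.3  # +/- deg C, instrument alone (hypothetical)
PROBE_ACCURACY_C = 0.75      # +/- deg C, probe alone (hypothetical)
REQUIREMENT_C = 0.5          # customer requirement: +/- 0.5 deg C at 50 deg C

# Conservative worst case: the two specified errors simply add.
combined_worst_case = INSTRUMENT_ACCURACY_C + PROBE_ACCURACY_C

print(f"combined worst-case accuracy: +/- {combined_worst_case} deg C")
print(f"meets +/- {REQUIREMENT_C} deg C on spec alone? "
      f"{combined_worst_case <= REQUIREMENT_C}")
```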
Judged on the specification sheets alone, it might appear not. But CALIBRATION NEGATES THE NEED FOR THIS WORST-CASE CHECK: the specification represents the envelope of performance, so it shows a “worst-case scenario”.
For example, below are the test results for an ACTUAL calibration of a Center 370 thermometer.
Note that its actual measured performance is often far better than the worst-case scenario that a conservative specification shows. This again relates to the Bell curve above: most instruments will achieve really good results, perhaps within the first shaded area of the Bell curve, but a few might fall at the outer extremes and have errors that approach the specification.
So using the certificate of calibration below we can see that this thermometer does far, far better than the requirement of ± 0.5 °C maximum error (at least across our customer’s working range where the tests were done).
The results of an IANZ calibration of the Center 370 show that its corrections were < ± 0.2 °C, which meant the instrument was easily within the customer’s required specification.
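Once an instrument is calibrated, the certificate’s corrections can be applied to readings taken in service. The sketch below is our own illustration with made-up correction values (consistent with the < ± 0.2 °C result above, but not taken from the actual certificate); it linearly interpolates between calibration points to correct a reading.

```python
# Hypothetical calibration points: (indicated deg C, correction deg C).
# Real values would be taken from the IANZ calibration certificate.
CAL_POINTS = [(30.0, +0.10), (50.0, -0.15), (70.0, +0.05)]

def correction(indicated):
    # Linear interpolation between the bracketing calibration points;
    # outside the calibrated range, clamp to the end corrections.
    if indicated <= CAL_POINTS[0][0]:
        return CAL_POINTS[0][1]
    if indicated >= CAL_POINTS[-1][0]:
        return CAL_POINTS[-1][1]
    for (x0, c0), (x1, c1) in zip(CAL_POINTS, CAL_POINTS[1:]):
        if x0 <= indicated <= x1:
            frac = (indicated - x0) / (x1 - x0)
            return c0 + frac * (c1 - c0)

reading = 48.7  # an indicated reading near the customer's 50 deg C point
print(f"indicated {reading} deg C -> "
      f"corrected {reading + correction(reading):.2f} deg C")
```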