Accuracy, resolution and repeatability: the common pitfalls
The accuracy, resolution and repeatability of a position transducer are critical selection factors, but often cause confusion amongst engineers. Mark Howard, General Manager at Zettlex, explains some of the terminology and common misconceptions surrounding this.
Perhaps you were absent from college the day instrumentation theory was being taught. But if you don't understand accuracy, resolution, repeatability and all that stuff, you are in good company: many engineers have either forgotten this area of engineering or never really understood it in the first place.
The terminology and fairly esoteric technical concepts applied to instrumentation can be confusing. Nevertheless, they are crucial in selecting the right measuring instruments for an application - especially for position and speed transducers. Get the selection wrong and you could end up paying way over the odds for over-specified transducers. Conversely, your product or control system may lack critical performance if the position or speed sensor does not meet the specification.
First, some important definitions: an instrument's accuracy is a measure of its output's veracity; an instrument's resolution is a measure of the smallest increment or decrement in position that it can measure; a position measuring instrument's precision is its degree of reproducibility; and a position measuring instrument's linearity is a measure of the deviation between the transducer's output and the actual displacement being measured. Most engineers get their knickers in a twist about the differences between precision and accuracy. Using the analogy of an arrow fired at a target, accuracy describes the closeness of an arrow to the bullseye. If many arrows are fired, precision equates to the size of the arrow cluster. If all the arrows are grouped tightly together, the cluster is considered precise, wherever it sits on the target. A perfectly linear measuring device is also perfectly accurate.
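The arrow analogy can be made numerical: accuracy relates to the mean error of a set of repeated readings, precision to their spread. A minimal sketch, using invented readings purely for illustration:

```python
import statistics

true_position = 100.0  # mm - the "bullseye"

# Hypothetical repeated readings from two transducers measuring the same position
precise_but_inaccurate = [101.21, 101.19, 101.20, 101.22, 101.18]  # tight cluster, off-target
accurate_but_imprecise = [99.40, 100.70, 100.10, 99.80, 100.05]    # scattered around target

for name, readings in [("precise but inaccurate", precise_but_inaccurate),
                       ("accurate but imprecise", accurate_but_imprecise)]:
    mean_error = statistics.mean(readings) - true_position  # accuracy: closeness to the bullseye
    spread = statistics.stdev(readings)                     # precision: size of the cluster
    print(f"{name}: mean error = {mean_error:+.2f} mm, spread = {spread:.2f} mm")
```

The first transducer is precise (spread of about 0.02mm) but inaccurate (offset of 1.2mm); the second is roughly accurate on average but far less repeatable.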
Accuracy or precision
So, that's pretty straightforward then - just specify very accurate and very precise measuring instruments every time and you'll be okay. Unfortunately, there are some big snags with such an approach. First, high accuracy, high precision instrumentation is always expensive. Second, high accuracy, high precision instrumentation may require careful installation and this may not be possible due to vibration, thermal expansion/contraction, etc. Third, certain types of high accuracy, high precision instrumentation are also delicate and will suffer malfunction or failure if there are changes in environmental conditions - most notably temperature, dirt, humidity and condensation.
The optimal strategy is to specify what is required - nothing more, nothing less. In a displacement transducer in an industrial flow meter for example, linearity will not be a key requirement because it is likely that the fluid's flow characteristics will be non-linear. More likely, repeatability and stability over varying environmental conditions are the key requirements. In a CNC machine tool, it is likely that accuracy and precision will be key requirements. Therefore, a displacement measuring instrument with high accuracy (linearity), resolution and high repeatability even in dirty, wet environments over long periods without maintenance, are key requirements. A good tip is always to read the small print of any measuring instrument's specification - especially about how the claimed accuracy and precision varies with environmental effects, age or installation tolerances.
Another useful tip is to find out exactly how an instrument's linearity varies. If this variation is monotonic or slowly varying, the non-linearity can easily be calibrated out using a few reference points. For a gap measuring device, for example, this could be achieved using some slip gauges. A rapidly varying measurement characteristic, however, might need more than 1,000 reference points to linearise. Such a process is unlikely to be practical with slip gauges, but it might be practical to build a lookup table by comparing readings against a higher performance reference device such as a laser interferometer.
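Lookup-table calibration of the kind described above can be sketched in a few lines: the raw readings taken at known reference positions define a table, and intermediate readings are corrected by linear interpolation. The calibration data here is invented for illustration; in practice it would come from slip gauges or a laser interferometer:

```python
import numpy as np

# Hypothetical calibration data: what the transducer reported at known reference positions
raw_readings = np.array([0.0, 10.3, 20.5, 30.4, 40.2, 50.0])    # transducer output (mm)
true_positions = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])  # reference positions (mm)

def linearise(reading):
    """Correct a raw reading via piecewise-linear interpolation of the lookup table."""
    return float(np.interp(reading, raw_readings, true_positions))

print(linearise(20.5))   # lands exactly on a reference point, so returns 20.0
print(linearise(25.45))  # halfway between the 20.5 and 30.4 entries, so returns 25.0
```

A monotonic characteristic needs only a handful of table entries; a rapidly varying one simply needs a much denser table, which is why the reference device rather than the method becomes the practical constraint.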
Optical encoders work by shining a light source onto or through an optical element - usually a glass disk. The light is either blocked or passes through the disk's gratings, and a signal analogous to position is generated. The glass disks have tiny features that allow manufacturers to claim high precision. What is often not made explicit is what happens if these tiny features are obscured by dust, dirt, grease, etc. In reality, even very small amounts of foreign matter can cause mis-reads. There is seldom any warning: the device simply stops working altogether - a 'catastrophic failure'. What is less well known is the issue of accuracy in optical encoders and optical encoder kits.
Consider an optical device using a 1in nominal disk with a resolution of 18 bits (256k points). Typically, the claimed accuracy for such a device might be ±10 arc-seconds. However, what should be in big bold print (but surprisingly never is) is that the stated accuracy assumes that the disk rotates perfectly relative to the read head and that temperature is constant. If we consider a more realistic example (Figure 3), the disk is mounted slightly eccentrically by 0.001in (0.025mm).
Sources of eccentricity
Eccentricity can arise from several sources, the most obvious being imperfect mounting of the disk relative to the read head.
A perfectly mounted optical disk requires such fine engineering that cost becomes prohibitive. In reality, there is a measurement error because the optical disk is not where the read head thinks it is. If we consider a mounting error of, say, 0.001in, then the measurement error is equivalent to the angle subtended by 0.001in at the optical track radius. To make the maths easy, let's assume the tracks are at a radius of 0.5in. This equates to an error of 2 milliradians, or 412 arc-seconds. In other words, a device with a specified accuracy of 10 arc-seconds is more than 40 times less accurate than its data sheet claims.
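The arithmetic in the paragraph above can be checked directly. Using the small-angle approximation, the angular error is simply the offset divided by the track radius:

```python
import math

offset_in = 0.001        # disk mounting eccentricity (inches)
track_radius_in = 0.5    # radius of the optical track (inches)

# Angle subtended by the offset at the track radius (small-angle approximation)
error_rad = offset_in / track_radius_in
error_arcsec = math.degrees(error_rad) * 3600

print(f"error = {error_rad * 1000:.1f} mrad = {error_arcsec:.1f} arc-seconds")
# -> 2.0 mrad, about 412 arc-seconds: over 40x the claimed +/-10 arc-second accuracy
```

The error scales linearly with the offset, so doubling the eccentricity doubles the error - which is why the installation tolerance, not the disk's grating count, often dominates real-world accuracy.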
If you can position an optical disk accurately to within one thousandth of an inch (0.001in), you are doing really well. Realistically, you are more likely to be in the range of 2-10 thousandths of an inch, so the actual accuracy will be 80-400 times worse than you might have originally calculated.
The measurement principle of a resolver or a new generation inductive device is completely different. Measurement is based on the mutual inductance between the rotor (the disk) and the stator (the reader). Rather than calculating position from readings taken at a single point, measurements are generated over the full face of both the stator and the rotor. Consequently, discrepancies caused by non-concentricity on one side of the device are cancelled by equal and opposite effects on the other side. The headline figures for resolution and accuracy are often not as impressive as those for optical encoders, but what matters is that this measurement performance is maintained across a range of non-ideal conditions.
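The cancellation effect can be illustrated with a toy model. To first order, a read point displaced by eccentricity e at track radius r sees an angular error of roughly (e/r)·sin(θ), so a diametrically opposed read point sees an equal and opposite error. This sinusoidal error model is a simplification for illustration, not any manufacturer's formula:

```python
import math

ecc = 0.025      # eccentricity (mm), the 0.001in example from earlier
radius = 12.7    # track radius (mm), roughly 0.5in

def single_point_error(theta):
    """First-order angular error (radians) seen by one read point at angle theta."""
    return (ecc / radius) * math.sin(theta)

theta = math.radians(30)
e1 = single_point_error(theta)             # one read point
e2 = single_point_error(theta + math.pi)   # diametrically opposed read point
averaged = (e1 + e2) / 2                   # full-face / opposed sensing

print(f"single point: {e1 * 1e3:.3f} mrad, averaged: {averaged * 1e3:.9f} mrad")
```

The single-point error is about 1 mrad, while the averaged error collapses to essentially zero: the first-order eccentricity term cancels when the whole face is sensed rather than one point.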
The quoted measurement performance of some of the new generation inductive devices is not based on perfect alignment of the rotor and stator; instead, realistically achievable tolerances (typically ±0.2mm) are accounted for in the quoted resolution, repeatability and accuracy figures. Furthermore, the stated performance of inductive devices is not subject to variation due to foreign matter, humidity, lifetime, bearing wear or vibration.