Resolution and Accuracy (App Note)
Resolution is Not the Same as Accuracy
Resolution describes how small a change you can detect in a measurement, while accuracy describes the difference between a measurement and the actual value. For example, you might make a thermocouple measurement where you can detect temperature changes of 0.1 degrees, and thus have a resolution of 0.1 degrees, while the accuracy of the system might only be ±2.0 degrees. If you get two readings of 49.9 and 50.0 degrees, you know the temperature changed by 0.1 degrees, but you only know that the actual temperature is somewhere in the range of 48.0 to 52.0 degrees.
For more about resolution, see the "Noise and Resolution" application note.
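As a quick illustration of the thermocouple example above, the following sketch (plain Python, with the example values from this note) shows what each spec tells you:

```python
def accuracy_band(reading, accuracy):
    """Return the (low, high) range the true value could lie in."""
    return reading - accuracy, reading + accuracy

# Two readings 0.1 degrees apart: the change is resolvable...
change = 50.0 - 49.9
# ...but the true value of the 50.0-degree reading is only known to +/-2.0 degrees.
print(change)                      # ~0.1 (ignoring float rounding)
print(accuracy_band(50.0, 2.0))    # (48.0, 52.0)
```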
Most devices have resolution that is finer than their accuracy. Often, the resolution limit is more important than the accuracy limit:
- Many times the relative difference between 2 values is more important than the absolute value of each.
- If doing your own calibration, which is often required to account for all the different errors in a system, the calibration is typically limited only by resolution and linearity, and even linearity errors can be minimized by calibrating across a range smaller than full span (a minimal two-point example follows this list).
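Here is a minimal sketch of such a user calibration, assuming two reference points measured near the ends of the range you care about (all numeric values below are made up for illustration):

```python
def two_point_cal(raw_lo, raw_hi, ref_lo, ref_hi):
    """Return (slope, offset) that maps raw device readings to reference values."""
    slope = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - slope * raw_lo
    return slope, offset

# Hypothetical readings of 0.5000 V and 2.5000 V references on the device under test.
slope, offset = two_point_cal(0.4978, 2.5031, 0.5000, 2.5000)
corrected = slope * 1.2345 + offset    # apply the correction to later readings
```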
Noise and Accuracy
DMMs often specify accuracy as a percent of the reading plus some fixed value. For example, they might say ±0.005% of reading ±0.000400 volts. The spec for our devices is simply a percent of full span. For example, with the T7's ±0.1V range the full span is 0.2V, so the ±0.01% accuracy in terms of volts is ±0.0001 * 0.2V = ±0.000020 volts (±20µV); a small conversion sketch follows the list below. The calibration we do on the U6 & T7 is limited by a few things:
- Noise of the calibration source. To get a signal with low enough noise to calibrate the 100mV and 10mV ranges, we need to use a specially modified LJTick-DAC.
- Accuracy and noise of the reference device. We use the HP 34401 and are pushing its limits to calibrate the U6/T7 family.
- Linearity of the U6/T7. 0.01% is the linearity limit, so don't expect to do your own tighter calibration and have it hold across the entire span. You could do a tighter calibration across a smaller portion of the span, but not across the entire span.
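The percent-of-span arithmetic above can be written as a small helper (a sketch; the range and spec values are simply the examples used in this note):

```python
def span_accuracy_volts(range_v, percent_of_span):
    """Convert a percent-of-full-span accuracy spec to volts for a +/-range_v input range."""
    full_span = 2 * range_v                 # e.g. the +/-0.1 V range spans 0.2 V
    return full_span * percent_of_span / 100.0

print(span_accuracy_volts(0.1, 0.01))       # 2e-05 V, i.e. +/-20 uV
print(span_accuracy_volts(10.0, 0.01))      # same formula applied to a +/-10 V range: 0.002 V
```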
Device Calibrations
Many devices specify a time period for their calibration. Does this mean the manufacturer will re-calibrate for free if the device goes out of spec before that time, or that they have done testing showing that a high percentage of devices stay within spec for that period? In any case, our opinion is that once a year is the industry-standard interval for re-calibration, and it is our recommendation when a calibration paper trail is needed for regulatory compliance. A more practical view is that no parts in the signal chain have an expected drift over time, so in reality change is likely dominated not by time but by physical and environmental stress. We would expect things like temperature swings, physical bending, and drops to have the most effect on the signal chain.
On the U6 and T7, a voltage of 0.0 is actually the midpoint internally. For example, on the ±0.1V range, that span is presented to an internal instrumentation amplifier that produces a ±10V output, which is then presented to an op-amp circuit that shifts ±10V to 0-2.5V for the ADCs. That means that if the analog input calibration changes at all, the change will very likely show up in a reading at 0.0, so if you want to monitor for change you can monitor the value of the internal ground channel AIN15.
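A minimal monitoring sketch, assuming a T7 and the LJM Python package (labjack-ljm); per the note above, AIN15 reads internal ground, so its average should stay near 0.0 V:

```python
from labjack import ljm

handle = ljm.openS("T7", "ANY", "ANY")               # open the first T7 found
ljm.eWriteName(handle, "AIN15_RANGE", 0.1)           # tightest range for the most sensitive check
ljm.eWriteName(handle, "AIN15_RESOLUTION_INDEX", 8)

readings = [ljm.eReadName(handle, "AIN15") for _ in range(100)]
average = sum(readings) / len(readings)
print("AIN15 average: %.6f V" % average)             # log this over time; drift suggests a calibration change

ljm.close(handle)
```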
One other thing you can do with the -Pro versions only (U6-Pro and T7-Pro) is compare the 2 converters to watch for problems with accuracy or linearity. Jumper a DACx (0-5V) or LJTick-DAC (±10V) output to some analog input, then grab a batch of readings from that channel using both converters (Resolution Index 8 and 9). If the readings differ from each other by too much, you know there is a problem. This will detect problems with one ADC, but not problems with the rest of the signal chain.
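A sketch of that cross-check on a T7-Pro with LJM, assuming DAC0 is jumpered to AIN0 (the channel names and any pass/fail threshold are up to you):

```python
from labjack import ljm

handle = ljm.openS("T7", "ANY", "ANY")
ljm.eWriteName(handle, "DAC0", 2.5)                  # steady test voltage on the jumpered output

def read_avg(res_index, count=20):
    """Average several readings of AIN0 at the given resolution index."""
    ljm.eWriteName(handle, "AIN0_RESOLUTION_INDEX", res_index)
    return sum(ljm.eReadName(handle, "AIN0") for _ in range(count)) / count

fast_adc = read_avg(8)     # highest setting on the 16-bit converter
hires_adc = read_avg(9)    # lowest setting on the 24-bit high-res converter
print("16-bit: %.6f V  24-bit: %.6f V  difference: %.6f V"
      % (fast_adc, hires_adc, fast_adc - hires_adc))

ljm.close(handle)
```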