Process variables and the art of calibrating instruments

Designing and validating procedures for device calibration.

By Jim McCarty, Optimation


These are good examples of what you might miss at first, because it's difficult to run a quick experiment on how they affect the output; you probably can't directly control ambient humidity or barometric pressure. In this case, the air supply (compressed nitrogen or highly filtered, dried air versus dirty air) and the water supply (deionized water versus city water) are also considerably different, to name a couple more. This, of course, is an extreme example, but even a 10° shift in ambient temperature between environments may cause differences in sensor outputs.

The same model DAQ device used on the test system should be used for calibration. Ideally, the exact same device should be used, but there are many situations where that would be impossible or highly impractical, so I wouldn't call it a strict requirement, only a preference when it's possible and worth it; you can include this in your initial investigation of things that affect the measurement. The reason is that it can be difficult to replicate the measurement method between two dissimilar devices. Wiring differences to the DAQ device may be obvious; for example, you might use a higher-gauge (thinner) wire on the test system. You might think this shouldn't affect the output of a current sensor, and in most cases you'd be right, but what if the sensor struggles to output 20 mA at the upper end because the input power isn't quite high enough to drive that current through the test system's thinner wiring, while that same input power is sufficient during calibration, where you're using thick wires?

Building off of that idea and circling back to using identical DAQ devices, how do you know that settings such as excitation voltage are the same? Many settings aren't obvious in the software, or are hidden entirely, so most people don't, or can't, look at them and will probably only dig into them if there's a problem. You can spend time scrutinizing these settings, but something can still slip through. For example, even if you set everything you can possibly find equal between your two DAQ devices, how do you know one of them isn't doing some sort of hardware filtering?

The moral of the story is to keep as much as you can the same between your calibration and test system.

Pass/fail criteria

Now we get into the implementation of your calibration procedure and, perhaps most importantly, establishing pass/fail criteria, an often overlooked but essential part of a thorough calibration process.

Now that you’ve looked into almost everything you could think of that does or doesn’t affect your sensor outputs, you’ve put together the procedure yourself, and you have become Grand Master Ninja Champion of instrumentation skid calibration, how will you be assured that, say, a newly hired tech will perform your calibrations correctly, having not gone through these same rigorous steps?

A key part of this is the establishment of pass/fail criteria. You've built calibration capabilities into your software, but, with no pass/fail criteria, a tech can connect a pressure transducer, click go, let the DUC sit on a table while the software runs its sequence and generates a sensitivity and offset, and be done. Junk numbers will be generated, but they're numbers nonetheless, and they'll work in the test software without meaning anything. Possibly worse, the calibration could be performed incorrectly in such a way that the calibration error is several percent: large enough to cause problems, but not large enough to be blatantly obvious at first, so you need a way to check for a bad calibration. Furthermore, the pass/fail criteria should be entirely data-driven, mostly so that the process is scalable and doesn't require constant human judgment.

Luckily, there are some simple statistical methods to help us interpret calibration data and separate the natural randomness (the drift and jitter we expect and have determined to be small enough to tolerate) from signs of potentially faulty sensors. First, let's run through how to generate sensitivity and offset, considering the case of a pressure transducer.

If you take your data, plot current (mA) on the x-axis and reference (actual) pressure on the y-axis, and do a least-squares fit of a first-degree polynomial to the data (fit a straight line through it), then the slope is your sensitivity in units of psig/mA and the intercept is your offset in psig (Figure 5). This means that, when you make a test measurement:

$$\text{pressure (psig)} = \text{sensitivity (psig/mA)} \times \text{measured current (mA)} + \text{offset (psig)}$$
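
As a concrete illustration, here's a minimal sketch of that fit in Python with NumPy. The current and reference-pressure values are made up for the example, and this isn't meant to represent the article's actual calibration software:

```python
# Minimal sketch: straight-line least-squares fit of reference pressure vs. current.
import numpy as np

# Calibration data: DAQ current readings (mA) and reference pressures (psig) - illustrative values
current_ma = np.array([4.02, 7.98, 12.01, 16.03, 19.97])
reference_psig = np.array([0.1, 39.9, 80.2, 120.1, 159.8])

# First-degree polynomial fit: pressure = sensitivity * current + offset
sensitivity, offset = np.polyfit(current_ma, reference_psig, deg=1)
print(f"Sensitivity: {sensitivity:.3f} psig/mA, offset: {offset:.2f} psig")

# Applying the calibration to a test measurement
measured_ma = 10.5
pressure_psig = sensitivity * measured_ma + offset
print(f"Measured pressure: {pressure_psig:.1f} psig")
```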

Your coefficient of determination, or R², should be noted, as well. This is a measure of how well you were able to fit the line to the calibration data, and is calculated as:

$$R^2 = \frac{\sum_i (f_i - \bar{y})^2}{\sum_i (y_i - \bar{y})^2}$$

where $y_i$ is an individual data point, $f_i$ is the calculated fit for that data point, and $\bar{y}$ is the average of all data points. If the fit goes through every data point, then:

$$f_i = y_i \;\text{for all}\; i \quad\Longrightarrow\quad \sum_i (f_i - \bar{y})^2 = \sum_i (y_i - \bar{y})^2 \quad\Longrightarrow\quad R^2 = 1$$

and this is indicative of a perfect linear fit. We want R² = 1, but in the real world, which isn't perfect, the two terms will never be exactly equal. Looking at the upper term, the variance of the straight line itself, we can treat it as nominal, since the sensitivity will be nominally the same for all sensors of a given type; it just is what it is. So we want the lower term, the variance of the data, to come as close as possible to the variance of the straight line. Hence, R² is one measure of how much of the data's variance can be attributed to the fact that the data should fall on a straight line, and we want all of its variance to be attributed to that fact.
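
Continuing the earlier sketch, the expression above can be computed directly from the fitted values. The data are the same made-up calibration points as before:

```python
# Self-contained sketch of the R^2 calculation as written above (illustrative data).
import numpy as np

current_ma = np.array([4.02, 7.98, 12.01, 16.03, 19.97])
reference_psig = np.array([0.1, 39.9, 80.2, 120.1, 159.8])
sensitivity, offset = np.polyfit(current_ma, reference_psig, deg=1)

fit_psig = sensitivity * current_ma + offset           # f_i, the fitted values
mean_psig = reference_psig.mean()                      # y-bar

ss_fit = np.sum((fit_psig - mean_psig) ** 2)           # variance of the straight line (upper term)
ss_data = np.sum((reference_psig - mean_psig) ** 2)    # variance of the data (lower term)

r_squared = ss_fit / ss_data
print(f"R^2 = {r_squared:.5f}")
```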

However, coming up with one gold-standard R² value that works for all data isn't possible; R² can be misleading. In Figure 6, each curve has the same fit error (the same error-generation function was used for both), but the red line's shallower slope means its data variance is smaller, which results in a lower R² value. This demonstrates why you will need to do some investigating to set a meaningful minimum R² value for your own sensors.
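
To see this effect without the figure, here's a small self-contained sketch along the same lines: both data sets get identical noise, but the shallower slope yields a noticeably lower R². The slopes and noise level are arbitrary choices for illustration, not values from the article:

```python
# Demonstration: identical fit error, different slopes, different R^2 values.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(4, 20, 20)
noise = rng.normal(0, 2.0, size=x.size)    # the same error added to both curves

for slope in (10.0, 1.0):                  # steep vs. shallow nominal sensitivity
    y = slope * x - 4 * slope + noise
    m, b = np.polyfit(x, y, 1)
    f = m * x + b
    r2 = np.sum((f - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"slope {slope:>4}: R^2 = {r2:.4f}")
```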

Now, let’s establish limits on the sensitivity and offset and also consider checking each individual calibration data point.

Start with the sensor data sheet. If it’s nominally a 0-160 psig sensor with a 4-20 mA output, then your nominal sensitivity and offset values will be 10 psig/mA and -40 psig, respectively. This means that you should be generating numbers close to these if you perform good calibrations.
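
For reference, those nominal values follow directly from the sensor's range and output span, assuming 4 mA corresponds to 0 psig and 20 mA corresponds to 160 psig:

$$\text{sensitivity} = \frac{160\ \text{psig} - 0\ \text{psig}}{20\ \text{mA} - 4\ \text{mA}} = 10\ \text{psig/mA}, \qquad \text{offset} = 0\ \text{psig} - 10\ \text{psig/mA} \times 4\ \text{mA} = -40\ \text{psig}$$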

The next part is optional, but it’s suggested as part of a thorough repeatability and reproducibility study.

Run calibrations on many of these sensors, if available, and run each sensor through several calibrations, recording the calculated sensitivities and offsets. This will give you the expected sensitivity and offset values and variances within the entire population of sensors, as well as how these values behave for an individual sensor—its natural randomness.
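
One way such a study might be summarized is sketched below. The serial numbers, sensitivity values, and the choice of a ±3-sigma pass band are illustrative assumptions, not numbers or criteria from the article:

```python
# Sketch: summarizing repeated calibrations to derive data-driven limits.
import numpy as np

# Calculated sensitivities (psig/mA) from several calibrations of several sensors (made-up data)
sensitivities = {
    "SN1001": [10.02, 10.03, 10.01],
    "SN1002": [9.97, 9.98, 9.99],
    "SN1003": [10.05, 10.04, 10.06],
}

# Population statistics across all sensors and all calibration runs
all_values = np.concatenate([np.asarray(v) for v in sensitivities.values()])
pop_mean, pop_std = all_values.mean(), all_values.std(ddof=1)

# Within-sensor spread: how much an individual sensor's result naturally varies between runs
within_std = np.mean([np.std(v, ddof=1) for v in sensitivities.values()])

# One possible pass/fail band for sensitivity: population mean +/- 3 standard deviations
low, high = pop_mean - 3 * pop_std, pop_mean + 3 * pop_std
print(f"Population: {pop_mean:.3f} +/- {pop_std:.3f} psig/mA")
print(f"Within-sensor std: {within_std:.4f} psig/mA")
print(f"Sensitivity pass band: {low:.3f} to {high:.3f} psig/mA")
```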
