Process variables and the art of calibrating instruments

Designing and validating procedures for device calibration.

By Jim McCarty, Optimation


If not, look for any way you can induce a known flow and measure the DUC via some unique system feature—for example, if you can let fluid at a known pressure flow through a choke of a known size. If you’re still out of luck, then your best options may be to bite the bullet and either pull the DUC out of the system or measure the flow with some reference unit next to the DUC. Both will likely involve breaking connections. The exception is using an ultrasonic flow sensor as the reference, which attaches to the outside of the pipe but can cost between $2,000 and $10,000. This cost can be pretty easily justified by calculating the cost of labor associated with making and breaking connections, if calibration is done frequently enough.

If a temperature sensor is stuck on a system, then thermal paste or grease is very helpful. Apply some to your reference, place it next to the DUC, and you can get an accurate reading. Need to make a second measurement at an elevated temperature? A cartridge heater with a thermocouple on the end of it may do the trick. If your DUC is stuck in some component of the system, then you need to do some head scratching. The reality is that you can’t measure temperature at a point in space that isn’t accessible, so there is no easy way out of validating a method for this one. However, there are close approximations that can be proven with data. For example, if a thermocouple is embedded in a ½-in plate, mount reference thermocouples on both sides of the plate and heat one side. How far off are the plate temperatures from each other? If they are within some acceptable tolerance, then you can place your reference on the surface of the plate to calibrate the DUC. From here on out, without getting into thermal modeling of the system, it boils down to trying things like this and seeing what will be a close-enough approximation.

The “brute force” approach to temperature calibration, which I usually equate to extra time and effort, would be to heat up the entire system while it’s idle, perhaps in an environmental chamber, and wait for thermal stabilization before making your elevated measurement. Establishing this wait time requires some investigation, for the sake of repeatability and reproducibility. A good place to start would be taking a test system at ambient temperature, putting it in an environmental chamber and recording the sensor temperatures vs. time. Uncalibrated signals are fine; you’re just looking for the signals to plateau. If your lab is set up for it, to get a worst-case stabilization time, try first stabilizing the test system at some considerably lower temperature, then put it in the hotter environmental chamber and determine the stabilization time. Just be aware of the possibility of thermal shock the hotter and colder you go when you try this. Because accuracy is fairly well guaranteed—all sensors are at the same temperature simultaneously—and little, if any, re-wiring or attaching/re-attaching is required, this certainly isn’t a terrible idea. Nonetheless, it requires the most time for thermal stabilization, and I highly doubt you’ll find an environmental chamber at the dollar store.
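The plateau check above is easy to automate once you’re logging sensor readings vs. time. Here’s a minimal sketch of one way to do it; the tolerance band, hold window and the first-order warm-up curve used to exercise it are all assumptions you’d replace with your own numbers and data.

```python
import numpy as np

def stabilization_time(t, y, tol, hold):
    """Return the first time at which y stays within +/-tol of its
    settled value for the rest of the record, or None if it never does.
    The settled value is estimated as the average of the last `hold`
    seconds of data."""
    settled = np.mean(y[t >= t[-1] - hold])
    inside = np.abs(y - settled) <= tol   # samples inside the tolerance band
    for i in range(len(t)):
        if inside[i:].all():              # inside the band from here onward
            return t[i]
    return None

# Hypothetical warm-up record: first-order rise toward 85 degC with a
# 600 s time constant, logged once per second for two hours
t = np.arange(0.0, 7200.0, 1.0)
y = 85.0 * (1.0 - np.exp(-t / 600.0))

print(stabilization_time(t, y, tol=0.5, hold=300.0))
```

On real data you’d want the raw signal lightly averaged first (see the noise discussion below), since a single noisy sample outside the band would otherwise push the reported stabilization time later than it should be.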

With the proper fixturing and the process being controlled in your software, you’ll be guaranteed good repeatability and reproducibility for your calibrations, which typically means the same for your measurements.

Stable conditions and stimuli

You need to be patient and aware of the nuances that can affect particular measurements. However, there are some important general pointers. It’s a good idea to at least be able to measure the output of a sensor continuously at a relatively high sampling rate for an indefinite amount of time in order to study your system.

Short-term stability: The high sampling rate will give you a good indication of the nature of your measurement noise, and measuring for a long time will yield information on stability. Noise is sometimes well understood (for example, 50/60 Hz common mode), and sometimes not. The key to understanding the nature of the noise unique to your test environment and sensors is to select a high enough sampling rate. If you zoom in on the time axis and autoscale the amplitude axis of just about any idle sensor (measuring a constant value) and the points appear disconnected, with no flow from one to the next, then the sampling rate is probably too low for the goals of this pseudo-study. Increase the sampling rate until the points appear to form a more natural-looking line, even if the line appears to go up and down at random. Next, look at the periodicity in the noise, imagining the dips and spikes as half cycles of a sine wave. If you were to take a guess at the minimum and maximum frequencies of these imaginary, superimposed half-sine cycles, what would they be?

Your raw sampling rate should be, at the very least, twice the maximum frequency of the noise, per the Nyquist Theorem. Your reporting sampling rate will be much lower—half of the minimum noise frequency, at most. One reported point will be the average of multiple raw points. This way, you’ll be assured that you’re taking the average across at least one “noise cycle,” and the data that you report won’t be so noisy. If you need to report data as quickly as possible, repeat this for measurements under a variety of conditions to find worst-case noise scenarios. You never know when or how noise will affect your measurement. For example, are there any semiconductor devices in your instruments or that affect your instrument readings? Noise in semiconductor circuits generally goes up with temperature, so repeating this at the upper end of the operating temperature range is normally a good idea.
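The raw-vs.-reporting-rate scheme is simple to express in code. The sketch below uses assumed numbers—a steady 2.5 V signal with 60 Hz mains pickup, a 1 kHz raw rate (comfortably above twice the 60 Hz noise frequency) and a 10 Hz reporting rate—so each reported point averages 100 raw points spanning six full noise cycles.

```python
import numpy as np

raw_rate, report_rate = 1000, 10        # Hz; assumed values
t = np.arange(0, 1, 1 / raw_rate)

# Idle sensor: constant 2.5 V plus 60 Hz common-mode noise
raw = 2.5 + 0.05 * np.sin(2 * np.pi * 60 * t)

# Each reported point is the average of raw_rate/report_rate raw points,
# i.e. the mean over 0.1 s = six complete 60 Hz noise cycles
block = raw_rate // report_rate
reported = raw[: len(raw) // block * block].reshape(-1, block).mean(axis=1)

print(raw.std(), reported.std())        # averaging collapses the noise
```

Because each averaging window covers a whole number of noise cycles, the periodic noise cancels almost completely; with a noise frequency that doesn’t divide evenly into the reporting period, the reduction is still large but not total, which is why the text suggests averaging across at least one full “noise cycle.”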

Long-term stability: Sampling indefinitely will give you an idea of what kinds of stabilization times are required. Think of slowly varying things that could possibly affect your measurements, and experimentally determine how much each actually does. Write down anything you can think of, even if it sounds stupid, that could possibly have an effect on the measurement.

For example, do you need to let your pressure transducers warm up in order for them to be accurate? Find out by electrically connecting to one that isn’t pressurized and waiting about 20 minutes while recording data. Look for any changes in the data—does the noise get better, get worse or stay about the same? Does the transducer output rise or fall and then stabilize at some value? What about if you repeat this with a transducer that’s attached to a pressurized vessel? If you’re using the transducer tree from the previous section and, starting at atmospheric pressure, pressurize the tree to 100 psig and record data the whole time, how long does the current signal take to stabilize? What if you used your system to tie all the transducers together at one pressure instead of the tree? Would you have to wait longer for stabilization because of the increased vessel volume? If you just tried that with a transducer that was off just before you started, try it again—does the behavior change if you’re using a transducer that’s already warmed up? What about doing everything with a flow meter? With a thermocouple?

Ask more questions like this, and then go out and get the answers. Theory can answer some questions that may pop up—for example, the answer to “do you have to wait longer for stabilization because of the increased vessel volume?” is a definite yes because of the laws of compressible flow—and guide you to some factors that are somewhat likely to have an effect. Remember what I said about semiconductors and temperature? Knowing this, wouldn’t you think you should at least look into instrument stabilization time after power-up? This will also weed out things that don’t matter. If you’re wondering whether thermocouple warm-up time has any effect, a little research on the Seebeck effect will show you that the thermocouple itself is never powered. However, if it’s converted to a 4-20 mA signal, the signal conditioners may require some warm-up time.

Stepping through these experiments will determine your instrument warm-up times; identify the stabilization period required at a condition such as pressure, flow or temperature to measure a data point; and also shed light on things that should be avoided during calibration, as well as during the measurement itself.

Test-range conditions and sensor linearity

A fundamental concept of numerical approximations is that you can make anything look linear by zooming in close enough on the x-axis. What this means is that you can plot just about anything vs. anything—for example, resistance vs. applied pressure, which is what’s going on at the device level in a pressure transducer—zoom in on a small range on the x-axis, and the points will form what appears to be a straight line. You can also do this with functions that are clearly not linear. If you plot y = sin(t) for 0 < t < 0.01, it will look like a straight line. You can fit a straight line to this range and come up with an equation (y = t, the small-angle approximation, assuming you’re working in radians), so we can say sin(t) ≈ t over this range (Figure 1).

However, expand the higher end of the domain to π/2, and it looks nowhere near linear. Furthermore, if we use the same y = x equation to predict the output of y = sin(t) in this range, you can see that, once we get past ~0.5, our error gets very large. You can go through the motions of fitting a straight line to this new data set (y = 0.692x + 0.101), but you’ll notice your R² value is pretty far off from 1 (0.967), and the line really doesn’t come close to the vast majority of data points over the whole range (Figure 2).
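You can reproduce this experiment in a few lines. Note that the fitted coefficients and R² depend on exactly how the interval is sampled, so the numbers land near, but not necessarily exactly on, the figures quoted above.

```python
import numpy as np

t = np.linspace(0, np.pi / 2, 200)
y = np.sin(t)

# Small-angle range: sin(t) and t agree to better than 1e-6 at t = 0.01...
err_small = abs(np.sin(0.01) - 0.01)

# ...but y = x is off by roughly 0.57 by the time t reaches pi/2
err_at_end = abs(np.sin(np.pi / 2) - np.pi / 2)

# Least-squares line over the full 0..pi/2 range, and its R^2
slope, intercept = np.polyfit(t, y, 1)
pred = slope * t + intercept
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

print(slope, intercept, r2)
```

Plotting `y` against `pred` over the full range makes the systematic miss obvious in a way the R² number alone doesn’t.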

To summarize, if we stay within some range, we can use a linear equation to accurately determine the output of the function, but if we wander too far outside of that range, that linear function may yield bad results compared to the actual value. This also highlights the importance of taking multiple points between the minimum and maximum. Would we have had any idea what the curve was doing in between these points? If we had only a couple in between, would this tell us that our linearity is off?

What if we fit a straight line to experimental data from, say, a sensor whose output is known to be linear over some large range, but only use data from a limited range? Look at plots of what the output of a nominal 0-160 psig pressure transducer with a 4-20 mA output might look like, the first only up to ~10 psig and the second covering the whole range (Figures 3 and 4).

In this case, the output of the DUC is linear over the entire range, but, by using only the lower range data, we cut our sensitivity by more than 20% and our offset by ~8 psig. The short and valuable lesson here is that you always want to calibrate over your entire measurement range.
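One plausible mechanism for the narrow-range error—assuming the scatter in the readings is random noise—is that the uncertainty of a least-squares slope depends only on the spread of the calibration pressures, not on the readings themselves. A quick sketch (the 0.02 mA noise figure and the point counts are assumptions):

```python
import numpy as np

# Ideal 0-160 psig, 4-20 mA transducer: I = 4 mA + 0.1 mA/psi * P
sens_true = (20.0 - 4.0) / 160.0        # 0.1 mA/psi
sigma = 0.02                            # assumed 1-sigma mA noise per reading

def slope_std_err(p_points, sigma):
    """1-sigma uncertainty of a least-squares slope for this set of
    calibration pressures; depends only on the x design, not the data."""
    p = np.asarray(p_points, dtype=float)
    return sigma / np.sqrt(np.sum((p - p.mean()) ** 2))

full = np.linspace(0, 160, 17)          # points across the whole range
limited = np.linspace(0, 10, 17)        # same count, bottom of range only

ratio = slope_std_err(limited, sigma) / slope_std_err(full, sigma)
print(ratio)
```

Squeezing the same number of points into 1/16th of the range makes the fitted sensitivity 16 times less certain, so a miss on the order of the 20% seen in Figures 3 and 4 stops being surprising.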

In the previous case, though, you could argue that, instead of using a simple linear function, you could fit a quadratic or cubic polynomial or, better yet, interpolate between your data points so that you theoretically have zero calibration error. I would highly advise against this because the majority of sensors are designed so that their output is linear, and, if it isn’t linear, then something is probably very wrong with the sensor. You may be able to try this with a sensor that’s on the edge of going bad and even be able to repeat the calibration a few times and get about the same results for the function coefficients, but there’s a good chance that the sensor will fail completely soon, and the law of probability says that this will most likely happen during a test, not a calibration. When it finally does go bad, you’ll spend a lot more time chasing down the issue than you would have if you’d flagged it the first time you saw the funny results.

Always calibrate over the entire range, and take as many data points as necessary to detect nonlinearities in the sensor output in order to minimize calibration error, as well as detect faulty sensors.

Calibration under test conditions

This ties back into the investigations on what can affect your measurements. You can spend a great deal of time investigating what will have appreciable effects, but I almost guarantee you’ll miss something in the upfront investigation when you’re laying out this process. However, if you do your best to calibrate under the same conditions you’re testing in, the things you do miss are more likely to remain undiscovered curiosities, instead of problems you eventually spend a great deal of time trying to track down and fix. Look at it this way: if you’re at this point and you don’t know if and how humidity affects pressure transducers, it’s better for the question never to matter—because calibration and test happen in the same environment—than to find out the hard way after being tasked with explaining why your calibration process isn’t working. Is your lab actually in a clean room, while the test systems are housed in, more or less, a garage type of environment? If so, it’s probably a bad idea to perform instrument calibrations in the lab and then use the instruments in the garage. One possibility is that humidity and barometric pressure can affect the sensor output.
