# How to verify transducer readings are translated correctly

Feb. 8, 2024
Machines rely on accurate analog-to-digital conversion of 4-20 mA signals

In a control system, a transducer converts a physical phenomenon from the real world into a signal that a controller can process, act on or display. It can be difficult for the person programming or troubleshooting the system to know if this conversion process is being done correctly. Here are some tools to ensure that the readings from the transducer are accurately translated into values that you and your control system can rely on.

Humans live in an analog world. Everything your senses can discern can be captured by a transducer as a continuous signal. This signal, such as the temperature outside, does not exist in discrete steps. We may say it is 19 °C, but of course it is really some fractional value between 18 and 20. We round it to a whole number because it is unnecessary to say, “It is 19.0125 °C outside.” Try saying this to a human and see their response.


Machines live in a digital world, where everything takes the form of ones and zeros. An analog sensor converts the real temperature into a voltage, typically 0-10 Vdc, or a current, typically 4-20 mA. This analog signal passes through an analog-to-digital converter (ADC), where it is converted to an integer value. The conversion depends on the bit resolution: the higher the resolution, the more precisely the integer value represents the signal.

For example, a 12-bit converter ranges from 0 to 4,095. This means there are 4,096 discrete steps that a voltage or current value can resolve to. A 16-bit converter ranges from 0 to 65,535. This means there are 65,536 steps that a voltage or current value can resolve to.
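The step counts above follow directly from powers of two. A quick sketch in Python (illustrative only, not part of the article's toolchain) makes the relationship explicit:

```python
# An n-bit ADC resolves 2**n discrete steps; the raw count
# therefore ranges from 0 to 2**n - 1.
for bits in (12, 16):
    steps = 2 ** bits
    print(f"{bits}-bit: 0 to {steps - 1:,} ({steps:,} steps)")
```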

These 65,536 steps must then be scaled to values that make sense to a human. This is done with the formula:

(raw value - raw min) / (raw max - raw min) * (scale max - scale min) + scale min = value in engineering units

The min values are needed because on a 4-20 mA scale the minimum is 4 mA, not 0. The 4 mA signal corresponds to 0 in the scaled value.
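The scaling formula can be written as a small helper function. Here is a sketch in Python; the function and parameter names are illustrative, not from any particular PLC or vendor library:

```python
def scale(raw, raw_min, raw_max, eng_min, eng_max):
    """Linearly map a raw ADC count onto an engineering-unit range."""
    return (raw - raw_min) / (raw_max - raw_min) * (eng_max - eng_min) + eng_min

# For a 4-20 mA input on a 16-bit ADC, the 4 mA floor sits at
# count 65,535 * 4/20 = 13,107, so raw_min is 13,107 rather than 0.
temp_c = scale(23075, 13107, 65535, 0.0, 100.0)  # about 19.013 degC
```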

Let us assume that we are working with a 4-20 mA current signal and a 16-bit ADC. The temperature reading of 19.0125 °C will correspond to a current level of 7.042 mA from the transducer. The range of the temperature probe is 0-100 °C, so those will be the min and max scale values. We want to have a decimal point in the reading so a 32-bit floating point number is chosen for the output.

For an ADC with 16-bit resolution, the value of 7.042 mA lands on a step value of 23,075. This can be calculated with the formula:

max count * (current / max current), or 65,535 * (7.042 / 20) = 23,074.87, which rounds to 23,075

The scaling math will then be:

((23,075 - 13,107) / (65,535 - 13,107)) * (100 – 0) + 0 ≈ 19.013

Here 13,107 is the step value corresponding to 4 mA (65,535 * 4/20). The result differs from the original 19.0125 °C by less than one ADC step; that small residual is quantization error, and it shrinks as the bit resolution increases.
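The whole chain, transducer, ADC and scaling, can be checked end to end with a short round-trip sketch. This is plain Python assuming an ideal transducer and an ideal 16-bit ADC; the function names are illustrative:

```python
def temp_to_ma(t):
    """Ideal 0-100 degC transducer on a 4-20 mA loop."""
    return 4.0 + (t / 100.0) * 16.0

def ma_to_count(i):
    """Ideal 16-bit ADC spanning 0-20 mA, rounding to the nearest count."""
    return round(65535 * i / 20.0)

def count_to_temp(n):
    """Scale the raw count back to degrees (4 mA floor = count 13,107)."""
    return (n - 13107) / (65535 - 13107) * 100.0

t_back = count_to_temp(ma_to_count(temp_to_ma(19.0125)))
# t_back lands near 19.013; the tiny offset from 19.0125 is quantization error
```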

For easier reference, I created a spreadsheet to calculate these values. The sheet contains calculated digital step values in 1 mA increments. There are also data-entry fields for converting a specific mA value to a digital value and vice versa, along with inputs for scale minimum and scale maximum to calculate the correctly scaled value (Figure 1).