Artificial intelligence in industrial automation is a double-edged sword. There is no stopping the push to include artificial intelligence in platform design, yet no one has measured whether AI-based tools are good for reliability in the long run. People claim cost savings, increased output and streamlined operations, but is this true?
The question, in essence, is whether anyone understands the extra time and effort being put in to use AI tools. Is anyone mapping how often data must be reformatted for model training, or how much time is spent interpreting AI mistakes? It is too early to tell. Automated drawings are one example.
Computer-aided design (CAD) drawings generated automatically by software may depict the information but can be hard to read, so they must be touched up by a human. Is this a worthwhile measure? When robots first started welding, we still needed a human to finish the unit.
For equipment reliability and engineering, we have models that can tell us when a system is broken and make predictions, but how does that help when we still have machine downtime? Even live folks have had trouble getting downtime approved, no matter how many ways they said the machine needed a break for maintenance. AI cannot fix that.
For reliability applications, the recent trend has been to take natural language processing (NLP) and customize it for engineering data. This technical language processing (TLP) allows analysis of maintenance work orders to identify patterns and inform better decision making. Again, to play devil's advocate, these tools only work if the maintenance department has populated the database with proper cost center codes, what was broken and what the fix action was. AI cannot cause that.
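As a minimal illustration of the idea, and not of any particular TLP toolchain, the Python sketch below counts failure-mode keywords in hypothetical work-order text. The work orders, the keyword list and the function name are all invented for this example; a real TLP pipeline would use an engineering-specific lexicon and richer models.

```python
from collections import Counter
import re

# Hypothetical maintenance work-order descriptions (free text)
work_orders = [
    "Replaced seal on pump P-101, oil leak at coupling",
    "Conveyor motor overheating, cleaned fan and tightened belt",
    "Pump P-101 leaking again, rebuilt and replaced seal",
    "Valve actuator sticking, lubricated stem",
]

# Toy failure-mode vocabulary; real TLP would use entity tagging
# against an engineering lexicon instead of a hand-picked set
failure_terms = {"leak", "leaking", "overheating", "sticking"}

def failure_modes(orders):
    """Tally failure-mode keywords across free-text work orders."""
    counts = Counter()
    for text in orders:
        for token in re.findall(r"[a-z-]+", text.lower()):
            if token in failure_terms:
                counts[token] += 1
    return counts

print(failure_modes(work_orders).most_common())
```

Even this toy version shows the dependency the article raises: if the free-text descriptions are missing or sloppy, the counts mean nothing.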
Predictive analytics is where AI is most prevalent now. It is based on real-time data collection, data preprocessing, pattern detection, model updates and visualization. How can machine builders use these ideas? They would have to integrate these concepts into the human-machine interface (HMI) or supervisory control and data acquisition (SCADA) systems and then pinpoint with the customer which data should be tracked.
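A minimal sketch of that collect-preprocess-detect loop, with window and threshold values invented purely for illustration, might look like the Python below. A real implementation would live in the HMI/SCADA historian, not a script.

```python
from collections import deque

class DriftDetector:
    """Minimal predictive-analytics sketch: rolling average of a
    sampled value with a drift threshold. Window size and threshold
    here are illustrative, not recommended settings."""

    def __init__(self, window=10, threshold=5.0):
        self.samples = deque(maxlen=window)   # data collection
        self.threshold = threshold

    def update(self, value):
        self.samples.append(abs(value))       # preprocessing: magnitude only
        avg = sum(self.samples) / len(self.samples)  # pattern detection
        return avg > self.threshold           # alarm/visualization hook

detector = DriftDetector(window=5, threshold=5.0)
# Healthy readings stay quiet; a sustained error trips the flag
readings = [0.5, 0.8, 0.4, 6.2, 7.1, 6.8, 7.4, 6.9]
flags = [detector.update(r) for r in readings]
print(flags)
```

Note that the flag trips only after the error persists long enough to dominate the window, which is exactly the "sustained deviation" behavior the actuator example below relies on.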
For instance, which values can be monitored at the programmable logic controller (PLC) to understand or predict when a device will fail? If an actuator is monitored for error on a 4-20 mA loop, the calculation is: Regulation Error E(t) = |Command Signal SP(t) – Feedback PV(t)|.
This is based on the idea that the actuator opens 0 to 100% on a regular basis. If we add logic that monitors this error and set a threshold at which to act on it, then the code in the PLC can predict that there is a problem with the device.
// Calculate regulation error in percent of span
RegError := ABS(Setpoint - Feedback);

// Smooth the error with an exponential moving average
// (roughly a 1-minute filter at a 0.6 s task cycle)
TrendAvg := TrendAvg + 0.01 * (RegError - TrendAvg);

// Alarm only if the averaged error stays above 5% of span
// for a sustained 10 minutes (AlarmTimer is a TON instance)
AlarmTimer(IN := TrendAvg > 5.0, PT := T#10M);
ActuatorAlarm := AlarmTimer.Q;
Plotting the regulation error over time and setting an alarm allows the occurrences to be monitored. From there, the system environment can be monitored for the conditions, and an HMI or SCADA could report that the actuator is failing and should be changed out. Tie that in to a system that shows where the input originates and directs a maintenance person to the drawing for the field location. Voila, predictive maintenance.
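To sketch that last step, the lookup below maps an alarm tag to drawing, location and part-number metadata an HMI could display. Every tag name, drawing number and part number here is invented for illustration; none comes from a real system.

```python
# Hypothetical tag metadata a SCADA/HMI could hold per monitored device
DEVICE_INFO = {
    "FV-201": {
        "drawing": "E-412 sheet 3",
        "location": "Mixer skid, east side",
        "part_no": "ACT-9914",
    },
}

def maintenance_notice(tag, alarm_active):
    """Build the operator message a failing-actuator alarm might raise."""
    if not alarm_active:
        return None
    info = DEVICE_INFO[tag]
    return (f"{tag} regulation error sustained: check actuator "
            f"(part {info['part_no']}) at {info['location']}, "
            f"see drawing {info['drawing']}")

print(maintenance_notice("FV-201", True))
```

The lookup is only as good as the metadata behind it, which is why the human validation described next still matters.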
The assumption is that the maintenance person reads the screen or sees the trend and decides to go validate the valve. The code could also point out the part number to be replaced. However, all of this must be validated by a human, because part numbers get changed without updating drawings or systems, and someone could alter the code to raise or lower the average used for deviation monitoring.
Downtime still must be requested. Capital projects must also be willing to put the added software on the machine. Not to mention, this description of monitoring actuator regulation is not artificial intelligence. It is simply good programming, if you are using actuators in a critical function and want to monitor for when they start to fail. Distributed control systems have been doing this for a while in dangerous applications like chemicals and oil and gas. It relates back to examining loops to understand whether a failure is stiction or something mechanical, hydraulic, pneumatic, electrical or controls-related. It would be great if the system could monitor and predict leaks or other indications of upset by using a summary of alarm data to decide.
The point here is not to put down the artificial-intelligence endeavor but to raise awareness that it may be more productive to train the non-AI people in good practices and then apply software applications to help them. Otherwise, artificial-intelligence tools may become cumbersome and go unused to their fullest capacity. Artificial intelligence is like that one color of lipstick you only use once a year when you wear sequins and tuxedos. Is the cost to have that color worth it?