AI introduces cybersecurity questions in industrial controls
Key Highlights
- Securing data integrity for manufacturing AI requires both technical safeguards, such as hardware roots of trust and encryption, and vigilant operational practices that closely monitor data trends to detect anomalies, whether from malicious attacks or accidental faults like dirty sensors.
- Protecting proprietary AI models on edge devices necessitates a multi-layered approach that includes storing them in secure, encrypted hardware zones, implementing strict access controls and continuously monitoring their runtime behavior for signs of tampering or theft.
- Mitigating supply-chain risk from third-party AI components requires transparency and due diligence, including obtaining a complete bill of materials, performing code analysis, scrutinizing security certifications and planning for secure integration with existing, often legacy, industrial control systems.
Engineers working with artificial intelligence (AI) in manufacturing equipment controls need to focus on the unique vulnerabilities and operational impact within an industrial setting. Thomas Kuckhoff, automation product manager with Omron Automation, answers questions about tackling cybersecurity in the age of AI (Figure 1).
In a manufacturing environment, a compromised or manipulated data feed from an IoT sensor or a historian could trick a quality-control or predictive-maintenance (PdM) AI in two ways: it could poison training, for example, teaching the model that defective parts are normal, or it could subtly alter input data so the model makes an incorrect and potentially dangerous real-time inference, such as ignoring a safety alarm. How can we ensure the integrity of the sensor data and control parameters used by AI/machine learning models to prevent data poisoning and adversarial attacks that could lead to physical damage or production errors?
Thomas Kuckhoff, automation product manager, Omron Automation: Ensuring data integrity for AI and machine learning models in manufacturing involves two main strategies: monitoring the data inputs and refining the software development process. Most issues arise not from malicious actors, but from accidental faults such as dirty sensors, damaged cables or incorrect replacements. Systems should be equipped to detect abnormal trends or sudden shifts in data, flagging potential issues for review. Hardware-based roots of trust and encryption can help, but the key is close monitoring and trend analysis to catch anomalies early.
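As a concrete illustration of that trend monitoring, the following minimal Python sketch flags sudden shifts in a sensor feed using a rolling z-score. The class name, window size and threshold are illustrative assumptions, not an Omron implementation; real deployments would tune them per sensor and process.

```python
from collections import deque
import statistics

class SensorTrendMonitor:
    """Flags sudden shifts in a single sensor feed for human review."""

    def __init__(self, window: int = 100, z_threshold: float = 4.0):
        self.readings = deque(maxlen=window)  # rolling baseline
        self.z_threshold = z_threshold

    def check(self, value: float) -> bool:
        """Return True if the new reading deviates sharply from the baseline."""
        anomalous = False
        if len(self.readings) >= 30:  # wait for a minimal baseline
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            # A large z-score may mean drift, a dirty sensor or tampering.
            anomalous = stdev > 0 and abs(value - mean) / stdev > self.z_threshold
        self.readings.append(value)
        return anomalous

monitor = SensorTrendMonitor()
for reading in (20.1, 20.3, 19.9) * 20:  # stable temperature feed
    monitor.check(reading)
print(monitor.check(35.0))  # sudden shift -> True, flag for review
```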
On the software side, robust and agreed-upon reward functions during model training are essential. These functions define non-negotiable safety or quality standards, ensuring the AI cannot inadvertently be trained to compromise critical parameters. For example, setting strict boundaries for safety reaction times prevents the model from learning unsafe behaviors. Digitally signed firmware, encrypted data transmission and hardware-based trust mechanisms further minimize tampering risks. The goal is to encrypt and verify as much as possible, combining technical safeguards with vigilant operational practices.
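To make the reward-function idea concrete, here is a hypothetical sketch in which the safety reaction-time bound is a hard constraint rather than something the model can trade away during training. The 50-millisecond limit and the reward terms are assumptions for illustration only.

```python
MAX_REACTION_TIME_MS = 50.0  # assumed non-negotiable safety bound

def reward(throughput: float, scrap_rate: float, reaction_time_ms: float) -> float:
    """Score a training episode; unsafe episodes can never score well."""
    if reaction_time_ms > MAX_REACTION_TIME_MS:
        return float("-inf")  # hard boundary: safety is not a trade-off
    # Inside the safe envelope, balance productivity against quality.
    return throughput - 10.0 * scrap_rate
```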
In short, secure sensor data by encrypting it and verifying its source using digital signatures. This ensures the data hasn’t been changed. Also use software at the edge—close to the machines—to detect unusual patterns that might indicate someone is trying to trick the AI.
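One vendor-neutral way to verify a reading's source is a message-authentication tag computed with a key provisioned to each device. A minimal sketch using only the Python standard library, assuming a shared symmetric key (the key and function names are hypothetical):

```python
import hashlib
import hmac

# Assumption: each sensor receives a per-device key at commissioning.
SHARED_KEY = b"per-device-key-from-commissioning"

def verify_reading(payload: bytes, tag_hex: str) -> bool:
    """Accept a reading only if its HMAC tag matches the payload."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag_hex)  # constant-time comparison
```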
The AI model, a valuable asset containing years of proprietary operational knowledge, must be protected. In manufacturing, the AI model is often deployed directly on the equipment, in the form of edge computing, for low-latency control. An attacker could steal the model and extract its parameters to replicate the proprietary manufacturing process, or tamper with its logic to introduce subtle, long-term failures or quality issues. This requires security measures beyond traditional IT, such as hardware-based security for model storage and secure, encrypted communication for model updates. What mechanisms should be in place to secure the proprietary AI models deployed on edge devices within the operational technology (OT) network from model theft or reverse engineering?
Thomas Kuckhoff, automation product manager, Omron Automation: Protecting proprietary AI models on edge devices requires a multi-layered security approach. Data is the currency, and the models themselves represent significant intellectual wealth. Security should begin with a thorough assessment of the factory floor, identifying and strengthening the weakest links. Once models are ready for deployment, security measures must run in parallel with scaling efforts to avoid exposing competitive advantages.
Key mechanisms include secure enclaves with hardware-backed encryption, encrypted model updates, strict access policies and runtime monitoring for adversarial manipulation. Technologies such as subnet capabilities, container topology lockdowns and virtual machine security add further layers of protection. Regular risk assessments and continuous evaluation are vital, as deploying AI securely is often more resource-intensive than development itself.
In short, store AI models in secure hardware zones that are locked down and encrypted. Only authorized updates are allowed. And monitor the model’s behavior to catch any signs of tampering. This protects the model from being copied or misused.
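The "only authorized updates" step can be sketched as a signature check on the model artifact before it is ever decrypted or deserialized. The example below uses Ed25519 from the `cryptography` package; the function and key names are hypothetical, and enclave storage and access policies are platform-specific details outside its scope.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def accept_model_update(blob: bytes, signature: bytes, vendor_pubkey: bytes) -> bytes:
    """Reject any model update not signed by the vendor's release key."""
    key = Ed25519PublicKey.from_public_bytes(vendor_pubkey)  # 32-byte raw key
    try:
        key.verify(signature, blob)  # raises InvalidSignature if tampered
    except InvalidSignature:
        raise RuntimeError("model update rejected: signature check failed")
    return blob  # safe to hand off to the secure enclave for decryption
```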
AI in industrial manufacturing components likely uses libraries, pre-trained models and specialized hardware from multiple vendors, creating the potential for a supply-chain risk. Manufacturing environments often contain legacy devices that can't be easily patched or updated. How is the security of third-party AI components verified before deployment?
Thomas Kuckhoff, automation product manager, Omron Automation: Verifying the security of third-party AI components hinges on transparency and due diligence. Manufacturers should provide a bill of materials for both hardware and software, enabling teams to review the origins and composition of each component. Code analysis, along with scrutiny of security certifications, is another essential step. Teams must understand how the software was developed and trained and be aware of any biases introduced by societal data.
Security checks should include searching for zero-day vulnerabilities and reading the fine print to avoid unintended data sharing. Backups of all systems and strong recordkeeping are crucial in case a rollback is needed. Ultimately, trust but verify; ensure you know exactly what you’re integrating, as third-party software often goes to the heart of critical processes. This level of diligence helps organizations avoid exposing sensitive data or introducing new vulnerabilities.
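One practical piece of that due diligence is comparing installed third-party artifacts against the hashes published in the vendor's bill of materials. A minimal sketch, assuming a simple JSON manifest; the `components`, `name`, `path` and `sha256` fields are an illustrative layout, not a standard:

```python
import hashlib
import json
import pathlib

def audit_components(manifest_path: str) -> list[str]:
    """Return the names of components whose on-disk hash differs
    from the hash recorded in the vendor's bill of materials."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    mismatches = []
    for entry in manifest["components"]:
        digest = hashlib.sha256(pathlib.Path(entry["path"]).read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            mismatches.append(entry["name"])  # candidate for rollback or review
    return mismatches
```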
Are AI components and their network traffic securely segregated from older, more vulnerable industrial control systems (ICSs) and programmable logic controllers (PLCs)?
Thomas Kuckhoff, automation product manager, Omron Automation: AI and advanced process controls must be compatible with existing legacy systems to deliver value without requiring costly overhauls. Many industrial assets are custom-built and irreplaceable, making it impractical to demand upgrades or replacements. New technologies must integrate with established protocols and hardware, including older fieldbuses such as CAN bus, to ensure seamless operation.
Safety is not guaranteed by compatibility alone; teams must consider CPU workloads, firewall placement, network bandwidth and protocol consolidation to reduce complexity and risk. Proper inventory and consolidation of fieldbuses and protocols help minimize translation lags and system complexity. While AI can be safely deployed on legacy systems, careful planning and awareness of operational constraints are essential for successful adoption.
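As one example of turning that inventory work into a repeatable check, the sketch below scans exported connection records for traffic that crosses the boundary of a segmented AI cell. The subnet and record format are assumptions; a real audit would read from a firewall log or a network tap.

```python
import ipaddress

AI_SUBNET = ipaddress.ip_network("10.20.30.0/24")  # assumed segmented AI cell

def flag_cross_segment_traffic(connections):
    """connections: iterable of (src_ip, dst_ip, protocol) strings.
    Returns records where traffic crosses the AI-cell boundary,
    e.g. an edge device talking directly to a legacy PLC."""
    findings = []
    for src, dst, proto in connections:
        src_in = ipaddress.ip_address(src) in AI_SUBNET
        dst_in = ipaddress.ip_address(dst) in AI_SUBNET
        if src_in != dst_in:  # one endpoint inside, one outside
            findings.append((src, dst, proto))
    return findings

print(flag_cross_segment_traffic([
    ("10.20.30.5", "10.20.30.9", "OPC UA"),         # stays inside the cell
    ("10.20.30.5", "192.168.1.12", "EtherNet/IP"),  # crosses the boundary
]))
```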
What is the end-to-end security posture of the AI supply chain, and how do we manage the risk introduced when integrating third-party AI components with existing, often unpatched, industrial control systems?
Thomas Kuckhoff, automation product manager, Omron Automation: End-to-end security for AI in manufacturing involves managing risks throughout the entire lifecycle, from problem identification and solution development to deployment, scaling and eventual retirement. Teams must assess what they want to augment, what risks they’re willing to accept and what is non-negotiable, such as safety systems. Continuous validation and monitoring are necessary to ensure vulnerabilities remain controlled and value is maximized.
Vendor vetting, secure development practices, runtime monitoring, zero-day vulnerability assessments and ongoing due diligence are all part of a robust security posture. Different organizations have varying appetites for risk, depending on their competitive position and operational priorities. The process requires clear limits on acceptable risk and a commitment to maintaining or improving security standards over time.
In short, it includes vendor vetting, secure development practices, runtime monitoring and lifecycle management. Risk is mitigated through zero-trust architecture, continuous vulnerability scanning and patching strategies tailored for OT environments.
Tell us about your company’s state-of-the-art product that involves artificial intelligence/cybersecurity.
Thomas Kuckhoff, automation product manager, Omron Automation: Our approach to AI and cybersecurity in manufacturing is organized into four main categories.
- foundational data — how to capture high-value process insight non-intrusively
- prototype AI — how to create robust designs for edge deployment
- scaled AI — how to methodically deploy without ripping and replacing current automation
- evergreen security — making new security compliance the industry standard
The first category focuses on meeting factories where they are today, primarily through the DX100 Data Flow Edge Device by Omron. This product connects to as much technology on the factory floor as possible, collects comprehensive data and visualizes it securely, often on the edge or isolated from the broader network to minimize vulnerabilities. The DX100 helps factories analyze data at different intervals to maintain quality, optimize output and control costs.
The second and third categories involve products that are slightly ahead of current market adoption, illuminating the path for future AI use on the factory floor. These include technologies for prototyping, validating and scaling software securely, with hardware tailored to computational needs at specific locations.
The fourth category is focused on compliance, particularly with the upcoming European Union Cyber Resilience Act. Omron is committed to meeting these regulations ahead of schedule, enhancing security without disrupting production.
Taken together, the four categories make a comprehensive approach that ensures both innovation and operational continuity.
About the Author
Mike Bacidore
Editor in Chief
Mike Bacidore is chief editor of Control Design and has been an integral part of the Endeavor Business Media editorial team since 2007. Previously, he was editorial director at Hughes Communications and a portfolio manager of the human resources and labor law areas at Wolters Kluwer. Bacidore holds a BA from the University of Illinois and an MBA from Lake Forest Graduate School of Management. He is an award-winning columnist, earning multiple regional and national awards from the American Society of Business Publication Editors. He may be reached at [email protected]



