OT edge defense-in-depth can secure AI models against cyber attacks
Key Highlights
- Securing proprietary AI models deployed on edge devices requires a defense-in-depth strategy that includes physically securing equipment, using secure hardware, hashing model weights to detect tampering and sanitizing data before model retraining.
- To mitigate supply chain risk, engineers must choose between self-managing the constant patching and updating of open-source AI components or shifting that lifecycle security responsibility to a trusted automation supplier via bundled solutions.
- For industrial AI solutions, specialized industrial PCs (IPCs) are crucial for providing the necessary high-performance computing at the edge, protecting sensitive proprietary data from the cloud and ensuring the low latency required for critical control.
The intersection of artificial intelligence (AI) and cybersecurity within operational technology (OT) environments will have a profound effect on manufacturing equipment controls, including the necessity of running AI models at the edge to protect proprietary data and achieve low latency. Engineers working on AI cybersecurity for manufacturing equipment controls need to focus on the unique vulnerabilities and operational impacts of an industrial setting.
Samuel Prescott, product manager for Emerson’s Machine Automation Solutions business, answers questions and outlines a defense-in-depth strategy to secure valuable models and manage supply chain risks (Figure 1). Prescott joined Emerson as a software engineer in 2018 and earned his bachelor's degree in computer science from the University of Virginia.
The AI model, a valuable asset containing years of proprietary operational knowledge, must be protected. In manufacturing, the AI model is often deployed directly on the equipment, in the form of edge computing, for low-latency control. An attacker could steal the model and its parameters to replicate the proprietary manufacturing process, or tamper with its logic to introduce subtle, long-term failures or quality issues. This requires security measures beyond traditional IT, such as hardware-based security for model storage and secure, encrypted communication for model updates. What mechanisms should be in place to secure proprietary AI models deployed on edge devices within the OT network from model theft or reverse engineering?
Samuel Prescott, product manager, Emerson Machine Automation Solutions: Cybersecurity is always at its strongest when companies practice defense-in-depth. An effective cyber solution for AI incorporates multiple layers of protection to secure against a wide array of attack vectors, both physical and digital.
Model weights can be secured by calculating their hash value, which can be stored in a protected environment or on another trusted device. Every time the model is started, the hash can be recomputed and compared against the stored value to confirm the weights were not altered.
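This hash check might be sketched as follows. This is an illustrative example, not Emerson's implementation; the weights-file path and the source of the trusted hash are assumptions, and in practice the trusted hash would live in secure hardware or on another trusted device.

```python
import hashlib


def hash_weights(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a model weights file, reading in chunks
    so large weight files do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_weights(path: str, trusted_hash: str) -> bool:
    """At model startup, recompute the hash and compare it against the
    trusted value; refuse to load the model on a mismatch."""
    return hash_weights(path) == trusted_hash
```

Even a single flipped byte in the weights file produces a completely different digest, so the comparison catches tampering before the altered model is loaded.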
In addition, data used for model retraining needs to be carefully sanitized. It starts with making sure that data connections to the sensors use protocols that support encryption and authentication, such as OPC UA or message queuing telemetry transport (MQTT), secured using certificates. Data sanitization prior to training can use the existing model or, better yet, an alternative model to review the values and flag the user when suspicious or inconsistent values are detected. After retraining, the new model should be validated with test cases that include both good and bad data samples to make sure it remains accurate.
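As a minimal sketch of the sanitization step, a simple statistical screen can stand in for the reviewing model described above; the three-sigma threshold and the idea of flagging indices for operator review are illustrative assumptions, not a prescribed method.

```python
from statistics import mean, stdev


def flag_suspicious(readings: list[float], threshold: float = 3.0) -> list[int]:
    """Return the indices of sensor readings that deviate more than
    `threshold` standard deviations from the mean, so an operator can
    review them before the data is used for retraining."""
    if len(readings) < 2:
        return []
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []  # all readings identical; nothing stands out
    return [i for i, x in enumerate(readings) if abs(x - mu) / sigma > threshold]
```

A real pipeline would likely use a trained anomaly-detection model rather than a fixed z-score, but the workflow is the same: screen, flag, review, then retrain only on the data that passes.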
Going beyond digital protection, teams must also secure the physical layer, ensuring that neither unauthorized personnel nor individuals outside the company have access to critical equipment. On the most basic level, this can be accomplished by securing facilities—locking buildings, securing control cabinets, installing cameras and implementing other physical security around the manufacturing site.
Then, teams can start focusing on the security posture of individual pieces of equipment. Selecting technologies like IPCs built with secure design principles helps simplify the process of securing against outside threats. User authentication is also a critical component of an overall cybersecurity strategy. Individual, secure accounts limit access to technologies like AI models deployed on edge devices. Those devices should also offer secure data connections to sensors, other IPCs or even the cloud to prevent interception of critical data or compromise via vulnerable connections.
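For the secure data connections mentioned above, a certificate-secured MQTT listener might be configured as in this illustrative Mosquitto broker fragment. Mosquitto is an assumed example broker, not one named in the article, and the certificate paths are placeholders.

```conf
# Illustrative mosquitto.conf fragment: TLS listener with
# mandatory client certificates (paths are placeholders).
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/broker.crt
keyfile /etc/mosquitto/certs/broker.key
# Reject any client that does not present a certificate
# signed by the trusted CA.
require_certificate true
# Use the certificate identity as the username for access control.
use_identity_as_username true
allow_anonymous false
```

With a configuration along these lines, sensors and IPCs authenticate with client certificates and all traffic is encrypted in transit, addressing both the interception and compromised-connection concerns raised above.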
AI in industrial manufacturing components likely uses libraries, pre-trained models and specialized hardware from multiple vendors, creating supply chain risk. Manufacturing environments often contain legacy devices that can't be easily patched or updated. How is the security of third-party AI components verified before deployment?
Samuel Prescott, product manager, Emerson Machine Automation Solutions: Though there are some proprietary models being designed for specific industrial applications, a lot of AI software is open source. With open-source software there is always a choice. Teams can manage their open-source software supply chain themselves, but that means monitoring and keeping up with security updates through the lifecycle of all the products. From the operating system to tools and packages used on top of the operating system, and potentially even the connections between them, there will be many points that need continual monitoring, patching and updating. It takes constant effort and vigilance, and there is always risk.
Alternatively, many organizations are looking to shift some of that security responsibility from their team to a trusted partner. These groups are implementing bundled solutions where the automation supplier handles the security solutions for the user. The supplier applies the latest updates from the operating-system vendor and for the open-source tools included in the package. More importantly, the solution provider performs extensive testing before deploying those updates to the field, so teams can feel more confident that changes won’t impact operations.
Realistically, in today’s environment, ensuring systems are secure requires a well-designed and efficient update method. No matter how hard one tries, it is only a question of time until a serious vulnerability is exposed, and systems need to be updated quickly and efficiently. The best modern software can help navigate these challenges.
What is the end-to-end security posture of the AI supply chain, and how do we manage the risk introduced when integrating third-party AI components with existing, often unpatched, industrial control systems?
Samuel Prescott, product manager, Emerson Machine Automation Solutions: End-to-end security posture of the AI supply chain is as dependent on defense-in-depth as any other area of cybersecurity posture. On-site, teams should ensure that they have layers of protection in place—network segmentation, network monitoring, antivirus and malware solutions, whitelisting and network security. They should also be focused on providing the physical security and digital access control—user account management, password restrictions, two-factor authentication—necessary to ensure systems are protected from exposure. Partnering closely with expert automation suppliers in this step can help avoid conflicts with existing industrial control technologies. Those suppliers can help teams navigate the challenges of working with legacy systems, identifying places where conflicts and challenges might arise based on decades of experience.
Across the supply chain, teams also need to ensure that they carefully vet the suppliers of their technology solutions. Teams should regularly evaluate their suppliers to ensure that each one has a plan to deliver security across the lifecycle of their products. This is an area where bundled solutions from expert automation suppliers can provide a significant advantage. The most experienced automation solution providers perform risk assessments on all vendors across their supply chains, continually evaluating and reevaluating their cybersecurity maturity and ability to handle issues as cyber threats increase and evolve.
When working with AI, it can also be very hard to test or evaluate how a model will behave based on different inputs. Closed models that were trained using undisclosed datasets are particularly hard to predict. One recommendation is to use models from trusted suppliers that have a lot of exposure in the open-source community. The more teams use, evaluate and test a model, the higher the probability is that hidden risks and weaknesses get discovered.
Tell us about your company’s state-of-the-art product that involves artificial intelligence/cybersecurity.
Samuel Prescott, product manager, Emerson Machine Automation Solutions: Many of the most popular large language models (LLMs) are hosted in data centers in the cloud. Because cloud LLMs are both public and shared, there are few guarantees about the security of the data put into them. However, much of the data that is most useful for analysis by AI and machine learning (ML) technologies is proprietary, so manufacturers need AI and ML solutions that can keep their data safe and contained to protect trade secrets. Moreover, many of the most powerful ML solutions, which often deliver some of the best value in driving operational excellence, require the low-latency communication only available at the edge.
Fit-for-purpose industrial PCs (IPCs) like Emerson’s PACSystems IPC 6010, 7010 and 8010 are ideally suited to providing AI and ML capabilities on-site at the edge without relying on a third-party cloud provider (Figure 2). The IPCs provide high-performance computing and graphical capabilities in a ruggedized footprint suitable for any environment. In addition, expansion capabilities allow for graphics processing units and neural processing units to supply the high processing power necessary for AI and ML tasks, without exposing sensitive data beyond the walls of the organization.
IPCs can also be bundled with the PACEdge software platform that provides a collection of tools and services to make it easier for teams to build their own analytics, ML, dashboards, graphics and data visualizations. Intuitive tools are available out of the box and incorporate security and operating system updates to keep systems as cybersecure as possible.
Group Manager features in bundled solutions allow teams to group devices and then update each group with operating-system patches or new application container images.
About the Author
Mike Bacidore
Editor in Chief
Mike Bacidore is chief editor of Control Design and has been an integral part of the Endeavor Business Media editorial team since 2007. Previously, he was editorial director at Hughes Communications and a portfolio manager of the human resources and labor law areas at Wolters Kluwer. Bacidore holds a BA from the University of Illinois and an MBA from Lake Forest Graduate School of Management. He is an award-winning columnist, earning multiple regional and national awards from the American Society of Business Publication Editors. He may be reached at [email protected]