By Paul Miller, contributing editor
This is Part II of a two-part examination of OPC UA. Part I appeared in the Q3 2008 issue of Industrial Networking. It can be read online at IndustrialNetworking.net/opcua1.
In the 10 years since the initial OPC specifications were made public, the industry has learned how important it is to have a certification process in place. Such a process verifies that different vendors' products built on a given standard comply with the specification, ensuring interoperability.
"We're building up our certification process for OPC UA," says Jim Luth, OPC Foundation technical director. "When we started with DA, we didn't have anything in place from a certification perspective. Initially, this resulted in too many OPC applications that didn't fully comply with the standard. Subsequently, we created a compliance test tool for OPC servers. This allowed OPC server vendors to self-certify their products and was effective, but didn't address client-compliance issues. Recently, we created our first independent test lab to certify both clients and servers for compliance to all OPC standards. These efforts are preparing us for a robust third-party compliance program for OPC UA products right out of the gate. Our goal is to have a complete certification process in place before UA products begin to ship en masse."
This well-equipped test lab at Ascolab in Germany will provide rigorous compliance certification testing services for products based on all OPC Foundation specifications, including OPC UA.
All products newly certified by the test lab will carry a new "OPC Foundation Certified" logo. The certification program means users should expect reduced system installation costs and products that will perform reliably in multi-vendor installations.
"We've spent too much time in the past playing referee between suppliers who point fingers at each other instead of coming together to deliver solutions," says Bruce Honda, process control advisor at Weyerhaeuser, Federal Way, Wash. "Certifying to a common set of functions and features should assure that manufacturers like us get true compatibility between different suppliers' products."
Best Practices Critical
Third-party certification of emerging OPC UA-enabled products will play an important role in ensuring device and application interoperability within and across networks. However, since UA involves a much richer information set, both across plant networks and between plant and enterprise networks, it will be important for all parties to be aware of and follow best practices for networked applications.
"There are more conversations on the wire, so we will have to design networks that route the information where and when it is needed," says Stephen Briant, product manager at Rockwell Automation. "We need to design networks using best practices to prevent unintentional devices or packets from showing up on our wires. I don't see best practices relative to firewalls, traffic limitations and so on changing all that much. We need to continue using today's best practices and consider the impact that new open devices will have on network design moving forward to assure that it remains performant and secure."
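Briant's point about firewall practices translates directly to UA deployments. OPC UA's binary protocol (opc.tcp) runs over a single, configurable TCP port, with 4840 as the IANA-registered default, so perimeter rules can stay narrow. A hypothetical iptables fragment along those lines (the plant subnet and the choice of the default port are illustrative; substitute your own addressing and site policy):

```shell
# Permit opc.tcp traffic only from the plant subnet (illustrative
# addressing); 4840 is the IANA-registered default OPC UA port.
iptables -A INPUT -p tcp -s 192.168.10.0/24 --dport 4840 -j ACCEPT
# Drop OPC UA connection attempts from anywhere else.
iptables -A INPUT -p tcp --dport 4840 -j DROP
```

This is one place where UA actually simplifies the firewall conversation: classic DCOM-based OPC required opening a wide range of dynamically assigned ports, which is exactly the kind of "unintentional packets on the wire" exposure Briant warns against.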
"There are silos that have to come down for this to work well. You develop barriers in large operations, but you have to get engineers and IT folks to work together," comments Keith Jones, product marketing manager at Wonderware (wonderware.com). "Different organizations have different policies. The goal is to keep things simple enough so points of confrontation don't develop between engineers and operations and IT folks."
Examples can include opening ports, firewalls and user accounts. OPC UA addresses these points of conflict by being easy to engineer, performant and interoperable. The OPC UA discovery service, for example, lets clients locate available servers and their endpoints on the network. UA also is very easy to connect into PLCs, alarming applications and ERP systems.
"From a networking perspective," adds Jones, "the best practices used today will remain sound, including correct network configuration and sizing networks appropriately to handle bursts."
"There always are opportunities to design things the wrong way, particularly when you offer so much flexibility and extensibility," says Tom Burke, OPC Foundation president. "That's why it's important to have a clear-cut architecture or design for system deployment. One of the things that's being done is to scale down OPC to be able to deploy UA embedded devices. Let's take a higher-level device like a PLC. Several PLC vendors already have embedded services for basic device information communication that provide a conduit to the enterprise in the PLC. Is that the right place for it in a large distributed system? Absolutely not. If you have hundreds of PLCs doing exception-based processing and propagating all this information upward, you can flood the network and break the significant control algorithms that are running. Everyone has to intelligently deploy technology. Just because you can drive your car 120 miles an hour without oil in it doesn't make it a good idea. You need to use common sense."
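Burke's caution about exception-based reporting can be made concrete. One common mitigation, which OPC UA supports through deadband settings on monitored items (its DataChangeFilter concept), is to forward a new sample upstream only when it moves beyond a configured threshold. A minimal sketch in Python; the class and sample values are illustrative and not part of any OPC UA SDK:

```python
class DeadbandFilter:
    """Report-by-exception with an absolute deadband: forward a sample
    only if it differs from the last reported value by more than the
    deadband. This mirrors the intent of OPC UA's DataChangeFilter."""

    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_reported = None

    def update(self, value: float):
        """Return the value if it should be reported upstream, else None."""
        if self.last_reported is None or abs(value - self.last_reported) > self.deadband:
            self.last_reported = value
            return value
        return None


# A noisy temperature sensor produces many samples; few cross the deadband.
f = DeadbandFilter(deadband=0.5)
samples = [20.0, 20.1, 20.2, 20.9, 21.0, 21.6, 21.5]
reported = [v for v in samples if f.update(v) is not None]
print(reported)  # -> [20.0, 20.9, 21.6]
```

Multiplied across hundreds of PLCs, suppressing sub-threshold chatter at the source is exactly the kind of "intelligent deployment" Burke describes: the enterprise still sees every meaningful change, but the network never carries the noise.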
"The technology, memory and hardware are so cheap that people tend to overload their systems," says Burke. "What we're trying to do now is develop best practices at the end-user level to properly deploy the technology."
The OPC Foundation and the Microsoft Manufacturers Users Group (MSMUG) have created a working group of end users to help develop best practices and to address the requirements and processes for successful plug-and-play interoperability.
"At the consortium level, we have to develop the best technology that prohibits people from violating good system design practices," continues Burke. "We have to assume that people won't necessarily do it the best way and make sure that systems stay reliable, stay redundant and move forward. Maintaining determinism also is important because, while OPC used to be limited to moving data and information, we now have people using it to do control."
While OPC UA will, in many respects, eliminate a number of current pain points to help make life easier for those who design, use and support industrial networks, it also will require some discipline to help ensure that networks, applications and users don't get overwhelmed by extraneous data.