My company recently implemented a control system for a $38 million project that included several multi-station machining centers. At each machining center, the station controller is connected to the master controller via Ethernet. In addition to performing basic process control, the master controller also controls the positioning of the fixture at each machining station base. Therefore, the communication between each base and the master is integral to the performance of the actual machining function.
Each machining center has its own isolated Ethernet network, and there is no communication from one machining center to the next. The highest number of addresses on any of the networks is seven. Each station controller is connected to the master through an Ethernet switch.
When we initially set up the machine, everything appeared to be functioning properly. But when we started putting it through its paces, we learned that everything wasn't as rosy as we first thought.
Since the fixture positions are controlled out of the master controller, we depend on the station controller to wait until the fixture is at the commanded position (as determined by the master controller) before continuing the CNC program. To do this, we used an interface bit set by the master controller that indicated if the fixture for the corresponding station was in position or not.
Understanding there was a possibility of network collisions, we programmed a latency period into our PLC to give communications ample time to complete. We set this latency to 500 ms, but that proved insufficient: we experienced network delays that could sometimes be measured in seconds, not milliseconds.
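The wait-before-continue logic described above is essentially a timed poll on the master's interface bit. The actual implementation lived in the PLC, so the following Python sketch is purely illustrative; `read_in_position_bit` is a hypothetical accessor standing in for whatever reads the bit the master last transmitted:

```python
import time

def wait_for_in_position(read_in_position_bit, timeout_s=0.5, poll_s=0.01):
    """Poll the master's in-position interface bit until it is set.

    read_in_position_bit: hypothetical callable returning the last
    in-position value received from the master controller.
    timeout_s: the latency budget (we used 500 ms, which proved too short).
    Raises TimeoutError so the CNC program can halt instead of proceeding
    on stale or missing data.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_in_position_bit():
            return True
        time.sleep(poll_s)
    raise TimeoutError(
        "fixture not reported in position within %.0f ms" % (timeout_s * 1000)
    )
```

The key design point is that a timeout only helps if the program treats its expiry as a fault; with non-deterministic delays, any fixed budget will occasionally be exceeded.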
Worse, when updates from the master failed to arrive, the station controller kept working from stale data in which the commanded position still equaled the actual position. As a result, on one occasion the machine crashed into the fixture because the signal from the master controller never indicated an out-of-position condition.
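The root hazard here is that a frozen interface bit looks identical to a valid one. A common mitigation, offered as a general technique rather than what our system did, is to pair the status bit with a sequence counter the master increments on every transmission, and to fail safe when the counter stops advancing. A minimal sketch (all names and the staleness window are assumptions for illustration):

```python
import time

class PositionStatus:
    """Track the master's in-position bit plus an update sequence counter,
    and refuse to trust the bit once updates stop arriving."""

    def __init__(self, stale_after_s=0.5):
        self.stale_after_s = stale_after_s
        self.last_seq = None      # last sequence number seen from the master
        self.last_update = None   # time that sequence number last changed
        self.in_position = False

    def on_update(self, seq, in_position, now=None):
        """Record a packet from the master controller."""
        now = time.monotonic() if now is None else now
        if seq != self.last_seq:
            self.last_seq = seq
            self.last_update = now
        self.in_position = in_position

    def safe_in_position(self, now=None):
        """Return the in-position bit, but only while data is fresh.

        Stale or absent data reads as False (out of position), so a
        communication stall stops the machine instead of crashing it.
        """
        now = time.monotonic() if now is None else now
        if self.last_update is None or now - self.last_update > self.stale_after_s:
            return False
        return self.in_position
```

With this pattern, the failure we hit inverts: a multi-second network delay produces a (recoverable) machine stop rather than a crash into the fixture.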
These communication challenges would not have been an issue had we implemented a deterministic network such as Profibus or DeviceNet. While either of these network technologies would have carried some additional upfront hardware cost, we would have avoided a variety of downstream costs: hardware purchases needed to help reduce network collisions, repairs after network-caused machine crashes, engineering expenses related to debugging variations in the communication cycles, and schedule delays from Ethernet failing to perform at expected levels.
We did build a very reliable machine tool, but there is still a lag in the interactions between each station controller and the master controller. So, from a controls engineering standpoint, Ethernet failed to produce the process reliability we expected.
In hindsight, there were no compelling factors dictating we specify Ethernet on these machines. We weren't going to link to an MRP system, we didn't need remote monitoring via the Internet, and we weren't looking to transfer programs or data from an engineer's desk to the machine—even at the plant level. We selected Ethernet based on the machine manufacturer's recommendations and the peer pressure that came from an apparently overwhelming industry shift toward Ethernet.
This was Liberty Precision Industries' first experience using Ethernet for actual process control, and by the end, we found ourselves asking, "Hasn't anyone else had these problems?"
Ethernet might be my protocol of choice someday, but until greater reliability is added to it, it will have to remain in the backseat.