By John Rezabek, Contributing Editor
In the land of telecommunications and information technology, our wide-area-network brethren have been specifying and installing thousands of miles of fiberoptics of all flavors for decades. If you're lucky, your cable TV provider is converting over from copper to "Fiber TV" in your neighborhood.
Our profession has been using fiber for nearly as long, but it likely accounts for less than 5% of the total terminations in our networks. Once installed, the infrastructure can be so reliable that documentation gets lost or forgotten, and troubleshooting skills get pretty rusty. Such was the case this past winter at the complex where I work, when some "remodeling" interrupted what had been a quiet and efficiently functioning interconnection with our neighbor.
They're huge compared with us, so as a feedstock supplier as well as a customer of our excess energy, we do what we can to keep them happy. Ten years ago, when their fate became tied to ours (our upsets or shutdowns could have a serious impact on them), they wanted real-time monitoring and even control of some of the loops in our plant—especially the ability to "sever the cord" (shut a valve) if it looked like our ship was going down. Because our site had once been integrated under a single owner, we found an existing, albeit circuitous, network of fiber—installed by our IT people—that had some spare cores.
The original scheme used some proprietary LonWorks-based I/O, which used Raytheon's Control by Light (now CBL Systems) routers. Though this hardware was devised and marketed for building automation, it supported some nifty accommodations for communication diagnostics and fault tolerance (for example, degrading from full duplex to simplex if a fiber was bad or disconnected). We were impressed to find no perceptible difference in latency, whether the 12 in. control valve was stroked from our control house, or by the real-time analog output from our neighbor on the other end of about 2 miles of multi-mode fiber.
So when I was called out of a meeting on a bitterly cold January day this year, I didn't even suspect the fiber when our operators said our data historian must be down because our neighbor's signal was showing "bad." The interconnection between us had since been converted to ordinary 100 Mbps Ethernet and remote I/O, running over the same fibers we'd used for the LonWorks routers, and sure enough the rack over in their control house was showing no communications. Between our sites, we probably had a half dozen people checking power supplies, swapping remote I/O processors, cycling power on switches, and unseating and re-seating cards. Our neighbors thought they had some bad fiber, and when they brought their optical time-domain reflectometer (OTDR) to our site, it showed my fiber went only about 1,600 ft before it abruptly stopped. A day or two later, I found some aerial runs coiled near a pole where an old office trailer had been demo'd the day this all began. Guess who was buying the next time we met at the local watering hole.
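For readers who haven't used one, the fault location a reflectometer reports comes from a simple time-of-flight calculation: it launches a light pulse, times the reflection from the break, and divides the round trip by two. A minimal sketch of the arithmetic (the 1.48 group index and the 4.8-microsecond round-trip time are illustrative assumptions, not figures from the incident):

```python
# Sketch: how an OTDR turns a reflection time into a distance-to-fault.
# One-way distance = (speed of light in the fiber) * (round-trip time) / 2.

C_VACUUM_M_PER_S = 299_792_458   # speed of light in vacuum, m/s
GROUP_INDEX = 1.48               # assumed group index for typical glass fiber
FT_PER_M = 3.28084

def fault_distance_ft(round_trip_s: float,
                      group_index: float = GROUP_INDEX) -> float:
    """Distance to a reflective event, given the measured round-trip time."""
    speed_in_fiber = C_VACUUM_M_PER_S / group_index   # m/s in the glass
    one_way_m = speed_in_fiber * round_trip_s / 2.0   # halve the round trip
    return one_way_m * FT_PER_M

# A reflection arriving about 4.8 microseconds after launch works out to
# roughly 1,600 ft of fiber.
print(round(fault_distance_ft(4.8e-6)))
```

The microsecond-scale timing is why even a handheld unit can pinpoint a break within a few feet over miles of cable.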
There are a lot of choices today when considering what kind of network infrastructure to deploy. For Ethernet-based networks, wired solutions are easy enough when the distances are less than 100 m. In our offices and certainly our homes, we like the additional convenience and flexibility of wireless 802.11a/b/g/n. But we've seen firsthand where wireless bandwidth starts to take its toll on data rates and quality of service. After experiencing the near-instantaneous call-up times and better-than-one-second updates of operator HMIs on wired networks, we can get irritated when wireless bandwidth succumbs to low-wattage clients, intervening buildings and vessels, weather or, worst of all, unwelcome intruders. Wireless is great technology and has unleashed a cadre of killer apps, but it's still not the panacea for every longer-distance network challenge encountered in industrial applications.
In a complex where a number of independent large process entities share critical utilities such as natural gas and superheated steam, the day can turn from sunny to glum in seconds. The ability to understand the situation and quickly take appropriate measures means the difference between staying online and the real possibility of struggling through a sudden shutdown and lengthy restart—maybe in the dead of winter, or during a period of unusual profitability. When the mission is critical, the speed, reliability and durability of fiberoptic communications are hard to beat.