By Jeremy Pollard, CET
We have talked about thin-client/server-based applications, VB vs. COTS software, databases and other issues that can make projects more efficient, usable, cost-effective and innovative.
That's all good. But what about the bad stuff? It's time for a session on what to think about when deciding what the system should do.
If you're going to be creating a server-based system with some clients on the floor and using any connection method, then you'll need to find the bottlenecks.
A server is a central computer that is the boss. It does not have to run a server-grade OS such as Windows Server 2008 R2. It could be running Windows XP, Linux or HP-UX, or serving web pages with something like Apache, but we'll keep this in the Microsoft world for pretty obvious reasons.
There is a patch to Windows XP that permits four concurrent users. If your server has it, then three remote users can log in using the RDP protocol while one works at the console, and you have a small client-server environment.
Windows XP also can act as a file server, so clients can connect to an SQL database, for instance, but with a 10-connection limit. Go figure: Microsoft considers anything beyond that to be territory for a true server-based OS.
The Elusiva Terminal Server is unlimited (per license), and a true server-based OS can handle many users. Application servers such as 2X also permit multiple users to run apps from any server.
So, where are the bottlenecks and the big issues? Server hardware, network reliability and software stability are the showstoppers. Server hardware must be able to shift execution easily. That could be as simple as a network cable switchover or moving the RAID drives to the backup server. A server farm could solve this, but it probably takes control of the server away from the controls domain and into the IT domain.
Networking is key. All clients communicate over this network. Typically, it is one cable from the client to a switch or router. One client down? That's no problem.
Trouble with the main switch/router that connects the server? Big problem. Have the tools in place to be sure you can track down an issue should one arise. Network monitoring software tools such as IntraVue from Network Vision will help. Likewise, packet monitoring software such as Wireshark also can be very helpful.
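Those tools have to be in place before trouble starts. As a minimal sketch of the idea (no substitute for IntraVue or Wireshark), the Python script below polls a list of hosts for basic TCP reachability; the node addresses and ports are hypothetical.

```python
import socket

def is_reachable(host, port, timeout=1.0):
    """Attempt a TCP connection; True if something answered on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def survey(nodes):
    """Return the (host, port) pairs that did not answer."""
    return [node for node in nodes if not is_reachable(*node)]

if __name__ == "__main__":
    # Hypothetical floor network: two RDP clients plus the SQL server.
    nodes = [("10.0.0.11", 3389), ("10.0.0.12", 3389), ("10.0.0.2", 1433)]
    for down in survey(nodes):
        print("DOWN:", down)
```

Run something like this from a scheduled task and log the output, and you have a crude first alarm that tells you whether the problem is one client or the switch feeding all of them.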
Software stability is key. A rogue application can take down a server in a heartbeat. If it's a COTS package, you don't have much control over the solution, and maybe a reboot is in order. Or worse, there's the blue screen of death. Once that happens, the server can't serve the clients, and they all go dead. That's the worst situation you can have.
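When the package itself can't be fixed, an external watchdog that relaunches the application after an abnormal exit is a common stopgap. Here's a minimal sketch in Python, assuming the application can be started from a command line; the retry limit and backoff are illustrative, not a recommendation.

```python
import subprocess
import time

def supervise(cmd, max_restarts=3, backoff=5.0):
    """Run cmd; relaunch it on a nonzero exit, up to max_restarts times."""
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return restarts              # clean shutdown: stop supervising
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError(
                f"{cmd!r} kept crashing; gave up after {max_restarts} restarts")
        time.sleep(backoff)              # brief pause before relaunching
```

A watchdog only papers over the instability, of course, and every relaunch still interrupts the clients, so it belongs in the contingency plan rather than in place of one.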
This is why a solid strategy is so important, and contingency planning is just as critical.
Vendors sometimes forget that just because an application runs on Windows and talks over Ethernet doesn't mean it gets full access to all of the bandwidth. One common HMI uses multicast messages for all of its communications. A network hog like that will kill response time as its traffic floods every node on the segment.
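You can verify a multicast claim like that yourself by joining the group and watching what arrives. A minimal Python sketch follows; the group address and port are hypothetical, so check the vendor's documentation for the real ones.

```python
import socket
import struct

MCAST_GRP = "239.1.2.3"    # hypothetical group used by the HMI
MCAST_PORT = 5007          # hypothetical port

def open_multicast_listener(group=MCAST_GRP, port=MCAST_PORT):
    """Join a multicast group; return a socket ready to receive datagrams."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Tell the kernel to join the group on the default interface.
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

if __name__ == "__main__":
    sock = open_multicast_listener()
    sock.settimeout(2.0)
    try:
        data, sender = sock.recvfrom(2048)
        print(f"{len(data)} bytes of multicast from {sender}")
    except socket.timeout:
        print("no multicast traffic seen in 2 s")
```

If datagrams pour in the moment you join, you know exactly where your bandwidth is going.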
Remember that, when you make changes to the software application, you might have to restart the application on the server, and that will take down the client side for that application. Can your strategy stand that interruption?
Typically, you would want to do the upgrade during a soft time, but that might not be possible.
System reliability/availability is the most important measure of client and server-side applications. They might work great for five years, and then "pow"—a disruption arises and the system is whacked like a stiff Tony Soprano backhand.
This is where the strategy and documentation of that process come in. It's likely that the people who put the system in place have moved on, hardware is no longer available, the software platform has changed, and there is no budget for replacement—only repair.
So, remember: while all of these great tools for developing and implementing very cool applications and platforms let us shine, the lurking issues also must be acknowledged.