Interface, interface, interface. Oh, what Windows and the mouse did for the interface. But now, gestures and multi-touch on small, portable, surface-based devices have changed the game. They will create, if they haven't already, a generation of non-verbal and frazzled participants.
So what happens when the lines of communication get broken, or simply get hidden, on the really big, three-inch screen of a smart phone?
Would a machine or process operator really use one? Do they stop trying? Or do they get weary or complacent and just forget to respond?
If you believe that this interface type is the real deal, why would any company put its future in the hands of a commercial third party like Google (Android) to provide the window into "my" world, not knowing if the window is going to be shut at a moment's notice? We've done it, I know, but this time will be different.
Sound drastic? Maybe, but I fear that not enough of us are even considering that this worldwide access platform might be a compromised arena.
I moved into the present day by grabbing a BlackBerry Q5 smart phone. It runs BB10 and has many really cool features and apps, so the conundrum of "free" lives on in such esoteric and non-pervasive apps as Flashlight. However, as I noted last month, it "needs" to know your location and personal information so it can turn on. Interface? No. Intrusion? Yes. But the flashlight did come in handy while I was on an emergency start-up and had to peer into the dust-laden panel and pore over drawings.
So a Q5 phone employs multi-touch and scalability amid the illusion of modernity. By that I mean that everyone knows a multi-touch gesture of two fingers 'grows' the screen.
I write this wondering whether our industrial operator stations are OK with these facts. "Oh, crap, is it a right-finger or left-finger hold, or...?" This while Rome burns and the system goes out of control.
The responsibility of control can be so easily given to those who have trouble remembering what they had for breakfast. So I guess it's clear that these new phones and tablets give us the interface we need for remote access and mobility, with the bonus of web-based commonality. Really?
Remember the F1 key? That was for help in any application. But in this new touch-based world, does F5 mean refresh in every web-based, remote, mobile HMI app? I'm pretty sure the answer is no.
So where have we gone wrong?
When we got our new phones, we were promised 10 hours a month of real, web-based TV. The screen size is 3 in. diagonal, and I am well beyond 45. What were they thinking? Can't wait for surround sound from these bubbas.
One wonders how operators might respond to an alarm, issue, page, setpoint deviation or setpoint change on these devices when they've used 24-in. screens for years. I would suggest that they might not deal with or interface with them well.
The gestures are odd, since there is no keyboard or mouse as such. Tap is left-click (easy). Tap-and-hold is right-click, and there are four others. Once you get used to it, all is good.
The Q5, however, is not the same. Tap is used often. Tap, hold and drag from various positions on the screen do different things, as well as introduce various components.
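To make the point concrete, here's a minimal, hypothetical sketch of how a touch-based HMI might translate gestures into the mouse-style actions an operator already knows. This is not BlackBerry's actual API; the function name, the 500 ms hold threshold and the gesture names are all my own illustrative assumptions:

```javascript
// Hypothetical sketch: classify a touch gesture into a mouse-style action.
// Thresholds and names are illustrative assumptions, not a vendor API.
const HOLD_THRESHOLD_MS = 500; // assumed cutoff between "tap" and "tap-and-hold"

function classifyTouch(durationMs, fingerCount) {
  if (fingerCount === 2) {
    return "zoom"; // the two-finger gesture that 'grows' the screen
  }
  if (durationMs >= HOLD_THRESHOLD_MS) {
    return "right-click"; // tap-and-hold stands in for the right mouse button
  }
  return "left-click"; // a quick single-finger tap
}

console.log(classifyTouch(100, 1)); // a quick tap
console.log(classifyTouch(600, 1)); // a long press
console.log(classifyTouch(100, 2)); // a two-finger gesture
```

Even in a sketch this simple, the problem shows: the operator has to know the threshold and the finger count by feel, with no labels on the glass to remind them.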
My biggest concern is visual availability. You can't see anything worthwhile because of the screen size, so an application such as TeamViewer accessing a normal PC with 100 tags on it would be silly.
You might wonder how that works. It's kind of like a mobile device vs. a fixed device accessing a normal website: you can get to the same data, but who knows where it is?
Make no mistake. It's not that we as a group can't learn, but just because we can, doesn't mean we should. As I said, the majority of us are not spring chickens.
Long live the 17-in. laptop with mouse!