Is your HMI lying to your customers?

June 10, 2005
Senior Tech Editor Rich Merritt writes in this month's SpecMate column that if your HMI doesn’t tell the truth about machine control conditions, trouble might be right around the corner.
By Rich Merritt, Senior Technical Editor

YOU THINK THE control system has your machine’s operating variables right where the customer wants them. That’s usually good, but in a sophisticated control scheme, you might not know just how hard the system is working to maintain that control. If your HMI doesn’t tell the entire truth about machine control conditions, trouble might be right around the corner.

Operating an automated machine often is described as long periods of boredom, interrupted by moments of sheer panic. I recently read a book like that: Taming HAL: Designing Interfaces Beyond 2001 (by Asaf Degani, St. Martin’s Press, ISBN 0-312-29574-X). The book has several long, boring passages explaining how autopilots work, interrupted by moments of horror, when the author describes how ships run aground and airplanes crash. In every case, the mishap occurred because an operator interface wasn’t up to the job.

The author’s premise is that the HAL 9000 supercomputer in the movie 2001: A Space Odyssey is just the best-known example of an automation system that ran amok without warning its human operators via an HMI. The computer made mistakes, did not inform its operators that it had control problems, and eventually went berserk, endangering the mission. The book demonstrates how automation systems have been screwing up in similar fashion for years, especially in airplanes.

The author, a research scientist at NASA, specializes in flight-deck procedures. What he shares about airplane automation will make you consider driving or taking the train to your next big event.

All automation engineers should read this book. You’ll learn about basic HMI problems such as why those annoying VCR programming screens don’t work or why you can’t set the alarm clock in your hotel room. The author explains how HMI design problems lead to user frustration, confusion, accidents, and sometimes death.

You’ll encounter terms and parameters they don’t teach at your control system vendor’s HMI display configuration classes: non-deterministic behavior, automatic transitions, coupling, population stereotypes, mode engagement vs. mode activation, walk-in interfaces, states and regions, envelope protection, and “automation surprise.”

You’ll also learn about the dreaded “automation lock.” That’s when an automated system drives itself into an unsafe region where, no matter what it does, failure or disaster will occur.

In an airplane, for example, the automatic pilot may be correcting for a situation such as wing icing without informing the pilots that it is having a difficult time. When it no longer can keep the plane flying safely, it suddenly disengages, and the plane corkscrews toward the ground. The pilots, who might have been schmoozing with the flight attendants all this time, suddenly are presented with an airplane behaving violently in a dangerous flight condition, and they have no idea why.

The parallel with automation is clear: Suppose a system is controlling temperature in an injection molding machine by a combination of coolant flow, agitation, level and pressure. For one reason or another, it reaches the limits of all the controlling variables, but the temperature continues to increase. It has reached automation lock: no matter what it does now, it cannot bring down the temperature, so it sounds an alarm.

By the time a human operator responds to the alarm, an explosion or failure might just be seconds away.
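
To make the idea concrete, here is a minimal sketch of an automation-lock check in Python. The variable names, limits, and deadband are all hypothetical; nothing here comes from the column itself.

    # Hypothetical automation-lock check for the injection molding example.
    # An actuator is "saturated" when the controller has pushed it to its limit.

    def automation_locked(outputs, limits, temp, setpoint, deadband=2.0):
        """Return True when every manipulated variable is at its limit
        but the temperature is still outside the acceptable band."""
        all_saturated = all(outputs[name] >= limits[name] for name in outputs)
        out_of_band = temp > setpoint + deadband
        return all_saturated and out_of_band

    # Example: coolant flow, agitation, level and pressure all maxed out,
    # yet the melt temperature keeps climbing past the setpoint.
    outputs = {"coolant_flow": 100.0, "agitation": 100.0,
               "level": 100.0, "pressure": 100.0}
    limits  = {"coolant_flow": 100.0, "agitation": 100.0,
               "level": 100.0, "pressure": 100.0}

    if automation_locked(outputs, limits, temp=235.0, setpoint=220.0):
        print("ALARM: all control variables saturated, temperature still rising")

The point of the check is its timing: by the time this condition is true, the system already is locked, which is exactly why the alarm alone comes too late.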

In cases where the author says human error was involved, there was nothing wrong with the automated system. Even when an airplane crashed or a ship ran aground, the automation was doing exactly what it was programmed to do right up to the moment of impact. Its HMI did not, however, warn anyone that the system was not doing what it was expected to do. A tiny indicator light that flashes for three seconds is not as informative as alarm bells and horns.

In cases when a disaster occurred because of a control problem, the operator interfaces didn’t tell their human operators that a problem existed until it was too late to correct it. The HMI should have said, “The airplane is barely under control and I am running at the limit. Please take a look.”
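
One way an HMI might deliver that kind of early warning is to display control effort continuously and escalate before the controller saturates. The sketch below is only one possible approach; the thresholds and message text are invented for illustration.

    # Hypothetical graded warning based on how hard the controller is working.
    def effort_status(output, limit, warn=0.80, urgent=0.95):
        """Map controller output (as a fraction of its limit) to an HMI message."""
        effort = output / limit
        if effort >= urgent:
            return "URGENT: running at the limit -- please take a look"
        if effort >= warn:
            return "CAUTION: control effort high ({:.0%} of limit)".format(effort)
        return "Normal"

    print(effort_status(output=96.0, limit=100.0))
    # -> "URGENT: running at the limit -- please take a look"

A graded display like this gives the operator minutes of warning instead of seconds, and it says plainly how hard the automation is working rather than hiding the struggle behind a single indicator light.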

When a modern automation system runs reliably day after day, users might come to rely on it so much that they develop an over-trust condition, in which they no longer monitor the controls. Worse, the author says, they might even dismiss clues that the automation is not working. An HMI designer cannot let such a situation occur.