Enough data to be dangerous

CONTROL DESIGN Editor in Chief, Joe Feeley, says an incomplete palette of operating data is just enough to be dangerous, particularly if that data is used to predict profit influences.

By Joe Feeley, Editor in Chief

I RECENTLY attended a vendor presentation that isn’t directly related to what we normally cover, but it has wide-ranging implications that could spill over into Machine Builder Nation, particularly in the way your customers define the machine performance they need.

The premise involves the always frustrating ordeal manufacturing companies put themselves through when they try to capture relevant factory-floor data, bring it into the enterprise, and use it to optimize the way they do business, from supply chain to asset management.

What’s made this largely unachievable is, first of all, the difficulty and cost of getting various generations of machines and processes and protocols to talk to each other and the enterprise reporting tools. If you can’t get all the data, you can’t expect to identify those notorious KPIs (key performance indicators) that should tell companies how they’re really doing.

An incomplete palette of operating data is just enough to be dangerous, particularly if that data is used to predict profit influences. Lacking the full picture, the factory normally organizes itself into a manufacturing silo, a maintenance silo and, often still, a quality silo. The manufacturing team is evaluated on equipment utilization percentage: the machines and processes must be running all the time. How else can the company make money?

The maintenance group is judged on machine and process availability. Keep the machines healthy and ready to go at a moment's notice, and make sure there are no breakdowns. Do what needs to be done during the time the machines aren't being used.

Picture a graph with utilization % on the y-axis and availability % on the x-axis. When utilization goes up, availability goes…down. There's less time to keep the machines healthy. The likelihood of unplanned downtime goes up. The machine or process fails. Utilization plummets, orders are lost. But availability skyrockets and maintenance can do its thing, until machine demand rises again. Everyone loses, but thanks for playing the game.
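The tug-of-war on that graph can be sketched numerically. The following toy model is my own illustration, not anything from the briefing: it assumes maintenance hours are simply whatever time utilization leaves over, and that failure risk climbs once weekly upkeep is squeezed below an assumed 20-hour threshold. The numbers are arbitrary; the shape of the tradeoff is the point.

```python
# Toy model of the utilization/availability tug-of-war described above.
# Every constant here is an illustrative assumption, not real plant data.

def weekly_outcome(utilization: float) -> dict:
    """Given a target utilization (0-1), estimate availability and failure risk."""
    hours = 168.0                       # hours in a week
    run_hours = hours * utilization     # hours the machines are producing
    maint_hours = hours - run_hours     # whatever is left goes to upkeep
    # Assumption: ~20 maintenance hours/week keeps failure risk near zero;
    # risk climbs as upkeep is squeezed below that threshold.
    shortfall = max(0.0, 20.0 - maint_hours)
    failure_risk = min(1.0, shortfall / 20.0)
    availability = 1.0 - utilization    # time the machine sits healthy and ready
    return {"availability": round(availability, 2),
            "failure_risk": round(failure_risk, 2)}

for u in (0.80, 0.90, 0.95):
    print(u, weekly_outcome(u))
```

Push utilization from 80% toward the mid-90s and the model's failure risk jumps from zero to better than even odds, which is the unplanned-downtime spiral the column describes.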

Some of you know that I ran a factory or two in my pre-magazine days, and, like many operating managers, the annual expectations on me for profit improvement were packaged with mandated corporate-wide expectations to raise things like utilization and availability percentages.

I largely ignored utilization and availability KPIs because that was the only way to be profitable. Running at 80-ish% utilization and 80% availability usually, if not always, produced more quality product and beat profit targets by a wider margin than hitting mid-90% objectives would have. Any ops guy worth anything understood that.

I was sometimes told that we would have done better if the KPIs had been met. We didn't have the data to prove otherwise. Make the company more money, lose some bonus, and thanks for playing. It's not necessarily that corporate management was dumb, though sometimes...

Companies don't have the means to gather and analyze the data they need, and therefore often can't accurately relate factors like availability and uptime to asset management and plant loadings.

The briefing was done by one of a few automation software suppliers that say they’ve cracked the code on how to do this at a reasonable cost.

The new software tools can conceivably gather the information that makes a company better at predicting failure: the company can predict optimum usage levels that avoid failure and load its plants accordingly. It can give maintenance and manufacturing complementary KPIs that the entire company can get behind.

My question to you is: do you think this would have some effect on the expectations customers put on you to build machines that run forever and never need upkeep? And, oh yeah, your price is too high. Thanks for playing.