What to do with all that data once it’s available.
Steve Wise is the Vice President of Statistical Methods at InfinityQS, a leading provider of statistical process control (SPC) software. He recently sat down to discuss the best practices for implementing data-driven software and how manufacturers can most effectively use the information it provides.
What do you think is the #1 obstacle manufacturers face in implementing new software platforms?
There are three major obstacles: inertia, lethargy and a lack of leadership.
It is always easier to do nothing; this is true in our personal and business lives. If we can get by for another day without having to do too much, we'll take that path. As one day leads to another, good plans simply do not get implemented.
In the manufacturing world, the reality is that implementing a new software platform can be a BIG project when you start talking about retooling for the new system, unplugging the old system, and developing a backup plan in case there are glitches after the go-live date. Software can be a powerful enabler that allows you to understand things about your environment that you could only guess about before.
But doing it right requires vision, energy and time, all of which are at a premium for a lot of companies.
A recent Manufacturing.net survey found that nearly two-thirds of respondents felt their organization wasn’t effectively using all the data housed in their ERP system. This was neither surprising nor new, but why does this trend continue?
I can’t tell you how rewarding it is when a company implements a plan and the light bulb goes on and they finally see how data can drive truly effective action. When data-based systems are installed, the implementation is too often considered successful once the data are flowing and people can get to the data.
Yes, watching data automatically flow into a database is a beautiful experience, but visualizing data is just the first step. There are three milestones I look for in order to most effectively put manufacturing data to work.
The first is to process all the streams of data to determine their usefulness. Some data are useful for making real-time decisions, some are useful for making long-term strategic decisions, and some are just taking up space. Real-time data need special analysis and alerting tools that warn you when a change is needed and that distinguish true outliers from routine variation, so you don't tamper with a process that is already stable.
Data that are used for longer-term decisions need to be cobbled together in a way that exposes all key trends and indicators, while useless data need to be turned off, throttled back or repurposed.
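The real-time alerting idea above can be made concrete with a minimal sketch. This example uses hypothetical fill-weight readings and the textbook three-sigma Shewhart limits; it is an illustration of the concept, not a description of any particular InfinityQS feature.

```python
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Estimate the center line and +/- k-sigma limits from known-stable data."""
    center = mean(baseline)
    sigma = stdev(baseline)
    return center - k * sigma, center, center + k * sigma

def flag_outliers(stream, lcl, ucl):
    """Return (index, value) pairs that fall outside the control limits."""
    return [(i, x) for i, x in enumerate(stream) if not (lcl <= x <= ucl)]

# Baseline from a stable period (hypothetical fill weights, in grams)
baseline = [50.1, 49.9, 50.0, 50.2, 49.8, 50.1, 49.9, 50.0, 50.1, 49.9]
lcl, center, ucl = control_limits(baseline)

# New readings: only true outliers trigger action;
# in-limit variation is left alone so a stable process isn't tampered with
alerts = flag_outliers([50.0, 50.2, 51.5, 49.9], lcl, ucl)
```

Only the reading of 51.5 falls outside the limits and raises an alert; the others are ordinary variation and prompt no change.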
Second, data in an ERP system are only one source. What about data from Quality and Manufacturing Execution systems? Are there meaningful ways to integrate those data to aid in decision making? If so, what needs to happen to bring those data together?
And lastly, what about the metrics? Do they support decision-making both upstream and downstream? Do they reward the right behavior?
If you had to identify one data set that is often overlooked or underutilized, what is it and why do you think that's the case?
The most overlooked data opportunity is to use variables data sampling rather than attribute sampling. Many quality-related data streams are measuring weak indicators such as yield, go/no-go, pass/fail, or conforming/nonconforming. These types of data are all attribute data; more specifically, defectives data. These data boil down to a 1 or a 0.
Defectives data sets only help someone feel good or feel bad about whatever those data represent. If the yield is bad, more powerful variables data are needed to isolate and help solve the problems. Variables data can be measured on a continuous scale, such as temperature, diameter and cycle time.
One of the biggest mistakes companies make with industrial data is to take a stream of variables data and dumb it down to a pass or fail. A better way to report and visualize data is to understand the data set's distribution and predictability, and that can be done efficiently only with variables-type data.
At the recent InfinityQS users' conference, some light-hearted remarks were made about the conflicts between operations and IT. What advice would you offer to help these two areas of manufacturing work together more effectively?
The truth is that Quality and IT have an opportunity to make each other look very good. The complexities of today’s manufacturing systems result in so many visibility gaps that it becomes a huge win when a quality platform is able to deliver relevant intelligence to the people that can most effectively use it. Quality and IT both have an interest in taking the guesswork out of manufacturing operations and both greatly benefit when a company’s culture becomes more data driven.
That said, many of the most successful software-based projects are led by the IT department, and the most successful of those are the ones in which the IT leaders rely on Operations' user requirements as their guide. The problems between Operations and IT are magnified when IT is brought in late to a project that was ill conceived by someone in Operations who tried to bypass IT and just get the project done without "unnecessary IT delays." Eventually, the non-IT folks realize their technical shortcomings, but the damage is already done.
If you could give U.S. manufacturers one thing, what would it be and why?
When it comes to data management, I would encourage manufacturers to ask basic questions that challenge the purpose of every stream of data.
- Who is the customer of the data?
- What decisions are going to be made with that data stream?
- If the data values start to increase (or decrease or remain the same), is that good or bad?
- If the data stream says that something needs to be done, is there any infrastructure in place to act on the data?