Trusting Too Much In Data, Part 2

In recent weeks I’ve run into multiple posts, articles, and discussions concerning some findings that employee morale does not equate to productivity. Apparently some of the research groups and “better management” consulting firms have recently assembled some data analyses that refute the assumption that higher employee morale will drive higher employee productivity. The findings are generating some significant debate and some criticism.

Read: Trusting Too Much in Data

Because of the controversy, the subject invites some discussion about the importance of understanding data and its analysis, rather than simply accepting the headlines the results generate. In Part 1 of this discussion, I communicated the importance of investigating the data and reviewing the analysis before accepting the findings and conclusions. Doing so requires us, and our decision-making leaders, to be adept at data investigation and analysis so we may ask critical questions.

In this post, I want to point to another important understanding that keeps us from being fooled or misled by data analyses and findings: metrics do not necessarily make meaningful data. The truth may be far more complex than our metrics will show.

In general, there is a big difference between “data” and the “metrics” we use. I like to explain the distinction this way: metrics are measures or indicators of status or progress, while data is diagnostic. That requires some explanation.

Metrics are often derived or calculated from a multitude of measures or from data. Metrics give us a progress report or a status level of some type of performance. Data is a raw output of a process or a test or an experiment. Most times data is not useful until it is turned into some form of information by an analysis of some kind.

Let’s use the topic of employee morale affecting productivity as an example to clarify further. Employee morale is a metric. It is difficult to say that we can truly measure morale. Ultimately we are making a judgment, not collecting an output. We answer a survey question about how we feel in order to provide an indicator of morale.

Likewise, for most organizations, a productivity number is a metric, not data, because it is calculated from other measures or numbers. If we report productivity in terms of units produced per man-hour, for example, we need a measure of units produced in a period of time and another measure of man-hours paid in the same period of time.
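
To make the derivation concrete, here is a minimal sketch in Python. The numbers are invented for illustration:

```python
# Hypothetical raw measures for one reporting period (invented numbers)
units_produced = 4_800   # counted output over the period
man_hours_paid = 1_200   # paid labor hours over the same period

# The productivity "metric" is derived: a ratio of measures,
# not a raw output of any process
productivity = units_produced / man_hours_paid
print(f"{productivity:.1f} units per man-hour")  # 4.0
```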

I don’t think of either measure as data because neither one is particularly diagnostic. In other words, if one is unusually high or low, there is no link to any sort of cause. Of course, if the ratio of morale to productivity changes because one metric drastically changes, we can point to either morale or productivity, but that still doesn’t give us any indication of cause.

To get to the cause of a performance measure we need more specific numbers, numbers that often have no meaning outside of the context of how they contribute to the performance. For example, the rate of machine cycles, the number of setup changes, the number of reworked pieces, or the number of orders started are just numbers until we understand how they influence the performance value of productivity. Those numbers or measures are data.

Why is it important to perceive a difference? Because many different sources of data might contribute to a given metric, a change in that metric might be due to any number of possible causes. Furthermore, two metrics may or may not share one or more common causes or sources of data. Therefore, when we compare one metric’s influence on another, it can be difficult or impossible to understand the real relationship between them if we can’t see the different sources of data, and the cause and effect, at the same time.

Because metrics have many sources of influence, comparing two or more metrics amounts to comparing large sets of potential causes and confounded relationships, so the relationship between them is messy. In data analysis terms, messy means noisy. Noisy means that the likelihood of drawing meaningful, reliable, repeatable correlations is poor.
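
A small simulation can illustrate the point. In this sketch (my own illustrative construction, not data from any study), each “metric” is the sum of several independent influences plus one weak shared driver. Repeating the “study” shows how unstable the measured correlation is:

```python
import random
import statistics

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(n_periods=50):
    morale, productivity = [], []
    for _ in range(n_periods):
        shared = random.gauss(0, 1)  # one weak common driver
        # Each metric also aggregates several unrelated influences (noise)
        m = 0.2 * shared + sum(random.gauss(0, 1) for _ in range(5))
        p = 0.2 * shared + sum(random.gauss(0, 1) for _ in range(5))
        morale.append(m)
        productivity.append(p)
    return pearson(morale, productivity)

# Repeat the "study" several times: the measured correlation bounces around,
# sometimes positive, sometimes negative, never reliable
for trial in range(5):
    print(f"trial {trial}: r = {simulate():+.2f}")
```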

In other words, any time we see a headline that compares one metric consisting of a complexity of potential influences with another, it is wise to be skeptical of any findings that the comparison provides. Just because a metric appears as a measurable number does not mean that the relationship between metrics is as plain as the relationship between specific data.

Consider the comparison of data instead of metrics to finish the point. If we choose to compare the number of pieces produced with the number of pieces ordered and the time it took to produce them, we can clearly and logically perceive the connection. By contrast, if we compare the number of sandwiches purchased from the vending machine with the number of blue product pieces manufactured, even if our math shows a correlation, we can clearly see that we should not accept that there is any relationship.
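
As a sketch of how the math can still “show” a correlation between unrelated series, consider two invented series that both happen to trend upward over the same 20 days. A simple Pearson calculation comes out near 1.0 even though there is no causal link:

```python
import statistics

# Two unrelated series that both happen to trend upward over 20 days:
# vending-machine sandwiches sold and blue product pieces made (made-up data)
sandwiches  = [12 + d + (3 if d % 4 == 0 else -1) for d in range(20)]
blue_pieces = [500 + 8 * d + (10 if d % 5 == 0 else -4) for d in range(20)]

mx, my = statistics.mean(sandwiches), statistics.mean(blue_pieces)
cov = sum((x - mx) * (y - my) for x, y in zip(sandwiches, blue_pieces))
sx = sum((x - mx) ** 2 for x in sandwiches) ** 0.5
sy = sum((y - my) ** 2 for y in blue_pieces) ** 0.5
print(f"r = {cov / (sx * sy):.2f}")  # near 1.0, yet clearly no causal link
```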

There are many possible influences that could drive the morale score: anything from recent production problems or the time since performance evaluations to the weather. Likewise, there are a great many influences on productivity, not just employee morale. So when we receive news that there does not appear to be any meaningful correlation between employee morale and productivity, we should not let the findings influence our opinions, our actions, and certainly not our priorities. We should instead wag our finger at those who did the study because they wasted their time trying to compare two things amongst so much noise in the measures.

That brings us to the final point. Don’t make the mistake of expending time and energy comparing two or more measures influenced by so many possible causes of performance. If you do need to make a comparison, be very diligent in eliminating or controlling the influences that will confound it. If you are considering the analysis of others where complex measures or metrics are compared, dig diligently and deeply into the details of the relationships:

  • What proves there is a cause/effect relationship?
  • Are there factors that affect both measures – were they controlled or baselined to gauge their influence? (One way to control for a shared factor is sketched after this list.)
  • What else could cause one metric or the other to change that is not explained by the relationship – was that controlled or measured?
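
To illustrate the second question, here is a generic sketch of one common way to control for a shared factor: regress it out of both measures and correlate the residuals (a simple partial correlation). The scenario and numbers are hypothetical; overtime stands in for a factor that drives both morale and productivity:

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def residuals(ys, xs):
    """What is left of ys after removing a simple linear fit on xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]

# Hypothetical weekly numbers: overtime drives BOTH metrics here
overtime     = [2, 8, 5, 12, 3, 9, 6, 11, 4, 10]
morale       = [8, 4, 6, 2, 8, 4, 6, 3, 7, 3]    # falls as overtime rises
productivity = [3.5, 4.4, 3.9, 4.9, 3.6, 4.5, 4.0, 4.8, 3.7, 4.6]  # rises with it

print(f"raw r        = {pearson(morale, productivity):+.2f}")  # about -0.99
m_res = residuals(morale, overtime)
p_res = residuals(productivity, overtime)
print(f"controlled r = {pearson(m_res, p_res):+.2f}")          # much weaker
```

In this constructed example, the strong raw correlation is an artifact of the shared driver; once overtime is removed from both measures, most of the apparent relationship disappears.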

I admit to taking artistic liberty in describing a difference between metrics and data in this post. Don’t get hung up on my use of vocabulary. Focus on the message that measures for which there are a great many sources of influence do not make good candidates for statistical correlation. The relationships are too complex for the resulting numbers to have any diagnostic meaning. Metrics generally do not make good candidates for cause/effect correlations.

Don’t allow yourself to be influenced by findings or headlines based on a comparison of complex measures; be skeptical. Instead, tear into the data that might truly demonstrate or refute any relationship.

Stay wise, friends.

If you like what you just read, find more of Alan’s thoughts at www.bizwizwithin.com
