What’s The Purpose?
One of the reasons that continuous improvement programs fail, or innovation methodologies struggle to take root, is that not everyone involved shares the same set of expectations. Invite a debate within your organization to test the vision and establish your own norm.
The Greek philosophers of the ancient world believed that truth, as perceived by human culture, could only be determined through healthy, rational debate. I think that in matters of deciding what is important or determining common values, their belief is sound. Debate is a good method for determining what is right. Also, what is right for one organization might not be right for another.
A couple of weeks ago I enjoyed an opportunity to have a very skilled engineer describe to me his recent engineering project and the results of his engineered solution. I don’t do as much engineering as I used to, or as much as I might like. Beyond the opportunity to live vicariously through his presentation, I learned again a valuable lesson in driving culture change.
Allow me to share a little and set the stage for today’s conversation. The problem he was tasked to solve involved a manufacturing process that uses an injector to fill a reservoir with fluid. If fluid leaks out of the reservoir or misses the opening, not only do the manufacturing equipment and product get messy, but the spilled fluid can cause corrosion and, with it, performance problems in the product.
Unfortunately, variation in the system invites opportunities for spills and leaks. If the injector does not line up horizontally with the opening in the reservoir, the seal can leak during the injection process. If the pressure on the opening seat is too great, the seal fails and leaks; if it is too little, it leaks as well. In short, variation in lateral movement (alignment) and in vertical movement (seating pressure) can cause both manufacturing and performance problems.
The engineer’s solutions were simple, elegant, and effective. He made some brilliantly simple changes that were not only “robust” or immune to the inherent variation, but he eliminated a few components that tended to wear and increase manufacturing maintenance and expense. It was fun to witness.
Those readers versed in Design for Six Sigma or Lean Manufacturing might have perked up to the trigger phrase, “robust to variation,” in the paragraph above. Yes, in both methodologies we try to deliberately design out and eliminate variation, or to build systems that are not impacted by the natural variation of the process.
In fact, the organization to which the engineer belongs has a developing Design for Six Sigma (DFSS) program. I asked the engineer to tell me about the Sigma variation measurements before and after his solution. He confessed to me that he did not use the DFSS methodology on his project. He had been through the initial training in DFSS, but did not use it. I didn’t dig deeper; one of his mentors was standing with us and I didn’t intend to create an embarrassing scene for either of them in the public venue we inhabited.
His project, the problem he solved, was all about variation. It was the perfect DFSS certification opportunity. We can speculate that perhaps he took the training after his project was well under way, or perhaps he didn’t have a mentor engaged as he began his project. Perhaps there simply wasn’t enough incentive or communication of importance for him to choose to use the methodology on his project. All of those are common causes for missed opportunities to ingrain a method.
Even if he did not follow the project plan and metrics of DFSS, and didn’t document his project in a format that would earn him his DFSS certification, he still designed a solution that eliminated most or all of the impact of process variation on process output. Is that not the intent of the DFSS methodology? Thus we start the debate!
What is wrong with the engineer’s project if he did not follow the DFSS methodology completely but still achieved a “robust to variation” solution? Let’s explore.
Some might argue that nothing is wrong, that the primary objective of the DFSS methodology is to provide tools and an approach that aid in the understanding of variation and how to eliminate it, design systems robust to it, or to otherwise make variation impotent to affect the outcome. A methodology should not get in the way of good, common sense engineering, and as long as the end result is good, it shouldn’t matter how much or how little of the methodology was applied.
I have taken such a stance myself in certain contexts. It is a valid stance. It depends upon your purpose for having adopted the DFSS methodology.
The counter position perceives things differently. If the purpose of the DFSS methodology is to fulfill a more Motorola-like Six Sigma vision of controlling every process to near perfection in order to masterfully manage quality and costs, then the engineer’s project is not satisfactory. Without a statistical measure of the variation and the errors it produces, and a corresponding after measure expressed as a Sigma score, we cannot know whether his project is finished, or whether more work remains to achieve optimal performance.
A perception somewhere in the middle between a fully realized Six Sigma vision and the ultimate pragmatist might see a missed opportunity without worrying about Sigma scores. Had the engineer applied the Define-Measure-Analyze-Design-Verify (DMADV) approach, examined process capability scores, and studied the interactions of primary and contributing factors and their supporting statistics, he might have identified several viable design options and had more data-driven insight into which one most effectively addressed the variations in question. The methodology might have led to a more optimal solution than the one common sense and experience produced.
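For readers who want to see what those measurements actually involve, here is a minimal sketch of the arithmetic behind a process capability score and a short-term sigma level. The specification limits and pressure readings below are hypothetical, invented purely for illustration; only the formulas (Cpk, and the convention that the short-term sigma level is roughly three times Cpk) come from standard Six Sigma practice.

```python
from statistics import mean, stdev

def process_capability(samples, lsl, usl):
    """Cpk: how many 3-sigma half-widths fit between the process
    mean and the nearer specification limit."""
    mu = mean(samples)
    sigma = stdev(samples)  # sample standard deviation
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    # Short-term sigma level is roughly 3 * Cpk; conventional
    # Six Sigma reporting also adds a 1.5-sigma long-term shift.
    return cpk, 3 * cpk

# Hypothetical injector seating-pressure readings (arbitrary units)
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7]
cpk, sigma_level = process_capability(readings, lsl=9.0, usl=11.0)
print(f"Cpk = {cpk:.2f}, short-term sigma level = {sigma_level:.2f}")
```

A before-and-after pair of numbers like these is exactly the evidence the engineer could not supply: without it, no one can say whether his process sits at three sigma or six.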
Which of the three stances, or another, is right? It depends upon your vision. It depends upon the purpose your organization had for adopting a methodology. Unfortunately, we don’t always clearly understand that vision.
The lack of common understanding of purpose is especially problematic within organizations where an executive team determined that a popular methodology would solve a continuous improvement need and decided to adopt it. Consultants are hired to train everyone, and they communicate the higher vision of the methodology, but no one really communicates the organization’s intent with regard to that methodology. Instead we just stumble around until we have been chastised, chided, punished, and possibly even rewarded, into some comfort zone of behavior.
That stumbling around very often ends in a comfort zone where everyone pretends the methodology doesn’t apply to him or her. In other words, it is never really adopted.
Thus, I recommend, very strongly, that the reader initiate the debate. Crack open the training materials that lay out the vision of your methodology and test that vision against the perceptions and intent of the organizational leadership. It doesn’t need to be a challenge to the methodology. Don’t approach it in a way that makes it look like you are trying to deflate or undermine the program. Just ask what the leaders want from it.
Are the leaders intent upon tracking a Sigma score for every production and office process? Do leaders want to have a more data-driven understanding of influential factors and drive a thorough vetting of design options for optimal performance (translation: spend a little more time to guarantee the best possible outcome)? Or, do leaders want to establish a proven system of problem solving to be used at a practical level and to the degree necessary to achieve a good-enough, better solution with an ultimately pragmatic philosophy?
It can be immensely powerful to be able to understand and communicate the intent we have for the system we want. Imagine trying to sell someone a car without knowing if they want to drive fast, transport a large family, or go on off-road adventures on the weekends.
The intent and purpose of our programs are fundamental to directing our organizations to adopt and use them appropriately, yet they are among the most commonly overlooked or skipped elements of program deployment. A simple way to determine what the vision and purpose are, especially if the initial answer is, “I don’t know,” is a healthy, constructive, rational debate. Let the debate bounce around the boundaries of comfortable behavior instead of relying on actions and consequences to do the exploration.
Stay wise, friends.
If you like what you just read, find more of Alan’s thoughts at www.bizwizwithin.com.