Today we have many tools and technologies at our disposal to aid both viewpoints on product development. Which strategy we choose should be determined by the nature of our products, our product development needs, and our business strategy. Either way, success depends upon investment in both resources and skills.
Clearly, the debate over models versus prototypes has raged since the Renaissance, when great masters such as Galileo built numerous models alongside their paper designs and calculations. Perhaps it goes as far back as the ancient Greeks and Egyptians, who likely built models before venturing to build their grand masterpieces of architecture, mechanical motion, or hydraulic wonder. Perhaps it goes back further than that.
Today, we have sophisticated, computer-driven tools for building solid models, calculating stress and strain, anticipating motion and deflection, fluid flow, thermodynamic processes, and even estimating cost. We even engage or employ specialists to help us customize or develop specific simulation capabilities.
Certainly, with all of these sophisticated modeling and simulation capabilities, shouldn't we be steering our product development strategies away from prototyping and testing and toward modeling and simulation instead? Except that our tools for prototyping and testing are also advancing in sophistication and capability at an equal pace.
We have numerous rapid prototyping methods, from SLA to same-day custom molded or machined components. The same computerized calculation capabilities that enable advanced simulations also enable rapid programming and precise control of prototype development systems. We can design, print, and experiment on several printed circuit boards in a single day if we have the right equipment and components on hand.
Similarly, real-time data acquisition and computerized control of configurable test equipment make it much more practical to experiment and test quickly across a wide variety of product scenarios. As simulations become more powerful and accessible, prototyping and testing are becoming quicker and more economical to perform.
For those of us developing product development capabilities, or trying to further enable existing product development organizations and systems, it becomes an important strategic decision to pick the mode in which we should invest our limited resources. For most of us, building capability for both is just not financially feasible.
So how do we decide which is best? I have some insights to offer to help make the decision. No, I don’t have an opinion. I learned long ago that the right answer depends greatly upon the needs and constraints of the product development organization.
Before we address specific pros and cons for each strategy, let’s first come to an understanding about product development itself. Product development is a learning process.
Throughout the course of designing and developing our new wonders, we are constantly thinking up hypotheses, ideas that might work, and evaluating them to determine how to proceed further. We calculate, and we make changes. We brainstorm and we make selections and we make changes. We experiment, we ask questions of experts, we talk with suppliers, and we try things. Then we make more changes to our idea.
Always we are imagining and evaluating. We learn and we make decisions. Fundamentally, that is what the design and development process is. If we didn’t go through such a process we could simply draw up a design and release it to production, probably in a day or two. That doesn’t happen. New designs are based on new knowledge, learning.
So the fundamental question to ask and answer when deciding on your product development strategy is, “Which mode will allow me to learn better, faster, and economically?” The reason we simulate, model, prototype, and test is to learn and decide if our design will do what we expect and desire.
So let’s look at some of the strengths and limitations of the two strategies. The first we’ll call “simulations.” Into this category goes any abstract computerized or physical model that represents the real design but is not a full-scale, working version. It includes calculations, computer models, computerized simulations, and scale or non-functional physical models.
The second category we will call, “prototypes.” This category includes any accurate-scale, partially or completely functional version of the designed product. A prototype might be a working version with limited functionality where the intent is to simply try out a single function, or a small set of functions, or it can be a fully functioning prototype. A prototype might not necessarily be created using the same production processes intended for the final product.
Simply put, simulations are generally best for products that are very expensive and time-consuming to create and/or test. Simulations are very popular in the automotive industry, where it is very expensive and difficult to design and develop the custom equipment necessary to manufacture frames and body parts, and where test opportunities are limited and also very expensive. The automotive industry as a whole has invested greatly, over decades, in the development of knowledge and data to build better computerized simulations.
For the most part, car designs are developed, “tested,” and iterated in a computerized environment. Only when the design team is satisfied that the automobile will perform do they invest in the tooling and creation of a prototype.
In 2004, when Ford released the “re-designed” F-150 pickup, it released two models. When Ford sent its original design for crash-test certification, it did not perform to the standard that Ford intended. A re-design of the frame was necessary.
To recapture some of the cost of developing tooling and a frame design twice, the original tools and frame were put into production and those frames were sold in the “Heritage” version, a lower-cost, limited amenity version with a lower safety rating and sold primarily as a utility and fleet version. The second frame system tested very well and was put into production for consumer versions of the vehicle.
This example highlights several points about simulations. For very difficult or expensive-to-produce systems, it is a means to quickly and economically try a variety of designs or design elements, ideally limiting the development of actual systems to one. It has some limitations, however.
Simulations require a significant amount of knowledge about the materials, environment, conditions, and factors that will drive a product’s behavior. A simulation is only as good as the information, assumptions, and data that created it. Therefore, simulations require significant investment both in the tools themselves, and in the collection and understanding of real data.
Most of us are probably accustomed to using off-the-shelf software for calculating electronics performance or stress and strain in solid-material designs. These simulation tools are based on a long history of data and performance calculations and are very reliable.
However, if you need to understand vibration modes and frequencies of a complex structure, or the performance of your electronic supercomputer over a range of temperatures, you might need to invest in some experiments to gather some data profiles to feed into your models or to verify your constraints and assumptions. In short, the more you know about how your products behave in reality, the better your models can be.
This means that simulations are not necessarily the right answer for scenarios where the behavior of the design or the influence of inputs or environmental factors is not well known. I once designed a cooling system for an outdoor electronics enclosure. I pored over my thermodynamics textbooks, scrutinized my assumptions, included a safety factor of 2, and had a chief engineer check all of my assumptions and calculations. The cooler was a failure. There were just too many environmental factors for my calculations to address. Lesson learned.
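To make the kind of calculation in that anecdote concrete, here is a minimal sketch of a back-of-envelope enclosure cooling estimate with a safety factor applied. All numbers and the formula choice are illustrative assumptions for a simple forced-air model, not the actual design values from the story:

```python
# Back-of-envelope sizing of forced-air cooling for an electronics
# enclosure. Illustrative only: real designs face solar load, humidity,
# dust, and other environmental factors this simple model ignores.

AIR_DENSITY = 1.2   # kg/m^3, air near sea level at ~20 C (assumed)
AIR_CP = 1005.0     # J/(kg*K), specific heat of air at constant pressure

def required_airflow(heat_load_w, max_temp_rise_c, safety_factor=2.0):
    """Volumetric airflow (m^3/s) needed to carry heat_load_w out of an
    enclosure while keeping the internal air temperature rise below
    max_temp_rise_c, with a design safety factor applied."""
    design_load = heat_load_w * safety_factor           # W
    mass_flow = design_load / (AIR_CP * max_temp_rise_c)  # kg/s
    return mass_flow / AIR_DENSITY                        # m^3/s

# Hypothetical case: 150 W of electronics, 10 C allowable rise.
flow = required_airflow(150.0, 10.0)
print(f"required airflow: {flow * 3600:.1f} m^3/h")  # roughly 90 m^3/h
```

The point of the anecdote stands: a calculation like this is only as trustworthy as its assumptions, and a safety factor cannot rescue a model that omits dominant real-world effects.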
If you develop more-or-less the same products repeatedly and have access to real performance data, and prototyping and testing is expensive and/or time-consuming, then a strategy focused on simulations is probably a good decision. Take a lesson from Ford, though, and have a contingency plan for those scenarios where the new design might be far enough out of the inference space of your simulations to provide true confidence.
Prototypes do not guarantee accurate performance observations; there is still the chance of a fluke or a production-process influence, but they are generally more trusted than simulations. Prototypes are generally best for scenarios where the new design is vastly different from the design team’s experience, and for systems that are reasonably easy to construct.
A very successful practice, when developing something innovative or unlike our experience, is to develop and test numerous prototypes with limited functionality. Often constructing fully functional prototypes is very expensive. However, prototypes that demonstrate only one function, or a limited subset of related functions, can be much easier to produce. Also, when we test these limited-function prototypes, it is easier to understand the behavior and constraints of that single function when the others are not also in play.
If a limited-function prototype fails, we know exactly what failed and what failed first. When all the functions are going at once, it can be difficult and time-consuming to track down the fault, not to mention to set up a test that accommodates all of those functions.
An advantage of a prototype is that we can watch and observe the behavior of our device under realistic conditions. Sometimes our simulations cannot provide the same learning insight that a prototype can generate. Also, prototypes are far better than models or simulations to put in front of customers and focus groups when we want to evaluate customer response.
A limitation of prototypes is the cost. Commonly it is not pragmatic to generate more than one prototype at a time, much less a statistically significant test sample. Therefore, the results of our prototype tests do not tell us the whole truth about how our final product will perform. Sometimes simulations that include random noise factors can give us good insight into the variation in output performance. A single prototype will get us in range, but won’t reveal variation.
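That "simulation with random noise factors" idea can be sketched as a simple Monte Carlo run: evaluate the same performance model many times with input tolerances sampled at random, and examine the spread of outcomes that a single prototype could never reveal. The performance model and tolerance values below are invented purely for illustration:

```python
# Monte Carlo sketch: estimate output variation by sampling inputs
# within assumed manufacturing tolerances. Model and numbers are
# hypothetical, not taken from any real product.

import random
import statistics

def output_performance(resistance, capacitance):
    # Stand-in performance model: an RC time constant, in milliseconds.
    return resistance * capacitance * 1000.0

random.seed(42)  # reproducible run
results = []
for _ in range(10_000):
    # Sample each input within its assumed part tolerance.
    r = random.gauss(1_000.0, 1_000.0 * 0.05)  # ohms, a 5% resistor
    c = random.gauss(1e-4, 1e-4 * 0.10)        # farads, a 10% capacitor
    results.append(output_performance(r, c))

mean = statistics.mean(results)
spread = statistics.stdev(results)
print(f"mean = {mean:.1f} ms, std dev = {spread:.1f} ms")
```

A single prototype corresponds to one draw from this distribution; the simulation shows the whole spread, which is exactly the variation insight the paragraph above describes.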
If you need to learn the most you can about something fundamentally new, and producing accurate prototypes can be done in reasonable time, for reasonable cost, then the prototype strategy is probably your best bet. Still, don’t underestimate the value of a good, old-fashioned safety factor.
What About Both?
Naturally, some combination will be used by all of us. I venture that no one reading this post still uses paper and pencil to develop new designs. I’m sure that we all use some level of modeling and simulation capability which is built into our chosen design tools. Also, even when simulations are the strategy of choice, there is inevitably a confirmation performed at the very end to verify the design and/or acquire certification. Can we or should we pursue both strategies, though?
If your organization can afford to collect data, purchase or develop powerful simulation tools, and build strong simulation skills within your engineering function, and can also afford to invest in rapid prototyping and test capabilities and skills (test capabilities are sometimes the most expensive part), then great. You are luckier than most of us.
I caution us all about pursuing both strategies. There is a phenomenon we must guard against if we have the luxury of choosing one path or the other on any given design.
I mentioned above that we tend to trust prototypes more than simulations. If we can do both, then what happens is that we often do both, which then doesn’t save us time or money, but instead costs us more time and money.
We might use simulation to drive a design to a decision point, but when it comes time to make the decision, someone will ask about a real test. So we perform the test to prove the results of the simulation. Likewise, if we build and test a prototype, someone will ask whether the simulation would have predicted the same outcome and, in the interest of building better simulations for the future, the simulation will be built and run to find out. Both are noble ventures, but they defeat the purpose of having these capabilities in the first place: accelerating the design process.
It takes strong project management discipline to decide that a particular element of the design will be either simulated or tested, and to stick to that decision. A good practice is to ask, “If the answer is A, will we want confirmation from another method? What if the answer is B?” Pick the method or strategy that will give the best bang-for-the-buck result and stick to it; don’t second-guess.
When choosing your strategy, determine whether simulations or prototypes will generally provide the most answers to design questions or doubts with the least investment in time and resources. If you do both incremental designs and breakthrough innovation designs, then hedge your bets toward the scenario that is most common. Consider whether contracted services and expertise are available to help with the other.
If you have the need or luxury to develop both capabilities, guard against the phenomenon of doing both just because you can. It leads to unnecessary design time and expense. As you are shopping your alternatives, step aside from the marvel of today’s technologies long enough to recall that the real need is to learn better and faster. Choose the solution that enables that and you won’t go wrong.
Stay wise, friends.