These are very good questions. This paper describes what you really need to know about estimating the true potential of your process. The most common definition of Cpk and Ppk is that Cpk is the short-term capability of a process and Ppk is the long-term capability. The truth is that these statistical indices are much more than that. As you will see, it is very important to understand what process capability statistics really mean; however, in order to reveal the true state of your process, you have to assess and interpret your data accurately.
Why is a process capability study done?
- To assess the potential capability of a process at a specific point or points in time to obtain values within a specification
- To predict the future “potential” of a process to produce values within specification, using meaningful metrics
- To identify improvement opportunities in the process by reducing or possibly eliminating sources of variability
When it comes to statistics: “There are three kinds of lies: lies, damned lies and statistics.” Mark Twain’s Own Autobiography: The Chapters from the North American Review, 1906.
Statistics can be powerful tools in discerning the truth within a myriad of data, and when used and interpreted properly, they are of invaluable help. But as Mark Twain so clearly observed: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure, that just ain’t so.”
The danger comes from putting great confidence into a statistic that was not properly applied and interpreted. Therefore, what conditions must be satisfied to validate these process statistics?
Answer: There are four conditions that must be satisfied to make process capability and performance statistics meaningful metrics:
- The sample must be truly representative of the process. This includes the 6 M’s, i.e., man, machine, material, measurements, methods, and Mother Nature (environment)
- The distribution of the quality characteristic must be Gaussian, i.e., the data follow a normal probability distribution. If the data do not conform, the question is: “Can they be normalized?” Various analytical methods can be used to normalize the data, or a non-parametric analysis can be applied, but these advanced techniques are a topic for another paper.
- The process must be in statistical control. In other words, it is stable and its variation is generally random (common cause). Note: a control chart should verify statistical stability in real time, as the process capability data are being captured; otherwise, what is the point? Don’t wait until the data collection is finished to create a control chart, only to discover that the process trended out of statistical control along the way or had some other identifiable problem.
- The sample must be of sufficient size to build the predictive capability model. How big a sample? That is actually not the right question. Remember that a statistic, even when applied correctly, is just an estimate of the truth. The right question to ask is: “How much confidence do I need in the estimate?” If a Cpk of 1.70 is observed from a random sample of n = 30 points, we should report: “I don’t know the true Cpk, but I am 95 percent confident that it lies between 1.23 and 2.16, according to its confidence interval.” On the other hand, for the same data set with n = 500 points, we can say: “I am 95 percent confident that Cpk is between 1.19 and 1.37, according to its confidence interval.” This is a rather large difference, and it brings us to the next logical question: how do you estimate the number of samples required for a given confidence level in the capability estimate? Unfortunately, that calculation is quite math intensive and beyond the scope of this paper; see figures 1 and 2 to get an idea of what is being described, and the short sketch following this list for the basic interval calculation.
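To make the “how much confidence” question concrete, here is a minimal Python sketch of the widely used normal-approximation confidence interval for Cpk. This is a standard approximation, not necessarily the exact method behind the intervals quoted above (statistical software typically uses more exact calculations), so it lands close to, but not exactly on, those figures; the n = 500 interval quoted in the text presumably also reflects the Cpk observed in that larger sample.

```python
import math

def cpk_confidence_interval(cpk_hat, n, z=1.959964):
    """Approximate two-sided confidence interval for an observed Cpk, using
    the common normal-approximation formula:
        Cpk_hat +/- z * sqrt( 1/(9n) + Cpk_hat^2 / (2(n - 1)) )
    z = 1.959964 gives 95 percent confidence."""
    half_width = z * math.sqrt(1.0 / (9.0 * n) + cpk_hat ** 2 / (2.0 * (n - 1)))
    return cpk_hat - half_width, cpk_hat + half_width

# Observed Cpk = 1.70 from n = 30 points: roughly (1.25, 2.15) -- a wide interval.
print(cpk_confidence_interval(1.70, 30))
# The same observed value from n = 500 points: roughly (1.59, 1.81) -- much tighter.
print(cpk_confidence_interval(1.70, 500))
```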
So What Exactly Are Cpk and Ppk, Anyway?
Cpk – is a snapshot, or a series of snapshots, of a process at specific points in time, and it is used to assess the “local and timely” capability of a process. Think of Cpk as a point of insight into a much larger, future population of process data; Cpk is to the process what a movie trailer is to the movie.
Cpk is computed from measurements organized into rational subgroups. A subgroup is a series of measurements that represents a snapshot of the process; the measurements are best taken at the same time, in the same way, in a controlled fashion. For example: gear shafts are made in a continuous process. One hundred parts are made every hour, and a critical diameter is to be evaluated in a capability study. (10) shafts will be sampled every hour for (6) hours. This means we will have (6) subgroups of (10) shaft measurements, for (60) total measurements for the day.
The process capability metric for these (60) data points, or 1/10th of the daily output, will be calculated from the WITHIN-subgroup variation of the shaft measurements. It will not account for any drift or shift between the subgroups. This distinction is critical to understanding the difference between Cpk and Ppk.
Ppk – indicates what the process may be capable of in the future. In this case, calculating Ppk on the (60) measured data points gives us an estimate of the OVERALL variation of the critical shaft measurement. Ppk includes the within-subgroup variation and all other process-related variation, including shift and drift. This is another vital distinction: Cpk includes only common cause variation, whereas Ppk includes both common and special cause variation (a small simulation after the list below illustrates the effect). What is the difference between these two?
- Common Cause Variation – the many ever-present factors (i.e., process inputs or conditions) that contribute in varying degrees to relatively small, apparently random shifts in outcomes day after day, week after week, month after month. These factors act independently of each other. The collective effect of all common causes is often referred to as system variation, because it defines the amount of variation inherent in the system. It is usually difficult, if not impossible, to link random, common cause variation to any particular source.
Common causes may include composite variation induced by noise, operational vibration, and machine efficiencies; they are generally hard to identify and evaluate because they are random in nature. However, if only random variation is present, the process output forms a distribution that is stable over time.
- Special Cause Variation – factors that induce variation over and above the random, common cause variation. Frequently, special cause variation appears as an extreme effect or as a specific, identifiable pattern in the data. Special causes are often referred to as assignable causes, because the variation they produce can be tracked down and assigned to an identifiable source.
Special causes include variation induced by factors that are not always present or built into the process. Some examples: uncontrolled temperature and other environmental factors, power surges, people, changes in the process, tooling adjustments, measurement error, and material variation.
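To put numbers on that distinction, here is a minimal Python sketch using simulated, purely hypothetical data: one process with only common cause noise, and the same process with a special cause shift in the mean partway through the run. The pooled within-subgroup standard deviation (the basis of Cpk) barely notices the shift, while the overall standard deviation (the basis of Ppk) grows.

```python
import random
import statistics

random.seed(1)

def pooled_within_sd(subgroups):
    """Pooled within-subgroup standard deviation (c4 unbiasing omitted here):
    sqrt( sum of within-subgroup squared deviations / sum of (n_i - 1) )."""
    ss = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in subgroups)
    dof = sum(len(g) - 1 for g in subgroups)
    return (ss / dof) ** 0.5

# Hypothetical stable process: 6 subgroups of 10, common cause sigma = 0.010
stable = [[random.gauss(25.000, 0.010) for _ in range(10)] for _ in range(6)]

# Same process, but a special cause shifts the mean by +0.030 after subgroup 3
shifted = [[random.gauss(25.000 + (0.030 if k >= 3 else 0.0), 0.010)
            for _ in range(10)] for k in range(6)]

for label, subgroups in (("stable ", stable), ("shifted", shifted)):
    flat = [x for g in subgroups for x in g]
    print(label,
          "within sd:", round(pooled_within_sd(subgroups), 4),
          "overall sd:", round(statistics.stdev(flat), 4))
# For the stable process the two estimates nearly agree; with the shift, the
# overall sd grows while the within sd stays near 0.010 -- which is exactly
# why Ppk drops relative to Cpk when special causes are present.
```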
Let’s take a look at what math reveals about Cpk & Ppk:
Looking at the formulas for Cpk and Ppk, we can see that they are nearly identical:

$$C_{pk} = \min\left(\frac{USL - \bar{X}}{3\,\hat{\sigma}_{within}},\ \frac{\bar{X} - LSL}{3\,\hat{\sigma}_{within}}\right) \qquad\qquad P_{pk} = \min\left(\frac{USL - \bar{X}}{3\,\hat{\sigma}_{overall}},\ \frac{\bar{X} - LSL}{3\,\hat{\sigma}_{overall}}\right)$$

The only difference is the way the standard deviation is calculated.
In the equations above, the WITHIN standard deviation used for Cpk is, for subgroups of equal size, the pooled within-subgroup standard deviation divided by the unbiasing constant c4(d):

$$\hat{\sigma}_{within} = \frac{s_p}{c_4(d)}, \qquad s_p = \sqrt{\frac{\sum_{i=1}^{k}\sum_{j=1}^{n_i}\left(x_{ij} - \bar{x}_i\right)^2}{\sum_{i=1}^{k}\left(n_i - 1\right)}}, \qquad d = \sum_{i=1}^{k} n_i - k + 1$$

where k is the number of subgroups and n_i is the size of subgroup i.
Note: c4(d) is to be read as “c4 of d,” not “c4 times d.” So where do you get c4(d), the constant that removes the bias from the standard deviation estimate? These values come from readily available statistical tables (or can be computed directly, as in the sketch below). For example, if d = 28, then c4(d) = 0.990786.
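As a convenience, c4(d) does not have to come from a table; it can be computed from its standard gamma-function definition. A minimal Python sketch:

```python
import math

def c4(d):
    """Unbiasing constant c4 for a sample of size d (here d is the term
    d = sum of subgroup sizes - number of subgroups + 1):
        c4(d) = sqrt(2 / (d - 1)) * Gamma(d / 2) / Gamma((d - 1) / 2)
    Log-gamma keeps the calculation numerically stable for large d."""
    return math.sqrt(2.0 / (d - 1)) * math.exp(math.lgamma(d / 2.0) - math.lgamma((d - 1) / 2.0))

# For the example in the text, 3 subgroups of 10 give d = 30 - 3 + 1 = 28
print(round(c4(28), 6))   # approximately 0.9908, consistent with the table value quoted
```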
Also note that the special case of calculating Cpk with a subgroup size of 1 (individuals data) can be handled with an estimate of the standard deviation based on the moving range (equation not shown here; a sketch of the conventional moving-range estimate follows).
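Since the paper does not spell out that equation, here is a hedged sketch of the conventional individuals (I-MR) estimate: sigma is taken as the average moving range of consecutive points divided by the constant d2 = 1.128, the standard d2 value for a moving range of two observations. The data below are hypothetical.

```python
import statistics

def sigma_from_moving_range(values, d2=1.128):
    """Estimate sigma for individuals data (subgroup size 1):
        sigma_hat = mean(|x[i] - x[i-1]|) / d2
    d2 = 1.128 is the standard constant for a moving range of span 2."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    return statistics.fmean(moving_ranges) / d2

# Hypothetical individuals data: one diameter measurement per time period
x = [25.012, 24.998, 25.021, 25.003, 24.987, 25.009, 25.016, 24.995]
print(round(sigma_from_moving_range(x), 5))
```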
The OVERALL standard deviation is much easier to calculate. It is taken without respect to subgroups and is simply the sample standard deviation of all N data points:

$$\hat{\sigma}_{overall} = \sqrt{\frac{\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}{N - 1}}$$
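Pulling the pieces together, here is a minimal Python sketch that computes both indices from subgrouped data: the pooled within-subgroup sigma with c4(d) drives Cpk, the overall sample sigma drives Ppk, and each index takes the lesser of the two one-sided calculations (as noted in the example below). The measurements and the 25.000 +/- 0.050 specification are hypothetical stand-ins for the shaft example, not the paper's actual data.

```python
import math
import random
import statistics

def c4(d):
    """Unbiasing constant c4(d), computed from its gamma-function definition."""
    return math.sqrt(2.0 / (d - 1)) * math.exp(math.lgamma(d / 2.0) - math.lgamma((d - 1) / 2.0))

def capability_indices(subgroups, lsl, usl):
    """Return (Cpk, Ppk) for subgrouped data with a two-sided specification.
    Cpk uses the pooled WITHIN-subgroup sigma divided by c4(d), where
    d = (total observations) - (number of subgroups) + 1;
    Ppk uses the OVERALL sample standard deviation of all the points."""
    flat = [x for g in subgroups for x in g]
    xbar = statistics.fmean(flat)

    # WITHIN: pooled standard deviation across subgroups, unbiased by c4(d)
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in subgroups)
    dof = sum(len(g) - 1 for g in subgroups)
    d = sum(len(g) for g in subgroups) - len(subgroups) + 1
    sigma_within = math.sqrt(ss_within / dof) / c4(d)

    # OVERALL: plain sample standard deviation, ignoring the subgroup structure
    sigma_overall = statistics.stdev(flat)

    # Each index is the lesser of the upper-side and lower-side calculations
    cpk = min((usl - xbar) / (3 * sigma_within), (xbar - lsl) / (3 * sigma_within))
    ppk = min((usl - xbar) / (3 * sigma_overall), (xbar - lsl) / (3 * sigma_overall))
    return cpk, ppk

# Hypothetical shaft diameters: 6 subgroups of 10, specification 25.000 +/- 0.050
random.seed(7)
data = [[random.gauss(25.000, 0.010) for _ in range(10)] for _ in range(6)]
cpk, ppk = capability_indices(data, lsl=24.950, usl=25.050)
print(round(cpk, 2), round(ppk, 2))   # nearly equal for a stable, drift-free process
```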
Let’s take a look now at the potential difference between Cpk & Ppk.
In our example, 600 shafts were made, of which (10) out of every 100 were measured each hour for (6) hours. Sixty data points were accumulated, in (6) subgroups.
One operator makes parts on one machine, the same way on the same day. We can see that the average standard deviation within subgroups is very close to the overall standard deviation. Therefore, in this particular case, Cpk and Ppk should be very close in value – and they are.
Note: this example has a two-sided specification (upper and lower limits on the diameter). Therefore, Cpk and Ppk are each taken as the lesser of the two one-sided calculations.
Here is where the difference between WITHIN and OVERALL standard deviation becomes apparent. If we pull (30) measurements out of the (60) total that were taken, irrespective of time order, to make (3) new subgroups (and we would never do this), we can get wildly different results. The difference between the lower and upper Ppk values (1.72 vs. 1.37) is +25%; however, the difference for Cpk (3.13 vs. 1.39) is +125%. These results come from the data in fig. 3; the sketch below illustrates the same trap with hypothetical data.
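As a hedged illustration of the same trap, the following Python sketch uses hypothetical drifting data (not the fig. 3 shaft data) and groups the identical measurements two ways: in time order, and sorted by value. The overall standard deviation is unchanged, because the points are the same, but the within-subgroup standard deviation, and therefore any Cpk built on it, changes dramatically.

```python
import random
import statistics

random.seed(3)

def within_sd(subgroups):
    """Pooled within-subgroup standard deviation (c4 unbiasing omitted for brevity)."""
    ss = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in subgroups)
    dof = sum(len(g) - 1 for g in subgroups)
    return (ss / dof) ** 0.5

# Hypothetical data with a slow drift: 6 hourly subgroups of 10 measurements
data = [[random.gauss(25.000 + 0.004 * hour, 0.010) for _ in range(10)]
        for hour in range(6)]
flat = [x for g in data for x in g]

# Rational subgroups (time order) vs. a pathological regrouping (sorted by value)
ordered = sorted(flat)
sorted_groups = [ordered[i:i + 10] for i in range(0, len(ordered), 10)]

print("overall sd (same 60 points either way):", round(statistics.stdev(flat), 4))
print("within sd, time-ordered subgroups:     ", round(within_sd(data), 4))
print("within sd, value-sorted 'subgroups':   ", round(within_sd(sorted_groups), 4))
# Sorting makes each subgroup artificially homogeneous, so the within sd shrinks
# and any Cpk computed from it is inflated -- the same kind of distortion as
# re-subgrouping the shaft measurements without regard to time order.
```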
This example shows the importance of truly understanding the data you are investigating. Ask: How was the data taken? How stable is it? Where is the variation coming from? Can the data be trusted? How was it organized? Are there enough data points? Will this process generate similar results lot after lot, time after time? Is the statistical confidence level of the capability indices acceptable? As is so often the case, if a decision about whether the parts and process are acceptable for production is going to be made from a series of initial subgroup “snapshots,” then it behooves you to dig into these specifics, so that today’s Cpk is not wildly different from the future long-term process capability. Along the way, the most important thing you may discover is how to continuously, and confidently, verify whether your process is actually in control and truly capable.
References
• Paret, M. & Sheehy, P. “Being in Control,” Quality Canada, Summer 2009.
• “Capability Analysis (Normal) Formulas – Capability Statistics,” Minitab, http://www.minitab.com/support/answers/answer.aspx?ID=294
• Paret, M. “Process Capability Statistics: Cpk vs. Ppk.”
• Bower, K.M. “Cpk vs. Ppk,” ASQ Six Sigma Forum, April 2005, www.asq.org/sixsigma
• Symphony Technologies Pvt. Ltd., “Measuring Your Process Capability,” www.symphonytech.com
• Cheshire, A. “Questions about Capability Statistics – Part 1,” September 19, 2011.
• Pyzdek, T. The Six Sigma Handbook, McGraw-Hill, 2003, pp. 467-484.
• Statistical Process Control Reference Manual, 2nd Edition, DaimlerChrysler / Ford Motor Co. / General Motors Corp. Supplier Quality Task Force, July 2005.