Capability is the term used to describe how well a process matches the voice of the customer. To assess this, we need to be able to calculate the defect rate of the process. There are two main approaches: we can find the yield (the proportion of correct items) and derive the proportion of defects from it, or we can directly analyse the number of defects in the process.
Traditional Yield (Y)
Traditional yield is a simplistic view of a process: it judges overall process performance only by the output of the final step. So in the below, if we had 100 items in and 4 were scrapped, we'd have 96 outputs & hence a yield of 96%.
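As a minimal sketch, the traditional yield calculation from the example above (100 items in, 4 scrapped) looks like this in Python:

```python
# Figures from the example: 100 items in, 4 scrapped at the end.
items_in = 100
scrapped = 4

items_out = items_in - scrapped
traditional_yield = items_out / items_in  # 96 / 100 = 0.96

print(f"Traditional yield: {traditional_yield:.0%}")
```

Note that this calculation only sees what comes out of the last step; it says nothing about what happened inside the process.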
First Time Yield (FTY)
First time yield paints a far better picture of process performance as it takes into account re-work in the process.
With this view, we can see that only 60% of our 100 items passed inspection first time. But note, we still have 100 items going in & 4 items as scrap, as we did in the traditional yield calculation. While traditional yield provides us with a result of 96%, first time yield provides just a 60% result.
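The contrast between the two measures can be sketched as follows. The split between first-time passes and reworked items is assumed for illustration; the source only gives us the 60% FTY, the 100 inputs, and the 4 scrapped items:

```python
# Illustrative figures: 100 items in, 60 pass inspection first time,
# 36 pass after rework (assumed split), 4 are scrapped.
items_in = 100
passed_first_time = 60
passed_after_rework = 36
scrapped = 4

fty = passed_first_time / items_in                    # 60 / 100 = 0.60
traditional_yield = (items_in - scrapped) / items_in  # 96 / 100 = 0.96

print(f"First time yield: {fty:.0%}, traditional yield: {traditional_yield:.0%}")
```

Same inputs, same scrap, yet the two measures tell very different stories, because FTY counts the rework that traditional yield hides.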
Rolled throughput yield (RTY)
Rolled throughput yield helps us understand the yield of lots of interlinked processes. So, imagine we have the below scenario:
To calculate the RTY, we multiply the first time yields of all the processes in the chain. So in this case 0.70 × 0.90 × 0.85 ≈ 0.535, so we can say that roughly 53% of inputs will be successfully output.
If we were to work it through, we can see how this works. The first process would output 70% of our 100 items successfully (70). The 70 items then feed into process 2, of which 90% (63) successfully feed into process 3. Finally, process 3 will successfully output 85% of the 63 inputs – leaving us with the final 53 outputs (53%).
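The chain above can be sketched in a few lines of Python, using the three first time yields from the example:

```python
import math

# First time yields of the three chained processes from the example.
process_ftys = [0.70, 0.90, 0.85]

# RTY is the product of the individual first time yields.
rty = math.prod(process_ftys)  # 0.70 * 0.90 * 0.85 = 0.5355

# Working it through step by step, as in the text:
items = 100.0
for fty in process_ftys:
    items *= fty  # 100 -> 70 -> 63 -> 53.55

print(f"Rolled throughput yield: {rty:.1%}, good items from 100: {items:.2f}")
```

Because every step's FTY multiplies into the result, even modest losses at each stage compound quickly across a long chain.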
We need to ensure high FTY for every process step to deliver a high RTY.
We can say that if we have 90% yield, we have 10% defects. However, we have a few ways to better measure defects.
Adding the ‘per opportunity’ to the calculation enables us to level the playing field and make products of differing complexity comparable. Let’s say you’re producing a car engine with its hundreds of components – we can’t compare its 5 defects per unit (engine) with the defects per unit of a bike that is comprised of just a few parts. So, we use a defects per opportunity measure to make the two comparable.
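The standard Six Sigma defect measures (defects per unit, per opportunity, and per million opportunities) can be sketched as follows. The engine figures are illustrative, and the 200 opportunities per unit is an assumed count, not a value from the text:

```python
# Hypothetical engine example: 5 defects found on 1 engine,
# where each engine offers an assumed 200 defect opportunities.
defects = 5
units = 1
opportunities_per_unit = 200

dpu = defects / units                             # defects per unit = 5.0
dpo = defects / (units * opportunities_per_unit)  # defects per opportunity = 0.025
dpmo = dpo * 1_000_000                            # defects per million opportunities = 25000

print(f"DPU: {dpu}, DPO: {dpo}, DPMO: {dpmo:.0f}")
```

Dividing by opportunities is what makes the complex engine comparable to the simple bike: both end up on the same per-opportunity scale.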
Opportunities can include:
The Z Score
“Simply put, a z-score is the number of standard deviations from the mean a data point is. But more technically it’s a measure of how many standard deviations below or above the population mean a raw score is. A z-score is also known as a standard score and it can be placed on a normal distribution curve. Z-scores range from -3 standard deviations (which would fall to the far left of the normal distribution curve) up to +3 standard deviations (which would fall to the far right of the normal distribution curve).” Source.
So, we can calculate this with the following formula:

Z = |SL − X̄| / s

Where the vertical lines refer to ABS (forcing the result to be positive); X̄ (Xbar) is the process mean, s is the standard deviation, and SL is the specification limit we have set (e.g. a MAX dimension of 3 mm).
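A minimal sketch of the Z score calculation in Python, using only the standard library; the measurement data are illustrative, and the 3 mm upper specification limit is taken from the example in the text:

```python
from statistics import mean, stdev

# Hypothetical measurements (mm) of a dimension with a MAX spec limit of 3 mm.
measurements = [2.1, 2.4, 2.2, 2.6, 2.3, 2.5]
spec_limit = 3.0  # SL: upper specification limit

xbar = mean(measurements)       # process mean
s = stdev(measurements)         # sample standard deviation
z = abs(spec_limit - xbar) / s  # standard deviations between the mean and SL

print(f"Z score: {z:.2f}")
```

A larger Z means the specification limit sits further out in the tail of the distribution, so fewer items fall beyond it.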
We can only use a Z score when the data follow a distribution that is approximately normal.
A low Z score means a significant part of the distribution tail exceeds the specification limit, while a higher Z score represents fewer defects. The table below provides an overview:
It’s important to remember that process performance degrades over time. So, to represent this, we shift the Z value by using the formula ZShifted = Z – 1.5.
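A quick sketch of the shift in Python; the short-term Z of 4.5 is an illustrative value, not from the text:

```python
from statistics import NormalDist

# Illustrative short-term Z score measured from the process.
z_short_term = 4.5
z_shifted = z_short_term - 1.5  # Z_shifted = Z - 1.5 (the 1.5-sigma shift)

# Long-term yield implied by the shifted Z against a one-sided spec limit.
long_term_yield = NormalDist().cdf(z_shifted)

print(f"Shifted Z: {z_shifted}, implied long-term yield: {long_term_yield:.5f}")
```

So a process measured at 4.5 sigma short term is expected to perform at about 3 sigma over the long term, once drift is accounted for.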
Content based on study of the Six Sigma Black Belt course and Six Sigma for Dummies