With businesses striving to focus on the customer and to achieve competitiveness through consistently reliable products and services, it should be no surprise that six sigma quality is attracting increasing attention from manufacturers and service providers in the UK and beyond. Professor Tony Bendell, of the University of Leicester and Services Ltd, who is working with Smallpeice Enterprises Ltd, describes why six sigma is something we are going to hear a lot more about.
In the past few years, major US corporations have made public the benefits attributed to their six sigma programmes. AlliedSignal saved $175 million in 1995, and nearly double that in 1996. In 1997, General Electric announced that it would save $500 million that year because of six sigma, and by 1998 the programme savings had risen to $1.2 billion. The bottom line is that corporations moving toward six sigma levels of performance have saved billions of dollars and boosted their stock values.
However, while the dollar signs do help to highlight the potential of this quality approach, they do little to resolve the confusion that often surrounds all such 'quality movements' or explain how the benefits are achieved.
The moment of conception
It was Motorola which conceptualised six sigma as a quality goal in the mid-1980s and first recognised that modern technology was so complex that old ideas about acceptable quality levels were no longer applicable. But the term, and the company's innovative six sigma programme, only came to real prominence in 1989, when Motorola announced it would achieve a defect rate of no more than 3.4 parts per million within five years. This claim effectively changed the focus of quality within the US, from one where quality levels were measured in percentages (parts per hundred) to a discussion of parts per million or even parts per billion. It was not long before many of the US giants - Xerox, Boeing, GE, Kodak - were following Motorola's lead.
While few dispute this history, one area of confusion is the interpretation of the term six sigma. The original industrial terminology is based on the established statistical approach which uses a sigma measurement scale (ranging from two to six) to define how much of a product or process normal distribution is contained within the specification. Essentially, the higher the sigma value the less likely it is for a defect to occur, because more of the process distribution is contained within the specification.
As the sigma scale describes defects in parts per million, the goal of achieving six sigma either side of the nominal target inside the specification corresponds to very tight production spread or, equivalently, a very low incidence of cases outside the specification - 'defects'. In fact, under the assumption of normality, a product or process operating at six sigma quality would have a 99.9999998 per cent yield, or defects at 0.002 parts per million (two parts per billion). At the more typical three sigma quality level, the yield will be 99.73 per cent, or 2,700 defects per million opportunities.
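These centred figures follow directly from the normal distribution. The short sketch below reproduces them, assuming a standard normal process distribution with specification limits at plus or minus k sigma; the function name `centered_dpm` is illustrative, not from the article.

```python
import math

def centered_dpm(k):
    """Defects per million when the process mean sits exactly on target
    and the specification limits lie at +/- k sigma (normal assumption)."""
    # P(|Z| > k) for a standard normal Z equals erfc(k / sqrt(2))
    defect_fraction = math.erfc(k / math.sqrt(2))
    return defect_fraction * 1_000_000

print(centered_dpm(3))  # roughly 2,700 defects per million
print(centered_dpm(6))  # roughly 0.002 defects per million (2 per billion)
```

Running it recovers the article's values: about 2,700 defects per million at three sigma and about two per billion at six sigma.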
By then taking into account that the product or process mean might drift from the nominal target by up to 1.5 sigma, this translates, at six sigma, into a yield of 99.99966 per cent, or 3.4 defects per million - the target declared by Motorola and now regarded as 'six sigma' quality by industry in general. By applying the same 'worst case scenario' (a 1.5 sigma deviation) to the typical levels of three and four sigma achieved by many manufacturing companies, the gulf between the world-class goal and average performance is dramatically illustrated. At three sigma the yield falls steeply to only 93.32 per cent, or 66,810 defects per million opportunities. Even at four sigma, the number of defects is 6,210 per million.
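The shifted figures can be checked the same way. This is a minimal sketch of the conventional 1.5-sigma-shift calculation, assuming the mean drifts a fixed 1.5 sigma toward one specification limit; `shifted_dpm` is an illustrative name, not a term from the article.

```python
import math

def shifted_dpm(k, shift=1.5):
    """Defects per million with the process mean drifted `shift` sigma
    off target and specification limits at +/- k sigma."""
    # Standard normal cumulative distribution function via erfc
    phi = lambda x: 0.5 * math.erfc(-x / math.sqrt(2))
    # Tail beyond the near limit plus the (tiny) tail beyond the far limit
    defect_fraction = (1 - phi(k - shift)) + phi(-(k + shift))
    return defect_fraction * 1_000_000

print(shifted_dpm(6))  # about 3.4 defects per million
print(shifted_dpm(4))  # about 6,210 defects per million
print(shifted_dpm(3))  # about 66,800 defects per million
```

The six and four sigma results match the article exactly; at three sigma the calculation gives roughly 66,807 defects per million, which the article rounds to 66,810.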
The six sigma philosophy
The other area where there is significant scope for misunderstanding is in the application