Evans, J. R., & Lindsay, W. M. (2016). Managing for Quality and Performance Excellence (10th Edition). Cengage Learning US.
PRODUCT DEVELOPMENT

Most companies have some type of structured product development process. The typical product development process, shown in Figure 7.1, consists of six phases:

1. Idea Generation: New or redesigned product ideas should incorporate customer needs and expectations. However, true innovations often transcend customers’ expressed desires, simply because customers may not know what they like until they have it (think iPhone or iPad). Thus, idea generation often focuses on exciters and delighters as described in the Kano model in Chapter 3.
2. Preliminary Concept Development: In this phase, new ideas are studied for feasibility, addressing such questions as: Will the product meet customers’ requirements? Can it be manufactured economically with high quality? Objective criteria are required for measuring and testing the attributes associated with these questions.
3. Product/Process Development: If an idea survives the concept stage—and many do not—the actual design process begins by evaluating design alternatives and determining engineering specifications for all materials, components, and parts. This phase usually includes prototype testing, in which a model (real or simulated) is constructed to test the product’s physical properties or use under actual operating conditions, as well as consumer reactions to the prototypes. Concurrently, companies develop, test, and standardize the processes that will be used in manufacturing the product or delivering the service, which include selecting the appropriate technology, materials, and suppliers and performing pilot runs to verify results.
Concurrent Engineering

The importance of speed in product development cannot be overemphasized. To succeed in highly competitive markets, companies must churn out new products quickly. Nearly every industry is focused on reducing product development cycles. Whereas automakers once took as many as eight years to develop new models, most are now striving to do it within a year. Rapid product development demands the involvement and cooperation of many different functional groups within an organization, such as marketing, engineering, and manufacturing. Unfortunately, one of the most significant barriers to efficient product development is poor intra-organizational cooperation. In many firms product development is accomplished in a serial fashion, as suggested in Figure 7.1. In the early stages of development, design engineers dominate the process. Later, the prototype is transferred to manufacturing for production. Finally, marketing and sales personnel are brought into the process. This approach has several disadvantages. First, product development time is long. Second, up to 90 percent of manufacturing costs may be committed before manufacturing engineers have any input to the design. Third, the final product may not be the best one for market conditions at the time of introduction. Concurrent engineering is a process in which all major functions involved with bringing a product to market are continuously involved with product development from conception through sales. Such an approach not only helps achieve trouble-free introduction of products and services, but also results in improved quality, lower costs, and shorter product development cycles. Concurrent engineering requires multifunctional teams, usually consisting of 4 to 20 members and including every specialty in the company. The function of such teams is to perform and coordinate the activities in the product development process simultaneously, rather than sequentially.
For example, Honda created a business manager officer for each of its six global regions to supervise all development, production, and purchasing. As Honda’s CEO noted, “It used to be that for every minor model change, R&D would first conduct a preliminary review, then automobile operations would give formal approval, and then development instructions would be issued before getting around to the actual development work. Now sales, manufacturing, R&D and purchasing associates work as a single team and quickly make decisions on their own.”5 Boeing A&T has more than 100 integrated product teams (IPTs) that oversee the design, production, and delivery of the C-17 aircraft’s more than 125,000 parts and supporting services. Similar approaches are also used in service organizations. At The Ritz-Carlton Hotel Company, customized hotel products and services, such as meetings and banquet events, receive the full attention of local hotel cross-functional teams. These teams involve all internal and external suppliers, verify production and delivery capabilities before each event, critique samples, and assess results. Companies such as Apple and Jawbone (a producer of Bluetooth earpieces), which have appeared on Fast Company magazine’s list of the world’s 50 Most Innovative Companies, exploit concurrent engineering to achieve a competitive advantage.6 Typical benefits of concurrent engineering include 30 to 70 percent less development time, 65 to 90 percent fewer engineering changes, 20 to 90 percent less time to market, 200 to 600 percent improvement in quality, 20 to 110 percent improvement in white collar productivity, and 20 to 120 percent higher return on assets.
Design for Six Sigma Design for Six Sigma (DFSS) represents a structured approach to product development and a set of tools and methodologies for ensuring that goods and services will meet customer needs and achieve performance objectives, and that the processes used to make and deliver them achieve high levels of quality. DFSS helps designers and engineers better translate customer requirements into design concepts, concepts into detailed designs, and detailed designs into well-manufactured goods or efficient services. Through good communication and early involvement in the product development process, this approach leads to reduced costs, better quality, and a better focus on the customer. DFSS is a complementary approach to Six Sigma methods for process improvement, which we will learn about in Chapter 9. Most tools used in DFSS have been around for some time; its uniqueness lies in the manner in which they are integrated into a formal methodology, driven by the Six Sigma philosophy, and with clear business objectives in mind.
1. Concept Development: Concept development focuses on creating and developing a product idea and determining its functionality based upon customer requirements, technological capabilities, and economic realities.
2. Detailed Design: Detailed design focuses on developing specific requirements and design parameters such as specifications and tolerances to ensure that the product fulfills the functional requirements of the concept.
3. Design Optimization: Design optimization seeks to refine designs to identify and eliminate potential failures, achieve high reliability, and ensure that the product can be easily manufactured, assembled, or delivered in an environmentally responsible manner.
4. Design Verification: Design verification ensures that the quality level and reliability requirements of the product are achieved.

These activities are often incorporated into a process known as DMADV, which stands for define, measure, analyze, design, and verify. Define focuses on identifying and understanding the market need or opportunity. Measure gathers the voice of the customer, identifies the vital characteristics that are most important to customers, and outlines the functional requirements of the product that will meet customer needs. Analyze is focused on concept development from engineering and aesthetic perspectives. This often includes the creation of drawings, virtual models, or simulations to develop and understand the functional characteristics of the product. Design focuses on developing detailed specifications, purchasing requirements, and so on, so that the concept can be produced. Finally, Verify involves prototype development, testing, and implementation planning for production. General Electric was an early adopter of DFSS. For example, back in its 1998 annual report, GE stated that “Every new product and service in the future will be DFSS….
They were, in essence, designed by the customer, using all of the critical-to-quality performance features (CTQs) the customer wanted in the product and then subjecting these CTQs to the rigorous statistical Design for Six Sigma Process.” One of the early applications of DFSS was at GE’s Medical Systems Division. The Lightspeed Computed Tomography (CT) System was the first GE product to be completely designed and developed using DFSS. Lightspeed allows doctors to capture multiple images of a patient’s anatomy simultaneously at a speed six times faster than traditional scanners. As a result, productivity doubled while the images had much higher quality.9
CONCEPT DEVELOPMENT AND INNOVATION

Concept development is the process of applying scientific, engineering, and business knowledge to produce a basic functional design that meets both customer needs and manufacturing or service delivery requirements. Developing new concepts requires innovation and creativity. Innovation involves the adoption of an idea, process, technology, product, or business model that is either new or new to its proposed application. The outcome of innovation is a discontinuous or breakthrough change and results in new and unique goods and services that delight customers and create competitive advantage. The Small Business Administration classifies innovations into four categories:

1. An entirely new category of product (e.g., the iPod)
2. First of its type on the market in a product category already in existence (e.g., the DVD player)
3. A significant improvement in existing technology (e.g., Blu-ray disc technology)
4. A modest improvement to an existing product (e.g., the latest iPad)

Innovation has been the hallmark of Apple and the late Steve Jobs, whose inspiration was driven by simplicity, ease of use, using computers to do creative work, and making life easier.11 A BusinessWeek poll observed that a large majority of senior executives indicated that innovation was one of their top three priorities, and that the speed of implementation and ability to coordinate processes required to bring an idea to market were the biggest obstacles to successful innovation.12 Innovation is built upon strong research and development (R&D) processes. Many larger firms have dedicated R&D functions. Government agencies also promote innovation. For example, the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
NIST laboratories conduct research that advances the nation’s technology infrastructure and is needed by U.S. industry to continually improve products and services; the Hollings Manufacturing Extension Partnership, a nationwide network of local centers, offers technical and business assistance to smaller manufacturers; and the Technology Innovation Program provides cost-shared awards to industry, universities, and consortia for research on potentially revolutionary technologies that address critical national and societal needs.
Creativity is seeing things in new or novel ways. In Asian cultures, the concept of creativity has been said to translate as “dangerous opportunity.” Many creativity tools, such as brainstorming and “brainwriting,” its written counterpart, are designed to help
DETAILED DESIGN

Conceptual designs must be translated into measurable technical requirements and, subsequently, into detailed design specifications. Detailed design focuses on establishing technical requirements and specifications, which represent the transition from a designer’s concept to a producible design, while also ensuring that it can be produced economically, efficiently, and with high quality. Dr. Nam Suh of MIT developed a methodology called axiomatic design, based on the premise that good design is governed by laws similar to those in natural science. Two axioms (statements accepted as true without proof) govern the design process:

1. Independence Axiom: Good design occurs when the functional requirements of the design are independent of one another.
2. Information Axiom: Good design corresponds to minimum complexity.

These axioms guide the design process with the goal of creating the best possible product to achieve the desired functions. The method has been shown to reduce design time and achieve better designs and has been used successfully by many companies, such as Ford Motor Company. The principles of axiomatic design help designers better apply tools such as TRIZ and quality function deployment, which we discuss next.
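The Independence Axiom can be illustrated with a small sketch (our own, not from the text). A common way to apply it is to write the mapping from functional requirements (FRs) to design parameters (DPs) as a matrix: a diagonal matrix is an uncoupled (ideal) design, a triangular matrix is decoupled (the FRs can be satisfied in a fixed order), and anything else is coupled. The function name and examples below are hypothetical.

```python
import numpy as np

def classify_design(A, tol=1e-9):
    """Classify a square FR-DP design matrix per the Independence Axiom.

    A[i][j] != 0 means design parameter j affects functional requirement i.
    Uncoupled: diagonal (each FR controlled by exactly one DP).
    Decoupled: triangular (FRs can be satisfied in a fixed order).
    Coupled:   anything else (FRs cannot be adjusted independently).
    """
    A = np.asarray(A, dtype=float)
    nonzero = np.abs(A) > tol
    diag = np.eye(A.shape[0], dtype=bool)
    if not nonzero[~diag].any():
        return "uncoupled"
    # All off-diagonal influence confined to one triangle -> decoupled
    if not np.triu(nonzero, k=1).any() or not np.tril(nonzero, k=-1).any():
        return "decoupled"
    return "coupled"

# A faucet with separate flow and temperature controls is uncoupled;
# classic hot/cold valves tie both FRs to both DPs and are coupled.
print(classify_design([[1, 0], [0, 1]]))  # uncoupled
print(classify_design([[1, 1], [1, 1]]))  # coupled
```

The classification is purely structural; in practice designers also weigh the Information Axiom when choosing among acceptable (uncoupled or decoupled) alternatives.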
Quality Function Deployment

A major problem with the traditional product development process is that customers and engineers speak different languages. Technical requirements, sometimes called design characteristics, translate the voice of the customer into technical language that provides a basis for design specifications such as dimensions and tolerances. A customer might express a requirement for a car as “easy to start.” The translation of this requirement into technical language might be “car will start within 10 seconds of continuous cranking.” Or, a requirement that “soap leaves my skin feeling soft” demands translation into pH or hardness specifications for the bar of soap. Such specifications provide manufacturing with actionable information for designing and controlling processes. A powerful tool for establishing technical design requirements that meet customer needs and deploying them in subsequent production activities is quality function deployment (QFD). The term, which is a translation of the Japanese Kanji characters used to describe the process, can sound confusing. QFD is simply a planning process to guide the design, manufacturing, and marketing of goods by integrating the voice of the customer throughout the organization. Through QFD, every design, manufacturing, and control decision is made to meet the expressed needs of customers. QFD benefits companies through improved communication and teamwork between all constituencies in the value chain, such as between marketing and design, between design and manufacturing, and between manufacturing and quality control. QFD originated in 1972 at Mitsubishi’s Kobe shipyard site. Toyota began to develop the concept shortly thereafter, and has used it since 1977 with impressive results. Between January 1977 and October 1979, Toyota realized a 20 percent reduction in start-up costs on the launch of a new van.
By 1982, start-up costs had fallen 38 percent from the 1977 baseline, and by 1984, were reduced by 61 percent. In addition, development time fell by one-third at the same time that quality improved. Xerox and Ford initiated the use of QFD in the United States in 1986. (At that time, more than 50 percent of major Japanese companies were already using the approach.) Today, QFD is used successfully by manufacturers of automobiles, electronics, appliances, clothing, and construction equipment, by firms such as Mazda, Motorola, Xerox, IBM, Procter & Gamble, Hewlett-Packard, and AT&T. Two organizations, the American Supplier Institute, Inc., a nonprofit organization, and GOAL/QPC, a Massachusetts consulting firm, have publicized and developed the concept in the United States.
QFD uses a set of linked matrixes to ensure that the voice of the customer is carried throughout the production/delivery process (see Figure 7.2). Because of the visual structure, these are called “houses of quality.” The first house of quality relates the voice of the customer (customer requirements) to a product’s overall technical requirements; the second relates technical requirements to component requirements; the third relates component requirements to process operations; and the final one relates process operations to quality control plans. In this fashion, every design and production decision, including the design of production processes and the choice of quality measurements, is traceable to the voice of the customer. If applied correctly, this process ensures that the resulting product meets customer needs. We will focus on the first matrix, the customer requirement planning matrix (commonly referred to as the House of Quality) shown in Figure 7.3.
Building the House of Quality consists of six basic steps:

1. Identify customer requirements.
2. Identify technical requirements.
3. Relate the customer requirements to the technical requirements.
4. Conduct an evaluation of competing products or services.
5. Evaluate technical requirements and develop targets.
6. Determine which technical requirements to deploy in the remainder of the production/delivery process.
To illustrate the development of the House of Quality and the QFD process, the task of designing a new fitness center in a community with two other competing organizations is presented. Step 1: Identify customer requirements. The voice of the customer is the primary input to the QFD process. As discussed in Chapter 3, many methods can be used to gather valid customer information. The most critical and most difficult step of the process is to capture the essence of the customer’s needs and expectations. The customer’s own words are vitally important in preventing misinterpretation by designers and engineers. Figure 7.4 shows the voice of the customer in the House of Quality for the fitness center, perhaps based on a telephone survey or focus groups. The requirements are grouped into five categories: programs and activities, facilities, atmosphere, staff, and other. These groupings can easily be done using affinity diagrams, for example.
Step 2: List the technical requirements that provide the foundation for the product or service design. Technical requirements are measurable design characteristics that describe the customer requirements as expressed in the language of the designer or engineer. Essentially, they are the “hows” by which the company will respond to the “whats”—the customer requirements that will determine customer satisfaction or delight, often called critical to quality characteristics (CTQs). They must be measurable, because the output is controlled and compared to objective targets. For the fitness center, these requirements include the number and type of program offerings and equipment, times, staffing requirements, facility characteristics and maintenance, fee structure, and so on. Figure 7.5 adds this information to the House of Quality.
The roof of the House of Quality shows the interrelationships between any pair of technical requirements. Various symbols denote these relationships. A typical scheme uses a filled circle to denote a very strong relationship, an open circle for a strong relationship, and a triangle to denote a weak relationship. These relationships indicate answers to questions such as, “How does a change in one technical characteristic affect others?” For example, increasing program offerings will probably require more staff, a larger facility, expanded hours, and higher costs; hiring more maintenance staff, building a larger facility, and buying more equipment will probably result in a higher membership fee. Thus, design decisions cannot be viewed in isolation. This relationship matrix helps to evaluate trade-offs.
Step 3: Develop a relationship matrix between the customer requirements and the technical requirements. Customer requirements are listed down the left column; technical requirements are written across the top. In the matrix itself, symbols indicate the degree of relationship in a manner similar to that used in the roof of the House of Quality. The purpose of the relationship matrix is to show whether the final technical requirements adequately address customer requirements. This assessment is usually based on expert experience, customer responses, or controlled experiments. The lack of a strong relationship between a customer requirement and any technical requirement shows that the customer needs either are not addressed or that the final design will have difficulty in meeting them. Similarly, if a technical requirement does not affect any customer requirement, it may be redundant or the designers may have missed some important customer need. For example, the customer requirement “clean locker rooms” bears a very strong relationship to the maintenance schedule and only a strong relationship to the number of maintenance staff. “Easy to sign up for programs” would probably bear a very strong relationship to Internet access and only a weak relationship to the hours the facility is open. Figure 7.6 shows an example of these relationships.
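The relationship matrix also supports a simple prioritization calculation that is common QFD practice, though not spelled out in the text: assign each symbol a weight (often 9 for very strong, 3 for strong, 1 for weak), multiply by the customer importance rating, and sum down each column to rank the technical requirements. The importance ratings and the small slice of the fitness-center house below are hypothetical.

```python
# Common 9-3-1 weighting of relationship symbols (an assumed convention).
SYMBOL_WEIGHT = {"very strong": 9, "strong": 3, "weak": 1, "": 0}

customer_reqs = {                      # requirement: importance (1-5), assumed
    "clean locker rooms": 5,
    "easy to sign up for programs": 3,
}
tech_reqs = ["maintenance schedule", "maintenance staff", "Internet access"]

relationships = {                      # (customer req, technical req): symbol
    ("clean locker rooms", "maintenance schedule"): "very strong",
    ("clean locker rooms", "maintenance staff"): "strong",
    ("easy to sign up for programs", "Internet access"): "very strong",
}

# Technical importance = sum over rows of (importance x relationship weight)
scores = {t: 0 for t in tech_reqs}
for (c, t), sym in relationships.items():
    scores[t] += customer_reqs[c] * SYMBOL_WEIGHT[sym]

for t, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(t, s)  # maintenance schedule 45, Internet access 27, staff 15
```

Rankings like these feed directly into Step 6, where the team decides which technical requirements to deploy into later houses.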
Step 4: Add competitor evaluation and key selling points. This step identifies importance ratings for each customer requirement and evaluates competitors’ existing products or services for each of them (see Figure 7.7). Customer importance ratings represent the areas of greatest interest and highest expectations as expressed by the customer. Competitive evaluation highlights the absolute strengths and weaknesses in competing products. By using this step, designers can discover opportunities for improvement. It also links QFD to a company’s strategic vision and indicates priorities for the design process. For example, if an important customer requirement receives a low evaluation on all competitors’ products (for instance, “family activities available”), then by focusing on this need a company might gain a competitive advantage. Such requirements become key selling points and the basis for formulating marketing strategies.
Step 5: Evaluate technical requirements of competitive products and services and develop targets. This step is usually accomplished through intelligence gathering or product testing and then translated into measurable terms. These evaluations are compared with the competitive evaluation of customer requirements to determine inconsistencies between customer requirements and technical requirements. If a competing product is found to best satisfy a customer requirement, but the evaluation of the related technical requirements indicates otherwise, then either the measures used are faulty or else the product has an image difference (either positive toward the competitor or negative toward the company’s product), which
Target and Tolerance Design

After basic technical requirements have been established, designers must set specific dimensional or operational targets and tolerances for critical manufacturing or service characteristics. These might be based on product functionality that reflects the voice of the customer or other considerations such as safety. For example, the National Highway Traffic Safety Administration dictates standards for motor vehicles, such as requiring two windshield wiper speeds, one of which must be faster than 45 cycles per minute and the other at least 15 cycles per minute slower than the faster speed but no slower than one cycle every three seconds.16 Manufacturing specifications consist of nominal dimensions and tolerances. Nominal refers to the ideal dimension or the target value that manufacturing seeks to meet; tolerance is the permissible variation, recognizing the difficulty of meeting a target consistently. Tolerances are necessary because not all parts can be produced exactly to nominal specifications because of natural variations (common causes) in production processes due to the “5 Ms”: men and women, materials, machines, methods, and measurement.
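The windshield-wiper requirement above can be expressed as a small conformance check, a minimal sketch of how such regulatory targets translate into testable rules. The function name is our own, and the 20-cycles-per-minute floor is simply one cycle every three seconds restated in the standard's units.

```python
def wiper_speeds_conform(fast_cpm, slow_cpm):
    """Check two wiper speeds (cycles per minute) against the rule as
    described in the text: the faster speed must exceed 45 cpm; the
    slower must be at least 15 cpm below the faster, but no slower
    than one cycle every three seconds (20 cpm)."""
    return (fast_cpm > 45
            and slow_cpm <= fast_cpm - 15
            and slow_cpm >= 20)

print(wiper_speeds_conform(50, 30))  # True: 50 > 45, 30 <= 35, 30 >= 20
print(wiper_speeds_conform(50, 40))  # False: gap between speeds under 15 cpm
```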
Tolerance design involves determining the permissible variation in a dimension. To design tolerances effectively, engineers must understand the necessary trade-offs. Narrow tolerances tend to raise manufacturing costs, but they also increase the interchangeability of parts within the plant and in the field, product performance, durability, and appearance. Also, a tolerance reserve or factor of safety is needed to account for engineering uncertainty regarding the maximum variation allowable and compatibility with satisfactory product performance. Wide tolerances, on the other hand, increase material utilization, machine throughput, and labor productivity, but have a negative impact on product characteristics, as previously mentioned. Thus, factors operating to enlarge tolerances include production planning requirements; tool design, fabrication, and setup; tool adjustment and replacement; process yield; inspection and gauge control and maintenance; and labor and supervision requirements.
The Taguchi Loss Function

All too often, tolerance settings fail to account for the impact of variation on product functionality, manufacturability, or economic consequences. In a review of Audi’s TT Coupe when it was first introduced, automobile columnist Alan Vonderhaar noted, “There was apparently some problem with the second-gear synchronizer, a device that is supposed to ease shifts. As a result, on full-power upshifts from first to second, I frequently got gear clashes.” He observed from reading Internet newsgroups that others had the same problem, and concluded, “It appears to be an issue that surfaces just now and again, here and there throughout the production mix, suggesting it may be a tolerance issue—sometimes the associated parts are close enough to specifications to get along well, other times they’re at the outer ranges of manufacturing tolerance and cause problems.”19 What Mr. Vonderhaar observed can be explained using the manufacturing-based definition of quality. For example, suppose that a specification for some quality characteristic is 0.500 ± 0.020. Using this definition, the actual value of the quality characteristic can fall anywhere in the range from 0.480 to 0.520. This approach assumes that the customer, either the consumer or the next department in the production process, would accept any value within the 0.480 to 0.520 range, but not be satisfied with a value outside this tolerance range (see Figure 7.10). But what is the real difference
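Taguchi's answer to this question is commonly written as a quadratic loss function, L(x) = k(x − T)², in which loss grows continuously with deviation from the target T rather than jumping at the tolerance limits. A sketch contrasting the two views for the 0.500 ± 0.020 specification; the cost coefficient k and the fixed scrap cost are hypothetical values chosen only for illustration.

```python
def taguchi_loss(x, target=0.500, k=1000.0):
    """Taguchi quadratic loss L(x) = k * (x - target)**2.

    k is a hypothetical cost coefficient; any deviation from target
    incurs some loss, growing smoothly with the deviation."""
    return k * (x - target) ** 2

def goalpost_loss(x, target=0.500, tol=0.020, cost=0.40):
    """Conformance-to-specification ("goalpost") view: zero loss inside
    the tolerance band, a fixed scrap/rework cost outside it."""
    return 0.0 if abs(x - target) <= tol else cost

# Parts just inside (0.519) and just outside (0.521) the band are nearly
# identical physically, but the goalpost model treats them very
# differently; the Taguchi model does not.
for x in (0.500, 0.519, 0.521):
    print(x, goalpost_loss(x), round(taguchi_loss(x), 4))
```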
DESIGN FOR RELIABILITY

Reliability—the ability of a product to perform as expected over time—is one of the principal dimensions of quality. As the overall quality of products continues to improve, consumers expect higher reliability with each purchase; they simply are not satisfied with products that fail unexpectedly. Reliability is an essential aspect of both product and process design. Sophisticated equipment used today in such areas as transportation (airplanes), communications (satellites), and medicine (pacemakers) requires high reliability. High reliability can also provide a competitive advantage for many consumer goods. Japanese automobiles gained large market shares primarily because of their high reliability, and current models typically dominate the Consumer Reports annual ranking for predicted reliability. However, domestic manufacturers have made significant improvements.21 Likewise in manufacturing, the increased use of automation, complexity of machines, low profit margins, and time-based competitiveness make reliability in production processes a critical issue for survival of the business. However, the increased complexity of modern products makes high reliability more difficult to achieve. Formally, reliability is defined as the probability that a product, piece of equipment, or system performs its intended function for a stated period of time under specified operating conditions. This definition has four important elements: probability, time, performance, and operating conditions.

1. First, reliability is defined as a probability, that is, a value between 0 and 1. Thus, it is a numerical measure with a precise meaning. Expressing reliability in this way provides a valid basis for comparison of different designs for products and systems. For example, a reliability of 0.97 indicates that, on average, 97 of 100 items will perform their function for a given period of time and under certain operating conditions.
Often reliability is expressed as a percentage simply for descriptive purposes.
2. The second element of the definition is time. Clearly a device having a reliability of 0.97 for 1,000 hours of operation is inferior to one having the same reliability for 5,000 hours of operation, assuming that the mission of the device is long life.
3. Performance is the third element and refers to the objective for which the product or system was made. The term failure is used when expectations of performance of the intended function are not met. Two types of failures can occur: functional failure at the start of product life due to manufacturing or material defects such as a missing connection or a faulty component, and reliability failure after some period of use. Examples of reliability failures include the following: a device does not work at all (car will not start); the operation of a device is unstable (car idles rough); or the performance of a device deteriorates (shifting becomes difficult). Because the nature of failure in each of these cases is different, the failure must be clearly defined.
4. The final component of the reliability definition is operating conditions, which involves the type and amount of usage and the environment in which the product is used. Automobiles, for example, “must run in temperatures ranging from −70°F in Barrow, AK, to 130°F in the Arizona desert. They have to work while driving over gravel roads or washboard concrete. Worse, they have to operate reliably even when they are poorly maintained by owners who seem oblivious to their requirements.”
By defining a product’s intended environment, performance characteristics, and lifetime, a manufacturer can design and conduct tests to measure the probability of product survival (or failure). The analysis of such tests enables better prediction of reliability and improved product and process designs. Reliability engineers distinguish between inherent reliability, which is the predicted reliability determined by the design of the product or process, and the achieved reliability, which is the actual reliability observed during use. Achieved reliability can be less than the inherent reliability due to the effects of the manufacturing process and the conditions of use.
Mathematics of Reliability

In practice, reliability is determined by the number of failures per unit time during the duration under consideration (called the failure rate, λ). Some products must be scrapped and replaced upon failure; others can be repaired. For items that must be replaced when a failure occurs, the reciprocal of the failure rate (having dimensions of time units per failure) is called the mean time to failure (MTTF). For repairable items, the mean time between failures (MTBF) is used. We may compute the failure rate by testing a sample of items until all fail, recording the time of failure for each item, and using the following formula:

Failure rate = λ = (Number of failures) / (Total unit operating hours)

or, alternatively,

λ = (Number of failures) / [(Units tested) × (Number of hours tested)]
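The failure-rate formulas above can be computed directly. In the sketch below, the test scenario is hypothetical, and the MTTF calculation assumes non-repairable items; note that the alternative form of the formula applies only when every unit runs for the full test duration.

```python
def failure_rate(num_failures, total_unit_operating_hours):
    """lambda = number of failures / total unit operating hours."""
    return num_failures / total_unit_operating_hours

# Hypothetical test: 10 units each accumulate 1,000 operating hours,
# and 2 fail right at the end of the test, so total unit operating
# hours is approximately 10 * 1,000.
lam = failure_rate(2, 10 * 1000)   # 0.0002 failures per hour
mttf = (10 * 1000) / 2             # reciprocal of lambda: 5,000 hours
print(lam, mttf)
```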
Many electronic components commonly exhibit a high, but decreasing, failure rate early in their lives (as evidenced by the steep slope of the curve), followed by a period of a relatively constant failure rate, and ending with an increasing failure rate. This is depicted in Figure 7.17, which is called a product life characteristics curve, and shows the instantaneous failure rate at any point in time. (This is often referred to as a “bathtub” curve for obvious reasons.) We see that the failure rate is rather high at the beginning of product life, then levels out over a long period of time and then eventually begins to increase. This is a typical phenomenon for electronic components such as semiconductors and consumer products such as light bulbs. In Figure 7.17, three distinct time periods are evident: early failure (from 0 to about 1,000 hours), useful life (from 1,000 to 4,000 hours), and wearout period (after 4,000 hours). The first is the early failure period, sometimes called the infant mortality period. Weak components resulting from poor manufacturing or quality control procedures will often lead to a high rate of failure early in a product’s life. This high rate usually cannot be detected through normal test procedures, particularly in electronic semiconductors. Such components or products should not be permitted to enter the marketplace. The second phase of the life characteristics curve describes the normal pattern of random failures during a product’s useful life. This period usually has a low, relatively constant failure rate caused by uncontrollable factors, such as sudden and unexpected stresses due to complex interactions in materials or the environment.
System Reliability Many systems are composed of individual components with known reliabilities. The reliability data of individual components can be used to predict the reliability of the system at the design stage. Systems of components may be configured in series, in parallel, or in some mixed combination. Block diagrams are useful ways to represent system configurations, where blocks represent functional components or subsystems. Engineers can use reliability calculations to predict performance and evaluate alternative designs to optimize performance within cost, size, or other constraints. We first consider a series system, illustrated in Figure 7.20. In a series system, all components must function or the system will fail. For example, inexpensive Christmas tree lights use a series system whereby if one light goes out, the entire string does. If the reliability of component i is Ri, the reliability of the system is the product of the individual reliabilities, that is,

RS = R1 × R2 × … × Rn    (7.14)

This equation is based on the multiplicative law of probability.
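Equation (7.14) is a one-line computation. The sketch below applies it to three hypothetical component reliabilities:

```python
# Series system reliability (equation 7.14): the system works only if
# every component works, so system reliability is the product of the
# individual component reliabilities. Values below are hypothetical.
import math

component_reliabilities = [0.99, 0.97, 0.95]  # R1, R2, R3

r_series = math.prod(component_reliabilities)
print(f"Series system reliability: {r_series:.4f}")
```

Note that even with highly reliable components, the product shrinks quickly as components are added in series, which is why long series chains are fragile.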
Redundancy offers backup components that can be used when the failure of any one component in a system can cause a failure of the entire system. Redundant components can increase reliability dramatically. Redundancy is crucial to systems in which failures can be extremely costly, such as aircraft or satellite communications systems. For example, airplanes have dual ignition systems, with two spark plugs in each cylinder and two magnetos that produce a charge for the spark plugs. Redundancy, however, increases the cost, size, and weight of the system. Redundant components are designed in a parallel system configuration as illustrated in Figure 7.21. In such a system, failure of an individual component is less critical than in series systems; the system will successfully operate as long as one component functions.
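The parallel case is the complement of the series case: the system fails only if every redundant component fails. The formula used below, R = 1 − (1 − R1)(1 − R2)…(1 − Rn), follows directly from that description; the component values are hypothetical:

```python
# Parallel (redundant) system reliability: the system fails only when
# ALL components fail, so R_parallel = 1 - product of (1 - Ri).
# Component values below are hypothetical.
import math

component_reliabilities = [0.95, 0.95]  # two redundant components

p_all_fail = math.prod(1 - r for r in component_reliabilities)
r_parallel = 1 - p_all_fail
print(f"Parallel system reliability: {r_parallel:.4f}")
```

Two components that are each 95 percent reliable yield a system that is 99.75 percent reliable, which illustrates why redundancy can increase reliability dramatically.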
DESIGN OPTIMIZATION Designers of products and processes should make every effort to optimize their designs. A good analogy for understanding this concept is to consider the task of a major league baseball manager who must design the best player lineup. Although variation will be a factor among individuals as well as with the opposing team’s defense, the manager would like to set the lineup that best plays to their strengths and overcomes their weaknesses. Robust design refers to designing goods and services that are insensitive to variation in manufacturing processes and when consumers use them. Robust design is facilitated by design of experiments (see Chapter 6) to identify optimal levels for nominal dimensions and other tools to minimize failures, reduce defects during the manufacturing process, facilitate assembly and disassembly (for both the manufacturer and the customer), and improve reliability. In a celebrated case, Ina Tile Company, a Japanese ceramic tile manufacturer, had purchased a $2 million kiln from West Germany in 1953.23 Tiles were stacked inside the kiln and baked. Tiles toward the outside of the stack tended to have a different average size and more variation in dimensions than those further inside the stack. The obvious cause was the uneven temperatures inside the kiln. Temperature was an uncontrollable factor. To try to eliminate the effects of temperature would require redesign of the kiln itself, a very costly alternative. A group of engineers, chemists, and others who were familiar with the manufacturing process brainstormed and identified seven major controllable variables that could affect the tile dimensions:
1. Limestone content
2. Fineness of additive
3. Content of agalmatolite
4. Type of agalmatolite
5. Raw material quantity
6. Content of waste return
7. Content of feldspar
Safety in consumer products represents a major issue in design, and certainly an important part of a company’s public responsibilities. All parties responsible for design, manufacture, sales, and service of a defective product are now liable for damages. In a survey of more than 500 chief executives, more than one-third worked for firms that canceled the introduction of products because of liability concerns. According to the theory of strict liability, anyone who sells a product that is defective or unreasonably dangerous is subject to liability for any physical harm caused to the user, the consumer, or the property of either.24 This law applies when the seller is in the business of selling the product, and the product reaches the consumer without a substantial change in condition even if the seller exercised all possible care in the preparation and sale of the product. The principal issue is whether a defect, direct or indirect, exists. If the existence of a defect can be established, the manufacturer usually will be held liable. A plaintiff need prove only that (1) the product was defective, (2) the defect was present when the product changed ownership, and (3) the defect resulted in injury. In 1997, Chrysler was ordered to pay $262.5 million in a case involving defective latches on minivans; thus, the economic consequences can be significant. Attention to design quality can greatly reduce the possibility of product liability claims as well as provide supporting evidence in defense arguments. Liability makes documentation of quality assurance procedures a necessity. A firm should record all evidence that shows the designer established test and monitoring procedures of critical product characteristics. Feedback on test and inspection results along with corrective actions taken must also be documented. 
Even adequate packaging and handling procedures are not immune to examination in liability suits, because packaging is still within the manufacturer’s span of control. Managers should address the following questions:25
• Is the product reasonably safe for the end user?
• What could possibly go wrong with it?
• Are any needed safety devices absent?
• What kind of warning labels or instructions should be included?
• What would attorneys call “reasonably foreseeable use”?
One tool for proactively addressing product risks is design failure mode and effects analysis (DFMEA), often simply called failure mode and effects analysis (FMEA).26 DFMEA was used by NASA in the 1960s and became popular in the automotive industry in the 1980s. Recently, it has found increasing application in health care. A Joint Commission on Accreditation of Healthcare Organizations standard lists DFMEA as a risk assessment tool, referring to it as “fault mode and effect analysis.” The Institute for Healthcare Improvement defines FMEA as “a systematic, proactive method for evaluating a process to identify where and how it may fail and to assess the relative impact of different failures, in order to identify the parts of the process that are most in need of change.”27 A DFMEA usually consists of specifying the following information for each design element or function:
• Failure modes. Ways in which each element or function can fail. This information generally takes some research and imagination. One way to start is with known failures that have occurred in the past. Documents such as quality and reliability reports, test results, and warranty reports provide useful information.
• Effect of the failure on the customer. Such as dissatisfaction, potential injury or other safety issue, downtime, repair requirements, and so on. Maintenance records, customer complaints, and warranty reports provide good sources of information. Consideration should be given to failures on the function of the end product, manufacturability in the next process, what the customer sees or experiences, and product safety.
• Severity, likelihood of occurrence, and detection rating. These are subjective ratings best done by a cross-functional team of experts. The severity rating is based on how serious the impact would be if the potential failure were to occur.
Severity might be measured on a scale of 1 to 10, where a “1” indicates that the failure is so minor that the customer probably would not notice it, and a “10” might mean that the customer might be endangered. The occurrence rating is based on the probability of the potential failure occurring. This might be based on service history or field performance and provides an indication of the significance of the failure. The detection rating is based on how easily the potential failure could be detected prior to occurrence. Figure 7.26 shows an example of a scoring rubric for these ratings. Based on these assessments, a risk priority number (RPN) is computed by multiplying the severity, occurrence, and detection ratings, resulting in a number from 1 to 1,000, which is used to identify critical failure modes that must be addressed. The lower the value, the lower the risk.
• Potential causes of failure. Often failure is the result of poor design. Design deficiencies can cause errors either in the field or in manufacturing and assembly. Identification of causes might require experimentation and rigorous analysis.
• Corrective actions or controls. These controls might include design changes, mistake proofing, better user instructions, management responsibilities, and target completion dates.
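The RPN arithmetic can be sketched in a few lines. The failure modes and ratings below are hypothetical examples, not data from the text:

```python
# Risk priority number (RPN) from a DFMEA: severity x occurrence x
# detection, each rated 1-10, giving a value from 1 to 1,000.
# Failure modes and ratings below are hypothetical.
failure_modes = {
    "latch does not engage": {"severity": 9, "occurrence": 3, "detection": 4},
    "cosmetic scratch":      {"severity": 2, "occurrence": 6, "detection": 2},
}

rpns = {mode: r["severity"] * r["occurrence"] * r["detection"]
        for mode, r in failure_modes.items()}

# Higher RPN -> higher priority for corrective action.
for mode, rpn in sorted(rpns.items(), key=lambda kv: -kv[1]):
    print(f"{mode}: RPN = {rpn}")
```

Sorting by descending RPN surfaces the failure modes most in need of corrective action; here the latch failure (RPN 108) clearly outranks the cosmetic defect (RPN 24).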
Fault Tree Analysis Fault Tree Analysis (FTA), sometimes called cause and effect tree analysis, is a method to describe combinations of conditions or events that can lead to a failure. In effect, it is a way to drill down and identify causes associated with failures and is a good complement to DFMEA. It is particularly useful for identifying failures that occur only as a result of multiple events occurring simultaneously. A cause and effect tree is composed of conditions or events connected by “and” gates and “or” gates as shown in Figure 7.29. An effect with an “and” gate occurs only if all of the causes below it occur; an effect with an “or” gate occurs whenever any of the causes occur.
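The gate logic described above can be evaluated directly once the tree is written down. This is a minimal sketch; the events and tree structure are hypothetical:

```python
# Minimal fault-tree evaluation sketch: an "and" gate fires only when
# all of its inputs occur; an "or" gate fires when any input occurs.
# Events and tree structure below are hypothetical.
def evaluate(node, events):
    """Recursively evaluate a fault-tree node against observed events."""
    if isinstance(node, str):                 # leaf: a basic event
        return events.get(node, False)
    gate, children = node                     # ("and" | "or", [subnodes])
    results = [evaluate(c, events) for c in children]
    return all(results) if gate == "and" else any(results)

# Top event "overheat" occurs only if the fan fails AND either the
# temperature sensor or the alarm also fails.
tree = ("and", ["fan_failure", ("or", ["sensor_failure", "alarm_failure"])])

print(evaluate(tree, {"fan_failure": True, "sensor_failure": True}))  # True
print(evaluate(tree, {"fan_failure": True}))                          # False
```

The second evaluation shows the point made in the text: the top event requires multiple simultaneous failures, so a single fan failure alone does not trigger it.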
Design for Manufacturability Product design can significantly affect the cost of manufacturing (direct and indirect labor, materials, and overhead), redesign, warranty, and field repair; the efficiency by which the product can be manufactured; and the quality of the output. Designers must pay particular attention to cost, quality, and manufacturability in order to meet price targets that customers are willing to pay. A Samsung manager noted that 70 to 80 percent of quality, cost, and delivery time is determined in the initial design stages. This is one reason for the company’s obsession with reducing complexity early in the design cycle. As a result, Samsung has lower manufacturing costs, higher profit margins, quicker times to market, and more often than not, more innovative products than its competition.
… dust, may result in failures during testing or use. Design for manufacturability (DFM) is the process of designing a product for efficient production at the highest level of quality. DFM is typically integrated into standard design processes, but because of the need for highly creative solutions, it might be addressed in specialized “think-tank” departments in a company. Samsung, for example, supports a Value Innovation Program (VIP) Center, which has been described as “an
Design and Environmental Responsibility Environmental concerns have an unprecedented impact on product and process designs. Hundreds of millions of home and office appliances are disposed of each year. The problem of what to do with obsolete computers is a growing design and technological waste problem today.32 Pressures from environmental groups clamoring for “socially responsive” designs, states and municipalities that are running out of space for landfills, and consumers who want the most for their money all cause designers and managers to look carefully at the concept of design-for-environment, or DFE.33 DFE is the explicit consideration of environmental concerns during the design of products and processes, and includes such practices as designing for recyclability and disassembly. DFE offers the potential to create more desirable products at lower costs by reducing disposal and regulatory costs, increasing the end-of-life value of products, reducing material use, and minimizing liabilities. Recyclable products are designed to be taken apart and their components repaired, refurbished, melted down, or otherwise salvaged for reuse. For example, General Electric’s plastics division, which serves the durable goods market, uses only thermoplastics in its products.34 Unlike many other varieties of plastics, thermoplastics can be melted down and recast into other shapes and products, thus making them recyclable. Many products are discarded simply because the cost of maintenance or repair is too high when compared with the cost of a new item. Now design for disassembly promises to bring back easy, affordable product repair. For example, Whirlpool Corporation is developing a new appliance designed for repairability, with its parts sorted for easy coding. Thus, repairability has the potential of pleasing customers, who would prefer to repair a product rather than discard it.
At the same time, companies are challenged to consider fresh approaches to design that build both cost-effectiveness and quality into the product. For instance, even though it is more efficient to assemble an item using rivets instead of screws, this approach is contrary to a design-for-disassembly philosophy. An alternative might be an entirely new design that eliminates the need for fasteners in the first place.
Guidelines for Quality Assurance

Minimize Number of Parts
• Fewer parts and assembly drawings → Lower volume of drawings and instructions to control
• Less complicated assemblies → Lower assembly error rate
• Fewer parts to hold to required quality characteristics → Higher consistency of part quality
• Fewer parts to fail → Higher reliability

Make Assembly Easy and Foolproof
• Parts cannot be assembled wrong → Lower assembly error rate
• Obvious when parts are missing → Lower assembly error rate
• Assembly tooling designed into part → Lower assembly error rate
• Parts are self-securing → Lower assembly error rate
• No “force fitting” of parts → Less damage to parts; better serviceability

Minimize Number of Part Numbers
• Fewer variations of like parts → Lower assembly error rate

Design for Robustness (Taguchi method)
• Low sensitivity to component variability → Higher first-pass yield; less degradation of performance with time

Eliminate Adjustments
• No assembly adjustment errors → Higher first-pass yield
• Eliminates adjustable components with high failure rates → Lower failure rate

Use Repeatable, Well-Understood Processes
• Part quality easy to control → Higher part yield
• Assembly quality easy to control → Higher assembly yield

Choose Parts That Can Survive Process Operations
• Less damage to parts → Higher yield
• Less degradation of parts → Higher reliability

Design for Efficient and Adequate Testing
• Less mistaking “good” for “bad” product and vice versa → Truer assessment of quality; less unnecessary rework

Lay Out Parts for Reliable Process Completion
• Less damage to parts during handling and assembly → Higher yield; higher reliability

Eliminate Engineering Changes on Released Products
• Fewer errors due to changeovers and multiple revisions/versions → Lower assembly error rate

Source: Adapted from D. Daetz, “The Effect of Product Design on Product Quality and Product Cost,” Quality Progress, vol. 20, no. 6, June 1987, pp. 63–67.
© 2005 American Society for Quality.
Design for Excellence Design for Excellence (DFX) is an emerging concept that includes many design-related initiatives such as concurrent engineering, design for manufacturability, design for assembly, design for environment, and other “design for” approaches.35 DFX objectives include higher functional performance, physical performance, user friendliness, reliability and durability, maintainability and serviceability, safety, compatibility and upgradeability, environmental friendliness, and psychological characteristics. DFX represents a total approach to product development and design that involves the following activities:
• Constantly thinking in terms of how one can design or manufacture products better, not just solving or preventing problems
• Focusing on “things done right” rather than “things gone wrong”
• Defining customer expectations and going beyond them, not just barely meeting them or just matching the competition
• Optimizing desirable features or results, not just incorporating them
• Minimizing the overall cost without compromising quality of function
Design Reviews One approach often used to facilitate product development is the design review. The purpose of a design review is to stimulate discussion, raise questions, and generate new ideas and solutions to help designers anticipate problems before they occur. Generally, a design review is conducted in three major stages: preliminary, intermediate, and final. The preliminary design review establishes early communication between marketing, engineering, manufacturing, and purchasing personnel and provides better coordination of their activities. It usually involves higher levels of management and concentrates on strategic issues in design that relate to customer requirements and thus the ultimate quality of the product. A preliminary design review evaluates such issues as the function of the product, conformance to customers’ needs, completeness of specifications, manufacturing costs, and liability issues. Eastman Chemical reviews designs for safety, reliability, waste minimization, patent position, toxicity information, environmental risks, product disposal, and other customer needs. It also conducts a market analysis of key suppliers’ abilities to manage costs, obtain materials, maintain production, and ship reliably. AT&T Transmission Systems has a new product introduction center that evaluates designs based on manufacturing capabilities, recognizing that good designs both reduce the risk of manufacturing defects and improve productivity. After the design is well established, an intermediate review takes place to study the design in greater detail to identify potential problems and suggest corrective action. Personnel at lower levels of the organization are more heavily involved at this stage. Finally, just before release to production, a final review is held. Materials lists, drawings, and other detailed design information are studied with the purpose of preventing costly changes after production setup.
Reliability Testing The reliability of a product is determined principally by the design and the reliability of the components of the product. However, reliability is such a complex issue that it cannot always be determined from theoretical analysis of the design alone. Hence, formal testing is necessary, which involves simulating environmental conditions to determine a product’s performance, operating time, and mode of failure. Testing is useful for a variety of other reasons. Test data are often necessary for liability protection, as a means for evaluating designs or vendor reliability, and in process planning and selection. Often, reliability test data are required in military contracts. Testing is necessary to evaluate warranties and to avoid high costs related to early field failure. Good testing leads to good reliability and hence good quality. Verizon Wireless advertises itself as being the “leader in network reliability.” It has its engineers travel around the country in unmarked vehicles to make millions of voice calls and perform data tests on the network and those of competitors each year. Product testing is performed by various methods. For example, Hewlett-Packard’s popular HP-12c financial calculator, which is essentially unchanged since 1981 and still is a popular seller, undergoes a drop test in which engineers repeatedly drop it from desk height onto a hard floor. They also subject the keyboard to mechanical button-pushers to simulate the effects of 5 to 10 years of use.36 Semiconductors are the basic building blocks of numerous modern products such as MP3 players, automotive ignition systems, computers, and military weapons systems. Semiconductors have a small proportion of defects, called latent defects, which can cause them to fail during the first 1,000 hours of normal operation (the infant mortality period in Figure 7.14).
After that, the failure rate stabilizes, perhaps for as long as 25 years, before beginning to rise again as components wear out. These infant mortalities can be as high as 10 percent in a new technology or as low as 0.01 percent in proven technologies. Thus, electronic components are often tested for the length of the infant mortality period prior to being placed into service to eliminate early functional failures. This is called burn-in. Studies and experience have demonstrated the economic advantages of burn-in. For example, a large-scale study of the effect of burn-in on enhancing reliability of memory chips was conducted in Europe. The failure rate without burn-in was 0.24 percent per thousand hours, whereas burn-in reduced the rate to 0.02 percent per thousand hours. When considering the cost of field service and warranty work, for instance, reduction of semiconductor failure rates in a large system by an order of magnitude translates roughly into an average of one repair call per year versus one repair call per month. The purpose of life testing, that is, running devices until they fail, is to measure the distribution of failures to better understand and eliminate their causes. However, such testing can be expensive and time-consuming. For devices that have long natural lives, life testing is not practical. Accelerated life testing involves overstressing components to reduce the time to failure and find weaknesses. This form of testing might involve exposing integrated circuits to elevated temperatures or voltage in order to force latent defects to occur. For example, a device that might normally fail after 300 hours at 25°C might fail in less than 20 hours at 150°C. A more recent approach, called highly accelerated life testing, is focused on discovering latent defects that would not otherwise be found through conventional methods.
For example, it might expose products to rapid, extreme temperature changes in temperature chambers that can move products between hot and cold zones to test thermal shock, as well as to extreme vibrations.37
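The economics of burn-in quoted above (0.24 percent versus 0.02 percent per thousand hours in the European memory-chip study) can be turned into expected failure counts. The fleet size and service period in this sketch are hypothetical; only the two failure rates come from the text:

```python
# Expected early failures with and without burn-in, using the failure
# rates quoted from the European memory-chip study: 0.24% per thousand
# hours without burn-in versus 0.02% per thousand hours with burn-in.
# Fleet size and service period below are hypothetical.
chips_in_service = 100_000
service_hours = 10_000            # roughly 14 months of continuous operation

rate_no_burn_in = 0.0024 / 1000   # failures per chip-hour
rate_burn_in = 0.0002 / 1000

expected_no_burn_in = chips_in_service * service_hours * rate_no_burn_in
expected_burn_in = chips_in_service * service_hours * rate_burn_in

print(f"Expected failures without burn-in: {expected_no_burn_in:.0f}")
print(f"Expected failures with burn-in:    {expected_burn_in:.0f}")
```

The order-of-magnitude reduction in expected failures is what drives the field-service comparison in the text: roughly one repair call per year instead of one per month.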