Appendix D: Critique of MIL-HDBK-217, by Anto Peter, Diganta Das, and Michael Pecht
Pages 203-246



From page 203...
... This step was the inception of reliability prediction for electronics. By the 1960s, spurred on

[Footnote 1: The authors are at the Center for Advanced Life Cycle Engineering at the University of Maryland.]
From page 204...
... The methodology first used in MIL-HDBK-217 was a point estimate of the failure rate, which was estimated by fitting a line through field failure data. Soon after its introduction, all reliability predictions were based on this handbook, and all other sources of failure rates, such as those from independent experiments, gradually disappeared (Denson, 1998)
From page 205...
... This approach was not surprising, because Bell Labs and Bell Communications Research (Bellcore) were the lead developers for the telecommunication reliability prediction method.
From page 206...
... , which are now common, had not been invented at the time of the last MIL-HDBK-217 revision. Needless to say, the components and their use conditions, the failure modes and mechanisms, and the failure rates for today's systems are vastly different from the components for which MIL-HDBK-217 was developed.
From page 207...
... These methodologies, much like MIL-HDBK-217, use some form of a constant failure rate model: they do not consider actual failure modes or mechanisms. Hence, these methodologies are only applicable in cases where systems or components exhibit relatively constant failure rates.
From page 208...
... to modify a given constant base failure rate. The constant failure rates in the handbooks are obtained by performing a linear regression analysis on the field data.
From page 209...
... The MIL-HDBK-217F parts stress method provides constant failure rate models based on curve-fitting the empirical data obtained from field operation and testing. The models have a constant base failure rate modified by environmental, temperature, electrical stress, quality, and other factors.
From page 210...
... PRISM includes some nonpart factors such as interface, software, and mechanical problems. PRISM calculates assembly- and system-level constant failure rates in accordance with similarity analysis, which is an assessment method that compares the actual life-cycle characteristics of a system with predefined
From page 211...
... PRISM calculates non-operating constant failure rates with several assumptions. The daily or seasonal temperature cycling high and low values that are assumed to occur during storage or dormancy represent the largest contribution to the non-operating constant failure rate value.
From page 212...
... Some other non-operating constant failure rate tables from the 1970s and 1980s include the MIRADCOM Report LC-78-1, RADC-TR-73-248, and NONOP-1.

IEEE 1413 AND COMPARISON OF RELIABILITY PREDICTION METHODOLOGIES

The IEEE Standard 1413, IEEE Standard Methodology for Reliability Prediction and Assessment for Electronic Systems and Equipment (IEEE Standards Association, 2010)
From page 213...
... Though only five of the many failure prediction methodologies have been analyzed, they are representative of the other constant-failure-rate-based techniques. There have been several publications that assess other similar aspects of prediction methodologies.
From page 214...
... [Table rows comparing the five methodologies across five columns:]
Are failure mechanisms identified? No / No / No / No / Yes
Are confidence levels for the prediction results identified? No / Yes / Yes / No / No
From page 215...
... Does the methodology account for part quality? [Answers across the five methodologies:]
- Quality levels are derived from specific part-dependent data and the number of the manufacturer screens the part goes through.
- Four quality levels that are based on generalities regarding the origin and screening.
- Quality is accounted for in the part quality process factors.
- Part quality level is implicitly addressed by process grading.
- Yes.
From page 216...
... and mysterious unexplained causes were used to dismiss anomalies. Proponents of the constant failure rate model believed that the hazard rates or instantaneous failure rates of electronic systems would follow a
From page 217...
... This was said to have been caused by "freaks." It was later explained as being an extended infant mortality rate. Bellcore and SAE created two standards using a prediction methodology based on constant failure rate, but they subsequently adjusted their techniques to account for this phenomenon (of decreasing failure rates lasting several thousand hours)
From page 218...
... It is important to remember that the constant failure rate models used in some of the handbooks are calculated by performing a linear regression analysis on the field failure data or generic test data. These data and the constant failure rates are not representative of the actual failure rates that a system might experience in the field (unless the environmental and loading conditions are static and the same for all devices)
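The regression step described above can be sketched as follows. This is a minimal illustration, not the handbook's actual procedure: the data, inspection times, and pooled-fit approach are all made up for the example. Under the exponential (constant failure rate) assumption, ln S(t) = -λt, so a least-squares line through the origin on (t, -ln S(t)) yields a single pooled rate:

```python
import math

# Hypothetical pooled field data (illustrative values, not from the handbook):
# cumulative failures observed among 1,000 fielded units at inspection times.
total_units = 1000
times = [1000, 2000, 4000, 8000]          # hours
cum_failures = [12, 23, 44, 85]

# Under the constant-failure-rate assumption, -ln(survival fraction) grows
# linearly in time with slope lambda; fit that slope by least squares
# through the origin.
ys = [-math.log((total_units - f) / total_units) for f in cum_failures]
lam = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

print(f"estimated constant failure rate: {lam:.2e} per hour")
```

The single number that comes out hides all variation in environment and loading across the fielded units, which is precisely the criticism made in the text.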
From page 219...
... Kopanski et al. (1991) explored an example of design misguidance resulting from device failure rate prediction methodologies concerning the relationship between thermal stresses and microelectronic failure mechanisms.
From page 220...
... The constant failure rate reliability predictions have little relevance to the actual reliability of an electronic system in the
From page 221...
... constant failure-rate data, not root-cause, time-to-failure data. A proponent stated: "Therefore, because of the fragmented nature of the data and the fact that it is often necessary to interpolate or extrapolate from available data when developing new models, no statistical confidence intervals should be associated with the overall model results" (Morris, 1990)
From page 222...
... and failure sites for key failure mechanisms in a device ... "not do a very good job in an absolute sense" (Morris, 1990)
From page 223...
... Relative Cost of Analysis: [compared across the two approaches:]
- Cost is high compared with value added. Can misguide efforts to design reliable electronic equipment.
- Intent is to focus on root-cause of failure mechanisms and sites, which is central to good design and ...
From page 224...
... In order to estimate system-level reliability, MIL-HDBK-217 suggests that the individual reliabilities of components either be added or multiplied with other correction factors, all with the overarching assumption of constant failure rates. Pecht and Ramappan (1992)
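The system-level roll-up described above can be sketched in a few lines. This is an illustrative sketch only, with made-up part rates; under the constant-failure-rate assumption, series-system rates add and part reliabilities multiply, which is why the two views agree:

```python
import math

# Illustrative part failure rates in failures per 10^6 hours (made-up values).
part_rates_per_1e6h = [0.5, 1.2, 0.8, 2.0]

# Series system under constant failure rates: rates add.
lam_sys = sum(part_rates_per_1e6h) * 1e-6        # failures per hour
mtbf_sys = 1.0 / lam_sys                         # hours

# Equivalently, part reliabilities multiply: prod exp(-lam_i t) = exp(-lam_sys t).
t = 10_000                                        # hours
r_product = math.prod(math.exp(-r * 1e-6 * t) for r in part_rates_per_1e6h)
r_direct = math.exp(-lam_sys * t)
assert abs(r_product - r_direct) < 1e-12

print(f"system failure rate: {lam_sys:.2e}/h, MTBF: {mtbf_sys:,.0f} h")
```

The equivalence of adding rates and multiplying reliabilities holds only under the exponential assumption, which is exactly the overarching premise the text questions.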
From page 225...
... While the first edition of MIL-HDBK-217 featured only a single-point constant failure rate, the second edition, MIL-HDBK-217B, featured a failure rate calculation that later became the standard for reliability prediction methodologies in the United States. In 1969, around the time that revision B of the handbook was being drafted, Codier (1969)
From page 226...
... : λp = λb · πT · πR · πS · πQ · πE failures/10⁶ hours, (1) where λb is the base failure rate, πT is the temperature factor, πR is the power rating factor, πS is the voltage stress factor, πQ is the quality factor, and πE is the environment factor.
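Evaluating the parts-stress form λp = λb·πT·πR·πS·πQ·πE is simple multiplication once the factors are looked up. The factor values below are made up for illustration; in practice each comes from the handbook's tables for a specific part type, stress level, and environment:

```python
# Illustrative evaluation of the MIL-HDBK-217 parts-stress form.
# All numeric values below are invented for the example, not handbook data.
lambda_b = 0.0045   # base failure rate, failures per 10^6 hours
pi_T = 2.3          # temperature factor
pi_R = 1.1          # power rating factor
pi_S = 0.9          # voltage stress factor
pi_Q = 3.0          # quality factor
pi_E = 6.0          # environment factor

lambda_p = lambda_b * pi_T * pi_R * pi_S * pi_Q * pi_E
print(f"predicted part failure rate: {lambda_p:.4f} failures per 10^6 hours")
```

Note how the multiplicative factors can swing the prediction by an order of magnitude or more while still saying nothing about which failure mechanism would actually dominate.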
From page 227...
... Even with all its updates, the latest handbook revision, MIL-HDBK-217F, only features reliability prediction models for ceramic and plastic packages based on data from dual inline packages and pin grid arrays, which have rarely been used in new designs since 2003. Since the 1990s, the packaging and input/output (I/O)
From page 228...
... Such components and systems had been made available for significantly longer than contemporary commercial electronic products, such as computers and cell phones, which typically have a life cycle of 2-5 years. These shorter life cycles pose a challenge to failure rate evaluations because of the short time frame available for the collection of failure data and the development of failure rate models.
From page 229...
... are corollaries of the constant failure rate metric. For constant failure rates, the MTBF is simply the inverse of the constant failure rate, while an FIT rate is the number of failures in one billion (10⁹)
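The two conversions just described are one-liners; the rate below is an arbitrary illustrative value:

```python
# Corollaries of the constant-failure-rate metric (illustrative rate).
lam = 2.0e-6                 # failures per hour (made-up value)

mtbf_hours = 1.0 / lam       # MTBF is the inverse of the constant rate
fit = lam * 1e9              # FIT counts failures per 10^9 device-hours

print(f"MTBF = {mtbf_hours:,.0f} h, FIT = {fit:,.0f}")
```

Because both metrics are defined only through λ, they inherit every limitation of the constant-failure-rate assumption itself.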
From page 230...
... . They pointed out that not only were the assumptions made in the calculation for reliability prediction at the part level flawed, but they also had a significant role in contributing to the disparity between predicted and observed MTBF values at the system level.
From page 231...
... The results from their studies can be seen in Figure D-5. Similar results can also be seen in

TABLE D-7  Ratio of Measured MTBFs to Handbook-Based Predicted MTBFs for Various Electronic Devices
[Columns: Product | Method | Predicted MTBF (hours) | Measured MTBF (hours)]
From page 232...
... FIGURE D-5  Predicted MTBFs compared with field or measured MTBFs in Wood and Elerath (1994), top, and in Charpenel et al.
From page 233...
... However, the results described in Table D-7 seem to indicate that handbook-based reliability prediction techniques can either arbitrarily underpredict or overpredict the true MTBF of a system in field conditions. Hence, handbook-based predictions are not always conservative.
From page 234...
... TABLE D-8  Results of the 1987 SINCGARS Nondevelopmental Item Candidate Test
[Columns: Vendor | Predicted MTBF (hours) | Observed MTBF (hours)]
From page 235...
... 217Plus was also published as an alternative to MIL-HDBK-217F. The 217Plus prediction model doubles the number of part-type failure rate models relative to PRISM, and it also contains six new constant failure rate models not available in PRISM.
From page 236...
... that account for noncomponent impacts on overall system reliability. "The goal of a model is to estimate the ‘rate of occurrence of failure' and accelerants of a component's primary failure mechanisms within an acceptable degree of accuracy" (Reliability Information Analysis Center, 2006, p.
From page 237...
... It is still not possible to condense the results from the Bayesian analysis into a point estimate of either the lifetimes or the failure rates, because such a point estimate would suffer from the same shortcomings as the constant failure rate estimates: it would not be able to account for variations in field conditions. Although the 217Plus methodology was developed by RIAC, the inclusion of the term "217" in the title of the RIAC handbook seems to imply that it is officially endorsed by DoD as a successor to MIL-HDBK-217F.
From page 238...
... This constant failure rate then became the premise of MIL-HDBK-217. Because the methodology was built around this premise, the reliability prediction techniques excluded any consideration of the root causes of failures and the physics underlying the failure mechanisms and focused instead only on the linear regression analysis of the failure data.
From page 239...
... The adoption and adaptation of constant failure rate models to evaluate the reliability of electronic systems was probably never a good idea. This practice has fundamentally affected how reliability prediction is perceived, with regard to both commercial and military electronics.
From page 240...
... . Commentary -- caution: Constant failure-rate models may be hazardous to your design.
From page 241...
... . Improved reliability predictions for commercial computers.
From page 242...
... . Constant failure rate -- A paradigm in transition.
From page 243...
... . A new reliability prediction model for telecommunica tion hardware.
From page 244...
... . Special Report SR-332: Reliability Prediction Procedure for Electronic Equipment, Issue 1.
From page 245...
... . What is wrong with the existing reliability prediction methods?

