7 Developmental Test and Evaluation
Pages 93-104



From page 93...
... In our context, the purpose of developmental testing is to assess whether the system will be reliable when deployed. Developmental testing, like all forms of testing, is not a cost-effective substitute for thorough system and reliability engineering as a system is developed from concept to reality.
From page 94...
... While early developmental testing emphasizes the identification of failure modes and other design defects, later developmental testing gives greater emphasis to evaluating whether and when a system is ready for operational testing.

BASIC ELEMENTS OF DEVELOPMENTAL TESTING

Several elements are important to the design and evaluation of effective developmental tests: statistical design of experiments, accelerated tests, reliability tests, testing at various levels of aggregation, and data analysis:

• Statistical design of experiments involves the careful selection of a suite of test events to efficiently evaluate the effects of design and operational variables on component, subsystem, and system reliability.
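The first of these elements can be made concrete with a small sketch: a full factorial design enumerates every combination of factor levels so that main effects and interactions can all be estimated. The factor names and levels below are hypothetical illustrations, not taken from the report.

```python
from itertools import product

# Hypothetical design factors for a reliability test campaign;
# these names and levels are illustrative only.
factors = {
    "temperature": ["cold", "hot"],
    "humidity": ["low", "high"],
    "vibration": ["nominal", "severe"],
}

# A 2^3 full factorial design: one test event per combination of levels.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for run, condition in enumerate(design, start=1):
    print(run, condition)
```

With three two-level factors the full factorial has 2**3 = 8 runs; in practice a fractional design would be used when the full factorial is too expensive.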
From page 95...
... If, as we recommend, contractor test data are shared with DoD program personnel, then those data can provide a sound basis for subsequent collaboration as further developmental testing is done to improve system reliability, if needed, and to demonstrate readiness for operational testing by DoD. There are also technical aspects to the recommended collaboration, such as having DoD developmental test design reflect subject-matter knowledge of both the developer and the user.
From page 96...
... If an important subset of the space of operational environments was left unexplored during contractor testing -- such as testing in cold, windy environments -- it would be important to give priority during developmental testing to the inclusion of test replications in those environments (see Recommendation 11 in Chapter 10)
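The prioritization described above is essentially a coverage check: enumerate the space of operational environments and subtract the conditions already exercised in contractor testing. A minimal sketch, with a hypothetical operational space and a hypothetical contractor test set:

```python
from itertools import product

# Hypothetical operational space: temperature x wind conditions.
space = set(product(["cold", "temperate", "hot"], ["calm", "windy"]))

# Hypothetical subset of conditions covered during contractor testing.
contractor_tested = {("temperate", "calm"), ("hot", "calm"), ("hot", "windy")}

# Unexplored environments, to be prioritized in government DT.
gaps = sorted(space - contractor_tested)
for env in gaps:
    print(env)
```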
From page 97...
... Therefore, to reduce the number of failure modes left to be discovered during operational testing, and at the same time have a better estimate of system reliability in operationally relevant environments, non-accelerated developmental tests should, to the extent possible, subject components, subsystems, and the full system to the same stresses that would be experienced in the field under typical use environments and conditions. This approach will narrow the potential DT/OT gap in reliability assessments and provide an evaluation of system reliability that is more operationally relevant.
From page 98...
... However, it is also important to communicate to decision makers the imprecision of the average estimated failure probabilities, which can be done through the use of confidence intervals. As mentioned above, to the extent possible, given the number of replications, it would also be useful to provide estimated probabilities and confidence intervals disaggregated by variables defining mission type or test environment.
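One way to convey that imprecision is a binomial confidence interval on the failure probability. The sketch below uses the Wilson score interval with hypothetical counts; the report does not prescribe this particular interval.

```python
import math

def wilson_interval(failures, trials, z=1.96):
    """Wilson score confidence interval for a binomial failure probability."""
    p = failures / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Hypothetical result: 3 failures in 20 replications. The point
# estimate is 0.15, but the interval shows how imprecise it is.
lo, hi = wilson_interval(3, 20)
print(f"estimated failure probability 0.15, 95% CI ({lo:.3f}, {hi:.3f})")
```

With so few replications the interval is wide, which is exactly the message decision makers need alongside the point estimate; the same computation could be repeated within each mission type or test environment where replications permit.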
From page 99...
... Nonrepairable Continuously Operating Systems For nonrepairable continuously operating systems, the goal of the developmental test data analysis is to estimate the lifetime distribution for the system, to the extent possible, as a function of mission design factors. Such an estimate would be computed from lifetime test data.
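For complete (uncensored) lifetime data, one common way to estimate such a lifetime distribution is to fit a Weibull model. The sketch below uses median-rank regression on hypothetical lifetimes; both the data and the choice of method are illustrative, not the report's prescribed analysis.

```python
import math

def weibull_fit(lifetimes):
    """Median-rank regression estimate of Weibull shape (beta) and
    scale (eta) from complete lifetime data."""
    t = sorted(lifetimes)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    # Bernard's approximation to the median rank of the i-th failure,
    # linearized via ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta).
    ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    eta = math.exp(xbar - ybar / beta)
    return beta, eta

# Hypothetical lifetimes (hours) of six nonrepairable units on test.
beta, eta = weibull_fit([105, 180, 240, 320, 410, 530])
print(f"shape {beta:.2f}, scale {eta:.0f} h")
```

A shape parameter above 1 would indicate wear-out behavior; extending the estimate to depend on mission design factors would require a regression model (e.g., Weibull regression) rather than this single-sample fit.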
From page 100...
... Merging Data Because the time on test for any individual prototype and for any design configuration is often insufficient to provide high-quality estimates of system reliability, methods that attempt to use data external to the tests to augment developmental test data are worth considering. Several kinds of data merging are possible: (1)
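One simple form of merging is to pool discounted contractor test results with DoD developmental test results through a beta-binomial update. All counts below and the 0.5 discount weight are hypothetical illustrations; the report does not recommend a specific weight.

```python
# Hypothetical pass/fail counts from contractor and DoD developmental tests.
contractor_successes, contractor_trials = 45, 50
dt_successes, dt_trials = 18, 20

# Discount the contractor data (different environment, possibly an
# earlier design) before pooling; 0.5 is an illustrative choice.
w = 0.5

# Beta(1, 1) prior updated with discounted contractor data plus DT data.
alpha = 1 + w * contractor_successes + dt_successes
beta_param = 1 + w * (contractor_trials - contractor_successes) + (dt_trials - dt_successes)

pooled = alpha / (alpha + beta_param)   # posterior mean reliability
dt_only = dt_successes / dt_trials
print(f"DT-only estimate {dt_only:.3f}, pooled estimate {pooled:.3f}")
```

The pooled estimate is more precise than the DT-only estimate, at the cost of possible bias if the external data differ systematically from the operationally relevant conditions.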
From page 101...
... Finally, combining information over developmental tests is complicated by the fact that design defects and failure modes discovered during developmental testing often result in changes to the system design. Therefore, one is often trying to account not only for differences in the test environment, but also for the differences in the system under test.
From page 102...
... Oversimplifying, one would input the times when developmental tests were scheduled into a model of anticipated reliability growth consistent with meeting the requirement just prior to operational testing and compare the observed reliability from each test with the model prediction for that time period. Unfortunately, the most commonly used reliability growth models have deficiencies (as discussed in Chapter 4)
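The comparison just described can be sketched with a Duane-style power-law growth curve calibrated so the planned curve meets the requirement at the start of operational testing. The growth exponent 0.3 and the observed MTBFs below are hypothetical, and (as the report notes) such models have known deficiencies.

```python
def planned_mtbf(t, m_req, t_ot, alpha=0.3):
    """Duane-style planned growth curve, calibrated to reach the
    requirement m_req at cumulative test time t_ot."""
    return m_req * (t / t_ot) ** alpha

# Hypothetical schedule: requirement of 100 h MTBF at 2,000 cumulative
# test hours, with observed (hours, MTBF) pairs from three DT events.
m_req, t_ot = 100.0, 2000.0
observed = [(400, 55.0), (900, 70.0), (1500, 82.0)]

for hours, mtbf in observed:
    target = planned_mtbf(hours, m_req, t_ot)
    status = "on track" if mtbf >= target else "behind plan"
    print(f"{hours:5.0f} h: observed {mtbf:.0f}, planned {target:.0f} -> {status}")
```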
From page 103...
... It is likely that most developmental tests will be fairly short in duration and will rely on a relatively small number of test units because of the need to budget an unknown number of future developmental tests to evaluate future design modifications. As mentioned above, to supplement a limited developmental test in order to produce higher quality reliability estimates, assuming the tests are relatively similar, one could smooth the results of several test events over time, or fit some kind of parametric time series model, to model the growth in reliability.
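Smoothing results over test events can be as simple as exponential smoothing of per-event reliability estimates. The estimates below are hypothetical, and this particular smoother is an illustrative choice, not one the report prescribes.

```python
def exp_smooth(values, alpha=0.4):
    """Simple exponential smoothing: each smoothed value is a weighted
    average of the new observation and the previous smoothed value."""
    s = values[0]
    out = [s]
    for v in values[1:]:
        s = alpha * v + (1 - alpha) * s
        out.append(s)
    return out

# Hypothetical reliability estimates from five successive DT events.
estimates = [0.62, 0.70, 0.66, 0.75, 0.79]
smoothed = exp_smooth(estimates)
print([round(s, 3) for s in smoothed])
```

Because the design changes between events, such smoothing implicitly assumes the tests are "relatively similar," as the text cautions; a parametric growth model would instead impose an explicit trend.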
From page 104...
... The determination of this lowest acceptable level would be done by the program executive office and would involve the usual considerations that make up an analysis of alternatives. Under this approach, a decision rule for proceeding to operational testing could be whether or not a lower confidence bound, chosen by considering both the costs of rejecting a suitable system and the costs of accepting an unsuitable system, was lower than this minimally acceptable level of reliability.
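Such a decision rule can be sketched as a comparison of a one-sided lower confidence bound against the reliability floor. The sketch uses a normal-approximation bound on a success probability with hypothetical counts and threshold; exact binomial bounds would be preferable at small sample sizes.

```python
import math

def lcb_reliability(successes, trials, z=1.645):
    """One-sided 90% normal-approximation lower confidence bound on a
    success probability (a crude sketch for illustration)."""
    p = successes / trials
    return p - z * math.sqrt(p * (1 - p) / trials)

MIN_ACCEPTABLE = 0.70  # hypothetical floor set by the program executive office

# Hypothetical final DT result: 18 successes in 20 trials.
lcb = lcb_reliability(18, 20)
decision = "proceed to OT" if lcb >= MIN_ACCEPTABLE else "continue DT"
print(f"90% one-sided LCB {lcb:.3f} -> {decision}")
```

The choice of confidence level is where the two costs in the text enter: a higher level guards against accepting an unsuitable system, a lower level against rejecting a suitable one.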

