4 Comparison and Contrast of the Use of QMU

Task 3: Evaluate how the use of that [the QMU] methodology compares and contrasts between the national security laboratories.

Earlier reports raised concerns that differences in the implementation of QMU among the three national security laboratories (both among the labs and among various groups within a single lab) could cause confusion and limit the efficacy of QMU as a framework for assessing and communicating the reliability of nuclear weapons.*

Finding 3-1. Differences in the implementation of QMU methodologies among the national security laboratories can be beneficial for promoting a healthy evolution of best practices. Sandia National Laboratories' requirements are different from those of the design labs, because many of their components can be extensively tested under many of the required conditions (unlike the design laboratories' nuclear explosive packages).**

*U.S. Government Accountability Office, NNSA Needs to Refine and More Effectively Manage Its New Approach for Assessing and Certifying Nuclear Weapons, GAO-06-261 (2006); U.S. Department of Defense, Report on the Friendly Reviews of QMU at the NNSA Laboratories, Defense Program Science Council (March 2004).

**A slight revision in Finding 3-1 has been made to correct wording that mistakenly implied all of Sandia's components could be extensively tested. Other changes were made in the second paragraph of text following this finding.
40 evaluation of qmu methodology

Methods for implementing QMU continue to evolve, as they should, and the laboratories should explore different approaches as a means to determine the best approach for a given warhead. For example, LANL has focused on estimating uncertainties by a sensitivity analysis that examines the variations in simulated primary yield resulting from variations in input parameters (e.g., pit mass) for a given weapon. LLNL, on the other hand, has attempted to develop a comprehensive model that explains the test results for a variety of different primaries and thus addresses modeling uncertainties as well as parameter uncertainties.

Some differences in approach arise naturally from the different missions of the laboratories. For example, much of SNL's work is different from that of the design labs, because it involves warhead components other than the nuclear explosive package (e.g., the firing set and the neutron generator). In principle, SNL can test its systems under many of the relevant conditions; for these conditions SNL is not forced to rely on simulation codes to generate estimates of thresholds, margins, and uncertainties. For practical reasons, SNL cannot test statistically significant numbers of some of its components and therefore still uses computational modeling; however, the models can be challenged in many cases by full system tests. LANL and LLNL, on the other hand, cannot perform full system tests and must instead rely heavily on simulation codes in their assessments of margins and uncertainties. SNL also cannot test its components in "hostile" environments, in which the warhead is subjected to a nearby nuclear explosion, and thus much of its hostile-environment work shares many of the challenges faced by the design laboratories.

Recommendation 3-1. The national security laboratories should continue to explore different approaches (for example, using different codes) for QMU analysis and for estimating uncertainties.
These different methods should be evaluated in well-defined intra- and interlaboratory exercises.

Differences in methodology are potentially positive, leading to healthy competition and the evolution of best practices. To determine a best practice, however, would require an ability to assess various competing practices. The committee has not seen an assessment of competing uncertainty quantification methodologies at any of the laboratories, nor has it even seen an organized attempt to compare them.

Finding 3-2. Differences in definitions among the national security labs of some QMU terms cause confusion and limit transparency.

Table 4-1 shows that, in some cases, earlier concerns about inconsistencies continue to be valid. (More information on this topic is included
in Table B-2 in the classified Annex.)

Table 4-1  Comparison of QMU Usage at the Nuclear Weapons Laboratories for the 2007 Annual Assessment

Item                           Sandia Usage             Los Alamos Usage                             Livermore Usage
Minimum primary yield, Yp,min  N/A                      Ytot > 80%; valid model                      Ytot > 90%; valid model
Magnitude of U                 One sigma                Two sigma                                    One sigma
Determination of U             Statistical observation  Various calculations and/or expert judgment  Various calculations and/or expert judgment
Figure of merit                K-factor*                M/U                                          M/U

*K = M/U with U measured at one sigma.

For example, in their presentations made to the committee, it became clear that researchers at LANL and LLNL were using different definitions of uncertainty (one sigma vs. two sigma) and different definitions of the important threshold Yp,min (minimum primary yield). The committee learned that the design labs have established a joint group on QMU and are working to establish common definitions and agreement on common metrics and failure modes, but clearly this is a work in progress. It is particularly important that common definitions be developed for parameters used in external communications.

Recommendation 3-2. To enhance transparency in communicating the results arising from the QMU analyses, national security laboratories should agree on a common set of definitions (such as the sigma level designating the magnitude of uncertainty) and terminology. Inconsistencies in the definition of uncertainty and common terms such as Yp,min (minimum primary yield) are unnecessary and confusing and should be eliminated.

Finding 3-3. The consistency and transparency of the application of QMU are being inhibited by the lack of consistency within each lab and the lack of documentation.

In their presentations, NNSA and each of the laboratories told the committee that draft QMU guidance documents were being prepared by each organization. At this writing, however, none of these documents had been completed.
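The practical effect of the definitional differences recorded in Table 4-1 can be made concrete with a toy calculation. The sketch below is purely illustrative: the margin value and the ensemble of "simulated" performance values are invented for demonstration and have no connection to any real weapons code or data. It shows only how the same underlying data produce figure-of-merit values that differ by a factor of two depending on whether U is reported at one sigma or two sigma.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spread of a simulated performance quantity about its
# mean (arbitrary units); stands in for an ensemble of code runs.
samples = rng.normal(loc=0.0, scale=2.0, size=10_000)
sigma = samples.std()

margin = 6.0  # hypothetical margin M between best estimate and threshold

k_one_sigma = margin / sigma          # "U = one sigma" convention
k_two_sigma = margin / (2.0 * sigma)  # "U = two sigma" convention

# The data are identical, but the reported figure of merit M/U
# differs by a factor of two depending on the convention.
print(f"M/U (one sigma): {k_one_sigma:.2f}")
print(f"M/U (two sigma): {k_two_sigma:.2f}")
```

The one-sigma convention reports a confidence ratio exactly twice that of the two-sigma convention for the same data, which is why a common definition matters for external communication.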
The committee believes that documentation of the QMU approaches used by each lab (even in draft form), as well as overarching
policy guidance on the implementation of QMU from NNSA, will be essential for improving consistency and transparency in the implementation of QMU and for facilitating peer review. Accordingly, documentation must be given high priority.

As noted above, the use of different approaches to QMU can be a strength as long as methods are documented to make them more transparent and to assist researchers in communicating effectively with one another, with management, and with outside audiences.

Recommendation 3-3. It is urgent that NNSA and the national security labs complete the development and issuance of QMU guidance documents in time for the current assessment cycle. This process should be used to drive consensus among lab scientists. The documents should be updated as the methodology matures.

Finding 3-4. The QMU framework has yet to be clearly defined by the national security laboratories collectively or individually. This framework must identify a more comprehensive set of performance gates and describe how QMU is used to analyze each. A possible outcome of this process is that QMU is not appropriate for a particular performance gate.

QMU is often conflated with the whole set of tasks and tools that must be carried forward for stockpile stewardship and design of the RRW. These tasks and tools exist independently of the way that QMU is defined. The tools used in the QMU process are, for the most part, already in widespread use; this is not the issue. Rather, the issue is that the overarching QMU process needs to take into account various divergent views on the essence of the process. An incomplete QMU methodology could also result in a situation in which the blind application of QMU increases the likelihood of missing an alternative failure mechanism or of hiding it altogether.
If this happened, efforts to increase a margin and improve the apparent confidence factor of a nuclear explosive package determined from the application of QMU could activate an alternative failure mechanism. For example, design changes that enhance yield margin could introduce one-point-safety concerns.

Recommendation 3-4. The national security labs should carry out interlaboratory comparisons of different methods for finding and characterizing the most important uncertainties and for propagating these uncertainties through computer simulations of weapons performance.
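The kind of interlaboratory exercise envisioned in Recommendation 3-4 presupposes a shared mechanic: sampling uncertain inputs and propagating them through a simulation to obtain an output uncertainty. The sketch below illustrates that mechanic in miniature with a Monte Carlo approach; the response function, the input distributions, and the threshold are invented placeholders, not any laboratory's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(42)

def response(x1, x2):
    """Hypothetical surrogate for a performance simulation
    (a simple nonlinear response; not a real physics code)."""
    return 100.0 + 10.0 * x1 - 4.0 * x2 + 2.0 * x1 * x2

# Assumed input uncertainty distributions (illustrative only).
n = 50_000
x1 = rng.normal(0.0, 0.05, n)   # e.g., a normalized mass parameter
x2 = rng.uniform(-0.1, 0.1, n)  # e.g., a normalized timing parameter

# Propagate the sampled inputs through the surrogate.
y = response(x1, x2)

# Summarize the propagated output uncertainty at one sigma and form
# a margin/uncertainty ratio against a hypothetical threshold.
u = y.std()
threshold = 98.0
m = y.mean() - threshold
print(f"mean={y.mean():.2f}, U(1-sigma)={u:.3f}, M/U={m/u:.1f}")
```

An interlaboratory comparison would run the same agreed-upon input distributions through each laboratory's own codes and sampling scheme and compare the resulting output distributions and M/U ratios.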