
5 Model Validation and Prediction
Pages 52-85

The Chapter Skim interface presents, for each page of the chapter, the single chunk of text that has been algorithmically identified as the most significant.


From page 52...
... In some cases, the verification effort can effectively eliminate the uncertainty due to solution and coding errors, leaving only the first three sources of uncertainty. Likewise, if the computational model runs very quickly, one could evaluate the model at any required input setting, eliminating the need to estimate what the model would have produced at an untried input setting.
From page 53...
... Estimating prediction uncertainty requires the combination of computational models, physical observations, and possibly other information sources. Exactly how this estimation is carried out can range from very direct, as in the weather forecasting example in Figure 5.1, to quite complicated, as described in the case studies in this chapter.
From page 54...
... and multiple sources of physical observations (Section 5.8) is also covered, as is the use of computational models for aid in dealing with rare, high-consequence events (Section 5.10)
From page 55...
... Initial ranges of 8 ≤ g ≤ 12 and 0.2 ≤ C_D ≤ 2.0 are specified for the two model parameters. Measured drop times from heights of 20, 40, and 60 m are obtained for the basketball and baseball; measured drop times from heights of 10, 20, ...
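As a rough illustration (not the report's actual code), the Python sketch below implements the kind of computational model the ball-drop example describes: a falling ball with quadratic air drag, with the acceleration due to gravity g and the drag coefficient C_D as the two uncertain inputs. The numerical values, the simple forward-Euler integration, and the function name drop_time are assumptions made only for this sketch.

    import numpy as np

    def drop_time(height, g, c_d, radius, density, dt=1e-4):
        """Time (s) for a ball to fall `height` metres under gravity g with
        quadratic air drag; c_d is the drag coefficient (illustrative model)."""
        rho_air = 1.2                          # kg/m^3, assumed air density
        area = np.pi * radius**2               # cross-sectional area
        mass = density * (4.0 / 3.0) * np.pi * radius**3
        v = z = t = 0.0
        while z < height:
            drag = 0.5 * rho_air * c_d * area * v**2 / mass   # drag deceleration
            v += (g - drag) * dt
            z += v * dt
            t += dt
        return t

    # One model run inside the stated ranges 8 <= g <= 12 and 0.2 <= C_D <= 2.0
    print(drop_time(height=40.0, g=9.8, c_d=0.5, radius=0.12, density=100.0))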
From page 56...
... With these 20 computer model runs, a Gaussian process is used to produce a probabilistic prediction of the model output at untried input settings (x, θ)
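A minimal sketch of the Gaussian-process emulation step: fit a GP to a small number of model runs over the joint input (x, θ) = (drop height, g) and obtain a mean and standard deviation at an untried setting. The scikit-learn interface and the toy training data (a no-drag drop-time formula standing in for the simulator) are assumptions for illustration only.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)
    # 20 runs over (x, theta) = (drop height, g); a space-filling design would be used in practice
    X_train = np.column_stack([rng.uniform(10, 60, 20), rng.uniform(8, 12, 20)])
    y_train = np.sqrt(2.0 * X_train[:, 0] / X_train[:, 1])   # stand-in simulator output

    gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[10.0, 1.0]),
                                  normalize_y=True).fit(X_train, y_train)

    # Probabilistic prediction of the model output at an untried input setting
    mean, sd = gp.predict(np.array([[35.0, 9.8]]), return_std=True)
    print(mean[0], sd[0])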
From page 57...
... , embedded within a statistical framework aided by subject-matter knowledge and available measurements. The relevant body of knowledge in the ball-drop example consists of measurements from three basketball drops, three baseball drops, and six bowling-ball drops, along with the mathematical and computational models.
From page 58...
... 5.1.3 Model Validation Statement In summary, validation is a process, involving measurements, computational modeling, and subject-matter expertise, for assessing how well a model represents reality for a specified QOI and domain of applicability. Although it is often possible to demonstrate that a model does not adequately reproduce reality, the generic term "validated model" does not make sense.
From page 59...
... 5.2 UNCERTAINTIES IN PHYSICAL MEASUREMENTS Throughout this chapter, reference is continually made to learning about the computational model and its uncertainties through comparing the predictions of the computational model to available physical data relevant to the QOI. A complication that typically arises is that the physical measurements are themselves subject to uncertainties and possibly bias.
From page 60...
... Standard statistical techniques can allow one to summarize the physical data in terms of the constraints that they place on reality, but a VVUQ analysis requires interfacing this uncertainty with the computational model, especially if calibration is also being done based on the physical data. Bayesian analysis (discussed in Section 5.3)
From page 61...
... Suppose further that the analyst can build -- using the computational model -- the likelihood function π(y_obs | θ)
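To make the likelihood construction concrete, the sketch below assumes a simple no-drag computational model and independent Gaussian measurement errors, so that π(y_obs | θ) can be evaluated on a grid of θ values and combined with a uniform prior. The observed drop times, the error standard deviation, and the grid are hypothetical.

    import numpy as np

    y_obs = np.array([2.02, 2.84, 3.51])        # hypothetical measured drop times (s)
    heights = np.array([20.0, 40.0, 60.0])      # corresponding drop heights (m)
    sigma = 0.05                                # assumed measurement standard deviation (s)

    def model(theta, h):                        # computational model: no-drag drop time
        return np.sqrt(2.0 * h / theta)

    theta_grid = np.linspace(8.0, 12.0, 401)    # prior range for theta = g
    prior = np.ones_like(theta_grid)            # uniform prior over that range

    log_lik = np.array([-0.5 * np.sum(((y_obs - model(t, heights)) / sigma) ** 2)
                        for t in theta_grid])
    post = prior * np.exp(log_lik - log_lik.max())
    post /= post.sum() * (theta_grid[1] - theta_grid[0])   # normalized posterior density
    print(theta_grid[np.argmax(post)])          # posterior mode for g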
From page 62...
... Box 5.2 shows how an emulator can reduce the number of computer model runs for the bowling ball drop application in Box 5.1. Here the measured drop times are governed by the unknown parameters θ (the acceleration due to gravity g, for this example)
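Continuing the two sketches above, and under the same toy assumptions, the emulator's predictive mean can stand in for the expensive model inside the likelihood, with its predictive variance added to the measurement variance so that emulator uncertainty is not ignored. This is one plausible reading of the emulator's role, not the report's exact formulation.

    # Continues the earlier sketches: `gp` emulates drop time as a function of (height, g),
    # and y_obs, heights, sigma are the toy calibration data defined above.
    def log_lik_emulated(g):
        X = np.column_stack([heights, np.full_like(heights, g)])
        mu, sd = gp.predict(X, return_std=True)   # emulator mean and std at each (height, g)
        var = sigma**2 + sd**2                    # measurement variance + emulator variance
        return -0.5 * np.sum((y_obs - mu) ** 2 / var + np.log(2 * np.pi * var))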
From page 63...
... Formal approaches to dealing with model inadequacy can be characterized as being in one of two camps, depending on the information available. In one camp, evaluation is performed by comparing model output to physical data from the real process being modeled.
From page 64...
... , together with error bands quantifying the uncertainties in the estimates. For alternative but related formulations for combining computational models with physical measurements for calibration and prediction, see Fuentes and Raftery (2004)
From page 65...
... The conceptual and mathematical model accounts for acceleration due to gravity g only. A discrepancy-adjusted prediction is produced by adjusting the simulated drop times according to the equation: Drop time = simulated drop time + α × drop height, where α depends on the radius and density of the ball (R_ball, ρ_ball)
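The stated adjustment translates directly into code; because the excerpt does not give the functional form of α in terms of the ball's radius and density, alpha below is left as a hypothetical placeholder to be estimated from the physical data.

    def adjusted_drop_time(simulated_drop_time, drop_height, r_ball, rho_ball, alpha):
        """Discrepancy-adjusted prediction:
        drop time = simulated drop time + alpha * drop height,
        where alpha(R_ball, rho_ball) must be estimated from physical observations."""
        return simulated_drop_time + alpha(r_ball, rho_ball) * drop_height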
From page 66...
... Finding: A discrepancy function can help adjust the computational model to give better interpolative predictions. A discrepancy function can also be beneficial in reducing the overtuning of parameters used to adjust or calibrate the model that can otherwise result.
From page 67...
... The way that one assesses the quality, or reliability, of a prediction and describes its uncertainty depends on a variety of factors, including the availability of relevant physical measurements, the complexity of the system being modeled, and the ability of the computational model to reproduce the important features of the physical system on which the QOI depends. This section surveys issues related to assessing the quality of predictions, their prediction uncertainty, and their dependence on features of the application -- including the physical measurements, the computational model, and the degree of extrapolation required to make inferences about the QOI.
From page 68...
... This notion of domain space enables one to estimate prediction uncertainty, or quality, as a function of position in this space. In Box 5.3, the domain space, describing initial conditions, is used as the support on which the model discrepancy term is defined, enabling a quantitative description of prediction uncertainty as a function of drop height, ball radius, and ball density.
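One common way to realize this idea, sketched here under assumed toy data: model the discrepancy as a Gaussian process over the domain space (drop height, ball radius, ball density), so that the predictive uncertainty of the discrepancy grows away from the conditions at which physical observations exist. The numbers and the scikit-learn interface are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Domain-space coordinates of the observations: (height m, radius m, density kg/m^3)
    X_obs = np.array([[20, 0.12, 600], [40, 0.12, 600], [60, 0.12, 600],
                      [10, 0.04, 800], [20, 0.04, 800], [30, 0.04, 800]])
    resid = np.array([0.03, 0.06, 0.10, 0.01, 0.02, 0.04])  # observed minus simulated (toy values)

    delta = GaussianProcessRegressor(RBF(length_scale=[20.0, 0.05, 300.0]),
                                     alpha=1e-4, normalize_y=True).fit(X_obs, resid)

    # Predictive uncertainty of the discrepancy at a new point in the domain space
    _, sd = delta.predict(np.array([[50, 0.11, 1500]]), return_std=True)
    print(sd[0])   # larger far from the observed conditions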
From page 69...
... Mapping out such a domain space can help build understanding regarding the situations for which a computational model is expected to give sufficiently accurate predictions. It also may facilitate judgments of the nearness of available physical observations to conditions for which a model-based prediction is required.
From page 70...
... Great savings can be achieved if computer models of the vehicles, or components thereof, are used instead of prototype vehicles for design and testing. Of course, a computer model can be trusted for this only if it can be shown to provide a successful representation of the real process.
From page 71...
... 5.6.4 Modeling the Uncertainties To understand the uncertainties in predictions of the computer model, it is first necessary to model the uncertainties in model inputs, the real-process data, and the model itself. For the nine model input parameters, these uncertainties were given in the form of prior probability distributions, obtained by consultation with the engineers involved with the project.
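A sketch of this first modeling step: encode elicited prior distributions for the uncertain inputs and push samples through the simulator to obtain a distribution of predictions. The particular distributions, parameter names, and the stand-in simulator below are hypothetical and only illustrate the mechanics.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_inputs(n):
        """Hypothetical expert-elicited priors for a few uncertain inputs."""
        return {
            "damping":   rng.lognormal(mean=0.0, sigma=0.3, size=n),
            "stiffness": rng.normal(loc=1.0, scale=0.1, size=n),
            "mass":      rng.uniform(0.9, 1.1, size=n),
        }

    def simulator(damping, stiffness, mass):
        """Stand-in for the expensive computer model (illustrative only)."""
        return stiffness / mass * np.exp(-damping)

    inputs = sample_inputs(1000)
    pred = simulator(**inputs)
    print(pred.mean(), np.percentile(pred, [5, 95]))   # input-driven prediction uncertainty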
From page 72...
... FIGURE 5.4 Estimated discrepancy of the computer model from reality. The dashed line is the mean discrepancy, and the solid lines are 90 percent bounds. SOURCE: Bayarri et al.
From page 73...
... To predict the road-load time trace for the new vehicle type, the computer model (in which the new vehicle type was close enough in design to allow the use of the same finite-element representation as for the previous vehicle type) was run at 65 values of the uncertain inputs.
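The 65 runs correspond to a designed sample over the uncertain inputs; a space-filling Latin hypercube design is a common choice for such a batch of runs. The SciPy-based sketch below uses nine inputs (the count mentioned in Section 5.6.4) with placeholder bounds, not the study's actual values.

    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=9, seed=0)      # nine uncertain inputs (illustrative)
    unit_design = sampler.random(n=65)             # 65 runs on the unit hypercube
    lower = np.zeros(9)                            # placeholder lower bounds
    upper = np.ones(9) * 2.0                       # placeholder upper bounds
    design = qmc.scale(unit_design, lower, upper)  # input settings at which to run the model
    print(design.shape)                            # (65, 9)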
From page 74...
... 5.7 INFERENCE FROM MULTIPLE COMPUTER MODELS In applications such as climate change, uncertainties in 20- or 100-year forecasts are likely dominated by structural uncertainty -- uncertainty due to the discrepancy between model and reality. Since there are few or no physical observations from which to estimate model discrepancy directly, predictions from a number of different climate models are often used to help quantify prediction uncertainty.
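In the simplest treatment, the spread of an ensemble of model predictions serves as a rough proxy for structural uncertainty when no physical data are available to estimate the discrepancy. The forecasts below are made-up numbers used only to show the calculation.

    import numpy as np

    # Hypothetical 100-year warming forecasts (deg C) from several climate models
    forecasts = np.array([2.1, 2.8, 3.4, 2.6, 3.9, 3.1])

    ensemble_mean = forecasts.mean()
    ensemble_spread = forecasts.std(ddof=1)   # crude proxy for structural uncertainty
    print(f"{ensemble_mean:.1f} +/- {ensemble_spread:.1f}")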
From page 75...
... . There is the opportunity to make use of these various sources of physical observations to address key issues such as model calibration, model discrepancy, prediction uncertainty, and assessing the quality of the prediction.
From page 76...
... TPS consumption is a critical issue in the design and operation of a reentry vehicle -- if the entire heat shield is consumed, the vehicle will burn up. TPS consumption is governed by a range of physical phenomena, including high-speed and turbulent fluid flow, high-temperature aero-thermo-chemistry, radiative heating, and the response of complex materials (the ablator)
From page 77...
... At the PECOS Center, to make MMS useful for the verification of reentry vehicle codes, a highly reliable software library for implementing manufactured solutions (the Manufactured Analytic Solution Abstraction, or MASA) and a library of manufactured solutions, using symbolic manipulation software (e.g., Maple)
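The core of the method of manufactured solutions is easy to sketch with symbolic software: choose a solution, apply the governing operator to it to obtain a source term, and then verify that the code reproduces the chosen solution when driven by that source. The toy 1-D heat equation below stands in for the reentry-vehicle physics; it does not use or represent the MASA library's interface.

    import sympy as sp

    x, t, k = sp.symbols("x t k")

    # 1. Manufacture a smooth solution.
    u = sp.sin(sp.pi * x) * sp.exp(-t)

    # 2. Apply the governing operator (here u_t - k*u_xx) to obtain the source term
    #    that forces the PDE to have exactly this solution.
    source = sp.diff(u, t) - k * sp.diff(u, x, 2)
    print(sp.simplify(source))

    # 3. Run the production code with this source term and with boundary/initial data
    #    taken from u, then measure the discretization error against the exact u.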
From page 78...
... At the PECOS Center, the calibration, validation, and prediction processes are closely related, interdependent, and at the heart of uncertainty quantification in computational modeling. A number of complications arise from the need to pursue validation in the context of a QOI.
From page 79...
... Thus, the issues that complicate extrapolative predictions are almost always present in predictions involving rare events. Still, computational models play a key role in safety assessments for nuclear reactors by the Nuclear Regulatory Commission (Mosleh et al., 1998)
From page 80...
... Methods for assessing and improving confidence in such model predictions are challenging and largely open problems, as they are for extrapolative predictions. Once a high-consequence event is identified, computational models can be viable tools for assessing its probability.
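The most basic version of this use is sketched below: sample the uncertain inputs, run the model, and count exceedances of a critical threshold (plain Monte Carlo; analyses of very rare events typically need importance sampling or subset simulation instead). The stand-in model and input distributions are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)

    def model(load, capacity):
        """Stand-in computer model: response margin (failure when <= 0)."""
        return capacity - load

    n = 100_000
    load = rng.gumbel(loc=1.0, scale=0.2, size=n)      # assumed input uncertainty
    capacity = rng.normal(loc=2.0, scale=0.15, size=n)

    failures = model(load, capacity) <= 0
    p_hat = failures.mean()
    se = np.sqrt(p_hat * (1 - p_hat) / n)              # Monte Carlo standard error
    print(f"P(failure) ~ {p_hat:.2e} +/- {se:.1e}")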
From page 81...
... ; or embedding physically motivated discrepancy terms within the model that can produce more reliable prediction uncertainties for the QOI and that can be calibrated with available physical observations;
• A framework for efficiently exploiting a hierarchy of available experiments -- allocating experiments for calibration, assessing prediction accuracy, assessing the reliability of predictions, and suggesting new experiments within the hierarchy that would improve the quality of estimated prediction uncertainties;
• Guidelines for reporting predictions and accompanying prediction uncertainties, including disclosure of which sources of uncertainty are accounted for, which are not, what assumptions these estimates rely on, and the reliability or quality of these assumptions; and
• Compelling examples of VVUQ done well in problems with different degrees of complexity.
A similar conclusion was reached by the National Science Foundation (NSF)
From page 82...
... • Principle: The uncertainty in the prediction of a physical QOI must be aggregated from uncertainties and errors introduced by many sources, including discrepancies in the mathematical model, numerical and code errors in the computational model, and uncertainties in model inputs and parameters.
  -- Best practice: Document assumptions that go into the assessment of uncertainty in the predicted QOI, and also document any omitted factors.
From page 83...
... 2007b. Computer Model Validation with Functional Output.
From page 84...
... 2008. Computer Model Calibration Using High-Dimensional Output.
From page 85...
... 2009. Bayesian Validation of Computer Models.

