

Research Papers

J. Verif. Valid. Uncert. 2017;2(2):021001-021001-13. doi:10.1115/1.4036496.

Natural convection is a phenomenon in which fluid flow surrounding a body is induced by a change in density due to the temperature difference between the body and the fluid. After nuclear fuel bundles are removed from the pressurized water reactor (PWR), their decay heat is removed by natural convection in spent fuel pools for up to several years. Once the fuel bundles have cooled sufficiently, they are removed from the fuel pools and placed in dry storage casks for long-term disposal. Little is known about the convective effects that occur inside the rod bundles under dry-storage conditions. Simulations may provide further insight into spent-fuel dry storage, but the models used must be evaluated to determine their accuracy using validation methods. The present study investigates natural convection in a 2 × 2 fuel rod model in order to provide validation data. The four heated aluminum rods are suspended in an open-circuit wind tunnel. Boundary conditions (BCs) have been measured and uncertainties calculated to provide the quantities necessary to conduct a validation exercise. System response quantities (SRQs) have been measured for comparing the simulation output to the experiment. Stereoscopic particle image velocimetry (SPIV) was used to nonintrusively measure three-component velocity fields. Two constant-heat-flux rod surface conditions are presented, 400 W/m² and 700 W/m², resulting in Rayleigh numbers of 4.5 × 10⁹ and 5.5 × 10⁹ and Reynolds numbers of 3450 and 4600, respectively. Uncertainty for all the measured variables is reported.
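
As a rough illustration of the nondimensional groups quoted above, the following Python sketch computes a flux-based (modified) Rayleigh number and a Reynolds number. The air properties, characteristic length L, and velocity scale U are illustrative placeholders, not the values or definitions used in the paper.

import math

def rayleigh_flux(q_flux, L, g=9.81, beta=3.2e-3, nu=1.6e-5, alpha=2.3e-5, k=0.027):
    # Modified (heat-flux-based) Rayleigh number: Ra* = g*beta*q''*L^4 / (nu*alpha*k).
    # Property values are placeholders for air near room temperature.
    return g * beta * q_flux * L**4 / (nu * alpha * k)

def reynolds(U, L, nu=1.6e-5):
    # Reynolds number from an assumed velocity scale U and length scale L.
    return U * L / nu

for q in (400.0, 700.0):  # the two reported heat-flux conditions, W/m^2
    print(f"q'' = {q:5.0f} W/m^2 -> Ra* ~ {rayleigh_flux(q, L=0.25):.2e}")
print(f"Re ~ {reynolds(U=0.3, L=0.25):.0f}")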

J. Verif. Valid. Uncert. 2017;2(2):021002-021002-10. doi:10.1115/1.4036965.

Model validation is a vital step in the simulation development process to ensure that a model is truly representative of the system it is meant to model. One aspect of model validation that deserves special attention is when validation is required for the transient phase of a process. The transient phase may be characterized as the dynamic portion of a signal that exhibits nonstationary behavior. A specific concern in validating a model's transient phase is that the experimental system data are often contaminated with noise, due to the short duration and sharp variations in the data, which hides the underlying signal that models seek to replicate. This paper proposes a validation process that uses wavelet thresholding as an effective method for denoising the system and model data signals in order to properly validate the transient phase of a model. The wavelet-thresholded signals are used to calculate a validation metric that incorporates shape, phase, and magnitude error. The paper compares this validation approach to an approach that uses wavelet decompositions to denoise the data signals. Finally, a simulation study and empirical data from an automobile crash study illustrate the advantages of our wavelet thresholding validation approach.
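
As a concrete illustration of this kind of pipeline, the Python sketch below denoises two transient signals by wavelet thresholding (using PyWavelets) and then compares them with a simple relative-error measure standing in for the shape/phase/magnitude metric. The wavelet ('db4'), the universal threshold, soft thresholding, and the synthetic signals are assumptions for illustration, not the paper's choices.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Denoise a noisy "experimental" transient and a "model" transient, then compare
# them with a simple relative error (a stand-in for the shape/phase/magnitude metric).
t = np.linspace(0.0, 1.0, 1024)
truth = np.exp(-5 * t) * np.sin(40 * t)           # synthetic transient
test = truth + 0.1 * np.random.randn(t.size)      # experimental signal with noise
model = 0.95 * truth                               # model prediction
num = np.linalg.norm(wavelet_denoise(test) - wavelet_denoise(model))
den = np.linalg.norm(wavelet_denoise(test))
print(f"relative comparison error after denoising: {num / den:.3f}")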

J. Verif. Valid. Uncert. 2017;2(2):021003-021003-7. doi:10.1115/1.4037004.

This paper examines various sensitivity analysis methods that can be used to determine the relative importance of input epistemic uncertainties on the uncertainty-quantified performance estimate. The results from such analyses would then indicate which input uncertainties merit additional study. The following existing sensitivity analysis methods are examined and described: local sensitivity analysis by finite difference, scatter plot analysis, variance-based analysis, and p-box-based analysis. As none of these methods is ideally suited for analysis of dynamic systems with epistemic uncertainty, an alternate method is proposed. This method uses aspects of both local sensitivity analysis and p-box-based analysis to provide improved computational speed while removing dependence on the assumed nominal model parameters.
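
For the simplest of the surveyed methods, local sensitivity analysis by finite difference, a minimal Python sketch follows; the toy model, the central-difference scheme, and the scaling by nominal values are illustrative assumptions rather than the paper's formulation.

import numpy as np

def model(theta):
    # Toy performance estimate depending on three uncertain inputs.
    k, c, m = theta
    return k / m + 0.5 * c**2

def local_sensitivities(f, theta0, rel_step=1e-3):
    theta0 = np.asarray(theta0, dtype=float)
    sens = np.zeros_like(theta0)
    for i in range(theta0.size):
        h = rel_step * max(abs(theta0[i]), 1e-12)
        tp, tm = theta0.copy(), theta0.copy()
        tp[i] += h
        tm[i] -= h
        # Central difference, scaled by the nominal value so sensitivities are comparable.
        sens[i] = (f(tp) - f(tm)) / (2.0 * h) * theta0[i]
    return sens

nominal = [100.0, 0.4, 2.0]
print("scaled local sensitivities:", local_sensitivities(model, nominal))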

J. Verif. Valid. Uncert. 2017;2(2):021004-021004-15. doi:10.1115/1.4037313.

Model calibration and validation are two activities in system model development, and both make use of test data. A limited testing budget creates the challenge of test resource allocation, i.e., how to optimize the number of calibration and validation tests to be conducted. Test resource allocation is conducted before any actual test is performed, and therefore needs to use synthetic data. This paper develops a test resource allocation methodology to make the system response prediction “robust” to the test outcome, i.e., insensitive to the variability in test outcomes, so that consistent system response predictions can be achieved under different test outcomes. This paper analyzes the uncertainty sources in the generation of synthetic data under different test conditions, and concludes that the robustness objective can be achieved if the contribution of model parameter uncertainty in the synthetic data is maximized. Global sensitivity analysis (the Sobol' index) is used to assess this contribution and to formulate an optimization problem that achieves the desired consistent system response prediction. A simulated annealing algorithm is applied to solve this optimization problem. The proposed method is suitable either when only model calibration tests are considered or when both calibration and validation tests are considered. Two numerical examples are provided to demonstrate the proposed approach.
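
A minimal Python sketch of the optimization step is given below: simulated annealing over the split between calibration and validation tests under a fixed budget. The objective function is only a placeholder for the Sobol'-index-based measure of model parameter contribution; the budget, moves, and cooling schedule are likewise illustrative assumptions.

import math
import random

BUDGET = 10  # total number of tests that can be afforded (illustrative)

def objective(n_cal, n_val):
    # Placeholder score to be maximized (standing in for a Sobol'-index-based measure).
    return math.log1p(n_cal) + 0.6 * math.log1p(n_val)

def neighbor(n_cal, n_val):
    # Move one test between calibration and validation, staying within the budget.
    if n_val == 0 or (n_cal > 0 and random.random() < 0.5):
        return n_cal - 1, n_val + 1   # shift one test toward validation
    return n_cal + 1, n_val - 1       # shift one test toward calibration

def anneal(iters=2000, T0=1.0, cooling=0.995):
    state = (BUDGET // 2, BUDGET - BUDGET // 2)
    best, T = state, T0
    for _ in range(iters):
        cand = neighbor(*state)
        delta = objective(*cand) - objective(*state)
        if delta > 0 or random.random() < math.exp(delta / T):
            state = cand
        if objective(*state) > objective(*best):
            best = state
        T *= cooling
    return best

print("calibration tests, validation tests:", anneal())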

J. Verif. Valid. Uncert. 2017;2(2):021005-021005-11. doi:10.1115/1.4037671.
OPEN ACCESS

Computational modeling has the potential to revolutionize medicine the way it transformed engineering. However, despite decades of work, there has been only limited progress in successfully translating modeling research to patient care. One major difficulty that often arises with biomedical computational models is an inability to perform validation in a setting that closely resembles how the model will be used. For example, for a biomedical model that makes in vivo clinically relevant predictions, direct validation of predictions may be impossible for ethical, technological, or financial reasons. Unavoidable limitations inherent to the validation process lead to challenges in evaluating the credibility of biomedical model predictions. Therefore, when evaluating biomedical models, it is critical to rigorously assess applicability, that is, the relevance of the computational model and its validation evidence to the proposed context of use (COU). However, there are no well-established methods for assessing applicability. Here, we present a novel framework for performing applicability analysis and demonstrate its use with a medical device computational model. The framework provides a systematic, step-by-step method for breaking down the broad question of applicability into a series of focused questions, which may be addressed using supporting evidence and subject matter expertise. The framework can be used for model justification, model assessment, and validation planning. While motivated by biomedical models, it is relevant to a broad range of disciplines and underlying physics. The proposed applicability framework could help overcome some of the barriers inherent to validation of, and aid clinical implementation of, biomedical models.

J. Verif. Valid. Uncert. 2017;2(2):021006-021006-14. doi:10.1115/1.4037705.

We describe a framework for the verification of Bayesian model calibration routines. The framework is based on linear regression and can be configured to verify calibration against data with a range of observation error characteristics. The framework is designed for efficient implementation and is suitable for verifying code intended for large-scale problems. We propose an approach for using the framework to verify Markov chain Monte Carlo (MCMC) software by combining it with a nonparametric test for distribution equality based on the energy statistic. Our MATLAB-based reference implementation of the framework is shown to correctly distinguish between output obtained from correctly and incorrectly implemented MCMC routines. Since the correctness of MCMC output depends on choosing settings appropriate for the problem of interest, our framework can potentially be used to verify such settings.
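
To illustrate the distribution-equality check, the Python sketch below computes a two-sample energy statistic and a permutation p-value for comparing MCMC draws against reference posterior samples. This is an independent illustration with assumed sample sizes and a Gaussian stand-in for the posterior; it is not the paper's reference implementation, which is described as MATLAB-based.

import numpy as np
from scipy.spatial.distance import cdist

def energy_statistic(x, y):
    # Szekely-Rizzo energy statistic between samples x (n, d) and y (m, d).
    return 2.0 * cdist(x, y).mean() - cdist(x, x).mean() - cdist(y, y).mean()

def permutation_pvalue(x, y, n_perm=500, seed=None):
    rng = np.random.default_rng(seed)
    observed = energy_statistic(x, y)
    pooled = np.vstack([x, y])
    n = x.shape[0]
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        if energy_statistic(pooled[idx[:n]], pooled[idx[n:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Samples that should come from the same posterior: a small p-value would flag a mismatch.
rng = np.random.default_rng(0)
mcmc_draws = rng.normal(0.0, 1.0, size=(400, 2))   # stand-in for MCMC output
reference = rng.normal(0.0, 1.0, size=(400, 2))    # stand-in for exact posterior draws
print("p-value:", permutation_pvalue(mcmc_draws, reference, seed=1))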


Technical Brief

J. Verif. Valid. Uncert. 2017;2(2):024501-024501-4. doi:10.1115/1.4037706.

Suggestions are made for modification and extension of the methodology and interpretations of ASME V&V 20-2009, Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer. A more conservative aggregation of numerical uncertainty into the total validation uncertainty is recommended. A precise provisional demarcation for accepting the validation comparison error as an estimate of model error is proposed. For the situation where the validation exercise results in large total validation uncertainty, a more easily evaluated estimated bound on model error is recommended. Explicit distinctions between quality of the model and quality of the validation exercise are discussed. Extending the domain of validation for applications is treated by interpolating/extrapolating model error and total validation uncertainty, and adding uncertainty from the new simulation at the application point. Model form uncertainty and epistemic uncertainties in general, while sometimes important in model applications, are argued to not be important issues in validation.
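
For context, the baseline ASME V&V 20 quantities that these recommendations modify can be written as follows (standard definitions; the brief's proposed more conservative aggregation of the numerical uncertainty is not reproduced here):

E = S - D, \qquad
u_{\mathrm{val}} = \sqrt{u_{\mathrm{num}}^2 + u_{\mathrm{input}}^2 + u_{D}^2}, \qquad
\delta_{\mathrm{model}} \in \left[\, E - u_{\mathrm{val}},\; E + u_{\mathrm{val}} \,\right],

where S is the simulation result, D the experimental value, and u_num, u_input, and u_D the numerical, input, and experimental uncertainties, respectively.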
