Newest Issue

Research Papers

J. Verif. Valid. Uncert. 2017;2(2):021001-021001-13. doi:10.1115/1.4036496.

Natural convection is a phenomenon in which fluid flow surrounding a body is induced by a change in density due to the temperature difference between the body and fluid. After removal from the pressurized water reactor (PWR), decay heat is removed from nuclear fuel bundles by natural convection in spent fuel pools for up to several years. Once the fuel bundles have cooled sufficiently, they are removed from fuel pools and placed in dry storage casks for long-term disposal. Little is known about the convective effects that occur inside the rod bundles under dry-storage conditions. Simulations may provide further insight into spent-fuel dry storage, but the models used must be evaluated to determine their accuracy using validation methods. The present study investigates natural convection in a 2 × 2 fuel rod model in order to provide validation data. The four heated aluminum rods are suspended in an open-circuit wind tunnel. Boundary conditions (BCs) have been measured and uncertainties calculated to provide necessary quantities to successfully conduct a validation exercise. System response quantities (SRQs) have been measured for comparing the simulation output to the experiment. Stereoscopic particle image velocimetry (SPIV) was used to nonintrusively measure three-component velocity fields. Two constant-heat-flux rod surface conditions are presented, 400 W/m² and 700 W/m², resulting in Rayleigh numbers of 4.5 × 10⁹ and 5.5 × 10⁹ and Reynolds numbers of 3450 and 4600, respectively. Uncertainty for all the measured variables is reported.
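The flux-based Rayleigh and Reynolds numbers quoted in the abstract can be formed as in the sketch below. This is a generic illustration only: the property values (`beta`, `k`, `nu`, `alpha`) are assumed room-temperature air properties, and the characteristic length and velocity are placeholders, not the paper's measured conditions, so the outputs will not reproduce the reported values.

```python
import math

def rayleigh_flux(q_flux, L, g=9.81, beta=1 / 300.0, k=0.026,
                  nu=1.6e-5, alpha=2.2e-5):
    """Flux-based Rayleigh number Ra* = g*beta*q''*L^4 / (k*nu*alpha).

    Property defaults are illustrative air values near room temperature
    (assumptions, not the paper's conditions):
      beta  - thermal expansion coefficient [1/K]
      k     - thermal conductivity [W/(m K)]
      nu    - kinematic viscosity [m^2/s]
      alpha - thermal diffusivity [m^2/s]
    """
    return g * beta * q_flux * L ** 4 / (k * nu * alpha)

def reynolds(u, L, nu=1.6e-5):
    """Reynolds number Re = u*L/nu for a characteristic velocity and length."""
    return u * L / nu
```

As expected, the higher heat-flux condition yields the larger Rayleigh number, matching the trend (though not the magnitudes) in the abstract.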

Commentary by Dr. Valentin Fuster
J. Verif. Valid. Uncert. 2017;2(2):021002-021002-10. doi:10.1115/1.4036965.

Model validation is a vital step in the simulation development process to ensure that a model is truly representative of the system that it is meant to model. One aspect of model validation that deserves special attention is when validation is required for the transient phase of a process. The transient phase may be characterized as the dynamic portion of a signal that exhibits nonstationary behavior. A specific concern associated with validating a model's transient phase is that the experimental system data are often contaminated with noise, due to the short duration and sharp variations in the data, thus hiding the underlying signal that models seek to replicate. This paper proposes a validation process that uses wavelet thresholding as an effective method for denoising the system and model data signals to properly validate the transient phase of a model. This paper utilizes wavelet-thresholded signals to calculate a validation metric that incorporates shape, phase, and magnitude error. The paper compares this validation approach to an approach that uses wavelet decompositions to denoise the data signals. Finally, a simulation study and empirical data from an automobile crash study illustrate the advantages of our wavelet thresholding validation approach.
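As a rough illustration of the wavelet-thresholding denoising described in the abstract, here is a minimal NumPy-only sketch using the Haar wavelet, soft thresholding, and the universal threshold. Every specific choice below (wavelet family, threshold rule, noise estimator, decomposition depth) is an assumption for illustration; the paper's actual method and its shape/phase/magnitude validation metric are not reproduced here.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    s = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s

def haar_idwt(approx, detail):
    """Inverse of haar_dwt."""
    s = np.sqrt(2.0)
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / s
    x[1::2] = (approx - detail) / s
    return x

def denoise(x, levels=3):
    """Soft-threshold Haar denoising with the universal threshold
    sigma * sqrt(2 ln N); len(x) must be divisible by 2**levels."""
    details, a = [], np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    # Robust noise-scale estimate from the finest detail coefficients (MAD).
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    # Soft thresholding: shrink every detail coefficient toward zero.
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

On a synthetic noisy transient, the thresholded reconstruction sits closer to the underlying signal than the raw measurement, which is the premise of denoising before computing a validation metric.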

J. Verif. Valid. Uncert. 2017;2(2):021003-021003-7. doi:10.1115/1.4037004.

This paper examines various sensitivity analysis methods which can be used to determine the relative importance of input epistemic uncertainties on the uncertainty-quantified performance estimate. The results from such analyses would then indicate which input uncertainties would merit additional study. The following existing sensitivity analysis methods are examined and described: local sensitivity analysis by finite difference, scatter plot analysis, variance-based analysis, and p-box-based analysis. As none of these methods is ideally suited for analysis of dynamic systems with epistemic uncertainty, an alternate method is proposed. This method uses aspects of both local sensitivity analysis and p-box-based analysis to provide improved computational speed while removing dependence on the assumed nominal model parameters.
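Of the methods surveyed, local sensitivity analysis by finite difference is the simplest to sketch. The central-difference scheme and relative step-size rule below are generic textbook choices, not the paper's implementation; `f` stands for any scalar model response evaluated at a nominal parameter vector `x0`.

```python
import numpy as np

def local_sensitivities(f, x0, rel_step=1e-6):
    """Central-difference local sensitivities df/dx_i at the nominal point x0.

    Each parameter is perturbed by a step proportional to its magnitude
    (a common heuristic; the floor of 1.0 guards against zero-valued nominals).
    """
    x0 = np.asarray(x0, float)
    grads = np.empty_like(x0)
    for i in range(x0.size):
        h = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grads[i] = (f(xp) - f(xm)) / (2.0 * h)
    return grads
```

The dependence of such estimates on the chosen nominal point `x0` is exactly the limitation the abstract's proposed method aims to remove.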

J. Verif. Valid. Uncert. 2017;2(2):021004-021004-15. doi:10.1115/1.4037313.

Model calibration and validation are two activities in system model development, and both of them make use of test data. Limited testing budget creates the challenge of test resource allocation, i.e., how to optimize the number of calibration and validation tests to be conducted. Test resource allocation is conducted before any actual test is performed, and therefore needs to use synthetic data. This paper develops a test resource allocation methodology to make the system response prediction “robust” to test outcome, i.e., insensitive to the variability in test outcome; therefore, consistent system response predictions can be achieved under different test outcomes. This paper analyzes the uncertainty sources in the generation of synthetic data regarding different test conditions, and concludes that the robustness objective can be achieved if the contribution of model parameter uncertainty in the synthetic data can be maximized. Global sensitivity analysis (Sobol’ index) is used to assess this contribution, and to formulate an optimization problem to achieve the desired consistent system response prediction. A simulated annealing algorithm is applied to solve this optimization problem. The proposed method is suitable either when only model calibration tests are considered or when both calibration and validation tests are considered. Two numerical examples are provided to demonstrate the proposed approach.
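The first-order Sobol' indices that the abstract uses to assess parameter-uncertainty contributions can be estimated by a standard pick-freeze Monte Carlo scheme, sketched below. The independent uniform-input assumption, sample sizes, and estimator form are illustrative conventions, not the authors' specific formulation or their simulated-annealing optimization.

```python
import numpy as np

def sobol_first_order(f, dim, n=50000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    S_i = Var(E[Y | x_i]) / Var(Y) for f with independent U(0,1) inputs.

    f must accept an (n, dim) array and return a length-n vector.
    """
    rng = np.random.default_rng(rng)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA = f(A)
    var = yA.var()
    indices = []
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # freeze x_i, resample all other inputs
        yi = f(ABi)
        # Covariance of the paired outputs estimates Var(E[Y | x_i]).
        indices.append(np.mean(yA * yi) - np.mean(yA) * np.mean(yi))
    return np.array(indices) / var
```

For a linear model like y = x0 + 2*x1 with uniform inputs, the exact indices are 0.2 and 0.8, which the estimator recovers to within Monte Carlo error.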
