Research Papers

Optimal Test Selection for Prediction Uncertainty Reduction

Author and Article Information
Joshua Mullins

V&V, UQ, and Credibility Processes Department,
Sandia National Laboratories,
P.O. Box 5800, Mail Stop 0828,
Albuquerque, NM 87185-0828
e-mail: jmullin@sandia.gov

Sankaran Mahadevan

Department of Civil and
Environmental Engineering,
Vanderbilt University,
VU Station B #351831,
Nashville, TN 37235-1831
e-mail: sankaran.mahadevan@vanderbilt.edu

Angel Urbina

V&V, UQ, and Credibility Processes Department,
Sandia National Laboratories,
P.O. Box 5800, Mail Stop 0828,
Albuquerque, NM 87185-0828

Manuscript received October 6, 2015; final manuscript received October 27, 2016; published online December 2, 2016. Assoc. Editor: Scott Doebling.

J. Verif. Valid. Uncert. 1(4), 041002 (Dec 02, 2016) (10 pages); Paper No. VVUQ-15-1044; doi: 10.1115/1.4035204

Economic factors and experimental limitations often lead to sparse and/or imprecise data for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since both calibration and validation uncertainty affect the prediction of interest, the proposed framework weighs the cost of additional data against its impact on the prediction uncertainty. Calibration and validation tests are often performed at different input conditions, and this paper shows how the results from these different conditions may be integrated into the prediction. A constrained discrete optimization formulation is then proposed that selects the number of tests of each type (calibration or validation at given input conditions). The proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
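To make the constrained discrete optimization concrete, the sketch below enumerates candidate test allocations under a budget and picks the one minimizing a prediction-variance objective. This is an illustration only, not the paper's implementation: the test types, per-test costs, budget, and the closed-form variance model are all hypothetical stand-ins for the paper's actual objective, which propagates calibration and validation uncertainty through the computational model.

```python
import itertools

# Hypothetical per-test costs for two test types (calibration, validation)
# at two input conditions A and B; all values are assumptions.
COSTS = {"cal_A": 4.0, "cal_B": 6.0, "val_A": 3.0, "val_B": 5.0}
BUDGET = 40.0
MAX_TESTS = 10  # consider 0..10 tests of each type

def predicted_variance(counts):
    """Toy stand-in for the prediction-uncertainty objective.

    Assumes calibration data shrink parameter variance roughly as
    1/(1 + n), as in a conjugate normal update, and that validation
    data shrink the spread of the model-reliability estimate similarly.
    """
    n_cal = counts["cal_A"] + counts["cal_B"]
    n_val = counts["val_A"] + counts["val_B"]
    return 1.0 / (1.0 + n_cal) + 0.5 / (1.0 + n_val)

best = None
for combo in itertools.product(range(MAX_TESTS + 1), repeat=len(COSTS)):
    counts = dict(zip(COSTS, combo))
    cost = sum(COSTS[k] * n for k, n in counts.items())
    if cost > BUDGET:
        continue  # enforce the resource constraint
    var = predicted_variance(counts)
    if best is None or var < best[0]:
        best = (var, counts, cost)

var, counts, cost = best
print(f"best allocation: {counts}  cost: {cost}  variance: {var:.4f}")
```

Exhaustive enumeration is only viable for a handful of test types; when the objective is itself a noisy Monte Carlo estimate, a discrete stochastic optimizer such as simulated annealing is the natural replacement.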

Copyright © 2016 by ASME

Figures

Fig. 1: Variance in model reliability for replicate validation data sets at the same input: (a) one observation, (b) 10 observations, (c) 100 observations, (d) 1000 observations, (e) 10,000 observations, and (f) 100,000 observations

Fig. 2: Student's t-distribution of the mean observation for sparse observation sets

Fig. 4: Known aleatory distributions of h for the six devices

Fig. 5: Parameter uncertainty for a particular data realization: (a) sample calibration of E and (b) distribution of the overall model reliability

Fig. 6: Family of predictions for a particular data realization d: (a) sample PDF family and (b) sample CDF family
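Fig. 2 refers to a standard construction: for a sparse observation set, the uncertainty in the mean observation follows a Student's t-distribution with n - 1 degrees of freedom. A minimal sketch of that idea, assuming made-up observation values (scipy is used for convenience; the paper does not prescribe an implementation):

```python
import numpy as np
from scipy import stats

obs = np.array([2.1, 2.6, 1.8, 2.4])  # hypothetical sparse data set
n = len(obs)
xbar, s = obs.mean(), obs.std(ddof=1)

# Location-scale t-distribution of the mean: t_{n-1}(xbar, s / sqrt(n))
mean_dist = stats.t(df=n - 1, loc=xbar, scale=s / np.sqrt(n))

# 95% interval on the mean observation; wider than the normal-theory
# interval because the small sample also leaves the variance uncertain.
lo, hi = mean_dist.interval(0.95)
print(f"mean = {xbar:.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```

The t-distribution's heavier tails relative to a normal reflect the extra uncertainty in the sample standard deviation, which is why very sparse validation sets widen the distribution of the validation result.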
