Accepted Manuscripts

research-article  
Wendy K. Caldwell, Abigail Hunter, Catherine S. Plesko and Stephen Wirkus
J. Verif. Valid. Uncert   doi: 10.1115/1.4042516
Verification and validation (V&V) are necessary processes to ensure the accuracy of the computational methods used to solve problems key to vast numbers of applications and industries. Simulations are essential for addressing impact cratering problems because these problems often exceed experimental capabilities. Here we show that the FLAG hydrocode, developed at Los Alamos National Laboratory, can be used for impact cratering simulations by verifying FLAG against two analytical models of aluminum-on-aluminum impacts at different impact velocities and validating FLAG against a glass-into-water laboratory impact experiment. Our verification results show good agreement with the theoretical maximum pressures, with relative errors as small in magnitude as 1.00%. Our validation results demonstrate FLAG's ability to model various stages of impact cratering, with crater radius relative errors as low as 3.48% and crater depth relative errors as low as 0.79%. Our mesh resolution study shows that FLAG converges at resolutions low enough to reduce the required computation time from about 28 hours to about 25 minutes. We anticipate that FLAG can be used to model larger impact cratering problems with increased accuracy and decreased computational cost on current systems relative to other hydrocodes tested by Pierazzo et al. [29].
TOPICS: Simulation, Engineering simulation, Errors, Aluminum, Glass, Resolution (Optics), Water, Computational methods, Computation
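As an illustration of the kind of verification comparison this abstract describes, the sketch below computes the relative error of a simulated peak pressure against the analytical prediction for a symmetric aluminum-on-aluminum planar impact using a linear Us-up Hugoniot. The Hugoniot parameters, impact velocity, and "simulated" value are illustrative placeholders, not values from the paper, and the analytical model here is a generic symmetric-impact approximation rather than FLAG output.

```python
# A sketch of a verification-style comparison: simulated peak pressure vs. the
# analytical planar-impact prediction for a symmetric aluminum-on-aluminum
# impact. All numbers below are illustrative placeholders, not from the paper.

# Nominal linear Us-up Hugoniot parameters for aluminum (illustrative):
RHO0 = 2700.0   # initial density [kg/m^3]
C0 = 5350.0     # bulk sound speed [m/s]
S = 1.34        # Hugoniot slope [-]

def planar_impact_peak_pressure(v_impact):
    """Analytical peak pressure for a symmetric (same-material) planar impact.

    With identical impactor and target, the particle velocity behind the shock
    is half the impact velocity, and the linear Hugoniot Us = C0 + S*up gives
    P = rho0 * Us * up.
    """
    up = 0.5 * v_impact       # particle velocity [m/s]
    us = C0 + S * up          # shock velocity [m/s]
    return RHO0 * us * up     # peak pressure [Pa]

def relative_error(simulated, analytical):
    """Signed relative error of a simulated value against an analytical one."""
    return (simulated - analytical) / analytical

if __name__ == "__main__":
    v = 5000.0                                  # impact velocity [m/s], illustrative
    p_exact = planar_impact_peak_pressure(v)
    p_sim = 1.01 * p_exact                      # placeholder hydrocode result
    print(f"analytical peak pressure: {p_exact / 1e9:.2f} GPa")
    print(f"relative error:           {relative_error(p_sim, p_exact):+.2%}")
```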
Technical Brief  
Jeffrey R. Beisheim, Glenn Sinclair and Patrick J. Roache
J. Verif. Valid. Uncert   doi: 10.1115/1.4042515
Current computational capabilities facilitate the application of finite element analysis to three-dimensional geometries to determine peak stresses. The three-dimensional stress concentrations so quantified are useful in practice provided the discretization error attending their determination with finite elements has been sufficiently controlled. Here we provide some convergence checks and companion a posteriori error estimates that can be used to verify such three-dimensional finite element analysis, and thus enable engineers to control discretization errors. These checks are designed to promote conservative error estimation. They are applied to twelve three-dimensional test problems that have exact solutions for their peak stresses. The associated stress concentration factors span a range that is larger than that normally experienced in engineering. Error levels in the finite element analysis of these peak stresses are classified as follows: 1-5%, satisfactory; 1/5-1%, good; and <1/5%, excellent. The present convergence checks result in 111 error assessments for the test problems. For these 111, errors are assessed as being at the same level as the true exact errors on 99 occasions and one level worse on the other 12. Hence the stress error estimation is reasonably accurate for the large majority of cases (89%) and modestly conservative otherwise (11%).
TOPICS: Stress, Finite element analysis, Errors, Stress concentration, Engineers
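A minimal sketch of the kind of a posteriori discretization-error check described in this abstract, using classical Richardson extrapolation on a peak quantity computed on three systematically refined meshes. The refinement ratio, sample values, and this particular form of the estimate are assumptions for illustration and are not the paper's exact convergence checks.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence from three systematically refined meshes
    with a constant refinement ratio r (coarse -> medium -> fine)."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def richardson_error_estimate(f_medium, f_fine, r, p):
    """A posteriori estimate of the relative discretization error in the
    finest-mesh value, based on Richardson extrapolation."""
    f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return abs(f_fine - f_exact_est) / abs(f_exact_est)

if __name__ == "__main__":
    # Placeholder peak stress concentration factors from three meshes (illustrative).
    f3, f2, f1 = 2.812, 2.874, 2.891   # coarse, medium, fine
    r = 2.0                            # mesh refinement ratio (assumed)
    p = observed_order(f3, f2, f1, r)
    err = richardson_error_estimate(f2, f1, r, p)
    print(f"observed order of convergence: {p:.2f}")
    print(f"estimated relative error in finest peak stress: {err:.2%}")
```

With these placeholder values the estimated error falls in the "good" (1/5-1%) band; conservative practice would also apply a safety factor, as in grid convergence index (GCI) style reporting.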
research-article  
Kathryn Maupin, Laura Swiler and Nathan Porter
J. Verif. Valid. Uncert   doi: 10.1115/1.4042443
Computational modeling and simulation are paramount to modern science. Computational models often replace physical experiments that are prohibitively expensive, dangerous, or occur at extreme scales. Thus, it is critical that these models accurately represent reality and can be used as replacements for it. This paper provides an analysis of metrics that may be used to determine the validity of a computational model. While some metrics have a direct physical meaning and a long history of use, others, especially those that compare probabilistic data, are more difficult to interpret. Furthermore, the process of model validation is often application-specific, making the procedure itself challenging and the results difficult to defend. We therefore provide guidance and recommendations as to which validation metrics to use, as well as how to use and decipher the results. An example is included that compares interpretations of various metrics and demonstrates the impact of model and experimental uncertainty on validation processes.
TOPICS: Computer simulation, Simulation, Model validation, Uncertainty
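As one concrete example of a metric that compares probabilistic data, the sketch below computes the area between the empirical CDFs of model predictions and experimental observations (the area validation metric). The choice of metric and the synthetic data are illustrative assumptions; the paper surveys several metrics, and this sketch does not necessarily match its formulations.

```python
import numpy as np

def area_validation_metric(model_samples, exp_samples):
    """Area between the empirical CDFs of model and experimental samples
    (in the units of the quantity of interest)."""
    grid = np.sort(np.concatenate([model_samples, exp_samples]))
    cdf_m = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
    cdf_e = np.searchsorted(np.sort(exp_samples), grid, side="right") / len(exp_samples)
    # Both empirical CDFs are piecewise constant between pooled sample points,
    # so the area is an exact sum of rectangles.
    return np.sum(np.abs(cdf_m - cdf_e)[:-1] * np.diff(grid))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.normal(10.0, 1.0, size=200)   # synthetic model predictions
    exper = rng.normal(10.5, 1.2, size=30)    # synthetic experimental observations
    print(f"area validation metric: {area_validation_metric(model, exper):.3f}")
```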
research-article  
F. Scott Gayzik, Matthew Davis, Bharath Koya, Jeremy M. Schap and Fang-Chi Hsu
J. Verif. Valid. Uncert   doi: 10.1115/1.4042126
Objective evaluation (OE) methods provide quantitative insight into how well time history data from computational models match data from physical systems. Two techniques commonly used for this purpose are CORA and the ISO/TS 18571 standard. These ostensibly objective techniques have differences in their algorithms that lead to discrepancies when interpreting their results. The objectives of this study were 1) to apply both techniques to a dataset from a computational model and compare the scores, and 2) to conduct a survey of subject matter experts (SMEs) to determine which OE method compares more consistently with SME interpretation. The GHBMC male human model was used in simulations of biomechanics experiments, producing 58 time history curves. Because both techniques produce phase, size, and shape scores, 174 pairwise comparisons were made. Statistical analysis revealed significant differences between the two OE methods for each component rating metric. Surveyed SMEs (n=40) scored how well the computational traces matched the experiments for the three rating metrics. SME interpretation was found to statistically agree with the ISO shape and phase metrics but was significantly different from the ISO size rating; SME interpretation agreed with the CORA size rating. The findings suggest that, when possible, engineers should use a mixed approach to reporting objective ratings, using the shape and phase methods of ISO and the size method of CORA. We recommend weighting the metrics from greatest to least as shape, phase, and size. Given the general levels of agreement observed and the sample size, the results require a nuanced interpretation.
TOPICS: Weight (Mass), Matter, Engineers, Simulation, Biomechanics, Algorithms, Engineering simulation, Shapes, Statistical analysis
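To make the phase/size/shape decomposition concrete, the following sketch scores a pair of time-history curves with crude surrogate measures: cross-correlation lag for phase, an L2-norm ratio for size, and a correlation coefficient for shape. This is only a simplified illustration of the idea behind component ratings; it is not the CORA or ISO/TS 18571 algorithm, and the signals are synthetic.

```python
import numpy as np

def phase_size_shape(test, ref, dt):
    """Crude phase/size/shape descriptors for two equally sampled time histories.

    A simplified illustration only; NOT the CORA or ISO/TS 18571 algorithms.
    """
    # Phase: lag (in seconds) that maximizes the cross-correlation.
    lags = np.arange(-len(ref) + 1, len(test))
    xcorr = np.correlate(test - test.mean(), ref - ref.mean(), mode="full")
    phase_shift = lags[np.argmax(xcorr)] * dt

    # Size: ratio of signal magnitudes (L2 norms).
    size_ratio = np.linalg.norm(test) / np.linalg.norm(ref)

    # Shape: Pearson correlation after removing the estimated phase shift.
    shift = int(round(phase_shift / dt))
    if shift > 0:
        a, b = test[shift:], ref[: len(ref) - shift]
    elif shift < 0:
        a, b = test[:shift], ref[-shift:]
    else:
        a, b = test, ref
    shape_corr = np.corrcoef(a, b)[0, 1]
    return phase_shift, size_ratio, shape_corr

if __name__ == "__main__":
    dt = 1e-3
    t = np.arange(0.0, 0.1, dt)
    ref = np.sin(2 * np.pi * 30 * t) * np.exp(-20 * t)                    # "experiment"
    test = 0.9 * np.sin(2 * np.pi * 30 * (t - 0.002)) * np.exp(-20 * t)   # "model"
    print(phase_size_shape(test, ref, dt))
```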
research-article  
Brantley Mills, Adam Hetzler and Oscar Deng
J. Verif. Valid. Uncert   doi: 10.1115/1.4041837
A thorough code verification effort has been performed on a reduced-order finite element model for 1D fluid flow convectively coupled with a 3D solid, referred to as the 'advective bar' model. The purpose of this effort was to provide confidence in the proper implementation of this model within the SIERRA/Aria thermal response code at Sandia National Laboratories. The method of manufactured solutions is applied to investigate the order of convergence of error norms for successively refined meshes and time steps. Potential pitfalls that can lead to a premature evaluation of the model's implementation are described for this verification approach when applied to this unique model. Through observation of the expected order of convergence, these verification tests provide evidence of the proper implementation of the model within the codebase.
TOPICS: Fluid dynamics, Errors, Finite element model
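A generic illustration of the method of manufactured solutions: a solution is manufactured for a simple 1D steady advection-diffusion equation, the corresponding source term is derived analytically, and the observed order of convergence of a second-order finite-difference discretization is checked against the expected value. The governing equation, coefficients, and discretization are illustrative assumptions and are unrelated to the advective bar model or SIERRA/Aria.

```python
import numpy as np

# Manufactured solution and derived source term for the 1D steady
# advection-diffusion equation  -K u'' + V u' = q  on [0, 1] with u(0)=u(1)=0.
K, V = 0.1, 1.0                              # illustrative coefficients

def u_exact(x):
    return np.sin(np.pi * x)

def source(x):
    # Obtained by substituting u_exact into the governing equation.
    return K * np.pi**2 * np.sin(np.pi * x) + V * np.pi * np.cos(np.pi * x)

def l2_error(n):
    """Discrete L2 error of a second-order central-difference solution on n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(np.full(n, 2.0 * K / h**2))
         + np.diag(np.full(n - 1, -K / h**2 + V / (2 * h)), 1)
         + np.diag(np.full(n - 1, -K / h**2 - V / (2 * h)), -1))
    u = np.linalg.solve(A, source(x))
    return np.sqrt(h * np.sum((u - u_exact(x))**2))

if __name__ == "__main__":
    errors = {n: l2_error(n) for n in (16, 32, 64, 128)}
    ns = sorted(errors)
    for n_c, n_f in zip(ns, ns[1:]):
        h_c, h_f = 1.0 / (n_c + 1), 1.0 / (n_f + 1)
        p = np.log(errors[n_c] / errors[n_f]) / np.log(h_c / h_f)
        print(f"n={n_f:4d}  L2 error={errors[n_f]:.3e}  observed order ~ {p:.2f}")
```

Observing the expected second-order convergence as the mesh is refined is the evidence of correct implementation that this style of test provides; a stalled or reduced order would flag a coding or formulation error.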
research-article  
Matteo Diez, Riccardo Broglia, Danilo Durante, Angelo Olivieri, Emilio F. Campana and Frederick Stern
J. Verif. Valid. Uncert   doi: 10.1115/1.4041372
The objective of the present work is the application of uncertainty quantification (UQ) methods for statistical assessment and validation of experimental and computational ship resistance and motions in irregular head waves, using both time series studies and a stochastic regular wave UQ model solved by a metamodel-based Monte Carlo method. Specifically, UQ methods are used for: (1) statistical assessment and validation of experimental and computational modeling of input irregular waves versus analytical benchmark values; (2) statistical assessment of both experimental and computational ship resistance and motions in irregular waves; (3) validation of computational ship resistance and motions in irregular waves versus experimental benchmark values; and (4) statistical validation of both the experimental and computational stochastic regular wave UQ model for ship resistance and motions versus irregular-wave experimental benchmark values. Methods for problem (1) include Fourier analysis for wave energy spectrum moments, analysis of the auto-covariance matrix, and block-bootstrap methods for the uncertainty of wave elevation statistical moments, along with block-bootstrap methods for the uncertainty of mode and distribution. The uncertainty of wave height statistical estimators is evaluated by the bootstrap method. The same methodologies are used to evaluate the statistical uncertainties associated with ship resistance and motions in problem (2). Errors and confidence intervals of statistical estimators are used to define validation criteria in problems (3) and (4). The contribution of the present work is the application and integration of UQ methodologies for the solution of problems (1) through (4). Results are shown for the Delft catamaran.
TOPICS: Waves, Computational fluid dynamics, Ships, Uncertainty quantification, Uncertainty, Computer simulation, Time series, Wave energy, Errors, Fourier analysis, Monte Carlo methods
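As a small illustration of the block-bootstrap ideas mentioned in this abstract, the sketch below estimates a confidence interval for a statistic of an autocorrelated time series using a moving-block bootstrap. The block length, statistic, and synthetic series are assumptions for illustration; the paper's analyses of wave elevation, resistance, and motions involve additional machinery (spectral moments, metamodel-based Monte Carlo) not reproduced here.

```python
import numpy as np

def block_bootstrap_ci(x, stat, block_len, n_boot=2000, alpha=0.05, rng=None):
    """Moving-block bootstrap confidence interval for a statistic of a
    (possibly autocorrelated) time series, via the percentile method."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(x)
    blocks = np.lib.stride_tricks.sliding_window_view(x, block_len)
    n_blocks_needed = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=n_blocks_needed)
        resampled = np.concatenate(blocks[idx])[:n]   # stitch blocks, trim to length n
        stats[b] = stat(resampled)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return stat(x), (lo, hi)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic autocorrelated series standing in for, e.g., a resistance record.
    noise = rng.normal(size=5000)
    x = np.convolve(noise, np.ones(10) / 10, mode="same") + 1.0
    est, (lo, hi) = block_bootstrap_ci(x, np.mean, block_len=50, rng=rng)
    print(f"mean = {est:.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
```

Resampling whole blocks rather than individual samples preserves the short-range correlation structure of the record, which is why block methods are preferred over a plain bootstrap for time series statistics.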
