Research Papers

Probability Bounds Analysis Applied to the Sandia Verification and Validation Challenge Problem OPEN ACCESS

Author and Article Information
Aniruddha Choudhary

Mem. ASME
Aerospace and Ocean Engineering Department,
Virginia Tech,
460 Old Turner Street,
Blacksburg, VA 24061
e-mail: aniruddhac@gmail.com

Ian T. Voyles

Mem. ASME
Aerospace and Ocean Engineering Department,
Virginia Tech,
460 Old Turner Street,
Blacksburg, VA 24061
e-mail: itvoyles@vt.edu

Christopher J. Roy

Mem. ASME
Aerospace and Ocean Engineering Department,
Virginia Tech,
460 Old Turner Street,
Blacksburg, VA 24061
e-mail: cjroy@vt.edu

William L. Oberkampf

Mem. ASME
W. L. Oberkampf Consulting,
5112 Hidden Springs Trail,
Georgetown, TX 78633
e-mail: wloconsulting@gmail.com

Mayuresh Patil

Aerospace and Ocean Engineering Department,
Virginia Tech,
460 Old Turner Street,
Blacksburg, VA 24061
e-mail: mpatil@vt.edu

1Corresponding author.

Manuscript received February 8, 2015; final manuscript received July 31, 2015; published online February 19, 2016. Guest Editor: Kenneth Hu.

J. Verif. Valid. Uncert 1(1), 011003 (Feb 19, 2016) (13 pages) Paper No: VVUQ-15-1010; doi: 10.1115/1.4031285 History: Received February 08, 2015; Revised July 31, 2015; Accepted August 07, 2015

Our approach to the Sandia Verification and Validation Challenge Problem is to use probability bounds analysis (PBA) based on a probabilistic representation for aleatory uncertainties and an interval representation for (most) epistemic uncertainties. The nondeterministic model predictions thus take the form of p-boxes: bounding cumulative distribution functions (CDFs) that contain all CDFs that could exist within the uncertainty bounds. The scarcity of experimental data provides little support for treatment of all uncertain inputs as purely aleatory uncertainties and also precludes significant calibration of the models. We instead seek to estimate the model form uncertainty at conditions where the experimental data are available, then extrapolate this uncertainty to conditions where no data exist. The modified area validation metric (MAVM) is employed to estimate the model form uncertainty, which is important because the model involves significant simplifications (of both geometric and physical nature) of the true system. The results of the verification and validation processes are treated as additional interval-based uncertainties applied to the nondeterministic model predictions, from which the failure prediction is made. Based on the method employed, we estimate the probability of failure to be as large as 0.0034 and conclude that the tanks are unsafe.


As described in the Sandia Verification and Validation (V&V) workshop challenge problem [1,2] (herein referred to as the "challenge problem"), MysteryLiquid Co. is a company that maintains a large number of storage tanks for storing "mystery liquid" at various locations around the world. The tanks are cylinders with two hemispherical end caps as shown in Fig. 1. At the junction where the cylindrical portion and the end caps meet, the tanks are supported by rings around the circumference. Locations on the tank surface are described by axial distance, x, measured from the central vertical plane and circumferential angle, ϕ, measured from the vertical down as shown in Fig. 1. During standard operation, the liquid level is limited to a certain fraction of the tank's height and the remaining space is filled with pressurized gas. During standard safety testing, one tank's measurements (out of many tanks) exceeded the safety specification. This specification has been established from historical data, but is not a regulatory requirement. This out-of-spec tank (tank 0) never physically failed. The out-of-spec tank and its two neighboring tanks were taken out of service to conduct further testing. Also, four tanks, in four different locations, underwent multiple tests while still in service. Data from these tests are provided to the challenge workshop participants. A computer program is also provided that performs inexpensive function evaluations and is assumed to be a proxy for an expensive finite-element model. All details regarding the challenge problem can be found in Refs. [1,2].

The objective of this analysis is to determine whether the tanks are at risk of failure and, if so, whether they should be replaced or can remain in service for a few years while replacements are ordered. This recommendation by the participants of the challenge problem is to be based upon the calculated probability of failure. The final decision by the company will be based upon the results of experimental testing, study of the physical models, and the results of the computational simulation of the physical model.

Our view of simulation-informed decision-making is that uncertainty in the simulation results should be clearly conveyed to the decision maker without incorporating questionable or debatable assumptions within the analysis. Stated differently, the individuals using the results of verification, validation, and uncertainty quantification (VVUQ) analysis should clearly see the impact of all important uncertainties on the results of the simulation. Our approach differs dramatically from the traditional or common philosophy in uncertainty quantification (UQ) or risk assessment analyses. In the traditional approach, the analysis team makes seemingly reasonable and common assumptions concerning the conceptualization of the system of interest, numerical solution of the mathematical model, assimilation of experimental data, and then presents the simulation results to the decision maker. The assumptions and approximations may, or may not, be documented in footnotes or an appendix to a written report. Our philosophy is to make assumptions and approximations that are clearly defensible based on (a) information and experimental data on the system of interest and the conditions to which the system is exposed and (b) relevant experience and experimental data for closely related systems exposed to similar surroundings. If the information is lacking concerning any aspect of the analysis, then the uncertainty in the information should be characterized such that no additional assumptions are incurred in the characterization of the uncertainty itself. The net result of our philosophy is that the uncertainty in the predicted system response quantities (SRQs) will be larger, sometimes much larger, than a traditional approach. 
We argue that our philosophy is appropriate when the uncertainty is primarily caused by lack of knowledge as opposed to random variability, for example, in input data describing the system, the physics occurring in the system, and the conditions to which the system is exposed.

Some examples of uncertainties that are dominated by lack of knowledge, i.e., epistemic, uncertainty are: (a) spatial and temporal discretization error that is poorly estimated, (b) little or no experimental data exist for input parameters describing the system of interest, boundary conditions, or excitation of the system, (c) poor agreement between model predictions and experimental data on related systems, and (d) model predictions for physical conditions of interest that are far removed from the conditions where the experimental data are available. Epistemic uncertainties such as these are not properly characterized as random variables because they have little relationship to random processes. As a result, mathematical structures such as intervals or Dempster–Shafer structures [3] should be used to forthrightly represent the actual state of knowledge to the decision maker. We present an approach based upon PBA where the aleatory (i.e., random) uncertainties in the input quantities are treated probabilistically as usual, but the epistemic (i.e., lack of knowledge) uncertainties are treated as intervals. Our approach accounts for the effects of both types of uncertainties on the SRQs of interest, as well as model form and numerical uncertainty.

This paper is organized as follows. Before presenting our approach to the challenge problem, we make a few qualitative observations in Sec. 2 about the experimental and computational description provided in the challenge problem statement. In Sec. 3, we provide a discussion of important concepts and terminology relevant to our approach followed by an overview of various steps in our analysis. In Sec. 4, we describe each step of our analysis in detail with results. Here, we present the final estimation of total uncertainty and the decision about tank safety. We end our response to the 2014 Sandia challenge problem in Sec. 5 with some concluding remarks.

Experimental Data.

The quantities and types of available data (see Refs. [1,2]) for use in this problem introduce issues that could adversely affect the probability of failure estimate. Data are collected from only six tanks out of a total fleet of 450 tanks—only 1.33% of the population. Additionally, the data from each experiment are limited, especially from the experiments reporting the tank material properties and dimensions. While legacy data are available, no information is provided concerning the methods employed, tolerances allowed, or property distributions identified by the manufacturer. Measurement uncertainty assessments are available for some experiments, but not all. The main quantity of interest, the stress in the tank walls, is never measured and can only be inferred from other quantities.

Yield strength of the material, which is the primary quantity against which stress predictions from simulations are to be compared for reporting failure, is measured at only ten locations and only using the out-of-spec tank. It is not known how well any data from a given tank is correlated to the properties for another tank used in the experiment or to another tank within the fleet. No information is provided concerning the operating environments of the tank fleet. The effect that the mystery liquid may have on a tank (e.g., corrosion or oxidation) along with the environmental effects, such as damage due to a dry, sandy, or a humid environment, is not provided, nor is it known that the entire fleet experiences the same host of effects. While it is stated that the tanks in the fleet range from 4 to 12 yr old, the distribution of ages over the fleet and the age of any particular tank is not known, nor is there any information on tank fatigue due to use. The lack of knowledge concerning the tank environments and life cycle could potentially lead to large uncertainties in the prediction of the probability of failure.

Modeling and Simulation.

There are several known modeling and simulation (M&S) issues present that introduce uncertainty into a probability of failure prediction. A tank model has been provided as a Python code which acts as a proxy for an expensive finite-element code with four available mesh levels. This model uses a series solution based upon the Timoshenko–Krieger shell theory for cylinders [4]. The finite-element model provided considers only the straight, cylindrical portion of the tank and does not model the hemispherical tank end caps. In addition, the model is incapable of accounting for nonuniformity in the tank dimensions, e.g., variation of tank wall thickness or any tank damage. The uncertainties in SRQs due to these modeling assumptions have not been previously studied and are not known. Additionally, the use of the finite-element code introduces numerical uncertainties into the probability of failure prediction. Though the code is reported as having a first-order rate of convergence with consistent grid refinement on previous problems, the present problem is more complex than prior verification tests. The simulation is confined to only four meshes, on which a given SRQ is not necessarily in the asymptotic range. Since our analysis explicitly includes an estimate of numerical uncertainty in the SRQs, this lack of knowledge increases the estimated numerical uncertainty.

An overview of our approach is provided in Sec. 3.5. However, first we present various concepts and terminology relevant to our approach in Secs. 3.1–3.4. Here, we also include a brief survey of relevant work from the literature to emphasize the reasoning behind the selection of techniques employed in this work.

Uncertainty Framework.

In order to quantify the total uncertainty in M&S predictions, it is important to identify and quantify all of the relevant uncertainty sources. The three main uncertainty sources are input uncertainty (also called parameter uncertainty), uncertainties due to the chosen form of the model (model form uncertainty), and uncertainties due to the numerical solution to the model (numerical uncertainty). Here, we employ a broad approach to quantifying the total uncertainty in M&S that accounts for all three of these sources [5,6].

The taxonomy employed for classifying uncertainties is based on their fundamental essence. The input or parameter uncertainty includes the uncertainty in the initial conditions, boundary conditions, material parameters, geometry, and system excitation. This uncertainty may be purely aleatory (i.e., probabilistic), purely epistemic (arising from lack of knowledge), or a mixture of the two. The aleatory uncertainty is an uncertainty due to inherent variation or randomness. The epistemic uncertainty (also called reducible uncertainty or ignorance uncertainty) is an uncertainty that arises due to a lack of knowledge on the part of the analyst conducting the M&S. If knowledge is added (through experiments, improved numerical approximations, expert opinion, higher fidelity physics modeling, etc.), then the uncertainty can be reduced. If sufficient knowledge (which costs time and resources) is added, then the epistemic uncertainty can, in principle, be eliminated.

The aleatory uncertainty is typically characterized probabilistically with a precise probability density function or CDF. The epistemic uncertainty (and mixtures of epistemic and aleatory uncertainty) can be characterized in a number of ways, including probabilistically, as second-order probabilities (where the parameters governing the probability distribution are themselves uncertain), as intervals, as Dempster–Shafer structures, and as fuzzy probabilities [7]. When one has very little knowledge about the value of a parameter, then an interval representation (with no associated probability distribution) is the weakest statement that one can make about the value of the parameter. Stated differently, every value in an interval-valued uncertainty can realize a probability of unity. This is not possible in precise probability theory, but it is allowed in imprecise probability theory. For cases where the parameter is known to be a random variable, but little information is available about its specific distribution, one may choose to characterize the variable using either a uniform distribution or a distribution with uncertain descriptive parameters. For the latter case, a mixed aleatory and epistemic characterization is appropriate.

One approach for characterizing mixed aleatory and epistemic uncertainty is a probability box (or p-box), which is similar to a CDF but with a finite width representing the epistemic uncertainty. The shapes of the two outer bounding CDFs reflect the aleatory uncertainty in the variable as seen in Fig. 2. The width of the p-box represents the range of parameter values that are possible for a given cumulative probability level, whereas the height of the p-box represents the range of interval-valued cumulative probabilities associated with a given parameter value.
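The p-box construction of Fig. 2 can be illustrated with a minimal sketch: an aleatory normal shape whose mean is known only to lie in an interval produces two bounding CDFs, one per interval endpoint. The numerical values below are hypothetical stand-ins, not challenge-problem inputs.

```python
import math

def normal_cdf(x, mu, sigma):
    # Standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pbox_bounds(x, mu_lo, mu_hi, sigma):
    """Bounding CDFs at x for a normal variable whose mean is known only
    to lie in [mu_lo, mu_hi]; sigma is treated as precisely known."""
    upper = normal_cdf(x, mu_lo, sigma)  # left (upper) bounding CDF
    lower = normal_cdf(x, mu_hi, sigma)  # right (lower) bounding CDF
    return lower, upper

# Hypothetical example: wall thickness ~ N(mu, 0.01) with mu in [0.24, 0.26]
lo, hi = pbox_bounds(0.25, 0.24, 0.26, 0.01)
```

At any parameter value, the vertical gap between `lo` and `hi` is the interval of cumulative probabilities, which is exactly the height of the p-box described above.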

Unless otherwise noted, in this work, the input uncertainties are categorized as purely aleatory, purely epistemic, or mixed based on the available information, and in some cases, external information available in the literature. The purely aleatory uncertainty is characterized by a precise probability distribution. The purely epistemic uncertainty is characterized as an interval. The mixed uncertainty is characterized as imprecise probability distribution, i.e., precise probability distribution with interval-valued mean.

Uncertainty Propagation.

When sufficient information and data are available such that all parametric uncertainties are aleatory, then well-established probabilistic methods for propagating uncertainty can be used (e.g., Refs. [8,9]). While a probabilistic treatment of epistemic uncertainty fits nicely within a Bayesian framework [10], it tends to underpredict the true uncertainty [11] and often has a strong (and undesirable) dependence on prior assumptions. For example, when some information is available for an input parameter, a Bayesian approach would assume some precise probability distribution representing the degree of belief of the analyst (as opposed to actual evidence of frequency of occurrence) [12]. This has two detrimental effects with regard to capturing the actual poor state of knowledge. First, the result of the Bayesian analysis represents the individual belief of the analyst, as opposed to a result that is based on the limited information available. Second, when the epistemic uncertainties are characterized as random variables, the final Bayesian result for the analysis is a single probability distribution as opposed to a set of distributions captured by a p-box. In fact, Beer et al. [7] point out that the posterior distribution becomes the prior as the available information goes to zero. On the other hand, a more general framework of information theory is described in Ref. [13], which can handle imprecise probabilities (i.e., aleatory, epistemic, and mixed uncertainty) and includes approaches such as PBA [14] and evidence theory [3].

In order to propagate mixtures of aleatory and epistemic uncertainty, we employ a segregated approach to uncertainty propagation (see Refs. [6,15]). In the outer loop, samples from the interval-uncertain model inputs are drawn. For each of these sample values, the probabilistically uncertain model inputs are propagated using Latin hypercube sampling (LHS) along with the fixed sample values of the interval-uncertain variables. This propagation of the probabilistic uncertainty forms the inner loop of segregated uncertainty propagation. The result from one iteration of the outer loop is a possible CDF on the SRQ. The total result of the process is an ensemble of possible CDFs on the SRQ. In the limit of infinite samples, this process yields the correct solution for problems in which there is no assumption of independence between the interval-uncertain inputs; but it is assumed that the interval-uncertain inputs are independent of the probabilistic inputs [11].
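The segregated (nested-loop) propagation described above can be sketched in a few lines; here a hypothetical one-interval-input, one-aleatory-input model stands in for the tank simulation, and plain stratified sampling stands in for LHS.

```python
import math
import random

def model(e, a):
    # Hypothetical response function: e is the interval-uncertain input,
    # a is the aleatory (normally distributed) input
    return e * a + math.sin(a)

def segregated_propagation(e_lo, e_hi, n_outer=20, n_inner=500, seed=0):
    rng = random.Random(seed)
    ensemble = []
    for i in range(n_outer):
        # Outer loop: one fixed value from the interval [e_lo, e_hi]
        e = e_lo + (i + 0.5) / n_outer * (e_hi - e_lo)
        # Inner loop: propagate the aleatory input at fixed e;
        # the sorted samples form one possible CDF of the SRQ
        inner = sorted(model(e, rng.gauss(1.0, 0.1)) for _ in range(n_inner))
        ensemble.append(inner)
    return ensemble  # an ensemble of possible CDFs on the SRQ

ens = segregated_propagation(0.9, 1.1)
# p-box width at the median (cumulative probability 0.5)
med = [cdf[len(cdf) // 2] for cdf in ens]
lo, hi = min(med), max(med)
```

Taking the envelope of the ensemble at each cumulative probability level, as done for `lo` and `hi` at the median here, yields the bounding CDFs of the SRQ p-box.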

Model Form Uncertainty: Validation and Calibration.

There is still a great deal of debate on how to treat the model form uncertainty in M&S. One extreme is calibration (e.g., via Bayesian updating) which attempts to remove the model form uncertainty by using experimental data to improve the model [16,17]. When the calibration is used to “remove” the model form uncertainty, issues often arise when applying the model outside the range of conditions where the data are available (as is generally the case when making predictions). Another extreme is to use all the experimental data to quantify the model form uncertainty, a process known as model validation [5,18,19]. There are also numerous approaches between these two extremes. For example, Kennedy and O'Hagan [20] developed a Bayesian calibration approach which includes a model discrepancy term. Roy and Oberkampf [6] allow a partitioning of the available experimental data into a subset to be used for calibration and another subset to be used for estimating the model form uncertainty via comparison of nondeterministic model and experimental outcomes. Both authors provide an approach for extrapolating the model form uncertainty to conditions where the experimental data are not available: Kennedy and O'Hagan [20] assume a Gaussian process while Roy and Oberkampf [6] employ a regression fit of the model form uncertainty along with prediction intervals.

Numerical Uncertainty.

Since differential equation-based models rarely admit exact solutions for practical problems, approximate numerical solutions must be used. The characterization of the numerical approximation errors associated with a simulation is called verification [5,18,19]. It includes discretization error, iterative convergence error, round-off error, and also errors due to computer programming mistakes. The errors due to programming mistakes can usually be found and eliminated using code verification practices. For cases where numerical approximation errors can be estimated with a high degree of accuracy, their impact on the M&S results can, in principle, be eliminated if sufficient computing resources are available. Since this is often not practical, the uncertainties due to numerical errors should generally be converted to epistemic uncertainties due to the uncertainties associated with the error estimation process itself. Since the numerical uncertainty is due to a lack of knowledge (i.e., epistemic), we treat it in the same manner as model form uncertainty, i.e., as an interval about the simulation outcome.
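The conversion of a discretization-error estimate into an epistemic interval can be sketched as follows, using Richardson extrapolation and a GCI-style factor of safety; the refinement factor, order, and solution values are illustrative, not taken from the challenge problem.

```python
def gci_interval(f_fine, f_coarse, r, p, fs=3.0):
    """Convert a Richardson-extrapolation discretization-error estimate
    into an uncertainty interval about the fine-grid solution.

    r  : grid refinement factor (coarse spacing / fine spacing)
    p  : (observed or formal) order of accuracy
    fs : factor of safety; a conservative value such as 3.0 is common
         when asymptotic convergence has not been demonstrated
    """
    err = (f_coarse - f_fine) / (r**p - 1.0)  # Richardson error estimate
    half_width = fs * abs(err)
    return f_fine - half_width, f_fine + half_width

# Illustrative numbers: fine-grid stress 100.0, coarse-grid stress 104.0,
# refinement factor 2, first-order convergence
lo, hi = gci_interval(100.0, 104.0, r=2.0, p=1.0)
```

The resulting interval is then carried along as an epistemic uncertainty on the simulation outcome, in the same way as the model form uncertainty interval.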

Overview of the Analysis Process.

Our analysis employed the following steps, which are discussed briefly here. The implementation details of these steps are provided in Sec. 4.

  1. Parametric study: The three control parameters, P, H, and χ, were varied over their operating envelope to determine their qualitative effects on the maximum von Mises stress.

  2. Sensitivity analyses: Global and local sensitivity analyses were performed to determine which of the uncertain variables had the largest impact on the maximum (von Mises) stress and maximum displacement. During these sensitivity studies, all the uncertain variables were treated as purely aleatory uncertainties, with epistemic uncertainties approximated as uniform probability distributions over the estimated interval ranges. The sensitivity analyses were used as a screening process to either (a) omit certain insignificant variables from the UQ analysis or (b) to treat epistemic and/or mixed uncertainties as precisely specified random uncertainties.

  3. Numerical uncertainty estimation: Since no information was available regarding the code verification status of the software, we proceeded assuming that the code is free from any coding mistakes. In practice, rigorous code verification must be performed first for any of the downstream VVUQ activities to have any meaning. For this V&V challenge problem, the only numerical error source that was assumed to be present was discretization error, which was estimated with a combination of Richardson extrapolation and the grid convergence index (GCI).

  4. Uncertainty characterization: The input uncertainties were categorized as purely aleatory (A), purely epistemic (E), or mixed (M). Unless otherwise noted, we characterize the input uncertainties as follows: the pure aleatory uncertainties are treated probabilistically, the pure epistemic uncertainties are treated as intervals, and the mixed aleatory and epistemic uncertainties are treated probabilistically with interval-valued means.

  5. Uncertainty propagation: We employ segregated propagation of probabilistic and interval-characterized uncertainties using the PBA framework. Interval uncertainties are sampled in the outer loop using LHS. Probabilistic uncertainties are sampled in the inner loop using LHS.

  6. Model form uncertainty: In this work, we do not employ model calibration. We instead utilize the available experimental data to estimate the model form uncertainty. Since no data are available for the primary SRQ, i.e., von Mises stress, we employ an approach called u-pooling [21,22] to convert experimental realizations for displacements to those for von Mises stress (note: u-pooling is discussed in detail with its implementation in Sec. 4.6). We then use the MAVM to estimate model form uncertainty in von Mises stress [23,24]. Since this uncertainty is epistemic, we treat it as an interval about the simulation outcome. In our case, the simulation outcome is the p-box found by propagating the aleatory, epistemic, and mixed uncertainties through the model.

  7. Estimation of total prediction uncertainty: The total prediction uncertainty in von Mises stress is determined by beginning with the p-box found from propagating all input uncertainties (aleatory, epistemic, and mixed) through the model. The estimated model form and numerical uncertainties are then appended as additional, independent, epistemic uncertainties on the bounds of the simulation p-box.
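The area-metric idea underlying step 6 can be sketched as follows. This is the basic area validation metric, the area between the model CDF and the empirical CDF of the data; the MAVM of Refs. [23,24] further modifies how that area is apportioned into bias bounds, which is not reproduced here.

```python
import bisect

def area_validation_metric(model_samples, data):
    """Area between the empirical CDF of model outcomes and the empirical
    CDF of experimental data (basic area validation metric)."""
    m, d = sorted(model_samples), sorted(data)
    xs = sorted(set(m) | set(d))

    def ecdf(samples, x):
        # Fraction of samples <= x (right-continuous step CDF)
        return bisect.bisect_right(samples, x) / len(samples)

    # Both step CDFs are constant between consecutive sample values,
    # so the area is an exact sum of rectangles
    return sum(abs(ecdf(m, x0) - ecdf(d, x0)) * (x1 - x0)
               for x0, x1 in zip(xs[:-1], xs[1:]))

# A pure shift of the data by 1 relative to the model gives an area of 1
d_area = area_validation_metric([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])
```

Because the metric has the units of the SRQ, it can be applied directly as an interval-valued model form uncertainty about the simulation p-box.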

Parametric Study.

As a first step toward qualitatively assessing the computational model available, a simple parametric study is performed by varying the three operating parameters over the operating range as follows:

  • P=[15,75] psig: 11 values at intervals of 6 psig

  • χ=[0.1,1] : 10 values at intervals of 0.1

  • H=[0,55] in.: 12 values at intervals of 5 in.

All other input variables are kept constant at legacy values: E=3×107 psi, ν=0.27, L=60 in., R=30 in., and T=0.25 in. For this parametric analysis (and later, the sensitivity analysis (SA) and uncertainty propagation study), the dakota software toolkit [25] is used, mainly due to its capability to perform a large number of parallel runs of the model over multiple processors. Initially, when information about the computational cost was not known, the finest mesh (i.e., m=4) was used for all 1320=11×10×12 cases using the nonuniform mesh (option: resultStyle=2) in the provided code. After information about the cost of computation on each mesh level was made available (as shown in Table 1), we recommend the use of the coarsest mesh (m=1) for parametric analysis.
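The sweep over the 1320 operating points can be sketched as below; the `max_stress` function is a hypothetical stand-in for the provided proxy finite-element code, with illustrative coefficients, not the challenge model.

```python
import itertools

# Hypothetical stand-in for the provided proxy FEM code; the coefficients
# are illustrative only, chosen so stress grows with P and H
def max_stress(P, chi, H):
    return 1000.0 + 8.0 * P + 3.0 * H + 50.0 * abs(chi - 0.3)

P_vals   = [15 + 6 * i for i in range(11)]     # 15..75 psig, step 6
chi_vals = [0.1 * (i + 1) for i in range(10)]  # 0.1..1.0, step 0.1
H_vals   = [5 * i for i in range(12)]          # 0..55 in., step 5

runs = {(P, chi, H): max_stress(P, chi, H)
        for P, chi, H in itertools.product(P_vals, chi_vals, H_vals)}
worst = max(runs, key=runs.get)   # operating point with the largest stress
```

In practice each evaluation would be a call to the provided code on the recommended coarse mesh, with the stress value and its surface location recorded per run.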

The output quantities from each of the 1320 runs include the maximum stress value and its location on the cylinder (i.e., axial and angular locations, and whether the maximum stress occurs on the internal or external surface). For brevity, only the key observations are discussed as follows:

  • As expected, the maximum stress for any liquid composition always appears for the largest pressure and liquid height.

  • Maximum stress varies most strongly with the liquid height in the container, and less strongly with pressure and liquid composition. A sample set of results is shown in Fig. 3, where the effect of varying pressure and liquid composition in the parametric space is shown for different values of liquid height.

    Fig. 3: Maximum stress values for the parametric space of pressure and liquid composition for different liquid heights: (a) H=5 in., (b) H=30 in., and (c) H=55 in.

  • Variation of maximum stress with liquid composition is not monotonic, i.e., it is large at χ=0.1, then decreases as χ increases until χ ≈ 0.3, and then increases from χ ≈ 0.4 to χ=1.0. The variation of maximum stress with pressure is almost always linear, as expected.

  • The tank has a circular cross section, which results in slight nonlinearity in the variation of maximum stress with liquid height near 30–50 in. (i.e., when the tank is just over half-full). The cylindrical geometry of the tank also affects the angular location of maximum stress, which moves up to approximately 80 deg (i.e., near the middle of the tank) when the liquid height is approximately 35 in. and then moves back down as the liquid height is further increased. This suggests that the middle region of the container is an important region for assessing maximum stress. A similar analysis of the variation in axial location of maximum stress suggests that the regions near the supports are important for failure analysis. The information about angular and axial locations of maximum stress suggests that the input option, resultStyle=2, which reports results on a nonuniform grid (with refinement at the centerline and the supports of the tank), is suitable for this V&V study.

Sensitivity Analysis.

Two sensitivity analyses are presented here at the maximum loading conditions (i.e., H=55 in., P=75 psig, and χ=1) with legacy values of the material and geometry parameters. The first involves performing a global SA using the variance-based decomposition (VBD) method in dakota [25]. The second involves a local SA using simple finite-difference calculations. If resources are a constraint, we recommend the local SA around the desired conditions. However, the global SA approach provides measures of sensitivity in terms of main and total Sobol’ indices giving an insight into the uncertainty in the output not only due to the uncertainty in the input variable but also due to the interactions between the variables.

Global SA.

VBD is used along with a sampling method (LHS). Since this SA was performed prior to the characterization of input uncertainties, all the input variables are treated as normal probability distributions during this analysis. We used N = 100 samples for each of the M = 8 input parameters as described in Table 2 for a total of N×(M+2)=1000 runs. While sampling the input parameters, it was found that the provided code evaluates an arccosine function using liquid height, H, and tank radius, R, as arccos((H − R1)/R1), where R1 is obtained by introducing a bias/error into the input value of R as

(1)  R1 = (1.5R − 17) × 0.9926^(1/m)

where R is the tank radius and m = 1, 2, 3, or 4 is the mesh ID. This transformation shifts an R of 28–32 in. to an R1 of 25–31 in. Since the arccosine function here cannot accept H > 2R1, this results in a rather strict (and complicated) constraint on the upper bound of H and the lower bound of R. We use a height ratio to work around this issue during the sampling process (discussed later). Another consideration that must be made while selecting the input parameters is that the liquid composition must strictly satisfy 0 ≤ χ ≤ 1. To deal with these requirements, we have used truncated normal distributions, which affect the mean and SD (standard deviation) for H and R as shown in Table 2. The coefficient of variation (COV) and mean values are determined based upon the limited experimental data available.
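The truncated sampling and the height-ratio workaround can be sketched as follows; the means, standard deviations, and the radius value are hypothetical stand-ins, not the values of Table 2, and simple rejection sampling stands in for the truncated-normal machinery.

```python
import random

def truncated_normal(mu, sigma, lo, hi, rng):
    """Draw from a normal distribution truncated to [lo, hi] by rejection."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

rng = random.Random(1)

# Liquid composition must satisfy 0 <= chi <= 1 (hypothetical mean/SD)
chi = [truncated_normal(0.8, 0.2, 0.0, 1.0, rng) for _ in range(1000)]

# Height-ratio workaround: sample h = H/(2*R1) in [0, 1] so that the
# arccos((H - R1)/R1) argument always lies in [-1, 1]
R1 = 28.0  # illustrative biased-radius value
h = [truncated_normal(0.85, 0.1, 0.0, 1.0, rng) for _ in range(1000)]
H = [hr * 2.0 * R1 for hr in h]
```

Sampling the ratio rather than H directly decouples the H samples from the R samples, so the coupled constraint between the two inputs is satisfied automatically.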

The VBD approach provides two primary measures to study how the uncertainty in model output can be apportioned to the uncertainty in input variables: the main effect and the total effect sensitivity indices. The main effect sensitivity index, Si, corresponds to the fraction of the uncertainty in the output (e.g., Y) that can be attributed to an input (e.g., xi) alone. The total effect index corresponds to the fraction of the uncertainty in the output, Y, that can be attributed to input, xi, and its interactions with other input variables. Large values of Si indicate that the uncertainty in the input variable, xi, has a large effect on the variance of the output. The sum of the main effect indices over the different input variables should be approximately equal to one. However, if the sum is significantly less than one, then there could be significant interactions between the input variables. Further details on interpretation of the Sobol' indices can be found in Ref. [26].
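A Saltelli-style estimator of the main and total indices, using the same N×(M+2) evaluation count quoted above, can be sketched in stdlib Python; the linear test model and normal inputs are illustrative, not the tank model.

```python
import random

def sobol_indices(f, M, N=4000, seed=0):
    """Estimate main (S_i) and total (ST_i) Sobol' indices with the
    Saltelli/Jansen sampling scheme: N*(M+2) model evaluations."""
    rng = random.Random(seed)
    A = [[rng.gauss(0, 1) for _ in range(M)] for _ in range(N)]
    B = [[rng.gauss(0, 1) for _ in range(M)] for _ in range(N)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mu = sum(fA + fB) / (2 * N)
    var = sum((y - mu) ** 2 for y in fA + fB) / (2 * N - 1)
    S, ST = [], []
    for i in range(M):
        # A with its i-th column replaced by B's i-th column
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [f(x) for x in ABi]
        S.append(sum(fb * (fab - fa)
                     for fb, fab, fa in zip(fB, fABi, fA)) / N / var)
        ST.append(0.5 * sum((fa - fab) ** 2
                            for fa, fab in zip(fA, fABi)) / N / var)
    return S, ST

# Check on Y = x1 + 2*x2 with unit-variance inputs: analytic S = [0.2, 0.8]
S, ST = sobol_indices(lambda x: x[0] + 2 * x[1], M=2)
```

For an additive model like this one the main and total indices coincide; a gap between ST_i and S_i would flag interaction effects, consistent with the sum-to-one check described above.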

Two different conditions (at χ=1 and χ=0.1) were explored, though the results of only the more relevant case (χ=1) are presented here in Table 3 for brevity, where large values of sensitivity indices are highlighted. Note that apart from the suggested SRQ of maximum stress (σmax), we also looked at the deflection at the maximum stress location (w@σmax) and the maximum deflection (or normal displacement, wmax) in the simulation. Importance is given to both stress and deflection related SRQs in the current V&V study since the main quantity of interest for final failure prediction (i.e., σmax) is never measured directly during the experiments and must be inferred from other properties such as deflection. Based on the results shown in Table 3, it is evident that the maximum stress is most sensitive to uncertainties in tank thickness and radius values. While the deflection SRQs are sensitive to tank thickness and radius, they are most sensitive to uncertainties in Young's modulus of the material. The results from all the samples are presented pictorially in Figs. 4 and 5, where the maximum stress and maximum deflection, respectively, are plotted against different input values. It can be seen from Figs. 4(a), 4(b), 5(a), and 5(b) that both SRQs are strongly correlated with tank radius and thickness. Figure 5(c) shows that the deflection SRQ is most strongly correlated with the Young's modulus of the material, whereas the maximum stress has zero correlation with the Young's modulus as seen from Fig. 4(c).

Local SA.

A simple local SA is performed using one-sided finite differences. Here, the eight input variables are perturbed one at a time by 0.01% of their initial values at maximum loading conditions. Thus, this analysis requires a total of only nine runs, which can be performed using the fine mesh (i.e., m=3). As the inputs have a wide range of magnitudes, the gradients are formed in a dimensionless manner. For an input x and the resulting SRQ, Y(x), the gradient about the desired initial condition, x0, is calculated simply as

(2)  dY/dx |_{x₀} ≈ [(Y(x₀ + δx) − Y(x₀)) / Y(x₀)] / (δx / x₀)

The results from this local SA are shown in Table 4 and largely support the conclusions derived from the earlier Sobol’ indices computations, i.e., the examined SRQs are most sensitive to the wall thickness and tank radius, with the deflection SRQs also being sensitive to the Young's modulus. Based only on the finite-difference analysis, arguments could be made that variations in the liquid height and tank length are also moderately important, especially for the deflection SRQs.
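A minimal sketch of the dimensionless one-sided difference of Eq. (2); the toy model and baseline point below are hypothetical stand-ins for the tank model:

```python
def normalized_gradient(model, x0, i, rel_step=1e-4):
    """One-sided dimensionless gradient of Eq. (2): perturb input i by a
    relative step (0.01%, i.e., rel_step = 1e-4, as in the text) and
    normalize both the output change and the step by their baselines."""
    x = list(x0)
    dx = rel_step * x0[i]
    x[i] = x0[i] + dx
    y0 = model(x0)
    return ((model(x) - y0) / y0) / (dx / x0[i])
```

For a power-law model such as Y = x₁²x₂, the dimensionless gradient recovers the exponent (2 for x₁, 1 for x₂). Applying this to each of the eight inputs in turn requires one baseline run plus eight perturbed runs, i.e., the nine total runs mentioned above.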

Numerical Uncertainty: Grid Effects.

To estimate the numerical uncertainty, we examined three cases: two cases at H = Hmax = 55 in. with (a) χ=1 and (b) χ=0.1, and one pressure-loading-only case (i.e., H = 0 in.). For each of the three cases, simulations were run on all four grid sizes (i.e., m=1,2,3,4). Since the exact solution is unknown, the grid triplets (1,2,3) and (2,3,4) (where 1 is the coarsest and 4 is the finest grid) were used to obtain observed orders of accuracy for each of the two SRQs (maximum stress and maximum displacement). The finer grid triplet (2,3,4) always produced negative observed orders. For the coarser grid triplet (1,2,3), the observed order of accuracy of the maximum displacement was approximately 1.5, while the observed order for the maximum stress was near zero. For a real finite element method (FEM) solution, this erratic behavior of the observed orders could be attributed to issues with systematic refinement of the finest grid or to iterative nonconvergence. For the current proxy FEM model, we believe that this erratic behavior is the result of the tank radius transformation (discussed in Eq. (1)), which may not be well-posed under systematic refinement.

Since we were limited to the four meshes identified in the problem statement, we chose to simply estimate the numerical error using a conservative factor of safety of three. Furthermore, we decided to ignore the finest grid results and estimated the exact solution (S̄) using the fine (m=3) and medium (m=2) grids as follows:

(3)  S̄ = S₃ + (S₃ − S₂) / (r^p − 1)

where S represents the simulation solution, r is the refinement factor between grids (r = 2 in this study), and p is the observed order of accuracy limited to the range 0.5 ≤ p ≤ 1, as recommended in Ref. [5] (note that p = 1 is the stated formal order of accuracy of the FEM code for this problem). The numerical uncertainty can then be estimated for the solution on a given grid level, Sm, using a factor of safety of Fs = 3, as follows:

(4)  Unum(%) = Fs |(S̄ − Sm) / S̄| × 100
Note that this procedure is similar to the GCI of Ref. [27], but slightly modified to explicitly include the estimated exact solution, which allows the numerical uncertainty to be estimated on each of the coarser mesh levels. The final numerical uncertainties for H = 55 in. and χ=1 are given in Table 1. A crucial observation was that the maximum stress always decreased monotonically with mesh refinement, while the maximum displacement always increased monotonically with grid refinement (numerical values not shown here for brevity). Thus, one-sided numerical uncertainty intervals can be justifiably used about the numerical solution: [S − Unum, S] for maximum stress and [S, S + Unum] for maximum displacement. Here, S is the simulation result (deterministic value, CDF, or p-box) and Unum is the estimated numerical uncertainty. Based upon the uncertainty levels found in Table 1, we propose using the medium mesh (m=2) for the uncertainty propagation at nominal conditions. This requires using an estimated Unum of (−)24.1% for the maximum stress values and (+)5.5% when the maximum displacement SRQ is considered.
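Equations (3) and (4) can be sketched directly; the grid solutions below are made-up numbers chosen only to exercise the formulas, not results from the challenge problem:

```python
def estimated_exact(S2, S3, r=2.0, p=1.0):
    """Eq. (3): Richardson-type estimate of the exact solution from the
    medium (S2) and fine (S3) grid results; p is limited to the range
    0.5 <= p <= 1 per the text."""
    p = min(max(p, 0.5), 1.0)
    return S3 + (S3 - S2) / (r ** p - 1.0)

def u_num_percent(S_bar, Sm, Fs=3.0):
    """Eq. (4): percent numerical uncertainty on grid level m, with the
    conservative factor of safety Fs = 3 used in this study."""
    return Fs * abs((S_bar - Sm) / S_bar) * 100.0

# Illustrative values: medium-grid solution 110, fine-grid solution 105.
S_bar = estimated_exact(S2=110.0, S3=105.0)   # extrapolated estimate: 100
U2 = u_num_percent(S_bar, 110.0)              # uncertainty on the medium grid
```

With r = 2 and p = 1 the extrapolation simply continues the medium-to-fine change one more refinement step, and the factor of safety inflates the resulting error estimate threefold.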

Uncertainty Characterization
Description and Characterization of Input Data.

All input parameters can be divided into the categories of model input, system excitation input (also called operating conditions), and numerical input parameters, as shown in Table 5. Note that the liquid specific weight, γ, can be determined directly from a correlation function dependent on the liquid composition, χ (with a relatively small error of ±2%). Thus, there is no need to discuss both χ and γ during this analysis.

Proper characterization of the input uncertainties is a crucial step in the current PBA framework. The three types of characterization (aleatory, epistemic, and mixed) are assigned here based upon the nature of the property, the information provided in the problem statement, the parametric analysis of the input data, and the SA conducted. An attempt was made to limit the number of quantities with interval-valued characterization because epistemic and mixed uncertainties rapidly increase the number of samples required during the propagation of uncertainties through the model. Thus, if the SA showed that an input quantity has very little effect on the maximum stress or the wall displacement, then the quantity is characterized as a precise probability distribution using a normal distribution function. Although the results of the SA matched well with our physical understanding of the problem, it must be qualified here that these observations were based upon SA performed before any model validation, which can sometimes be misleading.

The input uncertainty characterization is summarized in Table 6. The uncertainties are characterized about the nominal condition, which is defined as the condition at which the ultimate failure prediction is to be made. The nominal condition for the control parameters is described in the problem statement as: P = 73.5 psig, H = 50 in., and χ=1. For the material and geometry model input parameters, the legacy values are used as the nominal values. The liquid composition, χ, and the liquid height, H, are characterized as interval-valued uncertain parameters (i.e., epistemic) with precise lower and upper bounds. Note that the upper bound of χ=1 is enforced here. Also, the liquid height is input into the code using a user-defined normalized height parameter, ψ = H/(2R₁), where R₁ is described in Eq. (1). The use of ψ to define the liquid height enforces the strict mathematical constraint Hmax ≤ 2R₁ during the sampling process and avoids an otherwise fatal segmentation fault within the arccos((H − R₁)/R₁) calculation in the code. The lower and upper bounds of H are set based upon tank leveling concerns as ΔH = ±1 in., while those for χ were set as Δχ = ±0.05 based upon the information in the problem statement.
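The role of the normalized height ψ can be seen in a few lines; the function below is our illustration of the constraint, not the proxy code itself, and the radius value is a placeholder:

```python
from math import acos

def liquid_height(psi, R1):
    """Map a normalized height psi in [0, 1] to H = psi * 2 * R1. Sampling
    psi instead of H guarantees H <= 2*R1, which keeps the argument of
    arccos((H - R1)/R1) inside [-1, 1]; sampling H directly can violate
    the constraint and crash the calculation."""
    H = psi * 2.0 * R1
    theta = acos((H - R1) / R1)  # raises ValueError only if H > 2*R1
    return H, theta
```

At ψ = 1 the tank is full (H = 2R₁, arccos argument exactly +1), and at ψ = 0 it is empty (argument −1), so the whole sampling range is admissible by construction.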

The gauge pressure, P, the Poisson's ratio, ν, and the tank length, L, have been characterized as aleatory uncertainties using normal (or Gaussian) probability distributions with known mean (μ) and SD. The mean and SD (or, equivalently, the COV defined as COV = SD/μ) have been selected based upon the available experimental data. From Table 6, it can be seen that the COV is selected as 2.5% for P, 4% for ν, and 1% for L. Given the small quantity of experimental data available for these input properties, we selected the mean as the legacy value (rather than the mean of the experimental data) and the COV as 3.5 times the COV of the experimental data so as to encompass 95% of the available data points.

The mixed uncertainty characterization is employed for the following three model input parameters: Young's modulus, E, tank surface thickness, T, and tank radius, R. Here, the mean is represented as an uncertain interval-valued parameter with known upper and lower bounds; however, for each mean value, a normal probability distribution is assumed with a selected SD. The upper and lower bounds for the mean and the SD for the probabilistic distribution are selected after observing the distribution of the (limited) experimental data provided.
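The structure of a mixed uncertainty can be sketched as an interval-valued mean feeding a family of normal distributions; the numerical values below are illustrative placeholders, not the characterization in Table 6:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed uncertainty for a parameter such as Young's modulus E:
# the mean is only known to lie in an interval (epistemic), while each
# realized mean carries a normal distribution (aleatory). Values are
# stand-ins for illustration only.
mean_lo, mean_hi, sd = 2.9e7, 3.1e7, 2.0e5

outer_means = rng.uniform(mean_lo, mean_hi, 50)        # epistemic outer loop
inner = [rng.normal(m, sd, 200) for m in outer_means]  # aleatory inner loop
```

Each outer-loop draw fixes one admissible distribution; the ensemble of 50 such distributions is what later produces a p-box rather than a single CDF.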

Description of SRQs.

For a sampled point in the eight-dimensional input space, we can obtain a solution of the mathematical model. While the failure prediction is to be made based upon the von Mises stress response from the computational model, the only experimentally measured SRQ is the normal (radial) displacement of the tank wall under various loading conditions. Data set #5 (see Refs. [1,2] for details on the various data sets) provides measured displacements for tanks 1 and 2 for three pressure-only loading conditions at four locations on each tank. Measurements on each tank were repeated in a second experiment, so we have two independent sets of measurements. Data set #6 contains measured displacements for tanks 3–6 for various combinations of pressure, liquid height, and liquid composition at 20 locations on each tank. Measurements on each tank were repeated three times, so three independent sets of measurements are available for each tank. One could argue that these data could also be used for computing a validation metric or for calibration of the input parameters. However, we decided not to use these data for either purpose since the failure prediction is to be based upon the stress SRQ and not the displacements. Instead, we chose to use the experimental realizations of the displacements at the provided locations to estimate the von Mises stress using u-pooling in order to estimate the model form uncertainty.

The given mathematical model provides a local value of the von Mises stress, σ, over the entire tank surface, which can be used to find the maximum value of stress, σmax, under a given loading condition. The failure criterion in this problem is the requirement that the probability of σmax exceeding the yield stress, σY, be less than 10⁻³. This criterion can be stated as

(5)  P(σmax > σY) < 10⁻³
A key issue in interpreting the failure criterion is the characterization of the yield stress. The yield stress varies from unit to unit during manufacturing, depends upon operational history and environmental conditions, and also varies locally throughout the volume of a solid. However, experimental measurements of σY are available at only ten locations, and only on tank 0. This lack of knowledge about the yield stress, which is a key property for determining failure, is a major challenge, and its proper treatment is addressed during the final prediction. Note that the uncertainty in the von Mises yield criterion itself should not be significant and is addressed through the uncertainty in the yield stress.

Uncertainty Propagation.

Input uncertainties are propagated through the model using segregated propagation of uncertainties as implemented in the dakota toolkit [25]. Here, the epistemic uncertainties are sampled in the outer loop (M samples) and the aleatory uncertainties are sampled in the inner loop (N samples). This results in M × N total samples (and thus total simulations) within the parametric space of the eight input uncertainties. Standard LHS is used at all steps with the characterization of input uncertainties described earlier and in Table 6. For the final uncertainty propagation at nominal conditions and for the u-pooling study (discussed in Sec. 4.6), the number of samples was selected as M=50 for the epistemic variables and N=200 for the aleatory variables. The results of the uncertainty propagation performed at nominal conditions are shown in Fig. 6, where the left figure shows all 50 CDFs for the maximum stress. The outer bounds of the ensemble of CDFs form a p-box, which is shown in the right figure. Note that no uncertainty structure is assumed within this p-box.
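The nested-loop structure of segregated propagation can be sketched with plain Monte Carlo in place of dakota's LHS (a simplification); the one-interval, one-distribution toy model below is our illustration, not the eight-input tank model:

```python
import numpy as np

def segregated_propagation(model, epistemic_bounds, aleatory_dists,
                           M=50, N=200, rng=None):
    """Nested (segregated) sampling: interval-valued inputs are sampled in
    the outer loop, probabilistic inputs in the inner loop, yielding M
    empirical CDFs of N points each. Plain random sampling stands in for
    the LHS used in the study."""
    rng = np.random.default_rng(rng)
    cdfs = []
    for _ in range(M):
        e = [rng.uniform(lo, hi) for lo, hi in epistemic_bounds]  # one epistemic realization
        a = np.column_stack([rng.normal(mu, sd, N) for mu, sd in aleatory_dists])
        cdfs.append(np.sort(model(e, a)))  # sorted samples define an empirical CDF
    cdfs = np.array(cdfs)
    # p-box: pointwise envelope of the ensemble of empirical CDFs
    return cdfs.min(axis=0), cdfs.max(axis=0)

# Toy model: output = epistemic offset + aleatory load.
left, right = segregated_propagation(
    lambda e, a: e[0] + a[:, 0],
    epistemic_bounds=[(0.0, 1.0)], aleatory_dists=[(10.0, 0.5)],
    M=20, N=500, rng=0)
```

The gap between the two returned curves is the p-box width; it reflects only the epistemic interval plus sampling noise, with no uncertainty structure assumed inside, as noted above.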

Model Form Uncertainty
Obtaining Stress Data.

Tanks 3–6 were field tested, and three sets of experimental results are available for each tank at different loading conditions. For each of the resulting 12 experiments, displacement measurements are available at the 20 locations identified in Fig. 7. As mentioned earlier, the experimental data are only available for displacements, while a prediction is to be made for the von Mises stress. To address this concern, we use a surrogate model developed from the simulations in order to correlate the measured displacements with the stress components at the provided locations. Given the complexity of the problem, there is no direct correlation between stress and displacement. However, if we assume that the system response can be represented by a single mode shape (i.e., the functional form of the solution is known but the amplitude is unknown), then an exact correlation between the two quantities can be found. Indeed, we find that fitting the displacements from the simulations to the stress components results in a nearly exact fit between the displacements and the von Mises stress. The axial stress (σx), the tangential stress (σϕ), and the cross stress (σxϕ) terms can be combined to obtain the “surrogate” von Mises stress (σ) at each of these locations within the domain. This approach is somewhat similar to a proper orthogonal decomposition analysis of the most important “mode shapes.”

In order to create the surrogate model that converts displacements into stress, we find the coefficients that correlate each of the stress components at a few locations (numbers 1, 5, 8, 16, and 20) on the tank surface with their respective displacements obtained from a parametric run over the operating condition space (P, χ, and H). A total of 125 (= 5 × 5 × 5) runs were made on the fine mesh (m=3) to obtain the coefficients of the displacement–stress surrogate model.
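Under the single-mode-shape assumption, each stress component at a location is proportional to the displacement there, so the surrogate amounts to a least-squares fit of one coefficient per component per location. A schematic version with synthetic data (the proportionality constant, noise level, and sample values are stand-ins, not the challenge data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in parametric data: 125 runs' worth of displacements at one location
# and the corresponding axial stress, assumed proportional plus small noise.
w = rng.uniform(0.01, 0.1, 125)                    # displacements (in.)
true_c = 5.0e5                                     # hypothetical coefficient
sigma_x = true_c * w + rng.normal(0.0, 1.0, 125)   # axial stress (psi)

# Least-squares fit recovers the displacement-to-stress coefficient.
c, = np.linalg.lstsq(w[:, None], sigma_x, rcond=None)[0]

def von_mises(sx, sp, sxp):
    """Plane-stress von Mises stress from the axial (sx), tangential (sp),
    and cross (sxp) components."""
    return np.sqrt(sx ** 2 - sx * sp + sp ** 2 + 3.0 * sxp ** 2)

# Convert a "measured" displacement to a surrogate von Mises stress
# (tangential and cross components set to zero purely for illustration).
sigma_est = von_mises(c * 0.05, 0.0, 0.0)
```

In the actual study one such coefficient is fitted per stress component and location, and the three fitted components are combined through the von Mises formula rather than a single component as here.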

u-Pooling.

The goal here is to employ the MAVM to obtain the model form uncertainty in the predicted stress, which requires multiple experimental data points for each set of conditions. However, for each set of experimental conditions, only one data point for stress is available at each of several locations on the tank. The method of u-pooling [22] is used here to transfer the five stress values from the five locations (numbers 16–20) to one location (number 16) so that the MAVM can be employed using this u-pooled “experimental” stress distribution. The key assumption underlying the use of u-pooling is that the effects of the input uncertainties on these five different stress locations are correlated. Essentially, the u-pooling process transforms the surrogate stress data to probability space by performing uncertainty propagation (while treating the interval uncertainties as uniform probabilities) at the experimental operating conditions. Next, the probability values (u values) obtained from the different experiments are all pooled to one location at nominal conditions. For example, we used case52 and case53 (see Table 7) to pool the data to the H = 51 in. condition as follows:

  • For the conditions of case52 (P = 63.2 psig, χ = 0.7, and H = 51 in.), 10,000 samples (M = 50 and N = 200) were selected over the input parameter space and nondeterministic simulations were performed. The CDFs for the five von Mises stress values, called s₁, s₂, s₃, s₄, and s₅, at the five extreme locations, numbered 16–20, respectively, are probed to find the corresponding probabilities (u values) for the five stress values obtained from the surrogate model. Note that the surrogate stress data were obtained earlier using the displacement data from this specific experiment.

  • Similarly, 10,000 samples were run for case53 (P = 64.6 psig, χ = 0.4, and H = 54 in.), and after probing the CDFs, five u values are obtained for the five surrogate stress data of this specific experiment.

  • Another nondeterministic run of 10,000 samples is performed for the operating conditions to which we want to pool the data (i.e., P = 73.5 psig, χ = 1.0, and H = 51 in.). The ten u values (five from each of the two experiments) are then used to probe the CDF of the stress s₁ to obtain ten stress values, which we refer to as the “experimental” stress data.

The experimental stress data can now be used to estimate the area validation metrics at H=51 in. For this case, the ten surrogate stress data points, the corresponding u values, and the experimental stress data are tabulated in Table 8. Figure 8 shows the process pictorially, where in (a)–(e), the probability levels are collected from each of the five stress values at the five locations (shown for case52 only), while in (f), the ten cumulative probabilities from both cases are used to probe for the ten experimental stress values.
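The probe-and-invert steps above reduce to empirical CDF evaluations; a sketch with stand-in Gaussian simulation ensembles and illustrative stress values (none of which are the challenge data):

```python
import numpy as np

def ecdf_probe(samples, value):
    """u value: the empirical CDF of a simulation ensemble evaluated at a
    surrogate stress value."""
    s = np.sort(samples)
    return np.searchsorted(s, value, side="right") / len(s)

def ecdf_invert(samples, u):
    """Inverse empirical CDF: the stress at cumulative probability u in the
    target-condition ensemble."""
    s = np.sort(samples)
    return s[min(int(u * len(s)), len(s) - 1)]

rng = np.random.default_rng(1)
sim_case = rng.normal(100.0, 10.0, 10_000)    # stand-in ensemble at the experiment's conditions
sim_target = rng.normal(120.0, 12.0, 10_000)  # stand-in ensemble at the pooling conditions

# Probe the experiment-condition CDFs at illustrative surrogate stresses,
# then pool by inverting the target-condition CDF at those u values.
u_values = [ecdf_probe(sim_case, s) for s in (95.0, 102.0, 110.0)]
pooled = [ecdf_invert(sim_target, u) for u in u_values]  # "experimental" stress data
```

Because both operations are monotone, the ordering of the surrogate stresses is preserved in the pooled data; only their scale and spread are mapped onto the target condition.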

MAVM.

Area validation metrics employ the area between CDFs as the basis for a metric quantifying the disagreement between the simulation and experimental outcomes [5,22]. The metric d for the area validation metric is defined as

(6)  d(F, Sₙ) = ∫ |F(Y) − Sₙ(Y)| dY
where F(Y) is the p-box from the simulations, Sₙ(Y) is the empirical CDF from the experiments, and Y is the SRQ of interest. The MAVM employed here [24] accounts for the regions in the cumulative probability space where the experimental values are larger than the simulation values (d⁺) and where they are smaller than the simulation values (d⁻) (see Refs. [23,24] for a complete discussion of the MAVM evaluation process). Once these metrics are computed, the model form uncertainty is constructed as the following interval around the simulation p-box:

(7)  [F(Y) + ((1 − Fs)/2) d⁺ − ((1 + Fs)/2) d⁻,  F(Y) + ((1 + Fs)/2) d⁺ − ((1 − Fs)/2) d⁻]
where Fs is a factor of safety based on the number of experimental samples available. Fs is determined here as

(8)  Fs(k) = F₁ + 1.2 (F₀ − F₁) / k^(1/3)
where k is the number of available experimental samples, F1=1.25, and F0=4.0. For additional conservativeness, the “average area” definition given by Voyles and Roy [23,24] is used. Note that another formulation for Fs can be found in Ref. [24] based on confidence intervals.
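A schematic numerical version of Eqs. (6)–(8) follows; representing the p-box bounds by sample ensembles and using a simple left Riemann sum are our simplifications, and the test data are synthetic (the study itself uses the "average area" MAVM definition of Refs. [23,24]):

```python
import numpy as np

def fs_mavm(k, F1=1.25, F0=4.0):
    """Eq. (8): sample-size-dependent factor of safety for the MAVM; it
    decays toward F1 as the number of experimental samples k grows."""
    return F1 + 1.2 * (F0 - F1) / k ** (1.0 / 3.0)

def mavm(exp_data, pbox_left, pbox_right, grid):
    """d+ and d- as the areas (in SRQ units) by which the experimental
    empirical CDF falls outside the simulation p-box, integrated on a
    user-supplied grid."""
    def ecdf(samples, x):
        return np.searchsorted(np.sort(samples), x, side="right") / len(samples)
    Sn = ecdf(np.asarray(exp_data), grid)
    FL = ecdf(np.asarray(pbox_left), grid)    # left bound: higher CDF values
    FR = ecdf(np.asarray(pbox_right), grid)   # right bound: lower CDF values
    dx = np.diff(grid, prepend=grid[0])
    d_plus = np.sum(np.maximum(FR - Sn, 0.0) * dx)   # experiments to the right of the p-box
    d_minus = np.sum(np.maximum(Sn - FL, 0.0) * dx)  # experiments to the left of the p-box
    return d_plus, d_minus
```

When the experimental CDF is shifted entirely to the right of the p-box, d⁺ recovers the shift distance and d⁻ vanishes, matching the one-sided pattern seen in Eq. (9) below where d⁺ dominates d⁻.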

After employing u-pooling, the new experimental CDF is compared with the simulation p-box (from Fig. 6), as shown in Fig. 9. Using the u-pooled case at H = 51 in., the MAVM metrics d⁺ and d⁻ employing the average area are found as

(9)  d⁺ = 2210 psi  and  d⁻ = 20.7 psi
Here, the factor of safety for ten available experimental samples was found from Eq. (8) to be Fs = 2.72. Thus, from Eq. (7), the final model form uncertainty is estimated to be
(10)  [F(s₁) − 1878, F(s₁) + 4067] psi

where F(s₁) represents the p-box in this analysis. A similar method can be employed to obtain the MAVMs at different liquid heights, which can then be interpolated to obtain the model form uncertainty at any liquid height. However, we use the MAVM from Eq. (10) (i.e., for H = 51 in.) as the final model form uncertainty for the total uncertainty estimation, as an approximation to that at the nominal condition (H = 50 in.).

Total Prediction Uncertainty.

The total prediction uncertainty is determined by appending the numerical uncertainty estimated in Sec. 4.3 and the model form uncertainty estimated in Sec. 4.6 to the p-box obtained from the segregated propagation performed at nominal conditions in Sec. 4.5. The resulting total prediction p-box is shown in Fig. 10. Note that the numerical uncertainty is appended only to the left side of the p-box since the maximum stress was found to decrease monotonically with mesh refinement. The numerical uncertainty remains the largest source of uncertainty since the segregated propagation was performed on the medium grid (m=2). The model form uncertainty is also a large contributor. Although the increase in predictive uncertainty due to the numerical and model form uncertainties is large compared to the contribution of the input uncertainties, it is not claimed that the total uncertainty after appending these contributions is a certain bound.

To evaluate the maximum probability of failure at the nominal conditions, the rightmost CDF of the total prediction p-box is fitted with a probability distribution. After comparing several distributions, it was found that the three-parameter log-normal distribution with parameters μ = 9.8296 and SD = 0.13876 and threshold parameter λ = 12,404 provides the best fit based upon the Kolmogorov–Smirnov goodness-of-fit statistic. Similarly, another three-parameter log-normal distribution, with parameters μ = 10.631, SD = 0.03996, and λ = 26,315, is fitted to the leftmost CDF of the total prediction p-box for determining the minimum probability of failure. The fitted distributions for the prediction p-box are shown in Fig. 11 using solid curves.

The treatment of the yield stress is more challenging because the small sample size results in a poor fit for most probability distributions. This is a case of insufficient knowledge about a variable that is known to be random, which is best characterized as a mixed uncertainty. A normal distribution is assumed for the yield stress with the SD calculated from the provided sample as SD = 1755.857, while the mean value is an uncertain interval. The lower and upper bounds for the mean of the normal distribution are selected as the sample mean and the legacy value for the yield stress, respectively, resulting in the interval μ = [44,203.8, 45,000]. The resulting p-box for the yield stress is plotted as the probability of exceedance (also called the complementary CDF (CCDF) or survival function) in Fig. 11 using dotted curves. The CCDF is simply defined as CCDF(σ) = 1 − CDF(σ).

In this case, the maximum probability of failure is determined by the complement of the cumulative probability level at which the rightmost prediction CDF crosses the leftmost CCDF. This point of intersection is shown in Fig. 11 using a solid circle (right). Similarly, the minimum probability of failure is determined by the complement of the cumulative probability level at which the leftmost prediction CDF crosses the rightmost CCDF. This value is found to be zero (within machine precision). Our final conclusions regarding the failure prediction are as follows:

  • The probability of failure is found as: Pfail=[0, 0.0034].

  • Based upon the available data on yield stress and defined failure criterion, we conclude that the tanks are not safe.
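Using the fitted parameters reported above, the intersection giving the maximum probability of failure can be reproduced with a simple bisection; this is a sketch of the calculation as we understand it, and the paper's own evaluation may differ in detail:

```python
from math import erf, log, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def pred_cdf(s, mu=9.8296, sd=0.13876, lam=12_404.0):
    """Rightmost prediction CDF: the three-parameter log-normal fit
    (threshold lam) reported in the text."""
    return phi((log(s - lam) - mu) / sd)

def yield_ccdf(s, mean=44_203.8, sd=1_755.857):
    """Leftmost yield-stress CCDF: normal with the sample SD and the
    lower (sample-mean) bound of the interval-valued mean."""
    return 1.0 - phi((s - mean) / sd)

# Bisection for the crossing point: pred_cdf is increasing and yield_ccdf
# is decreasing, so there is a single intersection.
lo, hi = 20_000.0, 44_000.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if pred_cdf(mid) < yield_ccdf(mid):
        lo = mid
    else:
        hi = mid

# Maximum probability of failure: complement of the crossing level.
p_fail_max = 1.0 - pred_cdf(0.5 * (lo + hi))
```

With these parameters the crossing occurs near 39,450 psi at a cumulative level of roughly 0.9966, so the complement reproduces the reported bound of approximately 0.0034.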

The final conclusion depends strongly upon the proper characterization of the yield stress. However, detailed knowledge of the yield stress from experimental measurements has not been provided in the problem statement. We recommend that accurate interval bounds, as well as a probability distribution, be established for σY so that it can be accurately characterized as a mixed uncertainty for comparison with the maximum stress prediction.

Computational Cost.

The proxy code for the FEM simulation provided with the problem statement simply performs function evaluations based on the Timoshenko and Woinowsky-Krieger series solution for cylindrical shells [4]. Though these function evaluations are inexpensive (almost instantaneous), the corresponding FEM simulation needs several CPU hours depending on the grid level, as per the data provided in the problem statement. The number of runs performed and the total cost of computation in terms of CPU hours are shown in Table 9.

The largest number of runs (30,000) was performed during the u-pooling step, where three nondeterministic runs of 10,000 simulations each were required for the u-pooling to H = 51 in. Given this cost, the u-pooling study was performed using the coarsest mesh (m=1). The MAVM calculations used the u-pooling runs, and hence no additional runs were required for this part. A bias error arises due to the use of different grid sizes for the u-pooling and for the formation of the simulation p-box. However, we estimate that the bias error in the MAVM due to the use of the coarse mesh is small, and its correction would further reduce the total uncertainty.

One thousand runs each were made at two different operating conditions (χ=1 and χ=0.1) on the coarse mesh for the global SA to determine the Sobol' indices. For the local SA using the finite-difference method, only nine runs were required for each of the two conditions, so the medium mesh (m=2) was used for this purpose. The parametric study mentioned in Sec. 4.1 was performed with 1320 runs on the finest mesh (m=4) before the cost of computation was known. In hindsight, we recommend replacing this study with the parametric runs over the operating condition space that were done for the surrogate model construction in Sec. 4.6, for a total of 125 runs on the fine mesh (m=3).

The solution verification to determine the numerical uncertainty was performed using three runs (for the three conditions) on each of the four mesh levels. Finally, the uncertainty propagation at the nominal conditions for the failure prediction was performed with 10,000 runs using the medium mesh (m=2). The total cost of computation was found to be approximately 1.6 × 10⁶ CPU hours, which could be expensive for a practical application. Note that the largest cost in our analysis arises from the uncertainty propagation step (i.e., 1.05 × 10⁶ CPU hours for 10,000 simulations). Given the large variation in computational cost with mesh size, we recommend that more mesh levels be examined (such as intermediate mesh sizes between the coarse and medium meshes) to determine whether satisfactory results could be obtained at lower computational cost. Also, more information from experimental measurements could significantly reduce the interval uncertainty in the input variables, reducing the number of simulations required for an accurate estimate of the total prediction uncertainty.

In this work, we employed the segregated propagation of probabilistic and interval-characterized uncertainties within the PBA framework to address the Sandia challenge problem of tank failure prediction. A systematic analysis was performed involving a parametric study to obtain a feel for the problem, global and local sensitivity analyses, numerical uncertainty estimation using Richardson extrapolation, and model form uncertainty estimation using the MAVM. Input uncertainties were characterized as aleatory, epistemic, or mixed after careful observation of the provided data, qualitative observations from the parametric study, and the sensitivity analyses. In order to generate the MAVM, a surrogate model was used to convert the experimental displacement data to surrogate stress data, and u-pooling was then used to pool the stress data from various conditions into experimental stress data points at one condition. The total prediction uncertainty was formed by appending the numerical uncertainty and the model form uncertainty to the p-box obtained from the segregated propagation of the input uncertainties.

The final p-box for maximum stress SRQ was used to determine that the probability of failure at nominal conditions is Pfail=[0, 0.0034]. The tanks are determined to be not safe. However, this conclusion is strongly dependent on the uncertainty in the yield stress. Specifically, not enough data has been provided to characterize a key element of this failure analysis, i.e., yield stress.

The lack of experimental data in quantity (e.g., number of data points), quality (e.g., no tolerances or distributions for the legacy data), and general information (e.g., age and environmental conditions of the tank population) was a major challenge in estimating the model form uncertainty. Also, the bias built into the proxy code for the tank radius (possibly used to mimic the true solution) created problems at various stages of this process. Another challenge was that the experimental data, especially the geometry data, were all from the same tank, and thus there were no estimates of tank-to-tank uncertainties.

The uncertainties just mentioned contributed large epistemic and aleatory uncertainty to our prediction of tank safety. Because of the large interval value of Pfail, one could conclude that the tanks may be safe; but that is not the issue that a decision maker must deal with. Faced with large uncertainty for a system that does not meet safety or performance requirements, a decision maker will then ask: “What are the major contributors to the predicted uncertainty?” While the present approach does segregate the effects of epistemic model input uncertainty, aleatory model input uncertainty, model form uncertainty, and numerical uncertainty, further information on which aleatory or epistemic uncertainties are important requires an SA. When interval-valued uncertainties and random variables exist together, as in the present analysis, special SA techniques must be used.

The authors would like to thank Kenneth Hu of Sandia National Laboratories for various useful clarifications regarding the problem statement and usage of dakota.

Nomenclature

  • d or w = tank wall displacements, normal to the surface (in.)
  • E = Young's modulus (psi)
  • F(Y) = simulation outcome (value, CDF, or p-box)
  • Fs = factor of safety for numerical uncertainty
  • Fs(k) = factor of safety for modified area validation metric
  • F0, F1 = constants to determine Fs(k)
  • H = liquid height in tank (in.)
  • L = length (in.)
  • m = mesh ID
  • p = observed order of accuracy
  • P = gauge pressure (psig)
  • Pfail = probability of failure
  • r = grid refinement factor
  • R = radius (in.)
  • S̄ = estimated exact solution
  • Sm = solution on mesh m
  • T = wall thickness (in.)
  • Unum = estimated numerical uncertainty
  • x = axial location (in.)
  • γ = liquid specific weight (lb/in.³)
  • ν = Poisson's ratio
  • σ = von Mises stress (psi)
  • σY = yield stress (psi)
  • ϕ = circumferential angle or angular location (rad)
  • χ = liquid composition or mass fraction

References

Hu, K. T., and Orient, G. E., 2016, “The 2014 Sandia V&V Challenge Problem: A Case Study in Simulation, Analysis, and Decision Support,” ASME J. Verif. Validation Uncertainty Quantif., 1(1).
Hu, K. T. , 2014, “ 2014 V&V Challenge: Problem Statement,” Sandia National Laboratories, Albuquerque, NM, SAND Report No. 2013-10486P.
Bernardini, A. , and Tonon, F. , 2010, Bounding Uncertainty in Civil Engineering: Theoretical Background, Springer-Verlag, Berlin.
Timoshenko, S. , and Woinowsky-Krieger, S. , 1987, Theory of Plates and Shells, McGraw-Hill, New York.
Oberkampf, W. L. , and Roy, C. J. , 2010, Verification and Validation in Scientific Computing, Cambridge University Press, Cambridge, MA.
Roy, C. J. , and Oberkampf, W. L. , 2011, “ A Comprehensive Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing,” Comput. Methods Appl. Mech. Eng., 200(25), pp. 2131–2144. [CrossRef]
Beer, M. , Ferson, S. , and Kreinovich, V. , 2013, “ Imprecise Probabilities in Engineering Analyses,” Mech. Syst. Signal Process., 37(1–2), pp. 4–29. [CrossRef]
Deodatis, G. , and Spanos, P. D. , 2011, “ Computational Stochastic Mechanics,” 6th International Conference on Computational Stochastic Mechanics, Island of Rhodes, Greece, June 13–16.
Cullen, A. C. , and Frey, H. C. , 1999, Probabilistic Techniques in Exposure Assessment: A Handbook for Dealing With Variability and Uncertainty in Models and Inputs, Plenum Press, New York.
Veneziano, D. , Agarwal, A. , and Karaca, E. , 2009, “ Decision Making With Epistemic Uncertainty Under Safety Constraints: An Application to Seismic Design,” Probab. Eng. Mech., 24(3), pp. 426–437. [CrossRef]
Roy, C. J. , and Balch, M. S. , 2012, “ A Holistic Approach to Uncertainty Quantification With Application to Supersonic Nozzle Thrust,” Int. J. Uncertainty Quantif., 2(4), pp. 363–381. [CrossRef]
Ghosh, J. K. , Delampady, M. , and Samanta, T. , 2006, An Introduction to Bayesian Analysis: Theory and Methods, Springer, Berlin.
Klir, G. J. , 2006, Uncertainty and Information: Foundations of Generalized Information Theory, Wiley-Interscience, Hoboken, NJ.
Ferson, S. , and Hajagos, J. G. , 2004, “ Arithmetic With Uncertain Numbers: Rigorous and (Often) Best Possible Answers,” Reliab. Eng. Syst. Saf., 85(1–3), pp. 135–152. [CrossRef]
Ferson, S. , and Ginzburg, L. R. , 1996, “ Different Methods are Needed to Propagate Ignorance and Variability,” Reliab. Eng. Syst. Saf., 54(2–3), pp. 133–144. [CrossRef]
Leonard, T. , and Hsu, J. S. J. , 1999, Bayesian Methods: An Analysis for Statisticians and Interdisciplinary Researchers, Cambridge University Press, New York.
van den Bos, A. , 2007, Parameter Estimation for Scientists and Engineers, Wiley-Interscience, Hoboken, NJ.
AIAA, 1998, AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations (G-077-1998e), American Institute of Aeronautics and Astronautics, Reston, VA.
ASME, 2006, Guide for Verification and Validation in Computational Solid Mechanics, ASME, New York, Standard V&V 10-2006.
Kennedy, M. C. , and O'Hagan, A. , 2001, “ Bayesian Calibration of Computer Models,” J. R. Stat. Soc., Ser. B, 63(3), pp. 425–464. [CrossRef]
Oberkampf, W. L. , and Ferson, S. , 2007, “ Model Validation Under Both Aleatory and Epistemic Uncertainty,” NATO/RTO Symposium on Computational Uncertainty in Military Vehicle Design, Athens, Greece, Dec. 3–6, Paper No. AVT-147/RSY-022.
Ferson, S. , Oberkampf, W. L. , and Ginzburg, L. , 2008, “ Model Validation and Predictive Capability for the Thermal Challenge Problem,” Comput. Methods Appl. Mech. Eng., 197(29–32), pp. 2408–2430. [CrossRef]
Voyles, I . T. , and Roy, C. J. , 2014, “ Evaluation of Model Validation Techniques in the Presence of Uncertainty,” AIAA Paper No. 2014-0120.
Voyles, I . T. , and Roy, C. J. , 2015, “ Evaluation of Model Validation Techniques in the Presence of Aleatory and Epistemic Input Uncertainties,” AIAA Paper No. 2015-1374.
Adams, B. M. , Bauman, L. E. , Bohnhoff, W. J. , Dalbey, K. R. , Ebeida, M. S. , Eddy, J. P. , Eldred, M. S. , Hough, P. D. , Hu, K. T. , Jakeman, J. D. , Swiler, L. P. , and Vigil, D. M. , 2013, “ DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 5.4 User's Manual,” Sandia National Laboratories, Alburquerque, NM, Sandia Technical Report No. SAND2010-2183.
Saltelli, A. , Tarantola, S. , Campolongo, F. , and Ratto, M. , 2004, Sensitivity Analysis in Practice, Wiley, Ispra, Italy.
Roache, P. J. , 2009, Fundamentals of Verification and Validation, Hermosa Publishers, Socorro, NM.
Copyright © 2016 by ASME
Figures

Fig. 1  Side view (left) and axial view (right) of the tank

Fig. 2  An example of a probability box (p-box) for a parameter x that is a mixture of both aleatory (random) and epistemic (lack-of-knowledge) uncertainty

Fig. 4  Effect of perturbing various input variables on maximum stress: (a) tank radius, (b) tank surface thickness, and (c) Young's modulus. Note that the input distribution for radius is truncated at R = 30 in.

Fig. 5  Effect of perturbing various input variables on maximum normal displacement: (a) tank radius, (b) tank surface thickness, and (c) Young's modulus. Note that the input distribution for radius is truncated at R = 30 in.

Fig. 6  Uncertainty propagation at nominal conditions. M = 50 epistemic samples and N = 200 aleatory samples were used (for a total of 10,000 simulations) on the medium grid (m = 2): (a) all CDFs and (b) p-box
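The nested (double-loop) sampling behind Fig. 6 can be sketched as follows. This is a schematic only: the response function, the input intervals, and the distributions below are placeholders, not the paper's tank model or characterized inputs. Each outer (epistemic) sample fixes one realization of the interval-valued inputs; each inner (aleatory) loop then yields one CDF, and the envelope of all M CDFs forms the p-box.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_stress(E, R, T):
    # Placeholder response; the actual tank stress model is not reproduced here.
    return 1.0e3 * R / (E * T)

M, N = 50, 200  # outer (epistemic) and inner (aleatory) sample counts

cdfs = []
for _ in range(M):
    # Outer loop: fix one realization of the interval-valued (epistemic) input.
    E = rng.uniform(2.5e7, 3.1e7)      # assumed interval for Young's modulus (psi)
    # Inner loop: sample the aleatory inputs and form one CDF of the response.
    R = rng.normal(30.0, 0.1, N)       # assumed radius distribution (in.)
    T = rng.normal(0.25, 0.01, N)      # assumed thickness distribution (in.)
    cdfs.append(np.sort(max_stress(E, R, T)))

samples = np.array(cdfs)               # M x N sorted responses, one CDF per row
p_box_left = samples.min(axis=0)       # left bounding CDF of the p-box
p_box_right = samples.max(axis=0)      # right bounding CDF of the p-box
```

Plotting every row of `samples` against the empirical probability levels 1/N, ..., 1 reproduces panel (a); plotting only `p_box_left` and `p_box_right` reproduces the p-box of panel (b).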

Fig. 7  Locations on the tank surface where the displacement data are measured during field tests. Twenty locations are marked with circles; filled circles are locations where we convert the displacement data to stress data. Location #16 is where all the experimental data are pooled for the MAVM calculation.

Fig. 8  The u-pooling process explained for pooling data to one location at the H = 51 in. liquid height: (a)–(e) probing for probability levels (u values) based upon the surrogate stress data and (f) collecting stress data for the experimental CDF
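The two steps in the Fig. 8 caption can be sketched in a few lines. All numbers below are hypothetical stand-ins (the surrogate samples and measurements are invented for illustration): each measurement is converted to a probability level (u value) on the surrogate CDF at its own condition, and each u value is then mapped through the surrogate CDF at the pooling condition to yield an equivalent experimental stress datum there.

```python
import numpy as np

def u_value(sim_samples, y):
    # Probability level (u value) of observation y on the empirical simulation CDF.
    return np.searchsorted(np.sort(sim_samples), y) / len(sim_samples)

rng = np.random.default_rng(1)
# Hypothetical surrogate stress samples (ksi) at two measured liquid heights
# and at the H = 51 in. pooling condition.
sim_h25 = rng.normal(20.0, 2.0, 1000)
sim_h40 = rng.normal(28.0, 2.0, 1000)
sim_h51 = rng.normal(33.0, 2.0, 1000)

# Steps (a)-(e): probe the u value of each measurement on its own surrogate CDF.
observations = [(21.5, sim_h25), (27.0, sim_h40)]   # (measurement, surrogate)
u_values = [u_value(sim, y) for y, sim in observations]

# Step (f): map each u value through the surrogate CDF at the pooling condition.
pooled_stress = [float(np.quantile(sim_h51, u)) for u in u_values]
```

The `pooled_stress` values play the role of the experimental data points collected at location #16 in Table 8.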

Fig. 9  MAVM calculation using the simulation p-box and the experimental (u-pooled) discrete CDF at the H = 51 in. operating condition

Fig. 10  Total prediction uncertainty

Fig. 11  Prediction p-box for maximum stress (solid curves) and p-box for yield stress (dotted curves): (left) CDFs for maximum stress and CCDFs for yield stress and (right) enlarged view. The relevant point of intersection (solid circle) gives the maximum probability of failure: 1 − 0.9966 = 0.0034.
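The graphical read-off in Fig. 11 can be reconstructed numerically. The distributions below are invented stand-ins for the right edge of the maximum-stress p-box and for the yield-stress distribution (they do not reproduce the paper's curves): the crossing of the stress CDF with the yield CCDF is located on a grid, and the failure-probability bound is one minus the CDF level at that crossing.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stand-ins (ksi) for the bounding maximum-stress CDF and the
# yield-stress distribution.
stress = np.sort(rng.normal(30.0, 3.0, 50000))
strength = np.sort(rng.normal(42.0, 2.0, 50000))

x = np.linspace(20.0, 55.0, 2000)
cdf_stress = np.searchsorted(stress, x) / stress.size            # CDF of max stress
ccdf_yield = 1.0 - np.searchsorted(strength, x) / strength.size  # CCDF of yield

# Locate the crossing of the stress CDF and the yield CCDF; the maximum
# probability of failure is read off as one minus the CDF level there.
i = int(np.argmin(np.abs(cdf_stress - ccdf_yield)))
p_fail_max = 1.0 - cdf_stress[i]
```

With the paper's curves, this read-off gives the reported bound of 1 − 0.9966 = 0.0034.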

Tables

Table 1  Maximum numerical uncertainty (Unum) for different grid sizes
a CPU = central processing unit.

Table 2  User-defined and DAKOTA-sampled values of input parameter distributions for the global SA

Table 3  Main and total Sobol' indices for the global SA (bold numbers indicate significant correlations)

Table 4  Finite-difference-based sensitivities (bold numbers represent significant correlations; shaded cells represent moderately significant correlations)

Table 5  Categorization of input parameters

Table 6  Characterization of input uncertainties

Table 7  Operating conditions for which displacement data are available from data set #6 (only conditions relevant to u-pooling are shown here)

Table 8  u-pooling to get experimental stress data points at H = 51 in.

Table 9  Cost of computation
