
# Reliability Analysis With Model Uncertainty Coupling With Parameter and Experiment Uncertainties: A Case Study of 2014 Verification and Validation Challenge Problem

Zhimin Xi

Mem. ASME
Industrial and Manufacturing Systems Engineering,
University of Michigan–Dearborn,
Dearborn, MI 48128
e-mail: zxi@umich.edu

Ren-Jye Yang

Fellow ASME
Ford Motor Company,
Dearborn, MI 48121
e-mail: ryang@ford.com

Manuscript received February 9, 2015; final manuscript received October 18, 2015; published online December 14, 2015. Guest Editor: Kenneth Hu.

J. Verif. Valid. Uncert 1(1), 011005 (Dec 14, 2015) (11 pages) Paper No: VVUQ-15-1011; doi: 10.1115/1.4031984

## Abstract

A validation strategy with a copula-based bias approximation approach is proposed to address the 2014 Verification and Validation (V&V) challenge problem developed by Sandia National Laboratories. The proposed work further incorporates model uncertainty into reliability analysis. Specific issues are addressed, including: (i) uncertainty modeling of model parameters using the Bayesian approach, (ii) uncertainty quantification (UQ) of model outputs using the eigenvector dimension reduction (EDR) method, (iii) model bias calibration with the U-pooling metric, (iv) model bias approximation using the copula-based approach, and (v) reliability analysis considering the model uncertainty. The proposed work is demonstrated on the challenge problem.


## Introduction

To date, reliability analysis relies significantly on computer simulation models or analytical models to predict performances of interest for a given set of design configurations. The majority of research focuses on the development of various reliability analysis methods so that reliability can be evaluated more accurately and efficiently. It is well known that models are built to approximate real physical systems on the basis of a series of assumptions and simplifications. Hence, model bias, i.e., the portion of the model failing to represent the real system, always exists because no perfect model can represent a real physical system without any error. Ignoring the model bias in reliability analysis or reliability-based design could result in significant design errors by overestimating the system or structure reliability. For the 2014 V&V challenge problem [1], one of the key objectives is to predict the probability of failure (i.e., 1 − reliability) of tanks based on the provided simulation model, where the model bias should be estimated on the basis of limited test data and the proposed bias characterization strategy.

The key objective of model validation is to determine the degree to which the model is an accurate representation of the real world from the perspective of the intended uses of the model [2–4]. For the challenge problem, in particular, the intended use of the model is reliability analysis. Hence, the model may be considered validated if it produces a reasonable reliability (or probability of failure) prediction within an acceptable tolerance compared with the true reliability. Since the true reliability is not known in the challenge problem, validation techniques are mainly used to improve the model accuracy at the intended operating conditions. The reliability prediction can then be validated if the true reliability becomes known in the future.

Traditionally, model validation research proposed revising the model conceptually to improve its credibility. From the model development perspective, the key advantage of revising the model conceptually is that the accuracy of the model could be significantly improved. However, this approach is practically difficult and may not be feasible in reality for three reasons: (i) identifying the root cause of model inaccuracy is complicated, particularly for large-scale engineering systems; (ii) fundamental modification of the model is time consuming, costly, and may not be practical; and (iii) there is no perfect model that can represent the real physical system without any model bias.

The bias correction approach, therefore, has recently gained significant attention [5–8] for quantifying the model bias in the design domain without changing the baseline model. The essential idea is to add the characterized model bias to the baseline simulation model so that the corrected model prediction can be more accurate and robust compared to the baseline model. This bias correction approach is mainly composed of three steps: (i) characterize the model bias at a few training design configurations; (ii) construct a metamodel for the model bias; and (iii) approximate the model bias at the intended uses of the model and add it to the baseline model prediction. Good accuracy improvement of the baseline model using the bias correction approach has been shown in the literature, not only for a single model output but also for model predictions with multiple- or high-dimensional outputs [5–10].

The key research challenge of the bias correction approach is to construct the metamodel of the model bias effectively based on limited training datasets. Regression-based models are the most popular approach because of the well-established research in this field, such as the Gaussian process (GP) regression model [8] and the moving least squares (MLS) method [5]. The GP regression model assumes a multivariate normal distribution for the model bias in the design space, such that the model bias at each specific design configuration follows a univariate normal distribution and the correlations between different design configurations are modeled by an assumed covariance function. The MLS method directly builds four regression models for the first four central moments (i.e., mean, standard deviation (STD), skewness, and kurtosis) of the model bias so that the distribution of the model bias can be approximated at any new design configuration using the Pearson system. However, regression-based models have a few important limitations. First of all, their accuracy and efficiency degrade significantly for problems with many design variables due to the curse of dimensionality. Second, an underlying regression structure must be assumed based on the training datasets, such as the covariance function in the GP regression model and the basis function in the MLS method. Different assumptions of the regression structure affect the approximation accuracy of the model bias, and it is difficult to identify the optimal regression structure, especially when only limited training datasets are available. It was reported that regression-based models could even worsen the model accuracy at the intended uses of the model after incorporating the characterized model bias into the baseline model [9], mainly due to the aforementioned limitations.

In the 2014 V&V challenge problem [1], the provided simulation model needs to be validated using the bias correction approach. Recognizing the limitations of the regression-based models, a copula-based bias characterization approach is proposed for model bias approximation in the design space with associated validation strategies in uncertainty modeling, UQ, model bias calibration, and reliability analysis. The rest of the paper is organized as follows. Section 2 elaborates the proposed validation strategies with copula-based bias characterization approach. Section 3 presents the challenge problem for demonstration. Discussion and conclusions are presented in Secs. 4 and 5, respectively.

## Proposed Validation Strategies With Copula-Based Bias Characterization

Various uncertainties play key roles in model validation; they are classified into five groups in this paper: (i) test uncertainty, (ii) model parameter uncertainty, (iii) model uncertainty, (iv) statistical uncertainty, and (v) algorithm uncertainty. Test uncertainty refers to the uncertainties of the inherent test error and measurement error. Model parameter uncertainty stands for the uncertainties of model parameters, which represent physical uncertainties such as material properties, manufacturing tolerances, and loading conditions in a specific simulation model. Model parameters can be further grouped into two categories: (a) controllable (or design) model parameters and (b) noncontrollable (or random) model parameters. A design model parameter, also called a design variable for simplicity, is changeable at different design configurations. A random model parameter (or random variable for simplicity) is not changeable when the design changes. Model uncertainty represents the uncertainty of the model bias, i.e., the portion of model inadequacy in representing the real physical system, arising from, e.g., discretization of a continuous system in finite-element analysis, approximation of real physical systems by metamodels, and simplification of electrochemical battery models by equivalent circuit models. Statistical uncertainty is the uncertainty modeling error caused by data insufficiency and improper distribution assumptions. Algorithm uncertainty refers to the error of UQ methods applied to different problems.

The objective of this paper is to address the 2014 V&V challenge problem considering the aforementioned uncertainties in a systematic manner. In particular, model uncertainty should be accurately approximated in the whole design space, defined as the allowable design domain of all design variables for an engineering design practice, in which the domain of each design variable is specified by its lower and upper design bounds. The proposed validation strategies are briefly summarized as follows. First of all, uncertainty modeling is performed for the model parameters, including design and random variables. Next, UQ is conducted to quantify the model output uncertainty subject to the parameter uncertainty at different design/operation configurations. Then, model uncertainty is characterized at different design/operation configurations based on the test data, the UQ results, the validation model, and the validation metric. Next, model uncertainty is approximated at the intended design/operation configuration using the copula-based approach. Finally, reliability analysis is conducted at the intended configuration considering the characterized model uncertainty. Technical details of these steps are elaborated in Secs. 2.1 to 2.5.

###### Uncertainty Modeling of Model Parameters.

Uncertainty modeling of model parameters can be conducted by two approaches, i.e., the irreducible uncertainty modeling and reducible uncertainty modeling, depending on the data sufficiency. The irreducible uncertainty is typically characterized using probability density functions (PDFs) with sufficient data. The reducible uncertainty is derived from the lack of information for describing the uncertainty. For example, distribution parameters, e.g., the mean and the variance, are uncertain unless sufficient data are collected. Typically, data sufficiency can be studied from the convergence of the distribution parameters.

The Pearson system [11,12] is proposed for irreducible uncertainty modeling because of its generally better accuracy at high and low probability levels compared to other PDF estimation methods, such as the saddlepoint approximation [13], the maximum entropy principle [14], and the Johnson system [15]. Many popular distributions (e.g., normal, beta, gamma, lognormal, etc.) are simply special cases of the Pearson system. Hence, the proposed approach can reduce the statistical uncertainty by eliminating improper distribution assumptions.

It is very common that only a few tests can be conducted for uncertainty modeling. In that scenario, Bayesian statistics are proposed for reducible uncertainty modeling. Basically, the Bayesian approach updates information about the distribution parameter vector Δ (e.g., mean and STD) using hyperparameters. First, a prior distribution of Δ is assigned with assumed hyperparameters before any observation of Θ (i.e., test data related to model parameters) is taken. Then, the prior distribution of Δ is updated to the posterior distribution with updated hyperparameters as the data of Θ are obtained. With the posterior distribution of the parameter vector Δ, the final marginal distribution of Θ can be determined using Monte Carlo simulation (MCS): first draw random samples from the posterior distribution of Δ, and then draw random samples of Θ given each set of distribution parameters. It is computationally expensive to calculate the posterior distribution of the parameter vector Δ for nonconjugate Bayesian updating. This paper employs the conjugate Bayesian approach [16,17] with closed-form solutions for the posterior distribution of Δ. However, the Laplace approximation [18], entropy-based methods [19], and the Metropolis–Hastings algorithm [20] are readily available for nonconjugate Bayesian updating.
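As a minimal sketch of the conjugate updating described above, the following assumes a normal marginal PDF with known variance and a normal prior on the mean parameter; the data values and the prior mean are hypothetical, not taken from the challenge problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test data for one model parameter (illustrative values only)
data = np.array([0.24, 0.26, 0.25, 0.27, 0.24, 0.25, 0.26, 0.25, 0.24, 0.26])
sigma2 = data.var(ddof=1)          # assumed known variance (sample variance)
mu0, tau0_2 = 0.25, sigma2         # prior mean/variance of the mean parameter

# Conjugate normal-normal update of the mean parameter
n = len(data)
tau_n2 = 1.0 / (1.0 / tau0_2 + n / sigma2)          # posterior variance
mu_n = tau_n2 * (mu0 / tau0_2 + data.sum() / sigma2)  # posterior mean

# Final marginal distribution of the parameter via MCS: first draw the mean
# from its posterior, then draw the parameter given each sampled mean
mu_samples = rng.normal(mu_n, np.sqrt(tau_n2), 100_000)
theta = rng.normal(mu_samples, np.sqrt(sigma2))

print(mu_n, theta.std())
```

With the prior variance set equal to the sample variance, the posterior mean reduces to a simple weighted average of the prior mean and the data.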

###### UQ Using the EDR Method.

The objective is to quantify the uncertainty of the model outputs (e.g., the PDF) subject to various model parameter uncertainties. In many advanced UQ methods, only a few simulations or function evaluations at a set of samples of the input design and random variables are required for UQ if the input design and random variables are statistically independent. For example, the EDR method [21] demands either 2N + 1 or 4N + 1 samples for UQ, where N is the number of input design and random variables. In the polynomial chaos expansion (PCE) method [22–25], the evaluation of the PCE coefficients requires the response values at predefined Gaussian quadrature points [26], the collocation points specified by the Smolyak algorithm [27], or the univariate and bivariate sample points [22]. Hence, the UQ of the system response can be carried out using one of the advanced probability analysis methods with high efficiency and accuracy. The EDR method is employed in this paper and is briefly reviewed in this section.

In general, the statistical moments of system responses can be calculated as

(1)$E[Y^m(X)]=\int_{-\infty}^{+\infty}\cdots\int_{-\infty}^{+\infty}Y^m(x)\cdot f_X(x)\,dx$

In Eq. (1), a major challenge is the multidimensional integration over the entire random input domain. To resolve this difficulty, the EDR method uses an additive decomposition [28] that converts the multidimensional integration in Eq. (1) into multiple one-dimensional integrations. Thus, Eq. (1) can be approximated as

(2)$E[Y^m(X)]\cong E[\bar{Y}^m(X)]=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\bar{Y}^m\cdot f_X(x)\,dx$

where $\bar{Y}=\sum_{j=1}^{N}Y(\mu_1,\ldots,\mu_{j-1},X_j,\mu_{j+1},\ldots,\mu_N)-(N-1)\cdot Y(\mu_1,\ldots,\mu_N)$ and μj is the mean value of Xj. Using a binomial formula, Eq. (2) can be evaluated by executing one-dimensional integration recursively. The uncertainty of system responses can thus be evaluated through multiple one-dimensional numerical integrations. The remaining challenge is how to carry out the one-dimensional integration effectively. To overcome this challenge, the EDR method incorporates three technical components: (i) eigenvector sampling, (ii) one-dimensional response approximations for efficient and accurate numerical integration, and (iii) a stabilized Pearson system for PDF generation. For the sake of completeness, these technical components are briefly reviewed in Secs. 2.2.1 to 2.2.3. Interested readers can refer to Ref. [21] for more technical details.
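The additive decomposition in Eq. (2) can be sketched as follows for independent normal inputs, evaluating each one-dimensional integral with 5-point Gauss–Hermite quadrature; the response function and input statistics are illustrative assumptions, and this is a plain quadrature stand-in for the EDR method's own SMLS integration:

```python
import numpy as np

# Univariate (additive) decomposition: Y(X) is replaced by
# sum_j Y(mu_1,...,X_j,...,mu_N) - (N-1) Y(mu), so moments reduce to
# one-dimensional integrals along each coordinate.

def response(x):
    # Arbitrary illustrative response, not the tank model
    return x[0] ** 2 + 3.0 * x[1] + np.sin(x[2])

mu = np.array([1.0, 2.0, 0.5])     # means of the normal inputs (assumed)
std = np.array([0.1, 0.2, 0.05])   # standard deviations (assumed)

nodes, weights = np.polynomial.hermite_e.hermegauss(5)  # probabilists' rule
weights = weights / weights.sum()                       # normalize to 1

y_mu = response(mu)
mean_y, var_y = y_mu, 0.0
for j in range(len(mu)):
    x = np.tile(mu, (len(nodes), 1))
    x[:, j] = mu[j] + std[j] * nodes       # quadrature samples along axis j
    g = np.array([response(row) for row in x])
    Eg = weights @ g
    mean_y += Eg - y_mu                    # accumulate univariate means
    var_y += weights @ (g - Eg) ** 2       # additive variance contribution

print(mean_y, var_y)
```

For independent inputs the decomposed variance is simply the sum of the univariate variances, which is why only 2N + 1 (here 3N + 1 counting quadrature nodes) model evaluations are needed.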

###### Eigenvector Sampling.

The accuracy of probability analysis increases as the number of integration points in the recursive one-dimensional integration becomes larger. However, increasing the number of integration points makes simulations prohibitively expensive. To achieve both accuracy and efficiency in probability analysis, a one-dimensional response surface is created using samples along the eigenvectors of the random system. For efficiency, the EDR method employs only three or five samples along each eigenvector, depending on the nonlinearity of the system responses. For N model parameters, the EDR method therefore demands 2N + 1 or 4N + 1 samples. To obtain the eigenvectors and eigenvalues, an eigenproblem is formulated as

(3)$Σϕ=λϕ$

where ϕ and λ are the eigenvectors and eigenvalues of the covariance matrix Σ.
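A minimal sketch of eigenvector sampling follows, placing the 2N + 1 samples at the mean and at ±k standard deviations along each eigenvector of the covariance matrix; the covariance values and the distance k are illustrative assumptions:

```python
import numpy as np

mu = np.array([1.0, 2.0])                   # parameter means (assumed)
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])              # covariance matrix (assumed)

eigvals, eigvecs = np.linalg.eigh(cov)      # solves Sigma * phi = lambda * phi

k = 3.0                                     # sampling distance (assumed)
samples = [mu]                              # first sample: the mean point
for lam, phi in zip(eigvals, eigvecs.T):
    step = k * np.sqrt(lam) * phi           # +/- k std devs along eigenvector
    samples.append(mu + step)
    samples.append(mu - step)
samples = np.array(samples)                 # (2N + 1) x N sampling plan

print(samples.shape)
```

For correlated inputs, sampling along eigenvectors rather than coordinate axes keeps the one-dimensional response surfaces aligned with the principal directions of the input uncertainty.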

###### Stepwise Moving Least Squares (SMLS) Method for Numerical Integration.

The MLS method is improved by a stepwise selection of basis functions, referred to as the SMLS method. The optimal set of basis terms is adaptively chosen to maximize numerical accuracy by screening the importance of the basis terms. This technique is exploited for approximating the integrand in Eq. (2). The idea of a stepwise selection of basis functions comes from the stepwise regression method [29]. The SMLS method allows the number of numerical integration points to be increased without requiring additional system evaluations through simulations or experiments. Thus, a large number of integration points can be used to increase the numerical accuracy in assessing the statistical moments of the responses while maintaining high efficiency. The EDR method places no restriction on the choice of numerical integration scheme.

###### A Stabilized Pearson System.

The Pearson system can be used to construct the PDF of a random response Y based on its first four moments (mean, STD, skewness, and kurtosis). The detailed expression of the PDF can be obtained by solving the differential equation

(4)$\frac{1}{p(Y)}\frac{dp(Y)}{dY}=-\frac{a+Y}{c_0+c_1Y+c_2Y^2}$

where a, c0, c1, and c2 are the four coefficients determined by the first four moments of the random response (Y) and expressed as

$c_0=(4\beta_2-3\beta_1)(10\beta_2-12\beta_1-18)^{-1}\mu_2$

$a=c_1=\sqrt{\beta_1}(\beta_2+3)(10\beta_2-12\beta_1-18)^{-1}\sqrt{\mu_2}$

$c_2=(2\beta_2-3\beta_1-6)(10\beta_2-12\beta_1-18)^{-1}$

where β1 is the square of the skewness, β2 is the kurtosis, and μ2 is the variance. The mean value is always treated as zero in the Pearson system and can easily be shifted to the true mean value once the differential equation is solved. Basically, the differential equation is solved under different assumptions on the four coefficients a, c0, c1, and c2. In the Pearson system, however, a singularity problem is often encountered due to failure in calculating the coefficients of a specific distribution type, which results in numerical instability. In the EDR method, a stabilized Pearson system is proposed to avoid this instability. Two hyper-PDFs are generated by fixing the first three statistical moments and incrementally adjusting the original kurtosis, slightly increasing or decreasing its value until the two hyper-PDFs are successfully constructed. These two hyper-PDFs are then used to approximate the PDF with the original moments.
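As a small illustration, the coefficients of Eq. (4) can be computed directly from the first four moments; the sketch below uses the signed skewness in c1 rather than the unsigned square root of β1 (a common convention, assumed here) and checks the normal-distribution special case:

```python
import numpy as np

def pearson_coeffs(variance, skewness, kurtosis):
    """Coefficients a, c0, c1, c2 of the Pearson differential equation."""
    beta1, beta2 = skewness ** 2, kurtosis
    denom = 10.0 * beta2 - 12.0 * beta1 - 18.0
    c0 = (4.0 * beta2 - 3.0 * beta1) / denom * variance
    # Signed skewness preserves the sign lost by sqrt(beta1)
    c1 = skewness * (beta2 + 3.0) / denom * np.sqrt(variance)
    c2 = (2.0 * beta2 - 3.0 * beta1 - 6.0) / denom
    return c1, c0, c1, c2          # a = c1 in Eq. (4)

# Sanity check: a normal distribution (skewness 0, kurtosis 3) should give
# a = c1 = c2 = 0 and c0 = variance, i.e., dp/p = -y / variance * dy
a, c0, c1, c2 = pearson_coeffs(2.0, 0.0, 3.0)
print(a, c0, c1, c2)
```

Solving Eq. (4) with these particular values integrates to p(Y) ∝ exp(−Y²/2μ2), recovering the zero-mean normal PDF as expected.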

###### Model Bias Calibration.

The majority of bias correction approaches are based on the Bayesian calibration model proposed by Kennedy and O'Hagan [30], as shown below

(5)$Ŷ(P,X)+δ=Y−ε$

where $Ŷ$ is the system performance prediction from the baseline simulation model, δ is the model bias, Y is the test data, ε is the test and measurement error, P is a vector of deterministic model parameters, and X is a vector of model parameters with uncertainties (e.g., design variables and random variables). In this paper, this model is adopted mainly for calibrating model bias δ.

At each specific design configuration, the model bias δ is calibrated as

(6)$minimize U((Ŷ+δ+ε,Y)|δ)$

where U(•) is the U-pooling metric. With the available test data Y, the provided estimate of the test and measurement error ε, and the UQ of $Ŷ$ with the uncertainty modeling of X, the only unknown quantity is the model bias δ, which needs to be calibrated. The U-pooling metric was proposed by Ferson et al. [31] as a validation metric and has been adopted by many researchers in the study of model validation. The basic idea is to compare the cumulative distribution function (CDF) difference (i.e., the U-pooling value) between the model prediction and the test data in the standard uniform space (or U-space), as shown in Fig. 1. The smaller the area difference, the higher the expected accuracy of the model prediction. For a specific static system response, each test datum yi corresponds to one ui value, which is calculated from the CDF of the corrected model prediction (i.e., $Ŷ+δ+ε$) at the same design configuration (i.e., $ui=FŶ+δ+ε(yi)$, where F(•) is the CDF of $Ŷ+δ+ε$).
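The metric can be sketched numerically: each test datum is mapped to its u-value through the CDF of the corrected prediction, and the pooled u-values are compared against the standard-uniform CDF (the 45 deg line). The normal prediction model and the five test data below are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt

def u_pooling(u):
    """Area between the empirical CDF of pooled u-values and the uniform CDF."""
    u = np.sort(np.asarray(u, dtype=float))
    grid = np.linspace(0.0, 1.0, 20001)
    ecdf = np.searchsorted(u, grid, side="right") / len(u)
    # Mean over a uniform grid approximates the integral of |ECDF(u) - u|
    return np.abs(ecdf - grid).mean()

# Corrected model prediction assumed N(10, 1) at every configuration,
# with five hypothetical test data
y_test = [9.2, 9.8, 10.1, 10.5, 11.3]
u = [0.5 * (1.0 + erf((y - 10.0) / sqrt(2.0))) for y in y_test]

print(u_pooling(u))   # smaller area -> better agreement with the tests
```

A single test datum sitting exactly at the predicted median (u = 0.5) gives the reference area 0.25, which is useful for sanity-checking an implementation.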

At each specific design configuration, the uncertainty of the model bias δ (i.e., model uncertainty) can be modeled as an arbitrary distribution by the Pearson system, as an assumed normal distribution, or as a constant value, depending on the amount of test data and the desired generality. If there is only one test datum, it may be desirable to model the bias as a constant value because there is insufficient information to uniquely determine the two unknown parameters (e.g., mean and STD) of the model bias. It is worth noting that the U-pooling metric can pool test data at different configurations together into just one U-pooling value, so that the issue of lacking test data could be compensated. In other words, the calibration model could be formulated to minimize the U-pooling value over all design configurations instead of calibrating the bias individually at each design configuration. Though this is a nice feature, we prefer not to adopt this approach for model bias calibration for the following reasons. First, without any assumption of the model bias as a function of the design variables (e.g., regression models), M independent bias values need to be calibrated so that the U-pooling value is minimized over all M design configurations; theoretically, M! solutions can attain the same minimum U-pooling value. Second, with a functional assumption of the model bias, the functional structure (e.g., a linear or a nonlinear model of the design variables) needs to be assumed before the model coefficients are calibrated. This regression-based approach essentially assumes a new model (i.e., the bias model) that itself needs to be validated and has the inherent limitations already elaborated in Sec. 1. Third, it is certainly not desirable to have only one overall model bias covering all M design configurations.

###### Copula-Based Model Bias Approximation in the Design Space.

A copula-based approach [32] is proposed to approximate the model bias in the design space. The main idea is to build general statistical relationships between the expected model bias δ, the baseline model prediction Ŷ, and the design variables (i.e., a subset of model parameters X) in the design space using available calibrated model bias across various design configurations.

A copula is a general way in statistics to formulate a multivariate distribution with various types of statistical dependence. To date, most copulas deal only with bivariate data because practical n-dimensional generalizations of the coupling parameter are lacking [33,34]. One way to deal with multivariate data is to analyze the data pair by pair using two-dimensional copulas. The common methods for selecting the optimal copula are based on the maximum likelihood approach [35], which estimates an optimal parameter set. Recently, a Bayesian copula approach was proposed to select the optimal copula; it is independent of the parameter estimation and provides more reliable identification of the true copula even with a lack of samples [33]. Hence, this paper employs the Bayesian copula approach for building the general statistical relationships between the expected model bias δ, the baseline model prediction Ŷ, and the design variables. For the sake of completeness, we briefly describe the procedure for selecting the optimal copula using the Bayesian approach. Interested readers can refer to Ref. [33] for details.

A set of hypotheses is first made as follows using the Bayesian copula approach.

$H_k$: The data come from copula $C_k$, $k=1,\ldots,Q$

The objective is to find the copula with the highest probability Pr(Hk | D), i.e., the optimal copula, from a finite set (Q) of copulas, where D represents the bivariate data (e.g., expected model bias δ versus baseline model prediction Ŷ, and expected model bias δ versus design variables) in the standard uniform space. Based on Bayes' theorem, the probability that the bivariate data come from the copula Ck is expressed as

(7)$\Pr(H_k|D)=\frac{\Pr(D|H_k)\Pr(H_k)}{\Pr(D)}=\frac{\int_{-1}^{1}\Pr(D|H_k,\tau)\Pr(H_k|\tau)\Pr(\tau)\,d\tau}{\Pr(D)}$

where τ is Kendall's tau, a nonparametric measure of the statistical dependence associated with copulas. The prior Pr(τ) of Kendall's tau is assumed to be uniform, and all copulas are equally probable for a given τ, which reflects no preference over the copulas. The likelihood Pr(D|Hk,τ) depends upon τ and can be calculated from the copula PDF as

(8)$\Pr(D|H_k,\tau)=\prod_{l=1}^{M}c_k(u_{1l},u_{2l}|\tau)$

where ck(•) is the PDF of the kth copula, M is the total number of bivariate data, and u1l and u2l are the lth bivariate datum. The normalization constant Pr(D) can be computed using the sum rule [36]. Four representative copulas (i.e., Clayton, Gaussian, Frank, and Gumbel) are employed in this study.
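To make the selection procedure concrete, the sketch below compares the evidence of two candidate copulas (Clayton and Gaussian, rather than the paper's four) on synthetic Clayton data, integrating the likelihood over a Kendall's-tau grid with a uniform prior; the restriction of the grid to positive τ is an assumption made because the Clayton copula models only positive dependence:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Synthetic bivariate data from a Clayton copula with theta = 2 (tau = 0.5),
# generated by conditional sampling
th_true = 2.0
u1 = rng.uniform(size=200)
w = rng.uniform(size=200)
u2 = (u1 ** -th_true * (w ** (-th_true / (1.0 + th_true)) - 1.0)
      + 1.0) ** (-1.0 / th_true)

def clayton_loglik(u, v, tau):
    th = 2.0 * tau / (1.0 - tau)                # tau -> Clayton theta
    c = ((1.0 + th) * (u * v) ** (-(th + 1.0))
         * (u ** -th + v ** -th - 1.0) ** (-(2.0 * th + 1.0) / th))
    return np.sum(np.log(c))

ppf = NormalDist().inv_cdf
x = np.array([ppf(t) for t in u1])              # data in standard normal space
y = np.array([ppf(t) for t in u2])

def gaussian_loglik(tau):
    r = np.sin(np.pi * tau / 2.0)               # tau -> Pearson correlation
    q = (r * r * (x * x + y * y) - 2.0 * r * x * y) / (2.0 * (1.0 - r * r))
    return np.sum(-q - 0.5 * np.log(1.0 - r * r))

taus = np.linspace(0.05, 0.9, 50)               # restricted tau grid (assumed)
loglik = {"Clayton": np.array([clayton_loglik(u1, u2, t) for t in taus]),
          "Gaussian": np.array([gaussian_loglik(t) for t in taus])}
shift = max(l.max() for l in loglik.values())   # common scale for stability
evidence = {k: np.mean(np.exp(l - shift)) for k, l in loglik.items()}
best = max(evidence, key=evidence.get)
print(best)
```

Because the data carry Clayton-type lower-tail dependence, the Clayton evidence dominates; with equal prior copula probabilities, Pr(Hk|D) is proportional to these evidence values.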

Based on the set of bivariate copula models between the expected model bias δ, the baseline model prediction $Ŷ$, and the design variables, it is feasible to predict the possible model bias at any new design configuration. For example, the copula model between $Ŷ$ and $δ$ allows us to identify the possible model bias $δ$ for a realization of $Ŷ$ (e.g., $Ŷ$ = a) at a new design configuration. Mathematically, this is the process of identifying the conditional PDF of the model bias δ given $Ŷ$ = a, that is

(9)$c((FŶ(ŷ),FΔ(δ))|ŷ=a)$

Meanwhile, the design variable values (e.g., $x_j=a_j$) at the new design configuration are also known. Thus, the possible realizations of the model bias δ must simultaneously satisfy all the conditional PDFs identified from the series of copula models between δ and the design variables. In other words, the model bias in the standard uniform space can be expressed as

(10)$\delta=\beta\times c\big((F_{\hat{Y}}(\hat{y}),F_{\Delta}(\delta))\,\big|\,\hat{y}=a\big)\times\prod_{j=1}^{R}c\big((F_{X_j}(x_j),F_{\Delta}(\delta))\,\big|\,x_j=a_j\big)$

where β is a normalization parameter such that the integration of the PDF over the whole domain equals one, and R is the number of design variables. It is noted that the PDF of δ can be an arbitrary distribution with a closed-form solution, depending on the combination of copulas in Eq. (10).
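A numerical sketch of Eq. (10): at a new design configuration, the conditional PDF of the bias δ is proportional to the product of conditional copula densities, one linking δ to the baseline prediction and one per design variable, evaluated at the known u-values. The Clayton copulas, the fitted parameters, the known u-values, and the standard-normal marginal of δ are all illustrative assumptions:

```python
import numpy as np
from math import erf

def clayton_pdf(u, v, theta):
    return ((1.0 + theta) * (u * v) ** (-(theta + 1.0))
            * (u ** -theta + v ** -theta - 1.0) ** (-(2.0 * theta + 1.0) / theta))

grid = np.linspace(-4.0, 4.0, 2001)                 # delta grid
step = grid[1] - grid[0]
u_delta = np.array([0.5 * (1.0 + erf(d / 2 ** 0.5)) for d in grid])
f_delta = np.exp(-0.5 * grid ** 2) / np.sqrt(2.0 * np.pi)  # assumed marginal

u_yhat, u_x = 0.7, 0.3   # u-values of y_hat and one design variable (assumed)
th_y, th_x = 2.0, 1.0    # fitted copula parameters (assumed)

# Product of the conditional copula densities times the delta marginal
dens = (clayton_pdf(u_yhat, u_delta, th_y)
        * clayton_pdf(u_x, u_delta, th_x) * f_delta)
dens /= dens.sum() * step          # normalization (the beta in Eq. (10))
e_delta = np.sum(grid * dens) * step   # expected model bias at this point

print(e_delta)
```

The resulting density needs no parametric form: its shape follows directly from the combination of copulas, which is the closed-form flexibility noted in the text.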

It is worth noting that the copula-based approach is an acausal bias modeling approach, which is fundamentally different from regression-based approaches, where causality is assumed such that the design variables cause the model bias δ. Such causal modeling approaches deviate from the essential meaning of the model bias, i.e., the model's inherent inadequacy in representing the real physical system. In addition, the proposed approach addresses two limitations of the regression-based approach. First of all, the curse of dimensionality is not a concern because copula modeling is performed pair by pair. Second, there is no need to assume any underlying regression structure for copula modeling; in fact, with the Bayesian copula approach, even the type of copula need not be assumed.

###### Reliability Analysis Considering Model Bias.

With the approximated model bias at a new design configuration, reliability analysis should be performed for the corrected model prediction (i.e., $Ŷ+δ$) instead of the baseline model prediction (i.e., $Ŷ$). Essentially, another source of uncertainty (i.e., $δ$) is included in the reliability analysis. Since the model bias δ is represented by a distribution, similar to the model parameters X, any available reliability analysis method can be used for the corrected model prediction. In this paper, two approaches for computing the reliability are proposed: (i) the expected reliability and (ii) the reliability distribution. MCS is employed to illustrate the difference between the two approaches.

Calculation of the expected reliability considers the overall uncertainty from δ at a specific design configuration. First of all, sufficient random samples of X and δ are generated so that the corrected model prediction $Ŷ+δ$ is represented by sufficient random samples; the expected reliability is then calculated as the ratio of safe trials over the total trials. Calculation of the reliability distribution treats realizations of the model bias δ individually. First of all, sufficient random samples of X are generated to calculate the baseline model prediction $Ŷ$. Next, sufficient random samples of δ are generated to represent the possible model bias realizations at the design configuration of interest. Then, the corrected model prediction is computed for each realization of the model bias δi (i.e., $Ŷ+δi$) to obtain one reliability value. Finally, the above step is repeated for all model bias realizations to obtain many reliability values, thus forming a reliability distribution.

Thus, the first approach obtains the expected reliability considering the overall uncertainty from δ, whereas the second approach computes the reliability distribution, which may be more useful for safety-critical structural design, where confidence bounds on the reliability prediction can be provided. Beyond MCS, many advanced reliability analysis methods, such as the most-probable-point-based approaches [37,38], the EDR method, and the PCE methods, can be employed to significantly improve the computational efficiency of the reliability analysis.
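The two MCS procedures can be sketched as follows; the limit state, the failure threshold, and all distributions are illustrative assumptions rather than the tank model:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
y_hat = rng.normal(100.0, 5.0, n)   # baseline model prediction samples (assumed)
threshold = 120.0                   # failure when Y_hat + delta > threshold (assumed)

# (i) Expected reliability: pool the uncertainty of delta with that of X
delta = rng.normal(4.0, 2.0, n)     # approximated model bias (assumed)
r_expected = np.mean(y_hat + delta <= threshold)

# (ii) Reliability distribution: one reliability value per bias realization
r_dist = []
for d in rng.normal(4.0, 2.0, 200):            # 200 bias realizations
    r_dist.append(np.mean(y_hat + d <= threshold))
r_dist = np.array(r_dist)

print(r_expected, r_dist.mean(), np.percentile(r_dist, [5, 95]))
```

The mean of the reliability distribution matches the expected reliability, while its percentiles supply the confidence bounds mentioned for safety-critical design.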

## Case Study for the 2014 V&V Challenge Problem

This section elaborates the proposed validation strategies on the 2014 V&V challenge problem. Detailed background and an introduction to the challenge problem can be found in Ref. [1]. The objective of this case study is to predict the probability of failure of liquid-storage tanks under a specified operation condition (i.e., gauge pressure P = 73.5 psig, liquid composition χ = 1, and liquid height H = 50 in.) considering the potential model bias. Failure is defined as the von Mises stress exceeding the yield stress at any location of the tank. A simulation model was provided to predict the displacement and the von Mises stress under feasible operation conditions with the key model parameters defined in Table 1, where P, χ, and H are the controllable model parameters (i.e., design variables) that determine specific tank operation conditions. The other material properties and geometry parameters are noncontrollable model parameters (i.e., random variables) with associated uncertainties, and Table 1 shows only the legacy data from the manufacturer. The mesh size parameter controls the fidelity of the finite-element analysis, where a higher value indicates a smaller (finer) mesh and a higher computational cost.

The proposed validation strategies are applied to the challenge problem in a step-by-step process. First of all, uncertainty modeling is conducted for the model parameters. Then, UQ is carried out to approximate the tank displacement and stress uncertainty subject to the parameter uncertainty. Next, model bias calibration is executed at the available tank operation conditions, followed by model bias approximation at the intended tank operation condition using the proposed copula-based approach. Finally, the reliability (or probability of failure) is computed considering the approximated model bias. As five types of uncertainties are defined in the proposed validation strategies, they are further illustrated in this case study. Test uncertainty refers to the test and measurement error of the tank displacement, which is presumed to be extremely accurate: within ±3% or 0.002 in., whichever is greater. In particular, the test uncertainty is neglected in this problem because, as shown in Sec. 3.2 (i.e., Fig. 4), it is insignificant compared with the displacement uncertainty quantified with respect to the parameter uncertainty. Model parameter uncertainty stands for the uncertainty of the model parameters shown in Table 1, excluding the mesh size, the axial location x, and the circumferential angle ϕ. In particular, P, χ, and H are design variables and the rest are random variables. The uncertainties of these variables are modeled from the provided test datasets and are further illustrated in Sec. 3.1. Model uncertainty represents the uncertainty of the model bias, i.e., the bias of the displacement and von Mises stress predictions from the simulation model compared to the test values. The simulation model with four different mesh sizes essentially represents four baseline models with different levels of fidelity. Statistical uncertainty is the uncertainty modeling error caused by the insufficient test data provided in this case study. Algorithm uncertainty refers to the UQ error from the EDR method in this case study.

Before applying the proposed validation strategies to this case study, a baseline simulation model has to be selected. The model with mesh size #2 was employed considering its reasonable accuracy and computational efficiency. In addition, the empty tank and the tank with liquid were considered as two different models. Since the intended application is for the tank with liquid, the empty tank model with associated test data for the displacement (i.e., dataset 5) was not used in this case study. Otherwise, dataset 5 could be used to calibrate the model bias when the liquid height is zero.

###### Uncertainty Modeling.

The objective of this stage is to complete the uncertainty modeling for model parameters. The expected outcome is that all model parameter uncertainties are represented by PDFs. The approach employed is the conjugate Bayesian model due to the data insufficiency identified from the convergence study of the distribution parameters. The Pearson system, however, is also used for the purpose of comparison.

Ten test datasets were collected for material properties and geometrical sizes in addition to the legacy data from the manufacturer. In particular, E, υ, T, and σy were measured from tank0 at ten different locations, and L and R were measured from tank1 and tank2. The Bayesian approach was employed for the uncertainty modeling. In particular, a conjugate Bayesian model was used with a normal distribution as the marginal PDF, with an assumed known variance calculated from the sample variance. The prior mean and variance of the mean parameter were assumed to be equal to the legacy value and the sample variance, respectively. The posterior mean and variance of the mean parameter were thus updated given the ten sets of measured data. With the aid of MCS, the final marginal PDFs of the material properties and geometrical sizes were obtained as shown by the circled line in Fig. 2.
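The conjugate update described above can be sketched in a few lines. The function name and the illustrative measurements below are hypothetical, but the posterior formulas are the standard normal-normal conjugate result with known variance, with the prior set to the legacy value and the sample variance as in the text:

```python
import numpy as np

def posterior_normal_mean(data, prior_mean, prior_var, known_var):
    # Conjugate normal-normal update of the mean parameter with an
    # assumed-known data variance (here set to the sample variance).
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / known_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / known_var)
    return post_mean, post_var

# Hypothetical measurements (illustrative values, not the challenge data)
data = np.array([0.24, 0.26, 0.25, 0.25, 0.24, 0.26, 0.25, 0.25, 0.24, 0.26])
s2 = data.var(ddof=1)                       # sample variance
m, v = posterior_normal_mean(data, prior_mean=0.25, prior_var=s2, known_var=s2)
# With n = 10 data and prior variance equal to s2, the posterior
# variance of the mean shrinks to s2/11
```

Drawing MCS samples from the resulting posterior-predictive distribution then yields the marginal PDFs plotted in Fig. 2.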

As a comparison, the Pearson system was employed to directly approximate the PDFs based on the ten datasets. It is observed that the approximate PDFs from the Pearson system match the test data better than those from the Bayesian approach. However, it is worth noting that the test data are not sufficient, and the Pearson system could create a significant amount of statistical error by modeling all PDFs as irreducible uncertainty. Statistical error may also apply to the Bayesian approach; however, this approach produces a conservative model, which is more desirable when data are insufficient.

Uncertainties of the three design variables (i.e., P, χ, and H) were described in the challenge problem [1]. In particular, the uncertainty of P is due to the gauge measurement error, which is within ±5% of the measured pressure. The uncertainty of χ is significant due to the measurement error of within ±0.05 mass fraction. The uncertainty of H is again due to measurement error caused by the orientation of the tank, where the height difference between the two support sides is ≤2 in. According to the above information, these variables were modeled as normal distributions for simplicity, with controllable mean values and noncontrollable STDs as shown in Table 2.

###### UQ of the Model Performance of Interests.

The objective of this stage is to quantify uncertainty of model outputs (i.e., displacement and stress) considering model parameter uncertainties. The expected outcome is that model output uncertainties are represented by PDFs or CDFs. The approach employed is the EDR method due to its high efficiency and accuracy. The MCS, however, is also used for the purpose of comparison. The UQ serves two purposes in this example: (i) model bias calibration in the next stage and (ii) observation of the agreement between the baseline model prediction and the corresponding test data.

The EDR method with a 2N + 1 sampling scheme was employed to predict the first four statistical moments and PDFs/CDFs of the maximum displacement and stress of the tank at specified operation conditions, where N is the total number of model inputs with uncertainties. Figure 3 shows such an example for the maximum stress prediction at the intended operation condition (i.e., P = 73.5, χ  = 1, and H = 50). To confirm the accuracy of the EDR method, MCS with 3000 simulation runs was also performed and compared with the EDR method as shown in Fig. 3. The convergence study of the MCS was conducted in advance to ensure the high accuracy of the MCS with 3000 samples. The results indicate that algorithm uncertainty of the EDR method is insignificant for this problem.
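As a rough sketch of the 2N + 1 sampling scheme, the design below places one run at the nominal (mean) point plus two axial runs per uncertain input. The perturbation size k = 3 and the function name are assumptions for illustration only, not the EDR method's actual eigenvector sample locations or quadrature weights:

```python
import numpy as np

def edr_2n_plus_1_design(means, stds, k=3.0):
    # Build the 2N+1 evaluation points: the nominal (mean) point plus two
    # axial points per input at mean +/- k*std.  k is an assumed
    # perturbation size, not taken from the paper.
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    pts = [means.copy()]
    for i in range(means.size):
        for sign in (+1.0, -1.0):
            p = means.copy()
            p[i] += sign * k * stds[i]
            pts.append(p)
    return np.array(pts)

# Three uncertain inputs -> 2*3 + 1 = 7 model evaluations,
# versus thousands of runs for a converged MCS
design = edr_2n_plus_1_design([1.0, 2.0, 3.0], [0.1, 0.2, 0.3])
```

Each row of `design` is one simulation run; the statistical moments of the output are then assembled from these few runs, which is what makes the comparison against 3000 MCS runs in Fig. 3 so favorable in cost.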

The displacement data of four tanks (i.e., tank3, tank4, tank5, and tank6) were provided and there were three different operation conditions for each tank as shown in Table 3, where the column of “test” lists the maximum absolute displacement obtained from 20 displacement values measured at different tank locations. UQ was conducted to obtain the PDF of maximum displacement at each operation condition using the EDR method. To compare with the test data, the median values and 95% confidence intervals (CIs) were plotted for 12 operation conditions as shown in Fig. 4, where the starred line indicates the test data sorted from the smallest to the largest. The detailed values of medians and CIs were provided in Table 3. It is observed that four test data (i.e., configurations #3, #4, #5, and #9) were located outside the CI bounds of the baseline model prediction indicating the lack of agreement between the baseline model prediction and the test data.
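The median/95% CI comparison can be reproduced schematically. The normal stand-in for the displacement PDF and the test value below are illustrative numbers, not the Table 3 data:

```python
import numpy as np

# Stand-in samples for the UQ-predicted displacement PDF at one
# operation condition (hypothetical mean and STD)
rng = np.random.default_rng(0)
samples = rng.normal(0.30, 0.02, size=5000)

median = np.percentile(samples, 50)
lo, hi = np.percentile(samples, [2.5, 97.5])    # 95% CI bounds

test_value = 0.38            # hypothetical measured maximum displacement
outside_ci = not (lo <= test_value <= hi)       # flags disagreement
```

Applying this check at each of the 12 conditions is how the four disagreeing configurations (#3, #4, #5, and #9) are identified in Fig. 4.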

###### Model Bias Calibration of the Maximum Displacement.

The objective of this stage is to calibrate model bias of the maximum displacement at tank operation conditions where test data are available. The expected outcome is that model bias is characterized either by a PDF or a constant value at a specific tank operation condition depending on the amount of the test data. The approach employed is described in Eq. (6).

Due to the high accuracy of the displacement measurement, the test uncertainty ε is insignificant and hence ignored in the bias calibration. Since there is only one test datum (i.e., one maximum displacement value) for each operation condition, the model bias is assumed to be a constant value that needs to be determined at each operation condition. With the U-pooling metric, the bias is easily computed as the difference between the test value and the median value of the model prediction, as shown in the last column of Table 3. Therefore, 12 model bias values were identified at the 12 tank operation conditions.
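With a single test datum per condition, the calibration step reduces to a one-line shift; the sample values below are hypothetical:

```python
import numpy as np

def calibrate_bias(test_value, model_samples):
    # With one test datum, the U-pooling calibration reduces to the
    # constant shift from the model's median prediction to the test value.
    return test_value - np.median(model_samples)

# Hypothetical model-prediction samples and one measured maximum displacement
bias = calibrate_bias(0.285, np.array([0.29, 0.30, 0.31, 0.32, 0.30]))
# model median is 0.30, so the calibrated bias is -0.015
```

Repeating this at each condition produces the 12 training bias values used in the next stage.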

###### Model Bias Approximation at the Intended Operation Condition.

The objective of this stage is to approximate model bias at the intended operation condition (i.e., P = 73.5, χ = 1, and H = 50). The expected outcome is that model bias is approximated in the form of a PDF at the intended operation condition. The approach employed is the copula-based bias approximation as shown in Eq. (10).

With characterized model bias at 12 training operation conditions as illustrated in Sec. 3.3, copula modeling was performed to build statistical relationships between the model bias, the design variables, and the baseline model prediction as shown in Fig. 5, where circles represent the calibrated model bias and the point clouds are generated random samples from the copula model. At the intended operation condition, the bias of the displacement was finally obtained as shown in Fig. 6. It is observed that the baseline model tends to overestimate the maximum displacement at the intended operation condition.
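A minimal stand-in for the copula step is sketched below, assuming a bivariate Gaussian copula between one design variable and the calibrated bias (the paper does not state which copula family was selected, and the training pairs here are synthetic):

```python
import numpy as np
from scipy import stats

def gaussian_copula_samples(x, delta, n=2000, seed=0):
    # Transform training data to uniform (copula) space via empirical ranks
    u = stats.rankdata(x) / (len(x) + 1.0)
    v = stats.rankdata(delta) / (len(delta) + 1.0)
    # Gaussian copula parameter: correlation of the normal scores
    rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]
    # Draw correlated standard-normal pairs and push them through the CDF
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    uu = stats.norm.cdf(z)
    # Map back to the data scale with empirical quantile functions
    return np.quantile(x, uu[:, 0]), np.quantile(delta, uu[:, 1])

# Synthetic training pairs: 12 design-variable values and their biases
x_train = np.linspace(30.0, 55.0, 12)
d_train = 0.001 * x_train - 0.06 + 0.005 * np.sin(x_train)
xs, ds = gaussian_copula_samples(x_train, d_train)
```

The point clouds in Fig. 5 correspond to samples like `(xs, ds)`; conditioning the copula at the intended operation condition then yields the bias PDF in Fig. 6.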

###### Reliability Analysis at the Intended Operation Condition.

The objective of this stage is to conduct reliability analysis considering the characterized model bias at the intended operation condition. The expected outcome is a reliability (or probability of failure) distribution due to the uncertainty of the model bias (i.e., model uncertainty) at the intended operation condition. The approach employed for reliability analysis is the EDR method. As a comparison, reliability analysis without considering the model bias was also conducted.

Since failure is defined as the maximum stress of the tank exceeding the yield stress, the bias of the maximum stress from the baseline model should be approximated at the intended operation condition. However, there are no measured stress data. The model bias of the maximum stress is thus proposed to be indirectly approximated through the relationship between the maximum displacement and the maximum stress at the intended operation condition, as shown in Fig. 7. This relationship was generated from the baseline model using MCS by considering all model parameter uncertainties at the intended operation condition. Considering a realization of the model bias δ* (e.g., δ* = −0.01) for the maximum displacement shown in Fig. 6, the x-axis in Fig. 7 should shift by δ* to account for the model bias of the maximum displacement, which further results in a certain amount of shift for the y-axis because the displacement and stress are correlated. The amount of shift for the y-axis was proposed to be formulated as

Display Formula

(11)$\sigma_s=\hat{\sigma}_s+\hat{\sigma}_s\times\left(\dfrac{\mu_{\mathrm{Disp}}+\delta^*}{\mu_{\mathrm{Disp}}}-1\right)\times\rho$

where ρ is the correlation coefficient between the displacement and stress, $\mu_{\mathrm{Disp}}$ is the mean of the displacement from the baseline model prediction, and $\hat{\sigma}_s$ is the stress prediction from the baseline model. In fact, the second term of Eq. (11) represents the bias of the maximum stress, which is approximated from the bias of the maximum displacement. If ρ = 0, the displacement and stress are uncorrelated and the shift of the displacement should not affect the stress. If ρ = 1, the relationship is linear and the percentage shift of the displacement applies to the stress as well. For scenarios between these two extreme cases, the amount of shift of the stress is determined by the magnitude of the correlation coefficient ρ, which is computed as 0.47 at the intended operation condition.
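Eq. (11) translates directly into code; only ρ = 0.47 comes from the text, and the remaining numbers are illustrative:

```python
def corrected_stress(stress_hat, mu_disp, delta_star, rho):
    # Eq. (11): the fractional displacement shift, scaled by the
    # displacement-stress correlation, is applied to the baseline stress.
    return stress_hat + stress_hat * ((mu_disp + delta_star) / mu_disp - 1.0) * rho

# Illustrative numbers: a -2% displacement shift scaled by rho = 0.47
# lowers the stress prediction by 0.94%
s = corrected_stress(stress_hat=100.0, mu_disp=0.5, delta_star=-0.01, rho=0.47)
```

Note the limiting behavior: with `rho=0.0` the stress is unchanged, and with `rho=1.0` the full percentage shift of the displacement carries over to the stress.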

With the maximum stress corrected for the bias in Eq. (11), reliability analysis was conducted to obtain the distribution of the probability of failure, as shown in Fig. 8. The probability of failure is extremely small, and there is a 99.83% confidence that the probability of failure is less than or equal to 4.687 × 10−16. Figure 9 shows the comparison of the maximum stress CDFs with and without consideration of the model bias. Similar to the maximum displacement, the baseline model tends to overestimate the maximum stress because of the positive correlation between the displacement and the stress. It is worth noting that reliability analysis from the baseline model alone would yield almost the same result. This is mainly because the yield stress (i.e., Fig. 2(f)) is much higher than the maximum stress. However, it can be easily observed from Fig. 9 that the reliability would be very different if the yield stress were reduced to the same level as the maximum stress.

## Discussion

Although the calculated probability of failure is extremely small, we are not convinced that the probability of failure is truly negligible because of several important factors discussed as follows.

###### Potential Problems in the Uncertainty Modeling.

The challenge problem is a typical stress–strength type reliability analysis problem. The strength is the system's capability to withstand the external stress without failure; for this problem, the yield stress is the system strength. The strength of the system could decay over time due to many factors, such as aging and corrosion effects. In this example, it appears that the yield stress has decreased significantly compared to the legacy data, as shown in Fig. 2(f). Although ten data points were measured, they all came from the same tank0, which simply represents the variability of the yield stress within one tank. The objective of the challenge problem, however, is to predict the probability of failure for tanks distributed all around the world. Hence, it is critical to further test several other tanks for decay of the yield stress, especially tanks under severe environmental conditions, which could accelerate the decay rate of the yield stress. The uncertainty modeling of the yield stress should be updated after such extra testing.

On the other hand, the external stress of the tank comes from specific operation conditions causing displacement and von Mises stress in the tank. Similar to the aforementioned problems in uncertainty modeling, some of the important parameter data (e.g., Young's modulus) were only measured from tank0 and thus represent the variability in one tank, not tanks all around the world. If tank0 fails to cover the typical range of all other tanks, or only represents a small portion of the tanks (e.g., 10% of all tanks), the probability of failure calculated above only applies to tank0 or that small portion of the tanks. It is worth noting that tank0 was checked because it was out of specification (e.g., in displacement), not because of a real failure from the stress. This out-of-specification approach could fail to detect real failures if the relationship between the displacement and stress shown in Fig. 7 is correct: although a larger displacement generally means a higher stress, Fig. 7 indicates that high stress can exist at relatively small values of the displacement.

###### Potential Problems in the Bias Correction of the Stress.

Since stress data were not measured, the bias of the stress was estimated based on the statistical relationship between the displacement and stress, resulting in two potential problems. First of all, the relationship was obtained from the baseline model, which itself needs to be validated; in other words, the relationship may not be correct. Second, Eq. (11) used for the stress correction may not be accurate when the correlation coefficient lies between 0 and 1.

###### Potential Problems of the Mesh Size.

Different mesh sizes affect the model prediction accuracy; hence, they can be treated as different baseline models with different accuracy levels. Mesh size #2 was used in the above case study. To find out the potential problems of using different mesh sizes, mesh size #1 was employed to repeat the above calculations. First of all, UQ was conducted at the intended operation condition. The difference between the PDFs of the maximum stress is shown in Fig. 10, with the four statistical moments listed in Table 4. It is observed that mesh size #1 overestimates both the mean and STD of the stress if mesh size #2 is assumed to be more accurate.

Bias calibration was then performed at the 12 operation conditions, and the results are shown in Fig. 11(a). Figure 11(a) is very similar to Fig. 4, indicating that the displacement is not affected much by different mesh sizes. This observation also applies to the characterized model bias of the displacement at the intended operation condition, as shown in Fig. 11(b). Finally, reliability analysis was conducted and the distribution of the probability of failure was obtained as shown in Fig. 12. The probability of failure is again small, and there is a 99.84% confidence that the probability of failure is less than or equal to 8.109 × 10−14. However, the magnitude increases significantly compared with mesh size #2. This phenomenon would be more obvious if the yield stress were reduced to about 3 × 104. The problem is that, without measured stress for direct bias correction, it is difficult to know which model should be trusted if both mesh sizes were regarded as sufficient by expert judgment.

###### Remarks of Potential Problems.

The three topics discussed above are the most important remaining problems that we have identified for this challenge problem. As discussed in Sec. 4.1, the lack of valuable test data for uncertainty modeling is the most severe problem, which could generate very significant statistical uncertainty (i.e., uncertainty modeling error). However, this is not a problem that can be addressed by any methodology; it is a data collection issue. In Sec. 4.2, the absence of direct measurement of the stress data causes inaccurate model uncertainty approximation. In Sec. 4.3, the selection of the baseline model affects the model uncertainty characterization, especially when combined with the issue in Sec. 4.2. These three topics are remaining problems that cannot be addressed by the proposed work and deserve more attention in future data collection for model validation.

## Conclusion

A systematic validation strategy with copula-based bias approximation was proposed to incorporate model uncertainty into reliability analysis, which is composed of (i) uncertainty modeling, (ii) UQ, (iii) model bias calibration, (iv) model bias approximation, and (v) reliability analysis. The 2014 V&V challenge problem was employed for successful demonstration of the proposed validation strategy. Other than the data collection issue for uncertainty modeling, model bias appears to be the most important issue compared to test uncertainty, algorithm uncertainty, mesh size, etc., for this challenge problem. A few important observations are highlighted as follows.

(a) The Pearson system is proper for modeling irreducible uncertainty.
(b) The Bayes approach is more appropriate for modeling reducible uncertainty.
(c) The EDR method is very effective for UQ.
(d) Multiple test data are preferred for accurate model bias calibration under a specific design/operation condition.
(e) The copula-based approach for bias approximation in the design space was demonstrated in the case study.
(f) Reliability (or probability of failure) follows a distribution due to the model uncertainty.
(g) Collection of valuable test data is pivotal in all validation steps.
(h) Bias approximation for performances without test data requires indirect approximation through a new connection model that cannot be validated.

## Acknowledgements

The research was supported by the Ford Motor Company and Faculty Research Initiation and Seed Grant at the University of Michigan–Dearborn.

## References

Hu, K. T. , and Orient, G. E. , “ The 2014 Sandia V&V Challenge Problem: A Case Study in Simulation, Analysis, and Decision Support,” ASME J. Verif. Valid. Uncertainty Quantif., 1(1).
Hills, R. G. , and Trucano, T. G. , 1999, “ Statistical Validation of Engineering and Scientific Models: Background,” Sandia National Laboratories, Report No. SAND99-1256.
Thacker, B. H. , Doebling, S. W. , Hemez, F. M. , Anderson, M. C. , Pepin, J. E. , and Rodriguez, E. A. , 2004, “ Concepts of Model Verification and Validation,” Los Alamos National Laboratory, Los Alamos, NM, Report No. LA-14167.
Babuska, I. , and Oden, J. T. , 2004, “ Verification and Validation in Computational Engineering and Science: Basic Concepts,” Comput. Methods Appl. Mech. Eng., 193(36–38), pp. 4057–4066.
Xi, Z. , Fu, Y. , and Yang, R. J. , 2013, “ Model Bias Characterization in the Design Space Under Uncertainty,” Int. J. Performability Eng., 9(4), pp. 433–444.
Zhan, Z. , Fu, Y. , and Yang, R. J. , 2013, “ On Stochastic Model Interpolation and Extrapolation Methods for Vehicle Design,” SAE Int. J. Mater. Manuf., 6(3), pp. 517–531.
Zhan, Z. , Fu, Y. , Yang, R. J. , Xi, Z. , and Shi, L. , 2012, “ A Bayesian Inference Based Model Interpolation and Extrapolation,” SAE Int. J. Mater. Manuf., 5(2), pp. 357–364.
Jiang, Z. , Chen, W. , Fu, Y. , and Yang, R. J. , 2013, “ Reliability-Based Design Optimization With Model Bias and Data Uncertainty,” SAE Int. J. Mater. Manuf., 6(3), pp. 502–516.
Xi, Z. , Fu, Y. , and Yang, R. , 2013, “ An Ensemble Approach for Model Bias Prediction,” SAE Int. J. Mater. Manf., 6(3), pp. 532–539.
Higdon, D. , Gattiker, J. , Williams, B. , and Rightley, M. , 2008, “ Computer Model Calibration Using High-Dimensional Output,” J. Am. Stat. Assoc., 103(482), pp. 570–583.
Pearson, K. , 1901, “ Mathematical Contributions to the Theory of Evolution. X. Supplement to a Memoir on Skew Variation,” Philos. Trans. R. Soc. London, 197(287–299), pp. 443–459.
Xi, Z. , Hu, C. , and Youn, B. D. , 2012, “ A Comparative Study of Probability Estimation Methods for Reliability Analysis,” Struct. Multidiscip. Optim., 45(1), pp. 33–52.
Daniels, H. E. , 1954, “ Saddlepoint Approximations in Statistics,” Ann. Math. Stat., 25(4), pp. 631–650.
Jaynes, E. T. , 1957, “ Information Theory and Statistical Mechanics,” Phys. Rev., 106(4), pp. 620–630.
Johnson, N. L. , Kotz, S. , and Balakrishnan, N. , 1994, Continuous Univariate Distributions, Wiley, New York.
Rossi, V. , and Vila, J. P. , 2006, “ Bayesian Multioutput Feedforward Neural Networks Comparison: A Conjugate Prior Approach,” IEEE Trans. Neural Networks, 17(1), pp. 35–47.
George, E. I. , and McCulloch, R. E. , 1997, “ Approaches for Bayesian Variable Selection,” Stat. Sin., 7(2), pp. 339–373.
Wang, C. , and Blei, D. M. , 2013, “ Variational Inference in Nonconjugate Models,” J. Mach. Learn. Res., 14(1), pp. 1005–1031.
Zhu, J. , and Xing, E. P. , 2009, “ Maximum Entropy Discrimination Markov Networks,” J. Mach. Learn. Res., 10, pp. 2531–2569.
Berg, B. A. , 2004, Markov Chain Monte Carlo Simulations and Their Statistical Analysis, World Scientific Publishing, Singapore.
Youn, B. D. , Xi, Z. , and Wang, P. , 2008, “ Eigenvector Dimension Reduction (EDR) Method for Sensitivity-Free Probability Analysis,” Struct. Multidiscip. Optim., 37(1), pp. 13–28.
Hu, C. , and Youn, B. D. , 2011, “ Adaptive-Sparse Polynomial Chaos Expansion for Reliability Analysis and Design of Complex Engineering Systems,” Struct. Multidiscip. Optim., 43(3), pp. 419–442.
Blatman, G. , and Sudret, B. , 2010, “ An Adaptive Algorithm to Build Up Sparse Polynomial Chaos Expansions for Stochastic Finite Element Analysis,” Probab. Eng. Mech., 25(2), pp. 183–197.
Oladyshkin, S. , and Nowak, W. , 2012, “ Data-Driven Uncertainty Quantification Using the Arbitrary Polynomial Chaos Expansion,” Reliab. Eng. Syst. Saf., 106, pp. 179–190.
Coelho, R. F. , Lebon, J. , and Bouillard, P. , 2011, “ Hierarchical Stochastic Metamodels Based on Moving Least Squares and Polynomial Chaos Expansion: Application to the Multiobjective Reliability-Based Optimization of Space Truss Structures,” Struct. Multidiscip. Optim., 43(5), pp. 707–729.
Le Maître, O. P. , Reagan, M. , Najm, H. N. , Ghanem, R. G. , and Knio, O. M. , 2002, “ A Stochastic Projection Method for Fluid Flow: II. Random Process,” J. Comput. Phys., 181(1), pp. 9–44.
Gerstner, T. , and Griebel, M. , 1998, “ Numerical Integration Using Sparse Grids,” Numer. Algorithms, 18(3–4), pp. 209–232.
Rahman, S. , and Xu, H. , 2004, “ A Univariate Dimension-Reduction Method for Multi-Dimensional Integration in Stochastic Mechanics,” Probab. Eng. Mech., 19(4), pp. 393–408.
Myers, H. R. , and Montgomery, D. C. , 1995, Response Surface Methodology, Wiley, New York.
Kennedy, M. C. , and O’Hagan, A. , 2002, “ Bayesian Calibration of Computer Models,” J. R. Stat. Soc. B, 63, pp. 425–464.
Ferson, S. , Oberkampf, W. L. , and Ginzburg, L. , 2008, “ Model Validation and Predictive Capability for the Thermal Challenge Problem,” Comput. Methods Appl. Mech. Eng., 197(29–32), pp. 2408–2430.
Xi, Z. , Pan, H. , Fu, Y. , and Yang, R. J. , 2014, “ A Copula-Based Approach for Model Bias Characterization,” SAE Int. J. Passeng. Cars: Mech. Syst., 7(2), pp. 781–786.
Huard, D. , Evin, G. , and Favre, A. C. , 2006, “ Bayesian Copula Selection,” Comput. Stat. Data Anal., 51(2), pp. 809–822.
Roser, B. N. , 1999, An Introduction to Copulas, Springer, New York.
Fermanian, J. D. , 2005, “ Goodness-of-Fit Tests for Copulas,” J. Multivariate Anal., 95(1), pp. 119–152.
Jaynes, E. T. , and Bretthorst, G. L. , 2003, Probability Theory: The Logic of Science, Cambridge University Press, Cambridge, UK.
Hasofer, A. M. , and Lind, N. C. , 1974, “ Exact and Invariant Second-Moment Code Format,” ASCE J. Eng. Mech., 100(1), pp. 111–121.
Tvedt, L. , 1984, Two Second-Order Approximations to the Failure Probability: Section on Structural Reliability, A/S Vertas Research, Hovik, Norway.

## Figures

Fig. 1

Illustration of the U-pooling value (i.e., the shaded area)

Fig. 2

Uncertainty modeling of parameter uncertainties from test data using the Bayesian and Pearson approaches

Fig. 3

UQ of the maximum stress using the MCS and EDR methods at the intended operation condition

Fig. 4

Comparison of maximum displacement between test data and model prediction at 12 tank operation conditions

Fig. 5

Copula modeling of model bias with the relationship of design variables and the baseline model prediction

Fig. 6

Bias distribution of the displacement at the intended tank operation condition

Fig. 7

Statistical relationship between the maximum displacement and maximum stress at the intended tank operation condition

Fig. 8

CDF of the tank probability of failure at the intended operation condition

Fig. 9

CDF of the tank maximum stress with and without considering the model bias at the intended operation condition

Fig. 10

PDFs of the tank stress using two mesh sizes

Fig. 11

Model bias calibration and approximation of the displacement using mesh size #1: (a) comparison of maximum displacement between test data and model prediction at 12 tank operation conditions and (b) bias distribution of the displacement at the intended tank operation condition

Fig. 12

CDF of the tank probability of failure at the intended operation condition using mesh size #1

## Tables

Table 1 Definition of model parameters
Table 2 Randomness of controllable model parameters
Table 3 Maximum displacement data versus baseline model prediction after the UQ at 12 tank operation conditions
Table 4 Statistical moments prediction of the maximum stress using mesh size #1 and #2 at the intended tank operation condition
