Research Papers

A General Methodology for Uncertainty Quantification in Engineering Analyses Using a Credible Probability Box

Mark E. Ewing, Brian C. Liechty, David L. Black

Northrop Grumman Innovation Systems,
Promontory, UT 84302

Manuscript received April 12, 2018; final manuscript received September 10, 2018; published online October 8, 2018. Assoc. Editor: Christopher J. Roy.

J. Verif. Valid. Uncert. 3(2), 021003 (Oct 08, 2018) (12 pages) Paper No: VVUQ-18-1013; doi: 10.1115/1.4041490

Uncertainty quantification (UQ) is gaining in maturity and importance in engineering analysis. While historical engineering analysis and design methods have relied heavily on safety factors (SFs) with built-in conservatism, modern approaches require detailed assessment of reliability to provide optimized and balanced designs. This paper presents methodologies that support the transition toward this type of approach. Fundamental concepts are described for UQ in general engineering analysis. These include consideration of the sources of uncertainty and their categorization. Of particular importance are the categorization of aleatory and epistemic uncertainties and their separate propagation through a UQ analysis. This familiar concept is referred to here as a "two-dimensional" approach, and it provides for the assessment of both the probability of a predicted occurrence and the credibility in that prediction. Unique to the approach presented here is the adaptation of the concept of a bounding probability box to that of a credible probability box. This requires estimates for probability distributions related to all uncertainties, both aleatory and epistemic. The propagation of these distributions through the uncertainty analysis provides for the assessment of probability related to the system response, along with a quantification of credibility in that prediction. Details of a generalized methodology for UQ in this framework are presented, and approaches for interpreting results are described. Illustrative examples are presented.

Verification, validation, and uncertainty quantification are fundamental to modern modeling and simulation applications. Verification deals with the mathematical accuracy of numerical solutions, while validation deals with the accuracy of mathematical models in replicating real-world systems [1]. Uncertainty quantification (UQ) combines these assessments with the variation that is inherent in real-life systems to provide an overall quantification of uncertainty in a predicted response [2,3].

Historical design methods in many industries have long acknowledged the existence of real-life variability and modeling inaccuracies, but legacy approaches have incorporated that knowledge into engineering analyses in only a limited way. Most common design practices use perceived conservatism (sometimes ultra-conservatism) and "safety factors" (SFs) to provide sufficient margin to cover variation and unknowns [4]. These approaches are intended to ensure design adequacy, but they limit the ability to optimize and balance risk. Alternatively, modern UQ approaches demand the explicit treatment of various types of uncertainties, with meaningful statistical analysis to provide quantification of system reliability and modeling confidence. A price is paid for the higher fidelity in uncertainty assessment, but the pay-off can be critical in design trades and the balance of reliability.

Sophistication in UQ capability is becoming increasingly important in the modern aerospace industry. Competition drives the need for optimized designs, and budget constraints limit the availability of expensive full-scale testing to support design iterations. This is especially the case in solid rocket motors, as designs become increasingly reliant on analysis and less on full-scale testing. While historical motor development programs commonly relied on tens of full-scale motor firings, modern budget constraints often limit the number of available full-scale test motors in the design and qualification process. Certain programs have recently been developed with a single qualification motor, and it is anticipated that future programs may be developed with no supporting full-scale static test. This puts tremendous importance on both accurate simulations and the quantification of related uncertainty. Decision-makers benefit from predictions with quantified uncertainty to understand risk, balance designs, and support the proper allocation of resources in the design process.

A common roadblock in UQ implementation is a lack of understanding of appropriate methods to account for and quantify uncertainties. Several excellent resources are available in the literature describing details of UQ approaches [1–3,5], but implementation by practicing engineers requires time to study and distill them into a practical and meaningful approach. As a result, most engineering analyses do not include a formal assessment of uncertainty beyond the use of factors and conservatisms believed to be bounding. A focus of this paper is to create a general summary and description of a practical UQ method that is relevant and useful for practicing engineers and decision-makers. An emphasis is placed on addressing both variability and predictability for the output of an engineering analysis. Variability is associated with inherent random behavior in the physical system and is categorized as aleatory uncertainty. Predictability addresses inaccuracies in the engineering analysis and is dominated by epistemic uncertainties. It is well established that a proper UQ analysis should separate variability from modeling inaccuracies [1–3,5–9]. The literature predominantly promotes a probabilistic approach for the former with a bounding approach for the latter. However, several investigators have recognized the need to include some form of probability assessment for both. Related efforts have included the incorporation of possibility theory [10,11], fuzzy math [12,13], belief functions [14], and Bayesian P-boxes [15].

Here, an adaptation of the traditional bounding approach is described using probabilistic treatments for both variability and predictability. Similar to the approach taken to form a Bayesian P-box, this treatment requires estimates of distribution functions for all variables used in a UQ analysis, including those that would generally be classified as epistemic. This provides results that can be analyzed statistically with respect to both variability and predictability. Of course, the overall assessment of predictability is only as good as the engineering data and judgment used to estimate probable inaccuracies in the models and inputs, and this can be a source of criticism for the approach. However, some assessment of overall accuracy must always be a consideration, and this is often left for high-level decision-makers who are not experts in the engineering models. Here, as opposed to simply assuring decision-makers that each model and input has conservative bounds, the likelihood of reaching certain bounds is estimated. The probabilistic influence of each submodel is propagated through the UQ analysis in a manner that accounts for the (predominantly nonlinear) effects on the overall system response. This can be critical for systems with limited margins that cannot afford the convenience of “bounding” every submodel in the analysis. The likelihood of reaching certain predictive bounds can be quantified by propagating the best available engineering assessments through the UQ analysis. Resulting uncertainty quantification using this approach is presented in the form of cumulative distribution functions (CDF) to capture variability and credible intervals to quantify predictability. This preserves the estimates for submodel predictability, based on engineering knowledge and experience, as they are propagated through the uncertainty analysis. In place of the traditional bounding probability box (P-box) [3,8], the use of a credible P-box is proposed. This adaptation allows decisions to be made with risk acceptance associated with exclusion of predictions that fall beyond a specified level of credibility. As opposed to the bounding P-box that represents 100% credibility, decisions can be made, for example, based on a 90% credible P-box. If decision-makers are unwilling to accept any assessed risk below a bounding analysis, a 100% credible P-box can always be used. In such cases, the credible P-box becomes a bounding P-box.

The paper proceeds as follows: Important UQ terminology is first described in Sec. 2. Section 3 describes generic sources of uncertainty in engineering modeling and Sec. 4 describes an important categorization of these uncertainties into variability and predictability groups. Section 5 describes methods for propagating the two types of uncertainty through a typical UQ analysis. That section is especially important in supporting the interpretation of UQ in terms of probability and credibility. Section 6 discusses interpretation methods, and Secs. 7 and 8 provide illustrative examples. Conclusions are provided in Sec. 9.

Some important terminology and concepts are summarized in this section. While related information is available throughout the literature, it is included here for convenient background and to provide an interpretation and orientation, relative to the usage and applications of these concepts, consistent with the UQ approach presented here. Formal definitions for many of these concepts are provided in the ASME and AIAA guides for uncertainty quantification [16,17]. For more details on these definitions, the reader is referred to the literature [1–3,16,17].

The system response quantity (SRQ) is the parameter of interest from an engineering analysis of a real-life system. Examples might include the maximum stress in a structural member, the temperature at a specified location, or the margin in a structural assessment. The SRQ is predicted by the engineering model and is the parameter for which the uncertainty is to be quantified.

The engineering model provides the functional relationship between the SRQ and the various parameters that govern its behavior. It generally consists of a governing equation, or collection of equations, and their solution. It can encompass a sequence of analyses, and should include the full analytical process used to predict the SRQ.

The term uncertainty quantification covers a very broad range of technical concepts, many of which are well beyond the scope of this paper. Relative to the methodologies described here, it refers specifically to the quantification of both the probability of the SRQ taking on specific values and the credibility in that prediction. Supporting information about uncertain parameters that influence the prediction of the SRQ is generally obtained from “verification” and “validation” processes.

Verification is a quantification of the accuracy in the computational (i.e., numerical) solution of the mathematics of the model. This quantification provides the uncertainties (inaccuracies) associated with solution of the governing equations. Verification analyses are sometimes performed to show that numerical inaccuracies are negligible, but it is important to keep in mind that verification is a quantification of accuracy, not a “stamp of approval” for the model.

Validation is a quantification of the accuracy of the mathematical model in predicting the SRQ. Similar to verification, validation provides a quantification of accuracy as opposed to an approval or certification of the model. This quantification provides the uncertainties (inaccuracies) associated with the limited physics represented by the governing equations. By definition, validation requires some form of measurement of “real-life” to which model results can be compared and consistency quantified.

Certification is the process of determining that the predicted SRQ is accurate enough for the intended application. The process of certification is the “stamp-of-approval” associated with the use of a given model for a certain application. This concept is included here to help clarify the concepts of verification and validation, which are commonly confused with certification activities. Upon completion of a UQ analysis, the engineer can assess whether the model is certified to answer the questions of interest relative to the SRQ.

The CDF is fundamental to the quantification of uncertainty. The CDF ranges from zero to one and provides the probability that a parameter will take a value less than or equal to a specified value. For results with a high degree of variability, the CDF is relatively broad. With less variation, the CDF becomes much steeper as values nearer the average are much more likely. In the extreme, as the standard deviation approaches zero, the CDF approaches a vertical line through the average value of the SRQ. The CDF is the primary tool used in the UQ methodologies presented here, and familiarity with its characteristics is necessary for UQ analysis and interpretation.
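For reference, for a quantity X the CDF evaluated at a value x is the probability that X does not exceed x:

F(x) = P(X ≤ x)

so that F rises monotonically from zero to one as x sweeps from the lowest to the highest possible values of X.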

Subjective probability is an estimated probability derived from current knowledge and personal judgment. Here, the necessary judgment is rooted in engineering assessments that include consideration of physical principles, test data, modeling experience, and similarity. Subjective probability is a key concept from Bayesian statistics [18] and is fundamental to the approach presented in this paper.

The credible interval is the range within which the SRQ is expected to fall with a specified subjective probability. This is a Bayesian statistical concept analogous to the confidence interval in frequentist statistics. It represents a posterior probability of a random parameter after accounting for all relevant evidence that supports the calculation of that parameter.

Credible intervals can be defined in one of two ways: as an equal-tailed (E–T) interval or as the highest posterior density (HPD) interval. These two types of intervals are illustrated in Fig. 1, where 90% credible intervals are shown for a log-normal distribution on both the PDF and CDF plots. The E–T interval is selected such that there is an equal probability above or below the interval bounds. An equal-tailed 90% credible interval can easily be determined by the range in values between the probabilities of 0.05 and 0.95 on the CDF. The HPD interval is the narrowest interval that captures the desired percentage of the distribution. For the log-normal distribution shown, only a small fraction of the distribution lies below the lower bound of the HPD interval. In this paper, credible intervals calculated from the uncertainty quantification analysis are equal-tailed and are referred to simply as the credible interval of the SRQ.
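As a concrete illustration, the equal-tailed interval can be read directly from sample quantiles. The following minimal sketch computes a 90% equal-tailed credible interval from Monte Carlo samples; the log-normal parameters are placeholders, not those of Fig. 1.

```python
import numpy as np

def equal_tailed_interval(samples, credibility=0.90):
    """Equal-tailed credible interval from a set of samples.

    The interval excludes equal probability in each tail, so for 90%
    credibility it spans the 0.05 and 0.95 points of the empirical CDF.
    """
    tail = (1.0 - credibility) / 2.0
    return np.quantile(samples, [tail, 1.0 - tail])

# Illustrative log-normal samples (parameters are placeholders)
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
print(equal_tailed_interval(samples, credibility=0.90))
```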

Uncertainty in engineering modeling can be categorized into three primary areas [3,19]: (1) model input uncertainties, (2) model solution (discretization) uncertainties, and (3) model form uncertainties. Each of these is discussed in this section.

Model Input Uncertainties.

Model input uncertainties include uncertainty in any input required in the solution of the mathematical model. Commonly, these include material properties, boundary conditions, and physical geometry. Traditional approaches in engineering require “conservative” analyses using these inputs. In a proper UQ analysis, these uncertainties are instead propagated through the mathematical model according to their distributions. The result is the capability to quantify the probability of the SRQ taking on certain values, as opposed to an ambiguous “confidence” that the SRQ is conservative. It is generally the case that complex engineering models contain an overwhelming number of inputs, and performing a UQ analysis including all inputs is neither practical nor necessary. Typically, sensitivity analyses can be performed to assess which variables should be included. An example of input parameter selection is provided by Welch et al. [20] who select three relevant inputs for an active flow control jet on a transonic airfoil. Details on the propagation of input uncertainties are provided in Sec. 5.
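As a simple illustration of how candidate inputs might be screened (a minimal one-at-a-time sketch, not the sensitivity method of Ref. [20]; the model and input names here are hypothetical), each input can be perturbed about its nominal value and ranked by its effect on the SRQ:

```python
import numpy as np

def oat_screening(model, nominal, perturbation=0.10):
    """One-at-a-time screening: perturb each input by +/- `perturbation`
    (as a fraction of nominal) and record the resulting change in the SRQ.
    Inputs with negligible effect can be excluded from the UQ study."""
    base = model(**nominal)
    effects = {}
    for name, value in nominal.items():
        hi = model(**{**nominal, name: value * (1.0 + perturbation)})
        lo = model(**{**nominal, name: value * (1.0 - perturbation)})
        effects[name] = abs(hi - lo) / abs(base)
    return dict(sorted(effects.items(), key=lambda kv: -kv[1]))

# Hypothetical model with three candidate inputs
model = lambda a, b, c: a * b**2 + 0.001 * c
print(oat_screening(model, {"a": 2.0, "b": 3.0, "c": 1.0}))
```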

Model Solution Uncertainties.

Model solution uncertainties include discrepancies between the analytical solution for the SRQ, as defined by the mathematical model, and the numerically calculated value. These discrepancies can occur due to approximations in the discretization approach used in the numerical solution method applied to the mathematical model. In addition, these uncertainties include the influence of resolution in the spatial and temporal discretization of a particular application of the model. Convergence criteria and numerical round-off can also add to solution uncertainties. Estimation of related uncertainties is considered a verification process, and it generally requires an assessment of the numerical solution against an analytical result. Several creative methods exist for this assessment, including the method of manufactured solutions, grid convergence studies, and other numerical evaluations [1,4,19,21]. Depending on computational resources, numerical inaccuracies may be dominated by other uncertainties. In many engineering applications, steps are taken to ensure that solution errors are negligible and can be excluded from the uncertainty quantification. If these uncertainties cannot be neglected, they can be included in the UQ study as sources of epistemic error.
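One common recipe for this quantification is a three-grid convergence study in the spirit of Ref. [1]. The sketch below (the solution values are hypothetical) estimates the observed order of accuracy, a Richardson-extrapolated solution, and a grid convergence index that could be carried into the UQ if it is not negligible:

```python
import numpy as np

def grid_convergence(f_fine, f_med, f_coarse, r=2.0, Fs=1.25):
    """Three-grid convergence study with constant refinement ratio r.

    Returns the observed order of accuracy p, the Richardson-extrapolated
    estimate of the grid-converged solution, and the fine-grid convergence
    index (GCI), which can be treated as a numerical (epistemic)
    uncertainty on the solution."""
    p = np.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / np.log(r)
    f_extrap = f_fine + (f_fine - f_med) / (r**p - 1.0)
    gci_fine = Fs * abs((f_fine - f_med) / f_fine) / (r**p - 1.0)
    return p, f_extrap, gci_fine

# Hypothetical solutions on coarse, medium, and fine grids (r = 2)
print(grid_convergence(f_fine=0.971, f_med=0.970, f_coarse=0.966))
```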

Model Form Uncertainties.

This type of uncertainty is related to the accuracy of the mathematical model in representing real-life behavior of the SRQ. Assessment is part of the validation process and usually requires a comparison of a predicted parameter with measured values. An example is provided by Freeman and Roy [19], who compare fluid analysis results with wind tunnel data. This is often a challenge since measurement of real life has its own uncertainties, and predicted values themselves contain inherent input uncertainties. In addition, validation data are sometimes unavailable in the domain of interest, making quantification of the model form error difficult. Approaches and examples for addressing some of these difficulties are available in the literature [2,3,22,23]. These include methods that allow for the extraction of model form error from comparisons that include input uncertainties and methods for extrapolating from a validation domain to an application domain.

Quantification of this type of uncertainty is a critical, controversial, and often debilitating part of the UQ process. In contrast, typical engineering analyses simply use an established safety factor. This relieves the engineer from the burden (and often the responsibility) of having to assess his/her belief about an engineering model. A common implication is that a safety factor (usually historical) somehow appropriately covers variability and predictability of an analysis. Here, the concept is promoted that this type of uncertainty should be assessed using subjective probabilities based on engineering assessments. Of course, the quality of the UQ analysis hinges on these assessments, which can vary in fidelity. However, engineering decisions must be made with the best available information and estimates, and these are best assessed by the engineering experts performing the analysis. If related data or physical understandings are lacking, then the estimates may be crude and conservative (e.g., plus or minus 50% with a uniform distribution). However, this “belief” in the model form error, when propagated through the UQ analysis, will be appropriately reflected in the results.

Categorization of two basic types of uncertainty, aleatory and epistemic, is critical for proper UQ analysis [1–3,5–9]. These are directly related to the previously discussed concepts of variability and predictability with aleatory uncertainties characterizing variability and epistemic uncertainties influencing predictability. Separating these types of uncertainties allows for quantification of both probability of a stated value for the SRQ and the credibility in that prediction. To allow for estimates of credible intervals associated with the UQ result, a Bayesian approach is taken to characterize the epistemic variables governing predictability by assigning probability distributions to all variables, whether aleatory or epistemic in nature. These two types of uncertainties are discussed below.

Aleatory Uncertainties.

Aleatory uncertainty is associated with real-life variation. It is inherently random and cannot be reduced with further knowledge. A common example is that of material properties that have natural variations due to inhomogeneity, material processing, and conditioning. These variations can be small or large, and their influence on the SRQ may be moderate or pronounced. To account for aleatory uncertainties, the expected distribution must be defined. For example, the structural modulus of a material might be described by an average value, a standard deviation, and a normal distribution. The influence on the predicted SRQ should be assessed, and if significant, related aleatory uncertainty should be determined by propagating this distribution through the model.

Epistemic Uncertainties.

Epistemic uncertainty is associated with the lack of knowledge, and can be reduced with improved understanding. This can be associated, for example, with the quantification of material properties, boundary conditions, and even validation of the model itself. Epistemic uncertainty is usually treated using bounding intervals, with nothing assumed about a related probability distribution. This approach results in UQ analyses that are bounding with respect to epistemic influences. Here, it is proposed that the engineer make assessments relative to the distributions for each epistemic input. This may be done using validation data, similarity arguments, engineering judgment, and experience. In this respect, a Bayesian approach using subjective probabilities is taken for characterizing the epistemic uncertainties. Of course, the quality of these subjective probabilities governs the accuracy of the UQ analysis, and if these distributions cannot be accepted, the associated bounds of the UQ can be used, consistent with the intent of the bounding interval approach. The propagation of epistemic uncertainty through the model should be separated from aleatory uncertainties. Doing so, along with specifying the distribution functions for all variables, allows for an assessment of both the probability and credibility associated with a predicted SRQ. This is described further in Sec. 5.

Philosophy.

A brief discussion on the philosophy of aleatory versus epistemic categorization is included since the topic inevitably leads to difficult decisions, strong opinions, and sometimes debilitating argument. There is sometimes the opinion that all uncertainty is epistemic. After all, even what is thought of as inherent variation in material properties is caused by some physical mechanism that if properly included in the model could be predicted, and related uncertainty due to the unknown approach for proper modeling is by definition epistemic. That argument certainly has merit from a philosophical point of view, but the viewpoint relative to engineering modeling must be considered. It is proposed here that the categorization of aleatory versus epistemic is relative to the engineering model. For example, if the elastic modulus is known to vary, due to physical mechanisms not included in the model, and it is known how that property statistically varies, then those variations should be assessed as aleatory. Once propagated through the model, that aleatory uncertainty in the modulus will result in a probability distribution (a CDF) for the SRQ that would be expected in real-life if we repeated a related experiment. However, if uncertainty in the understanding of the modulus is due to, say, limited data, then that uncertainty is epistemic and it could be reduced, without changing the model, by obtaining more test data. Propagating this uncertainty through the model will result in various CDFs for the SRQ. The distribution of CDFs, resulting from the epistemic uncertainty, should encompass real life with its inherent variation. In summary, for this treatment, aleatory is associated with expectations for variations in real life, and epistemic is associated with inability to perfectly predict real life.

A third type of uncertainty, “ontological,” should also be acknowledged. This is related to errors in the analysis that are completely unknown. Whereas epistemic error accounts for known deficiencies in model form or inputs, ontological uncertainties are associated with unknown and unexpected occurrences. These can be due to phenomena that are unexpected and unaccounted for in the physical model or insufficient control of manufacturing processes. To account for this type of uncertainty, engineers can provide an “ontological allowance” in the design. Analogous to safety factors, these allowances can provide a buffer against unknown occurrences. They should be selected based on consideration of the evolutionary versus revolutionary nature of the design, and applied after a proper UQ that includes known variation and modeling uncertainties.

Here, the defined goal of UQ is the prediction of real life (the SRQ), including its natural variation, with a stated level of credibility in that prediction. Fundamental to this assessment is the acknowledgment that "real life" is not deterministic (relative to the limited model) and that models are imperfect. The prediction of a real-life event should therefore account for aleatory influences, and the final prediction should be in the form of probabilities of the SRQ taking various values. This can be expressed in the form of a CDF. In addition, since modeling inaccuracies exist (epistemic uncertainty), the exact CDF representing real life is unknown, and a range of CDFs is possible. The traditional probabilistic bounding approach to UQ analyses sets out to establish upper and lower bounds on these possible CDFs. Here, it is proposed that all modeling uncertainties be assessed with estimated probability distribution functions. A Monte Carlo approach is adopted for uncertainty propagation that generates an ensemble of CDFs that represent the best current estimate available for the probability distribution associated with the SRQ. Denser regions of the ensemble are more likely to be representative of real life than sparser regions, and upper and lower limits of the ensemble bound the possibilities. These important ideas (separation of variability and predictability) drive the motivation to separately categorize aleatory and epistemic uncertainty and to separately propagate their influences through the model. This is described below by way of example. For simplicity, the following simple model for an SRQ is used:

(1)  SRQ = E1 E2 A3 A2 A1 + E3

Here, the SRQ is a function of six independent variables E1, E2, E3, A1, A2, and A3. The first three are not expected to vary significantly but have uncertainties that are epistemic; that is, there are limitations in the understanding of their values. The remaining three are very well characterized but have inherent (aleatory) variation. Characterizations for the parameters are provided in Table 1. The epistemic parameters are assumed to have uniform distributions with upper and lower values specified in the table. The aleatory parameters have normal distributions, and average values are provided along with standard deviations. All other uncertainties are assumed to be negligible, and these uncertainties are propagated through the model of Eq. (1) using a Monte Carlo approach. Other sampling techniques, such as Latin hypercube sampling, may also be used if computational times become burdensome. A distinction is made between "one-dimensional" (1D) and "two-dimensional" (2D) propagation. For illustration, the uncertainty analysis is first performed using a 1D approach that acknowledges no fundamental difference between variability and predictability, but accounts for them both. The second is a 2D approach that propagates variability and predictive uncertainties separately. The results demonstrate that information is lost in a one-dimensional propagation, and that two-dimensional analysis is preferable. Various terminologies are used within the literature to describe these concepts. These include "first- and second-order" [24], "single- and double-loop" [13], and "1D and 2D" [12,15,25,26]. Here, the "1D and 2D" terminologies are adopted.

One-Dimensional Analysis.

Here, the parameters of Table 1 are propagated through the model of Eq. (1) using a Monte Carlo approach as illustrated in Fig. 2. In the simulation, a vector of the six inputs [E1,E2,E3,A1,A2,A3] is randomly selected according to the distributions of the six components. This is used in the model (Eq. (1)) to calculate the SRQ associated with that input vector. The process is repeated for a total of N trials, and the output of the ith iteration (SRQi) is saved. After the N trials are completed, the set of outputs is analyzed and a single CDF is created for the SRQ. This process has been performed for the model of Eq. (1) with the data of Table 1, and the resulting CDF is shown in Fig. 3. Interpretations are discussed in Sec. 6.
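A minimal sketch of this 1D propagation is shown below. The distribution parameters are illustrative placeholders rather than the values of Table 1, and the model function simply evaluates Eq. (1) as written:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials

def sample_inputs(size):
    """Placeholder distributions standing in for Table 1 (values illustrative):
    uniform for the epistemic parameters, normal for the aleatory parameters."""
    E1 = rng.uniform(0.9, 1.1, size)
    E2 = rng.uniform(0.8, 1.2, size)
    E3 = rng.uniform(-50.0, 50.0, size)
    A1 = rng.normal(10.0, 1.0, size)
    A2 = rng.normal(5.0, 0.5, size)
    A3 = rng.normal(20.0, 2.0, size)
    return E1, E2, E3, A1, A2, A3

def model(E1, E2, E3, A1, A2, A3):
    return E1 * E2 * A3 * A2 * A1 + E3  # Eq. (1)

srq = model(*sample_inputs(N))

# Empirical CDF of the SRQ (the single curve of Fig. 3) and two percentiles
srq_sorted = np.sort(srq)
cdf = np.arange(1, N + 1) / N
p10, p90 = np.quantile(srq, [0.10, 0.90])
```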

Two-Dimensional Analysis.

A 2D propagation allows for the separation of inherent variability (aleatory effects) and modeling uncertainty (epistemic effects). The process is illustrated in Fig. 4. Aleatory variations are propagated through the analysis in the inner Monte Carlo loop as shown in the figure. Epistemic uncertainties are propagated in the outer loop. Iterations in the inner loop are indexed with the variable j, and the index i is used for the outer loop. The process begins with the first outer-loop iteration by randomly selecting an epistemic input vector [E1,i,E2,i,E3,i] from within the distributions for each component. These values are then fixed, and components of the aleatory input vector [A1,j,A2,j,A3,j] are randomly selected from within the distributions of the aleatory parameters. The inner loop proceeds as a one-dimensional propagation through continued random selection of the aleatory input vector, all with this same fixed epistemic vector. The process continues until a specified number of iterations M is reached in the inner loop. Once this is complete, a CDF can be constructed for that particular epistemic vector and the completed Monte Carlo of the inner loop. The resulting CDF is illustrated inside the dashed box of Fig. 4. The process is repeated with a new selection for the epistemic vector, followed by a complete Monte Carlo propagation for the inner loop. This results in another CDF associated with the new epistemic vector. The process is repeated until a specified number of completions N is reached in the outer loop, producing N separate CDFs, one from each inner loop. It is important to keep in mind that each CDF represents a possible representation of the real-life behavior of the SRQ, including its inherent variation. The ensemble of CDFs can be used to assess probability as well as credibility in estimating the SRQ. This process was performed for the model of Eq. (1) with the data of Table 1, and the resulting collection of CDFs is shown in Fig. 5. The question arises as to how to interpret the ensemble of CDFs. A methodology is described in Sec. 6.
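The nested loops can be sketched as follows, again with placeholder distributions (M inner aleatory samples for each of N outer epistemic samples). Each row of the resulting array, paired with the probability vector, is one CDF of the ensemble in Fig. 5:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200      # outer (epistemic) iterations -> N CDFs in the ensemble
M = 2_000    # inner (aleatory) iterations per CDF

def sample_epistemic():
    # Placeholder epistemic distributions (illustrative, not Table 1 values)
    return rng.uniform(0.9, 1.1), rng.uniform(0.8, 1.2), rng.uniform(-50.0, 50.0)

def sample_aleatory(size):
    # Placeholder aleatory distributions (illustrative, not Table 1 values)
    return (rng.normal(10.0, 1.0, size),
            rng.normal(5.0, 0.5, size),
            rng.normal(20.0, 2.0, size))

def model(E1, E2, E3, A1, A2, A3):
    return E1 * E2 * A3 * A2 * A1 + E3  # Eq. (1)

ensemble = np.empty((N, M))
for i in range(N):
    E1, E2, E3 = sample_epistemic()        # fixed for the entire inner loop
    A1, A2, A3 = sample_aleatory(M)        # aleatory variation, inner loop
    ensemble[i, :] = np.sort(model(E1, E2, E3, A1, A2, A3))

prob = np.arange(1, M + 1) / M             # probability axis for every CDF
```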

As previously stated, the present goal in uncertainty assessment is a quantification of the probability of the SRQ taking on certain values, along with a stated level of credibility in that quantification. The probability is influenced by the inherent variability in the prediction (aleatory effects), and the credibility is associated with knowledge, or lack thereof, in the modeling (epistemic effects). With this in mind, interpretations using 1D and 2D UQ assessments are discussed below.

One-Dimensional Analysis.

As an example, consider the 1D results of Fig. 3, which represent probabilities for an SRQ in the form of a CDF. An analyst might, for example, be interested in the lowest or highest expected values of the SRQ. The CDF of Fig. 3 is shown again in Fig. 6, but here assessments at 10% and 90% probabilities are illustrated by the dashed lines in the figure. The lower horizontal line corresponds to the 10th percentile and the upper to the 90th. The lines intersect the CDF, and vertical lines are drawn down to the SRQ axis. The corresponding values of the SRQ are associated with the percentiles. In the example here, the probability of the SRQ taking on values less than 1050 is 10%, while the probability of taking on values of 2200 or less is 90%. Since the one-dimensional results have blended the influences of aleatory and epistemic uncertainties, the variation represented in the CDF is neither real-life variation nor modeling uncertainty. Instead, it represents a blended probability of both. While the 1D uncertainty assessment undoubtedly contains more information than a simple deterministic prediction, information is lacking concerning the actual prediction of the SRQ and the confidence in that prediction. In a sense, the 1D assessment provides a nominal probability associated with a particular value of the SRQ, but says nothing about the range of possible probabilities for the CDF. The 2D results of Sec. 6.2 show the benefits of that method, including capturing potential non-conservatisms associated with the 1D assessment.

Two-Dimensional Analysis.

The results of a two-dimensional assessment applied to the model of Eq. (1) are illustrated in Fig. 7. The results are in the form of an ensemble of CDFs, which is made up of the various CDFs associated with multiple epistemic uncertainty vectors. Figure 7(a) shows the ensemble of CDFs and the 90% credible P-box. The P-box is established by forming an equal-tailed credible interval at each probability level. This is possible because of the subjective probabilities assumed for epistemic variables, which in turn allow for a Bayesian statistical treatment of the resulting CDF ensemble. For the example here, this is done by determining the region that contains 90% of the SRQ values for a particular probability (5% are excluded on either side). The P-box is sliced horizontally, creating a distribution of SRQ values for each probability level. The 5th and 95th percentiles of these values are the credible interval for that probability. These are compiled into new CDFs that represent the upper and lower bounds of the box. The region within the P-box represents the uncertain behavior of the predicted SRQ. The thickness of the box represents "predictability," in this case associated with the epistemic uncertainties listed in Table 1. Likewise, the "slope" of the P-box represents a measure of the inherent variability of the system associated with the aleatory parameters. A very well-predicted system has a narrow P-box, and a system with little real-life variation has a steep P-box. In the limit of perfect modeling for a nonrandom system, the P-box becomes a vertical line at the location of the deterministic value of the SRQ. Figures 7(b) and 7(c) show two different manners of interpretation from the P-box. The first extracts SRQ values associated with particular probabilities, while the second extracts probabilities associated with particular SRQ values. Figure 7(b) illustrates an interpretation in the form of targeted probabilities. Horizontal lines are drawn at locations of the 10th and 90th percentiles. Intersections with the 90% credible P-box curves represent the values between which the SRQ should be expected to fall with the stated credibility level. For example, the intersection of the 10th percentile in Fig. 7(b) shows predicted SRQ values between 900 and 1250. As a result, it can be stated that the 10th percentile is in the interval [900, 1250] with 90% credibility (or above 900 with 95% credibility, since 95% fall above the lower 90% credibility bound). Similarly, the 90th percentile is in the interval [1800, 2500] with 90% credibility (or below 2500 with 95% credibility).
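A sketch of this horizontal slicing is shown below, applied to an ensemble array like the one constructed above (a synthetic ensemble stands in for the Monte Carlo output). The last two lines indicate how readings in the style of Figs. 7(b) and 7(c) could be extracted:

```python
import numpy as np

def credible_pbox(ensemble, credibility=0.90):
    """Equal-tailed credible P-box from an ensemble of empirical CDFs.

    `ensemble` has shape (N, M): each row is a sorted set of M SRQ samples,
    i.e., one CDF. Slicing at a fixed probability level (a fixed column)
    gives N SRQ values; their 5th and 95th percentiles (for 90% credibility)
    form the left (lower-SRQ) and right (upper-SRQ) bounding curves."""
    tail = (1.0 - credibility) / 2.0
    left = np.quantile(ensemble, tail, axis=0)
    right = np.quantile(ensemble, 1.0 - tail, axis=0)
    prob = np.arange(1, ensemble.shape[1] + 1) / ensemble.shape[1]
    return prob, left, right

# Synthetic ensemble for demonstration (stands in for the 2D Monte Carlo output)
rng = np.random.default_rng(3)
shifts = rng.normal(1500.0, 150.0, size=300)                       # epistemic shifts
ensemble = np.sort(rng.normal(shifts[:, None], 350.0, size=(300, 2000)), axis=1)

prob, left, right = credible_pbox(ensemble, credibility=0.90)

# Fig. 7(b)-style reading: SRQ interval for a targeted probability (10th percentile)
srq_at_p10 = (np.interp(0.10, prob, left), np.interp(0.10, prob, right))

# Fig. 7(c)-style reading: probability interval for a targeted SRQ value
p_bounds_at_x = (np.interp(2100.0, right, prob), np.interp(2100.0, left, prob))
```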

Another assessment is in the form of predicted probabilities for a specified SRQ. This is illustrated in Fig. 7(c) for an SRQ value of 2100. Here, the vertical line intersects the credibility bounds at probabilities of 72% and 96%. It can therefore be stated, with 90% credibility, that the probability of the SRQ taking on a value of 2100 or less is between 72% and 96%. Note that a similar assessment from the 1D results of Fig. 6 would suggest a probability of about 89% that the SRQ is 2100 or less. This illustrates the potentially nonconservative nature of the 1D interpretation. If, for example, an important system design requirement were for the SRQ to remain below 2100, the 1D assessment would suggest a probability of 89% of meeting the requirement. However, as seen in the 2D interpretation, it can only be said (with 90% credibility) that the probability is at least 72%. Whereas 89% may represent an acceptable level of risk, 72% might not.

To help conceptualize the engineering application of the methodology, a simple illustrative example is provided. The example is in the form of the structural assessment of a cantilevered beam as illustrated in Fig. 8. Although the example is simple, it is presented in the form of a typical engineering assessment that might include the six areas of (1) an engineering model, (2) loads and environments, (3) material properties, (4) production variabilities, (5) design variables, and (6) design requirements. An assessment and design iteration is first made using a typical safety factor approach. This is followed by a reliability assessment using UQ.

Engineering Model.

Here, the SRQ of interest is the capability of the beam C based on the maximum stress σ and the strength of the beam material S. Specifically, the capability is given by

(2)  C = S / σ

For the cantilevered beam, this is given by

(3)  C = S a³ / (6 W l)

where a is the cross-sectional width of the square beam, W is the weight of the person, and l is the length of the beam. Through model validation exercises against measured data, the accuracy of the model in predicting nominal behavior is observed to follow a Gaussian distribution with a coefficient of variation of 10% (note that this is a hypothetical example and the 10% value is for illustrative purposes). Note that several factors (inputs) that may affect the capability have been omitted from Eq. (3) and are assumed to be negligible. These include twisting due to the weight of the person not acting through the center of the cross section, wind loading, thermal loading, etc. A more detailed analysis might include such secondary effects.

Loads and Environments.

Here, the load is the weight of the person on the beam. In this example, the beam is to be designed to support a particular, but unknown, person. As a result, the weight of the person expected on the beam is unknown, but based on a sampling of measurements of potential participants the normal distribution of Fig. 9 is expected. This corresponds to an average weight of W = 175 lbf (891 N) with a standard deviation of σw = 11 lbf (49 N).

Material Properties.

The strength of the beam material is known to vary according to the normal distribution of Fig. 10, which corresponds to an average strength of S = 200 MPa with a standard deviation of σs = 10 MPa.

Production Variabilities.

Production variabilities are known to affect the tolerance in the beam length, which is nominally l = 5 m. An assessment is made that variations can be controlled to within a coefficient of variation of 1% with a normal distribution. This is illustrated in Fig. 11.

Design Variables.

The design variable is the width of the beam a, and following a preliminary design, a value of 5.8 cm is proposed. This beam width, if acceptable, will represent the design. If not acceptable, an alternative value that meets design requirements will be determined.

Design Requirements (Conservative SF Approach).

Requirements for the beam design are intended to account for uncertainty in the loads, inaccuracies in the model, property variations, and production variability. A "3-sigma" requirement philosophy is adopted with the intent to provide greater than 99% reliability. To cover load uncertainties, a safety factor of 1.5 is to be applied to the weight. For modeling uncertainty, the usage of a 10% knock-down factor is required. To account for property and production variabilities, the usage of "3-sigma" conditions is required for the material strength and beam length. Incorporating these requirements into the engineering model results in the following modification to Eq. (3):

(4)  C = 0.9 (S − 3σs) a³ / [6 (SF) W (l + 3σl)]

Safety Factor Assessment.

Application of Eq. (4), with SF = 1.5 and the standard deviations of the material strength and beam length, gives a beam capability of 0.77. Since the value is less than 1.0, a redesign is required. If the beam width is increased to 6.5 cm, a new and acceptable capability of 1.08 is achieved.

Reliability (Uncertainty Quantification) Assessment.

For comparison, a UQ assessment is made using the methods described here. In this example, epistemic uncertainty is associated with model inaccuracy and the weight on the beam. Aleatory uncertainty is associated with production variability in the beam length and variability in material strength. As a result, a 2D UQ assessment includes model uncertainty and weight in the outer loop, with beam length and material strength in the inner loop. The results of a Monte Carlo analysis with M = 2000 for the inner (aleatory) loop and N = 3000 for the outer (epistemic) loop are shown in Fig. 12. The ensemble of 3000 CDFs is shown along with the associated 90% credible P-box. The P-box is skewed toward higher capabilities. This is a result of the inverse dependence on the weight and length in Eq. (3). Applying the interpretation method as previously described, the probability of the beam capability being 1.0 or less (with a = 5.8 cm) is assessed using the intersection of the vertical dashed line with the lower bound of the P-box. The results show the probability of the capability being 1.0 or less to be 0.4% with 95% credibility. This corresponds to a reliability of 99.6%. As a result, the original design meets the targeted reliability of 99%. A similar assessment on the redesign results in over 99.9999% reliability. This is illustrated in Fig. 13, which shows a shift of the P-box to the right with the thicker beam design. The increased beam width of the redesign results in an unnecessary weight increase of 26%.
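A sketch of this 2D assessment for the beam is given below. It assumes that the 10% model accuracy acts as a multiplicative factor on Eq. (3) and uses the distribution values quoted in the text (in SI units); the exact numbers in Figs. 12 and 13 depend on details not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 3_000, 2_000          # outer (epistemic) and inner (aleatory) iterations
a = 0.058                    # preliminary beam width, m

ensemble = np.empty((N, M))
for i in range(N):
    # Epistemic (outer loop): model accuracy and the unknown person's weight
    model_err = rng.normal(1.0, 0.10)      # assumed multiplicative, 10% CoV
    W = rng.normal(891.0, 49.0)            # weight, N (values quoted in the text)
    # Aleatory (inner loop): material strength and beam length
    S = rng.normal(200.0e6, 10.0e6, M)     # strength, Pa
    l = rng.normal(5.0, 0.05, M)           # length, m (1% coefficient of variation)
    ensemble[i, :] = np.sort(model_err * S * a**3 / (6.0 * W * l))   # Eq. (3)

# Probability that the capability is 1.0 or less, read on the conservative
# bounding curve of the 90% credible P-box (5% quantile of capability at
# each probability level)
prob = np.arange(1, M + 1) / M
bound = np.quantile(ensemble, 0.05, axis=0)
p_capability_le_1 = np.interp(1.0, bound, prob)
print(p_capability_le_1, 1.0 - p_capability_le_1)   # failure probability, reliability
```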

This section describes a UQ application to the thermal analysis of a solid rocket motor nozzle. In order to allow presentation in the open literature, a generic rocket motor is considered with inputs that are hypothetical but representative of typical values and assessments used in rocket nozzle design. A generic solid rocket motor is illustrated in Fig. 14 with some primary components identified. Basic operation of the motor involves the ignition of solid propellant within the motor case followed by the flow of multiphase, high-temperature, high-speed, compressible, chemically reacting combustion products through a converging/diverging nozzle. Structural components of the nozzle are protected (insulated) with an ablative carbon-cloth phenolic (CCP) insulator, which is sized in thickness to protect underlying structural components from reaching temperatures above 366 K (200 °F). The ablative material pyrolyzes (chars) as in-depth temperatures rise. Resulting pyrolysis gases permeate through the porous char structure and pick up energy as they flow to the surface. In addition, the surface of the material chemically ablates under the influence of the reactive boundary gases.

Engineering modeling of related phenomena is performed to determine the CCP thickness requirements along the contour of the nozzle. The analyses are performed at specific axial locations, corresponding to various area ratios. Several axial design stations are illustrated in Fig. 15. These correspond to specific area ratios and are labeled A in the figure. Supporting analysis stations (labeled S) are identified from among the design stations. The throat is located at station 3, where the area ratio is one. Forward stations (to the left) are subsonic, and aft stations (to the right) are supersonic.

Supporting modeling is performed using thermochemical analyses coupled with 1D ablation heat transfer models. The Chemics thermochemistry software [27] provides local boundary conditions at the specified area ratios. These include local values for the recovery enthalpy, enthalpy conductance, and incident radiation heat flux. The thermochemistry modeling also provides thermochemical reaction tables associated with the chemistry of the propellant and ablative. In-depth material response is calculated using the insulation thermal response and ablation code (ITRAC) [28], which uses the boundary conditions and thermochemistry tables as input. Required material properties include thermal conductivity, density, specific heat, and heats-of-formation (for both virgin and charred material), along with pyrolysis kinetics and pyrolysis gas enthalpy [29].

This analysis is performed for a typical aluminized solid rocket propellant (16% aluminum) with an average chamber pressure of 700 psi (4.83 MPa). The CCP material properties are based on the Theoretical Ablative Composite for Open Testing (TACOT) material property set [30], with the density adjusted to 1400 kg/m3 to be more representative of a typical rocket nozzle ablative material.

The required insulation thickness at each station is based on the depth of the 200 °F isotherm in the material. This criterion ensures that temperatures in any structural components will be low enough to maintain structural integrity. To estimate this required depth, semi-infinite models are run using the Chemics and ITRAC software with properties for the Theoretical Ablative Composite for Open Testing (TACOT) material. Local radiation, convective coefficients, and recovery enthalpies are calculated with Chemics for the specified area ratios. Based on historical engineering experience, the uncertainty in this analysis is dominated by the parameters listed in Table 2, which also includes the assessed type of uncertainty, the assumed probability distribution, and the basis of that assessment. Several other model inputs are omitted from the table because their variations have negligible influence on the 200 °F isotherm. Aleatory property variations are based on observed variations of this type of material in laboratory measurements. Boundary condition (enthalpy conductance) variations are based on the observed behavior of directly related CCP surface erosion rates in several static test motors. These are assessed using pre- and postfire nozzle measurements at several axial and circumferential locations in the nozzle. Assessment of predictive epistemic uncertainties of the overall modeling approach is based on validation of models against laboratory and static motor test data.

Of particular importance in this UQ is the char conductivity, for which a direct measurement methodology does not exist due to the associated extremes of related temperatures (as high as 3000 K). Instead, a conductivity model is used that is partially based on a diffusive approximation for the apparent conductivity due to radiation exchange within the porous char structure. This model assumes a relationship between char conductivity and temperature, and related parameters are obtained through calibration against high-temperature laboratory charring material tests [29]. While the conductivity model is very successful in capturing temperature dependence, there is significant uncertainty associated with the model parameters. Related probability distributions are estimated using assessments of model consistencies against the calibration test data. The details of this calibration process are beyond the scope (and intent) of this paper. For the purpose of this hypothetical design, the estimated distribution is triangular with upper and lower limits of ±15%. With this assessment, as opposed to simply bounding the analysis at ±15%, the engineer has assessed that 15% error is bounding, but the error is more likely centered within the interval, and a triangular distribution is used. Aleatory variation is also expected in the char conductivity, so an associated distribution is included based on similarity to the virgin assessments.

Heat-of-pyrolysis, a difficult parameter to measure and a problematic parameter in the analysis, is not expected to have significant variation since the chemistry of the phenolic resin is highly controlled, but an epistemic uncertainty is expected in its value. Again, a distribution is provided by the analysis engineer based on modeling assessments against data, and a ±15% triangular distribution is used. Also included is a distribution for the overall error in the analysis (the model form error). This is well understood for this type of modeling due to the availability of extensive model comparisons against test data (validation exercises). While the SRQ here is the depth of the 200 °F isotherm, no direct measurement of this is practical. However, uncertainties in the predicted char depth are well understood, and based on engineering modeling, these uncertainties should bracket the uncertainties in the isotherm depth. Validation data show the epistemic uncertainty to be well represented by a Gaussian distribution with a 10% coefficient of variation. As a result, a 10% misprediction would be considered a "1-sigma" event, while 40% would be a "4-sigma" occurrence. Model solution error due to numerical round-off, discretization, and convergence is negligible for this example.
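As a brief illustration of how such assessments could be represented for sampling, the multiplicative factors below are patterned on the distributions just described; the specific assignments and coefficients of variation are placeholders, not the actual Table 2 entries:

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_epistemic_factors():
    """Multiplicative epistemic factors patterned on the text: +/-15% triangular
    for char conductivity and heat of pyrolysis, Gaussian 10% CoV model error."""
    return {
        "char_conductivity": rng.triangular(0.85, 1.0, 1.15),
        "heat_of_pyrolysis": rng.triangular(0.85, 1.0, 1.15),
        "model_form":        rng.normal(1.0, 0.10),
    }

def sample_aleatory_factors(size):
    """Aleatory property and boundary condition variations (the coefficients of
    variation here are placeholders)."""
    return {
        "virgin_properties":    rng.normal(1.0, 0.03, size),
        "char_conductivity":    rng.normal(1.0, 0.03, size),
        "enthalpy_conductance": rng.normal(1.0, 0.05, size),
    }
```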

Solution times for the engineering model are on the order of minutes. Even a minute of solution time is prohibitively long for a UQ analysis, which can require millions of model evaluations. Therefore, a surrogate to the engineering model was developed for rapid UQ assessment. The surrogate was developed by evaluating the isotherm prediction at several variations of the inputs of Table 2, as listed in Table 3. Three levels of the virgin material properties and char conductivity were used because the isotherm showed a slight nonlinear dependency on these values. A highly nonlinear response was observed in the char specific heat, requiring five levels for the surrogate. The isotherm varies nearly linearly with the heat of pyrolysis and enthalpy conductance, and consequently only two values were used in characterizing the surrogate model. A full factorial design was used for the surrogate, where all possible combinations of the input scaling factors in Table 3 were evaluated (540 runs of the engineering model for each of the five stations shown in Figs. 15–17). These evaluations of the engineering model were tabulated and used along with multivariate linear interpolation for the surrogate. Error in the interpolation scheme is less than 1% compared to the engineering model. This value was determined by examining the engineering model prediction at several points not evaluated in the interpolation table.
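A sketch of such a tabulated surrogate is shown below. The level values, the split of levels among the six inputs, and the stand-in model function are illustrative assumptions; the real table would hold the 540 Chemics/ITRAC isotherm depths:

```python
import numpy as np
from itertools import product
from scipy.interpolate import RegularGridInterpolator

# Scaling-factor levels for each uncertain input (placeholder values; the
# split 3*3*3*5*2*2 = 540 mirrors the full factorial size quoted in the text)
levels = (
    np.array([0.9, 1.0, 1.1]),        # virgin conductivity
    np.array([0.9, 1.0, 1.1]),        # virgin specific heat
    np.array([0.85, 1.0, 1.15]),      # char conductivity
    np.linspace(0.85, 1.15, 5),       # char specific heat (strongly nonlinear)
    np.array([0.85, 1.15]),           # heat of pyrolysis (nearly linear)
    np.array([0.9, 1.1]),             # enthalpy conductance (nearly linear)
)

def engineering_model(x):
    """Hypothetical stand-in for one Chemics/ITRAC isotherm-depth run (meters)."""
    kv, cv, kc, cc, hp, h = x
    return 0.05 * (kv * kc)**0.6 * h**0.8 / ((cv * cc)**0.4 * hp**0.1)

# Full factorial evaluation (540 runs), tabulated on the level grid
table = np.array([engineering_model(x) for x in product(*levels)])
table = table.reshape([len(g) for g in levels])

# Multivariate linear interpolation acts as the fast surrogate model
surrogate = RegularGridInterpolator(levels, table, method="linear")
print(surrogate([[1.0, 1.0, 1.05, 0.97, 1.0, 1.02]]))
```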

The uncertainties of Table 2 were propagated through the surrogate model using the 2D Monte Carlo approach. The analysis included 2000 inner (aleatory) loops and 1000 outer (epistemic) loops. Results of the predicted ensemble of CDFs are shown in Fig. 16, which also shows the 90% credible P-box (the dashed curves) for the station at the nozzle throat (station 3). The credible bounds are closely approximated by a normal distribution, and "3-sigma" conditions are selected as the design condition (for a normal distribution, this corresponds to 99.87% of the distribution falling below that value). In order to select with 95% credibility, the upper curve of the 90% credible P-box is used (95% of the results fall below the upper 90% credibility limit). This selection for the design requirement is illustrated in Fig. 16. For this station (the nozzle throat), the required insulation thickness is just over 0.05 m (1.97 in). With consideration of the various uncertainties described, it can be stated that the probability of this thickness providing protection against the design requirement (the isotherm penetration depth) is 99.87% with 95% credibility. Similar results for the five nozzle analysis stations are shown in Fig. 17, with the nominal (median) prediction shown by the triangular marker and the bands representing 3-sigma conditions. Required design thicknesses range from 0.02 m (0.79 in) at the aft-most station analyzed to 0.051 m (2.0 in) at the nozzle throat.

Some discussion of these results is in order. First, a reminder that the example, while representative, is hypothetical; it is intended to illustrate the method applied to a typical, complex engineering application. Consideration of the appropriate number of analyses in the two loops, and whether the selected distributions here are truly representative, is clearly important in the context of ablation heat transfer modeling and SRM nozzle design, but distracting from the intent of the discussion here. Of importance is the concept that the analysis engineer has assessed and quantified uncertainties in his/her inputs and approach, and has propagated those assessments through the analysis. As opposed to the typical postmodeling discussions about how decision-makers "feel" about the quality and conservatism of the engineering analysis, the appropriate experts have assessed probabilities of variation and modeling uncertainties and quantified the effects of those uncertainties on the SRQ. It is absolutely appropriate to challenge the various assessments through peer reviews, etc., but the impact of those individual assessments can be explicitly quantified using this method.

The importance of uncertainty quantification in engineering design has been described and a general methodology has been summarized that captures both variability and predictability. Consistent with accepted approaches, the method emphasizes the importance of separating aleatory and epistemic effects in the uncertainty quantification so as not to confound the influences of inherent system variabilities and modeling uncertainties. Supporting information, language, and background are summarized as a convenience for the practicing engineer, and a general approach is defined. The output of the described approach provides for interpretations addressing inherent system variability along with engineering predictability. These are quantified in the form of probability and credibility extracted from the generation of a probability box (P-box). Unique to the approach described here is the adaptation of the concept of a “bounding” P-box to that of a “credible” P-box. As opposed to limiting the assessment of epistemic uncertainties to bounding intervals, the engineer provides assessed probability distributions for epistemic variables. These assessments are propagated through the uncertainty analysis that provides for the extraction of a credible P-box, which becomes bounding at 100% credibility. Three illustrative examples are provided in the paper. The first uses a simple equation as the predictive model, the second uses an engineering equation for the capability of a cantilevered beam, and the third makes use of ablation heat transfer models commonly used in the design of solid rocket nozzles. These examples highlight (1) the general flow of an appropriate uncertainty analysis, (2) the improved fidelity over traditional safety factor approaches, (3) the propagation of assessed probabilities of uncertain parameters, both aleatory and epistemic, in the engineering process, (4) the generation of credible P-boxes, and (5) the extraction of design parameters with stated levels of probability and credibility. This approach provides decision-makers with clearer information than is typically provided, especially related to the credibility of an engineering analysis.

References

Roache, P. J., 1998, Verification and Validation in Computational Science, Hermosa, Albuquerque, NM.
Oberkampf, W. L., and Roy, C. J., 2010, Verification and Validation in Scientific Computing, Cambridge University Press, Cambridge, UK.
Roy, C. J., and Oberkampf, W. L., 2011, "A Complete Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing," Comput. Methods Appl. Mech. Eng., 200(25–28), pp. 2131–2144.
Zipay, J. J., Modlin, C. T., and Larsen, C. E., 2016, "The Ultimate Factor of Safety for Aircraft and Spacecraft—Its History, Applications and Misconceptions," AIAA Paper No. AIAA 2016-1715.
Ferson, S., Kreinovich, V., Ginzburg, L., Myers, D. S., and Sentz, K., 2003, "Constructing Probability Boxes and Dempster-Shafer Structures," Sandia National Laboratories, Albuquerque, NM, Report No. SAND2002-4015.
Ferson, S., and Ginzburg, L. R., 1996, "Different Methods Are Needed to Propagate Ignorance and Variability," Reliab. Eng. Syst. Saf., 54(2–3), pp. 133–144.
Hoffman, F. O., and Hammonds, J. S., 1994, "Propagation of Uncertainty in Risk Assessments: The Need to Distinguish Between Uncertainty Due to Lack of Knowledge and Uncertainty Due to Variability," Risk Anal., 14(5), pp. 707–712.
Ferson, S., and Tucker, W. T., 2006, "Sensitivity Analysis Using Probability Bounding," Reliab. Eng. Syst. Saf., 91(10–11), pp. 1435–1442.
Roy, C. J., and Balch, M. S., 2012, "A Holistic Approach to Uncertainty Quantification With Application to Supersonic Nozzle Thrust," Int. J. Uncertainty Quantif., 2(4), pp. 363–381.
Baraldi, P., and Zio, E., 2008, "A Combined Monte Carlo and Possibilistic Approach to Uncertainty Propagation in Event Tree Analysis," Risk Anal., 28(5), pp. 1309–1326.
Baudrit, C., Dubois, D., and Guyonnet, D., 2006, "Joint Propagation and Exploitation of Probabilistic and Possibilistic Information in Risk Assessment," IEEE Trans. Fuzzy Syst., 14(5), pp. 593–608.
Kentel, E., and Aral, M. M., 2005, "2D Monte Carlo Versus 2D Fuzzy Monte Carlo Health Risk Assessment," Stochastic Environ. Res. Risk Assess., 19(1), pp. 86–96.
Ali, T., Boruah, H., and Dutta, P., 2012, "Modeling Uncertainty in Risk Assessment Using Double Monte Carlo Method," Int. J. Eng. Innovative Technol., 1(4), pp. 114–118.
Denoeux, T., and Li, S., 2018, "Frequency-Calibrated Belief Functions: Review and New Insights," Int. J. Approximate Reasoning, 92, pp. 232–254.
Montgomery, V. J., Coolen, F. P. A., and Hart, A. D. M., 2009, "Bayesian Probability Boxes in Risk Assessment," J. Stat. Theory Pract., 3(1), pp. 69–83.
AIAA, 1998, "Guide for the Verification and Validation of Computational Fluid Dynamics Simulations," AIAA Paper No. AIAA-G-077-1998.
ASME, 2006, "Guide for Verification and Validation in Computational Solid Mechanics," ASME, New York, Standard No. V&V 10-2006.
Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B., 1995, Bayesian Data Analysis, Chapman and Hall, London.
Freeman, J. A., and Roy, C. J., 2016, "Global Optimization Under Uncertainty and Uncertainty Quantification Applied to Tractor-Trailer Base Flaps," J. Verif. Validation Uncertainty Quantif., 1(2), p. 021008.
Welch, L. A., Beran, P. S., and Freeman, J. A., 2017, "Computational Optimization Under Uncertainty of an Active Flow Control Jet," AIAA Paper No. AIAA 2017-3913.
Syamlal, M., Celik, I. B., and Benyah, S., 2017, "Quantifying the Uncertainty Introduced by Discretization and Time-Averaging in Two-Fluid Model Predictions," AIChE J., 63(12), pp. 5343–5360.
Black, D. L., and Ewing, M. E., 2017, "A Comprehensive Assessment of Uncertainty for Insulation Erosion Modeling," 64th JANNAF Propulsion Meeting, Kansas City, MO, Paper No. 2017-0003AD.
Harmel, R. D., and Smith, P. K., 2007, "Consideration of Measurement Uncertainty in the Evaluation of Goodness-of-Fit in Hydrologic and Water Quality Modeling," J. Hydrology, 337(3–4), pp. 326–336.
Johnson, J. S., Gosling, J. P., and Kennedy, M. C., 2011, "Gaussian Process Emulation for Second-Order Monte Carlo Simulations," J. Stat. Plann. Inference, 141(5), pp. 1838–1848.
Sun, S., 2010, "Decision-Making Under Uncertainty: Optimal Storm Sewer Network Design Considering Flood Risk," Ph.D. thesis, University of Exeter, Exeter, UK.
Simon, T., 1999, "Two-Dimensional Monte Carlo Simulation and Beyond: A Comparison of Several Probabilistic Risk Assessment Methods Applied to a Superfund Site," Hum. Ecol. Risk Assess., 5(4), pp. 823–843.
Ewing, M. E., and Isaac, D. A., 2015, "Mathematical Modeling of Multi-Phase Chemical Equilibrium," J. Thermophys. Heat Transfer, 29(3), pp. 551–562.
Ewing, M. E., Laker, T. S., and Walker, D. T., 2013, "Numerical Modeling of Ablation Heat Transfer," J. Thermophys. Heat Transfer, 27(4), pp. 615–632.
Ewing, M. E., Hernandez, M. J., and Griffin, D. R., 2016, "Thermal Property Characterization of an Ablative Insulator," J. Propul. Energ., 7(1), pp. 89–106.
Lachaud, J. R., Martin, A., van Eekelen, A., and Cozmuta, I., 2012, "Ablation Test-Case Series 2," 5th Ablation Workshop, Lexington, KY, February 28–March 1, Paper No. AW05-051.

Figures

Fig. 1. Illustration of credible intervals: (a) intervals defined on the log-normal PDF and (b) intervals defined on the log-normal CDF
Fig. 2. One-dimensional Monte Carlo approach
Fig. 3. One-dimensional Monte Carlo results for the model of Eq. (1)
Fig. 4. Two-dimensional Monte Carlo approach
Fig. 5. Two-dimensional Monte Carlo results for the model of Eq. (1)
Fig. 6. One-dimensional approach combining epistemic and aleatory uncertainties in a single Monte Carlo iteration scheme
Fig. 7. Two-dimensional Monte Carlo approach using an inner loop for aleatory and an outer loop for epistemic uncertainty: (a) ensemble of CDFs and credible P-box for the SRQ of Eq. (1), (b) SRQ ranges for 10% and 90% probabilities interpreted from the P-box, and (c) probability ranges for an SRQ value interpreted from the P-box
Fig. 9. Weight distribution
Fig. 10. Strength distribution
Fig. 11. Length distribution
Fig. 12. Credible P-box for the original beam design
Fig. 13. Credible P-box for the updated design
Fig. 14. Solid rocket motor
Fig. 15. Nozzle contour with design stations at various area ratios (A) and supporting analysis stations (S)
Fig. 16. The CDF ensemble and the 90% credible P-box for the 200 °F isotherm depth (Station 3)
Fig. 17. Results for the 200 °F isotherm depths at the nozzle analysis stations

Tables

Table 1. Input parameters
Table 2. UQ input parameters for depth of the 200 °F isotherm
Table 3. Input scaling for surrogate model characterization
