
Why Do Verification and Validation?

Kenneth T. Hu

Mem. ASME
V&V, UQ, and Credibility Processes Department,
Sandia National Laboratories,
P.O. Box 5800, MS 0828,
Albuquerque, NM 87185-0828
e-mail: khu@sandia.gov

Thomas L. Paez

Thomas Paez Consulting,
185 Valley View Drive,
Sedona, AZ 86336
e-mail: tlpaez4444@gmail.com

1Corresponding author.

Manuscript received December 4, 2015; final manuscript received January 21, 2016; published online February 19, 2016. Editor: Ashley F. Emery.

J. Verif. Valid. Uncert. 1(1), 011008 (Feb 19, 2016) (6 pages); Paper No: VVUQ-15-1056; doi: 10.1115/1.4032564

In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker's perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis with the understanding that the V&V results are uncertain. The 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.


Several guidelines for V&V of engineering models have been published over the past 20 years [1–3]. These guides and similar literature have set up two common misconceptions in the V&V community: (1) extensive V&V is required for all modeling projects and (2) adherence to the V&V principles will result in credible model predictions. Unfortunately, practical constraints often limit how much V&V analysis is performed, and the end result is often ambiguous—the model provides potentially useful predictions but it is not obvious whether the predictions should be trusted. When this happens, the value of V&V can be hard to describe. This was illustrated by the 2014 Sandia V&V Challenge Workshop, where the participants took different approaches, invested different amounts of time and effort, and reached different conclusions [4,5]. The first goal of the workshop was to demonstrate different V&V approaches and methods, and the responses collected in this special edition clearly accomplish this [6–10]. The second goal was to show how those V&V activities added credibility and supported an engineering decision, but this turns out to be a much more difficult task. With so many choices of methods and no clear-cut results, how do you choose which V&V activities to do and what is their value [11]? This paper provides our perspective on how to value V&V analysis in the context of the decision-making process. We first present a literature review on the value of V&V and then visualize the impact of V&V in two ways: value chains and decision trees.

The literature on the value of model V&V is very limited, even though the Foundations'02 and '04 workshops highlighted the need for this work [12,13]. Much of this review is built upon the literature surveys of others: Oberkampf [14], Nitta and Logan [15], and Jahangirian et al. [16]. The surveyed literature takes an economics perspective of measuring costs and benefits. Three common themes are: estimating costs, connecting V&V to an intermediate assessment or decision support framework, and computing a benefit from that intermediate assessment.

Costs are the most straightforward to estimate and record. Several references were found on that subject [17–19], although Pace counters that "Cost and resource requirements for… V&V are not as well understood as they need to be because meaningful information about such is not widely shared… much more information about cost and resource requirements needs to be collected and made available to facilitate development of more reliable estimation processes" [20]. Intermediate analyses are more challenging because they are subjective analyses of quantitative information. A large section of the V&V literature talks about credibility of model predictions without precisely defining what credibility means. Several attempts have been made to assess the credibility impact of V&V—either quantitatively or qualitatively [21–25]. Others have used risk as an intermediate assessment [26–29]. In contrast to programmatic risk, these papers are interested in the "use-risk" that is assumed when relying on imperfect model predictions. None of these papers attempt to define a tangible, measurable benefit from V&V or the intermediate analyses. Instead, the work is presented as an aid to decision makers. The only works in this survey that put all three themes together, including a derived benefit, were by Nitta and coworkers [30,31] and Paez et al. [11,32].

Due to the limited volume of relevant papers, the literature search was expanded to include economic analyses in general, the economics and value of modeling and simulation, and intermediate assessments that derive from V&V, such as risk management and software quality assurance.

The 1976 work of Gray [19] identified the need for formal economic analysis to justify modeling projects. The paper showed how decision trees are used to compare different modeling options and outcomes, and how benefit–cost analysis is used to evaluate each option. Unfortunately, the impact of V&V is not addressed, except to increase the cost; the influence of V&V on model use is ignored. The United States Department of Defense has funded numerous warfighting modeling efforts, which have led to other procedures for assessing the value of modeling [33–36] and various lessons learned. Brown et al. [37] suggested that "Benefits must be viewed primarily in terms of measurable value." However, they and several other authors also recognize the need to buy unquantifiable benefits—hinting at intangible value from things like "the competitive advantage it provides" [37].

Other references from both government agencies and industry have considered the economics of software V&V and quality assurance and measured a return on investment [38–41]. Although software V&V is only partially related to model V&V, there are enough similarities that the methodologies and assumptions made in the cited works are instructive. The economics of risk reduction is considered in two summary papers [42,43], in which the authors summarize the results of multiple economic studies of disaster risk reduction projects. They conclude that disaster risk reduction can be economically effective, but caution that the analysis is very sensitive to assumptions and methodology.

The literature lacks a clear description of V&V's role in the decision process and a definition of V&V's tangible, measurable value in that context. These pieces are implied in papers about V&V and credibility/risk assessments, but we believe an explicit discussion of the big picture is required.

The major difficulty when discussing the value of V&V is the lack of precisely defined and quantified benefits from doing V&V. Taking the scenario from the latest challenge problem [44,45] (also explained below): the Mystery Liquid Company's purpose is to maintain storage tanks, and the modeling and V&V work that supports that purpose is not sold for profit—so it has no measurable, tangible benefit [11]. To justify the expense of doing V&V, it must indirectly provide some value to the company. Performing V&V analyses will produce an array of evidence which is useful and has intangible benefits in that it can improve future decisions, but it is hard to justify paying for intangibles.

A value chain [46] describes the many steps that are taken to make an engineering decision. A partial value chain for a company is shown in Fig. 1. It includes:

  • running experiments to generate new information

  • creating models to improve predictive capability

  • performing V&V to gather evidence about model quality

  • using V&V evidence to assess credibility or use-risk

  • following a decision-making process to integrate the available information and eventually sell a product

This figure emphasizes V&V analysis rather than experimentation or modeling. This is not meant to imply that V&V is more important, just that the focus is on V&V. The value chain shows related activities that feed into V&V and the decision-making process, plus the eventual connection to a product. The figure shows that experiments, modeling work, and intermediate analyses (credibility and risk assessments) all feed into decision analysis (discussed later), then on to product decisions, and finally to the realization of a product. The intermediate analyses are the link between V&V and the decision analysis. One point of contention is that modelers and decision makers may feel that model predictions or experimental data add more value than the V&V assessment and that the V&V assessment can be separated and/or left out. In fact, the V&V acts more like a multiplier on the value of data or model predictions; it is not a separate item that can independently add value.

The value chain is a way to visualize that value is created at every step. For example, V&V evidence by itself is valuable. The question is where to assess the value while making the argument that V&V activities should be performed. An academic researcher, a code developer, and an engineer responsible for design of a product will all have a different point of view. In some cases, the goal is simply to apply V&V methods without considering the rest of the value chain. In other cases, the end product (i.e., the bottom line) is the primary concern. The intermediate benefits discussed in the literature review (model quality, model credibility, and use-risk) all impact the value chain between the V&V activities and the decision analysis. As shown in Fig. 1, the connection between V&V and the product (credibility or risk assessment) is easily understood, but the benefits are difficult to quantify—how do you place a value on a risk assessment [30,31]? Paez et al. [11,32] took a different approach and focused on monetary benefits associated with selling the end product; the benefits are clearly defined but the causal link from V&V to benefit is confounded with other causes. Note that other steps in the chain also face the same problems: unquantifiable benefit and unclear connection to a tangible benefit.

In this paper, we propose that value be assessed at the decision analysis step—taking the perspective of an engineer who is responsible for ensuring that a product will meet requirements. At this step, the connection to the end product (where value is derived) is fairly obvious. The intended use of the model and V&V evidence and the repercussions of use-risk or credibility assessments must be known or assumed. Imagine that a modeling capability exists, and experimental data are available. Would that be enough to decide that requirements are met, or would further V&V analysis be worthwhile? How much would we pay to be certain that the model was perfect? How much would we pay to know that the model should not be used? Unfortunately, V&V activities must be selected and paid for without knowing the results. At the decision analysis step, the value of V&V can be assessed as what the decision maker is willing to pay for V&V analysis with the understanding that the V&V results are uncertain. This is further explained below using decision trees.

The perspective described above implies that the purpose of V&V is to support decisions. Even though V&V has influence on each of the links in the value chain, the benefits of V&V are hard to articulate from the point of view of a V&V analyst. An alternative viewpoint is that of the decision maker [47]. The value chain is unchanged, but the question changes subtly from “what is the value of doing V&V” to “how does this V&V analysis impact my decision?” The fact remains that a V&V analyst is almost never the decision maker, but it is useful for everyone involved to consider a decision-focused point of view. To do so, we extend the work of Gray [19] to use decision trees to deal with the uncertain outcomes of V&V activities and the impact on future decisions.

The Challenge Problem.

We use the 2014 Sandia V&V Challenge Problem as an illustrative example. Very briefly, Mystery Liquid Co. is faced with the possibility that their products (storage tanks) may fail, and their engineers must decide whether to replace the tanks or keep them in service. Given data from experiments, plus a computational model, the challenge is to predict the probability of failure for the tanks and also to estimate the uncertainty and to assess the credibility of the prediction [44,45].

Unfortunately, the additional context needed to make the mitigation decision (replace versus keep) is not included with the problem statement and cannot be fully explored here. We would need to know the consequences of each outcome, the costs of replacing the tanks, the state of the company, etc., in order to make a reasoned decision. The problem statement does imply that possible failures are acceptable when Pfail ≤ 10^-3; this threshold is assumed to be true for this paper, but no other acceptable outcomes are known. The point here is not to perform a detailed decision analysis, but to qualitatively describe the effects of acquiring V&V evidence.

Decision Trees.

The final decision in the challenge problem scenario is represented by a decision tree in Fig. 2. The book by Clemen [48] is an excellent introduction to the concepts used here. The solid squares represent decisions, which branch out for each potential action, e.g., how to mitigate the possible tank failures. Because a decision tree looks at future actions, the results of those actions are unknown. This is represented by a chance node (solid circle) and possible outcomes, each of which has a probability of occurrence. Each branch of the decision tree represents a sequence of actions and outcomes that terminates in an end node (solid triangle). At each end node, the consequences of the actions and outcomes along that branch are evaluated. To compare branches, those consequences—tangible or otherwise—must be condensed to a single value, called the utility [49], which is often measured in dollars. The utility is a single value that represents all the costs and benefits from a decision tree branch, including the actions and the consequences of the realized outcome. A decision tree allows us to visualize the available actions, possible outcomes, and related utility. A large body of literature exists on selecting decision rules (rubrics for selecting which action is best) to match a certain risk tolerance [50,51]. A simple decision rule is to pick the action with the highest expected utility.
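To make the evaluation concrete, the short Python sketch below shows how a decision tree like the one just described could be evaluated under the maximize-expected-utility rule. The node representation and every number in it are illustrative assumptions, not values taken from the challenge problem.

# Illustrative sketch: evaluating a decision tree with the
# maximize-expected-utility rule. The dictionary-based node layout and all
# numbers are assumptions made only for this example.

def expected_utility(node):
    """Return the best achievable expected utility of a tree node."""
    kind = node["type"]
    if kind == "end":        # end node: utility of this branch is known
        return node["utility"]
    if kind == "chance":     # chance node: probability-weighted average
        return sum(p * expected_utility(child) for p, child in node["outcomes"])
    if kind == "decision":   # decision node: pick the best available action
        return max(expected_utility(child) for child in node["actions"].values())
    raise ValueError(f"unknown node type: {kind}")

# Hypothetical mitigation decision: replace the tanks (a certain cost) or
# keep them in service (an uncertain number of failures).
tree = {
    "type": "decision",
    "actions": {
        "replace": {"type": "end", "utility": -400.0},      # assumed cost
        "keep": {"type": "chance", "outcomes": [
            (0.99, {"type": "end", "utility": 0.0}),        # no failures
            (0.01, {"type": "end", "utility": -50000.0}),   # failure consequences
        ]},
    },
}

print(expected_utility(tree))  # -400.0 here, so the rule favors replacing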

The construction of a decision tree requires several assumptions: the actions and outcomes must be identified, and all consequences, utilities, and probabilities must be specified. The decision tree in Fig. 2 was created to describe the mitigation decision from the challenge problem. It indicates that if tanks are replaced, there will be no failures. However, if tanks are kept in service, there is some probability of failure, Pfail, leading to a random number of tank failures, Nfails, out of NTanks total tanks. We assume that the probability of observing a certain number of failures, P(Nfails), is modeled by the binomial distribution [52]

P(N_{fails}) = \binom{N_{Tanks}}{N_{fails}} \, P_{fail}^{N_{fails}} \, (1 - P_{fail})^{N_{Tanks} - N_{fails}}

The probabilities of observing a certain number of failures, for Pfail = 10^-3 and NTanks = 450, are computed and shown in the figure. The consequences of actions and outcomes include things like: lost revenue, cleanup and replacement costs from failed tanks, analysis and testing costs, and intangibles like competitive advantage, which must be boiled down to a utility value, u. Paez et al. discussed this in detail in their response to the challenge workshop [11], but here we leave the utility unspecified.

If the utilities were defined, each action would have an expected utility over all possible outcomes. Using the simple decision rule (maximize expected utility), the decision comes down to whether u_replace > E[u_keep] = 0.64 u_0 + 0.29 u_1 + 0.06 u_2 + 0.01 u_3 + …, where u_k is the utility of keeping the tanks and observing k failures. Ultimately, P(Nfails), and therefore the mitigation decision, hinges on the value of Pfail. The problem statement implies that Pfail > 10^-3 will increase the probabilities of bad outcomes (higher numbers of failures), which would force the decision maker to replace the tanks. If Pfail < 10^-3, then the decision maker is willing to tolerate the probabilities of bad outcomes by keeping the tanks in service, rather than accepting the certain cost of replacing the tanks.
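A brief sketch of this comparison follows, assuming SciPy is available. The binomial parameters (Pfail = 10^-3, NTanks = 450) come from the problem statement; the utility values are placeholders invented purely for illustration.

# Keep-versus-replace comparison for the mitigation decision. Pfail and
# NTanks come from the problem statement; the utilities are invented
# placeholders.
from scipy.stats import binom

P_FAIL, N_TANKS = 1e-3, 450

# Probability of observing k tank failures under the binomial model.
p_nfails = [binom.pmf(k, N_TANKS, P_FAIL) for k in range(N_TANKS + 1)]
print([round(p, 2) for p in p_nfails[:4]])   # approximately [0.64, 0.29, 0.06, 0.01]

def u_keep(k):
    return -1000.0 * k    # assumed consequence of keeping the tanks and seeing k failures

U_REPLACE = -400.0        # assumed utility (cost) of replacing the tanks

E_u_keep = sum(p * u_keep(k) for k, p in enumerate(p_nfails))
print("keep" if E_u_keep > U_REPLACE else "replace")

With these invented utilities, the expected cost of keeping the tanks (roughly 450 units, since the expected number of failures is 0.45) slightly exceeds the assumed replacement cost, so the rule favors replacement; different utility assignments would flip the choice.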

This process is relatively straightforward, except for the fact that the true Pfail is unknown. The construction of the decision tree itself relies on imperfect information. This forms the basis of the challenge problem, where participants need to provide a prediction of Pfail plus uncertainty and also to assess and communicate the credibility of the estimates. The latter aspect is critical, because the numbers that populate the decision tree are not necessarily the predictions provided by engineers; they are the numbers that the decision maker believes. At one extreme, if the model predictions are perfect and Pfail is known exactly, then the decision tree is constructed and evaluated as above. At the other extreme, the model is completely unreliable and all the predictions must be discarded, so the decision tree must rely on other information. The reality lies in the middle, and in that case, the decision maker needs to know the quality of the information at hand. The key to decision analysis is interpreting the available evidence in order to create a decision tree that represents the decision maker's appraisal of potential outcomes. This is where V&V enters into the decision-making process. The reason that decision makers need V&V evidence is to aid them in constructing the decision tree. The intermediate analyses mentioned in the literature review (credibility or use-risk assessments) are ways to organize and present information for this purpose. In one sense, the value of V&V is equal to what the decision maker would pay to acquire V&V evidence—even when he or she knows that V&V may indicate that the model is not credible.
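One common way to make this willingness to pay concrete is a value-of-information calculation: compare the best expected utility achievable after seeing the V&V evidence against the best achievable without it. The sketch below uses a simple linear utility and invents all of the outcome probabilities and Pfail assignments; it illustrates the structure of the argument, not the authors' numbers.

# Value-of-information sketch: what a decision maker might pay for V&V
# evidence before knowing its outcome. All probabilities, Pfail
# assignments, and utilities are invented placeholders.

N_TANKS = 450
LOSS_PER_FAILURE = 1000.0    # assumed consequence of one failed tank
U_REPLACE = -400.0           # assumed cost of replacing the tanks
P_FAIL_NO_EVIDENCE = 1e-2    # conservative Pfail assigned without V&V

def u_keep(p_fail):
    # Expected utility of keeping the tanks for a given assigned Pfail
    # (expected number of failures times the assumed per-failure loss).
    return -LOSS_PER_FAILURE * p_fail * N_TANKS

# Hypothetical V&V outcomes: (probability of the outcome, Pfail the
# decision maker would assign after seeing it).
vv_outcomes = [
    (0.2, 5e-4),    # favorable evidence
    (0.5, 2e-3),    # inconclusive evidence
    (0.3, 1e-2),    # unfavorable evidence
]

# Without evidence: decide now with the conservative assignment.
u_without = max(U_REPLACE, u_keep(P_FAIL_NO_EVIDENCE))

# With evidence: the mitigation decision is made after each outcome.
u_with = sum(p * max(U_REPLACE, u_keep(pf)) for p, pf in vv_outcomes)

print(u_with - u_without)    # a positive difference is the willingness to pay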

The effects of V&V are further illustrated in the shaded portion of Fig. 3, which shows two decision trees with identical structure. Both represent the same decision: replace or keep the tanks; the difference is the inclusion of V&V. We will assume that all the modeling work has been completed, and the analysis indicates Pfail = 9×10^-4, which is acceptable for keeping the tanks in service but very close to the threshold. In the top branch, that is the only information available. The other follows the outcome labeled V&V Result 1A, which reflects a more thorough analysis, similar to one of the challenge problem responses. How should the probabilities on the tank failure outcomes be determined? The example is quite abstract from here, so we proceed qualitatively.

We assume that these two decision trees are examined independently, and that the actions and outcomes do not change. The important differences are the decision maker's assigned Pfail and the costs of doing V&V. In the lower tree, Pfail is affected by the V&V evidence—either adding conservatism or correcting for bias and uncertainty—but what should be done in the absence of any evidence? The difference between the two trees illustrates the impact of V&V and intermediate analyses like credibility assessment or risk analysis. The exact influence on the decision tree is dependent on the problem and the decision maker, so the decision analysis requires experience with V&V and familiarity with the decision maker.

The final topic returns to the beginning of the challenge problem. Decision trees illustrate how the evidence from a completed V&V strategy affects a decision. This framework also addresses how to determine which V&V strategy should be adopted (at least conceptually). The decision tree is expanded to represent the entire challenge problem scenario: the mitigation and the selection of a V&V strategy. This sequential decision is represented in Fig. 3. Three V&V strategies are shown for the “evidence gathering” decision. The first is simple: No V&V, only a nominal prediction. The second is labeled V&V strategy 1 and is only partially shown; the ellipses indicate that the branch continues as expected. Additional branches off the root node could be added, like V&V strategy 2, in order to explore other alternatives.

The actions, outcomes, and utility for the mitigation decision are the same as in the first example, shown in Fig. 2. The decision maker must again interpret the V&V evidence to come up with a value of Pfail, but now we are dealing with hypothetical V&V evidence. The choice of a particular V&V strategy tells us what type of evidence should be available upon completion, but we do not yet know what the V&V evidence will indicate about the quality of our model or the implications for assigning Pfail. Fully populating and analyzing this decision tree is not informative, because it is so abstract and the outcomes are not well defined. Instead, we will give a qualitative example to clarify the steps in the decision-making process.

The No V&V branch is easiest to evaluate. The decision maker will receive only a point estimate of Pfail and no credibility assessment. To be conservative, the decision maker assigns an arbitrarily large correction: Pfail(No V&V) = 10^-2. The replacement action is therefore the favored action on the No V&V branch.

We then choose V&V strategy 1 to be the strategy from Choudhary et al. [6] (the guides and standards [1–3] or another response to the challenge problem [7–10] would work equally well). The details of this strategy are best described by the original paper; here, the important point is that the strategy will result in an interval estimate of Pfail that incorporates all the analysts' knowledge of uncertainty and V&V evidence. Before completing the analysis, however, we cannot know what that interval will be. Therefore, we assume that three qualitatively different outcomes are possible: an interval below the threshold, one that spans the threshold, and one that is above the threshold; these are labeled V&V result 1A, 1B, and 1C. The decision maker places a lot of faith in this analysis but is inherently conservative. He or she assigns Pfail to be the upper bound of each interval. Based on prior experience and the fact that the initial estimate of Pfail is so close to the threshold, outcome 1A is assigned a low probability, while the other two have higher probabilities: P1A < P1B and P1A < P1C. This branch of the decision tree is shown in Fig. 4 (with some abbreviated labels).

Based on the assigned Pfail values and resulting utility comparisons, Fig. 4 indicates that the Replace action is best for both the V&V result 1B and 1C outcomes, and the Keep action is best for V&V result 1A. There is only a small probability, P1A, that the mitigation decision will be to Keep the tanks in service. Unfortunately, due to the cost of V&V analysis, the utility values on the V&V strategy 1 branch are lower compared to the No V&V strategy: u*_{w/V&V} < u*. Evaluating the full decision tree in Fig. 3, we can conclude:

  1. If the V&V result 1A + Keep branch has less utility than the No V&V + Replace branch, then V&V strategy 1 is definitely not worth doing—even with the best possible outcome.
  2. If Pfail < 10^-3, the maximum utility comes from the No V&V and Keep actions; if Pfail > 10^-3, the maximum utility comes from the No V&V and Replace actions.
  3. Unfortunately, the perceived lack of credibility forces the decision maker to eliminate the No V&V and Keep branch, in favor of Replace.
  4. If V&V strategy 1 is selected, the most likely mitigation decision is still Replace, which has lower utility compared to No V&V and Replace. This is the result from both the V&V result 1B and 1C outcomes, which occur with probability P1B + P1C.
  5. Because the No V&V and Keep branch has been eliminated, the maximum possible utility is actually the V&V strategy 1 → V&V Result 1A → Keep branch, which has a relatively low probability of occurring.
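The conclusions above can be made concrete with one last numerical sketch, which compares the expected utility of the whole V&V strategy 1 branch against u_replace on the No V&V branch. The outcome probabilities, the utilities, and the V&V cost are all invented placeholders; only the structure of the comparison follows the text.

# Expected utility of the "V&V strategy 1" branch versus "No V&V + Replace".
# Probabilities, utilities, and the V&V cost are assumed placeholders.

COST_VV = 50.0       # assumed cost of executing V&V strategy 1
U_REPLACE = -400.0   # assumed utility of the Replace action
U_KEEP_1A = -250.0   # assumed utility of Keep, given the favorable result 1A

# Assumed outcome probabilities, respecting P1A < P1B and P1A < P1C.
P_1A, P_1B, P_1C = 0.1, 0.5, 0.4

# Best action per outcome: Keep after result 1A, Replace after 1B and 1C;
# the V&V cost lowers every utility on this branch (u*_{w/V&V} < u*).
u_strategy1 = P_1A * (U_KEEP_1A - COST_VV) + (P_1B + P_1C) * (U_REPLACE - COST_VV)

u_no_vv = U_REPLACE  # the No V&V branch is forced to the Replace action

print("V&V strategy 1 is worth doing" if u_strategy1 > u_no_vv
      else "stick with No V&V + Replace")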

The question is now whether the expected utility from the whole V&V strategy 1 branch is greater than u_replace. This tradeoff depends greatly on the utility values and the assigned probabilities for the outcomes V&V result 1A, 1B, and 1C. We end the analysis here rather than asserting actual utilities and probabilities. The main takeaway is that the outcome from performing V&V is uncertain: it is possible that performing the extra analysis will not change the resulting action. This example shows how to account for the various V&V outcomes and considers the impact on subsequent decisions. The key steps and lessons from the decision process are:

  1. Understand the V&V strategy and what types of information it produces.
  2. In practice, V&V is never complete—what level of credibility is possible when we skip certain aspects of V&V?
  3. Set expectations for the results of the V&V strategy—many outcomes are possible, some useful and some less so. We cannot predict exactly what information will result from V&V, but the range of outcomes and their probabilities should be estimated. This illustrates the need for modeling and V&V experience and the benefits of continuously tracking computational capabilities, via efforts like the ASC V&V Program [53–57].
  4. Understand how the decision maker will interpret different V&V evidence. The evidence can differ in many ways: it may come from different V&V strategies, be communicated with different methods, or report varying levels of uncertainty and credibility.
  5. Clearly define the future decision and understand the impact of the decision maker's interpretation of V&V evidence on potential actions, outcomes, probabilities, consequences, and utilities. How will V&V impact the construction of the decision tree?
  6. Estimate the sensitivity of future decisions to potential V&V evidence—could additional evidence change which action is best? If not, then V&V is not worth doing.

We are not suggesting that a formal, quantitative decision analysis must be used to select a V&V strategy. Decision trees are very difficult to accurately construct for real projects because of the many quantitative assignments based on assumptions and intuition. However, we do believe that the thought process illustrated in this example is necessary when considering what V&V activities to perform. The example effectively illustrates the concepts in the decision process, including the role of V&V and its value to the decision maker.

We have used the idea of a value chain to describe the many tasks required for V&V to support a decision. Value can be assessed at any point in the value chain, and other authors have done so via risk or credibility assessments. In this work, we argued that the purpose of V&V is to aid and improve decisions, and so value is best measured from the decision maker's perspective. The major difficulty is to understand and account for the many possible outcomes from a particular V&V strategy. This was illustrated using decision trees with the 2014 Sandia V&V Challenge Problem as an example.

The challenge problem is ultimately about decision support, as illustrated in Fig. 3. The key issue is how to assign Pfail in the mitigation decision. Working backward, the decision maker must interpret the available evidence (prediction, uncertainty, and credibility), which must be gathered by executing a V&V strategy. The first question is therefore which V&V strategy to select at the beginning of the project, to get the best information to support the mitigation decision. The second question is what the possible outcomes from executing that V&V strategy are. We cannot know what the V&V evidence will say about the uncertainty and credibility until the V&V work is completed. The decision analysis step must account for the uncertain outcomes and place a value on each V&V strategy, in order to select the one with the best chance to improve the final decision. The value of a V&V strategy is what the decision maker would be willing to pay to complete the relevant analyses—prior to knowing the results. The value is not tied to the outcome of the strategy, i.e., the results of the V&V analyses.

By viewing the challenge problem as a decision tree, we have identified several steps in the decision-making process and the role of V&V in that process. This will allow us to better understand methods and how to synthesize them into a strategy, to better predict the outcomes from V&V activities, and to better tailor V&V activities to match the decision maker's needs, and therefore to produce better decisions. We hope that the challenge workshop and this special edition will nudge the V&V community in this direction.

This challenge problem and the resulting workshop were made possible with support from the Sandia National Laboratories and ASME. We are grateful to the workshop participants and attendees of the ASME V&V Symposium for many conversations that resulted in this discussion paper. Sandia National Laboratories is a multiprogram laboratory managed and operated by the Sandia Corporation, a wholly owned subsidiary of the Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.

Copyright © 2016 by ASME

References

ASME V&V 20, 2009, Standard for Verification and Validation in Computational Fluids and Heat Transfer, The American Society of Mechanical Engineers, New York.
ASME V&V 10-2006, 2006, Guide for Verification and Validation in Computational Solid Mechanics, The American Society of Mechanical Engineers, New York.
AIAA, 1998, “ Guide for the Verification and Validation of Computational Fluid Dynamics Simulations,” AIAA Paper No. G-077-1998.
Hu, K. T. , Carnes, B. , and Romero, V. , 2016, “ The 2014 Sandia Verification and Validation Challenge Workshop,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), p. 010202. [CrossRef]
Schroeder, B. B. , Hu, K. T. , Winokur, J. G. , and Mullins, J. G. , 2016, “ Summary of the 2014 Sandia V&V Challenge Workshop,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), 015501. [CrossRef]
Choudhary, A. , Voyles, I. T. , Roy, C. J. , Oberkampf, W. L. , and Patil, M. , 2016, “ Probability Bounds Analysis Applied to the Sandia Verification and Validation Challenge Problem,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), p. 011003 [CrossRef]
Li, W. , Chen, S. , Jiang, Z. , Apley, D. W. , Lu, Z. , and Chen, W. , 2016, “ Integrating Bayesian Calibration, Bias Correction, and Machine Learning for the 2014 Sandia Verification and Validation Challenge Problem,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), 011004. [CrossRef]
Mullins, J. , and Mahadevan, S. , 2016, “ Bayesian Uncertainty Integration for Model Calibration, Validation, and Prediction,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), p. 011006. [CrossRef]
Beghini, L. L. , and Hough, P. D. , 2016, “ Sandia V&V Challenge Problem: A PCMM-Based Approach to Assessing Prediction Credibility,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), 011002. [CrossRef]
Xi, Z. , and Yang, R. J. , 2016, “ Reliability Analysis With Model Uncertainty Coupling With Parameter and Experimental Uncertainties: A Case Study of 2014 V&V Challenge Problem,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), p. 011005. [CrossRef]
Paez, P. J. , Paez, T. , and Hasselman, T. K. , 2016, “ Economics Analysis of Model Validation for a Challenge Problem,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), p. 011007. [CrossRef]
Youngblood, S. M. , 2004, " Roadmap for VV&A Technology Advancement," Foundations'04: A Workshop for V&V in the 21st Century, Defense Modeling and Simulation Office, Arizona State University.
Pace, D. , 2002, “ Foundations'02 Overview,” Foundations'02 a Workshop on Model and Simulation Verification and Validation for the 21st Century, D. Pace , ed., JHU/APL, Laurel, MD.
Oberkampf, W. L. , 1998, “ Bibliography for Verification and Validation in Computational Simulation,” Sandia National Laboratories, Report No. SAND98-2041.
Nitta, C. K. , and Logan, R. W. , 2004, “ ASCI V&V at LLNL: An Unclassified Bibliography,” Lawrence Livermore National Laboratory, Report No. UCRL-AR-203864.
Jahangirian, M. , Taylor, S. J. E. , and Young, T. , 2010, “ Economics of Modeling and Simulation: Reflections and Implications for Healthcare,” 2010 Winter Simulation Conference (WSC).
Kilikauskas, M. L. , and Hall, D. H. , 2002, “ Estimating V&V Resource Requirements and Schedule Impact,” Foundations for V&V in the 21st Century Workshop, S. Youngblood , ed., Johns Hopkins University Applied Physics Laboratory, Laurel, MD.
Back, G. , Love, G. , and Falk, J. , 2000, “ The Doing of Model Verification and Validation: Balancing Cost and Theory,” 18th International Conference of the System Dynamics Society, System Dynamics Society, Bergen, Norway.
Gray, P. , 1976, “ The Economics of Simulation,” 76 Bicentennial Winter Conference on Simulation, Winter Simulation Conference, Gaithersburg, MD, pp. 17–25.
Pace, D. , 2004, “ Modeling and Simulation Verification and Validation Challenges,” Johns Hopkins APL Tech. Dig., 25(2), pp. 163–172.
Oberkampf, W. L. , Pilch, M. , and Trucano, T. G. , 2007, “ Predictive Capability Maturity Model for Computational Modeling and Simulation,” Sandia National Laboratories, Report No. SAND2007-5948.
Easterling, R. G. , 2001, “ Measuring the Predictive Capability of Computational Models: Principles and Methods, Issues and Illustrations,” Sandia National Laboratories, Report No. SAND2001-0243.
Rizzi, A. , and Vos, J. , 1998, “ Toward Establishing Credibility in Computational Fluid Dynamics Simulations,” AIAA J., 36(5), pp. 668–675. [CrossRef]
Blattnig, S. R. , Green, L. , Luckring, J. , Morrison, J. , Tripathi, R. , and Zang, T. , 2008, “ Towards a Credibility Assessment of Models and Simulations,” AIAA Paper No. 2008-2156.
Hemez, F. , Atamturktur, H. S. , and Unal, C. , 2010, “ Defining Predictive Maturity for Validated Numerical Simulations,” Comput. Struct., 88(7–8), pp. 497–505. [CrossRef]
Balci, O. , and Sargent, R. G. , 1981, “ A Methodology for Cost-Risk Analysis in the Statistical Validation of Simulation Models,” Commun. ACM, 24(4), pp. 190–197. [CrossRef]
Muessig, P. R. , Laack, D. R. , and Wrobleski, J. W., Jr. , 1997, “ Optimizing the Selection of VV&A Activities: A Risk/Benefit Approach,” 29th Winter Conference on Simulation, IEEE Computer Society, Atlanta, GA, pp. 60–66.
Youngblood, S. M. , Stutzman, M. , Pace, D. K. , and Pandolfini, P. P. , 2011, “ Risk Based Methodology for Verification, Validation, and Accreditation (VV&A), M&S Use Risk Methodology (MURM),” The Johns Hopkins University Applied Physics Laboratory, Technical Report No. NSAD-R-2011-011.
Elele, J. N. , and Smith, J. , 2010, “ Risk-Based Verification, Validation, and Accreditation Process,” Proc. SPIE 7705.
Logan, R. W. , Nitta, C. K. , and Chidester, S. K. , 2005, “ Risk Reduction as the Product of Model Assessed Reliability, Confidence, and Consequence,” J. Def. Model. Simul.: Appl., Methodol., Technol., 2(4), pp. 191–207.
Nitta, C. , Logan, R. , Chidester, S. , and Foltz, M. F. , 2004, “ Benefit/Cost Ratio in Systems Engineering: Integrated Models, Tests, Design, and Production,” Lawrence Livermore National Laboratory, Report No. UCRL-TR-207610.
Paez, P. J. , Paez, T. L. , Hasselman, T. K. , and Hu, K. , 2015, “ The Economics of Model Validation and Solution of the 2014 Sandia V&V Challenge Problem,” Sandia National Laboratories, Report No. SAND2015-10560.
Waite, W. , Lightner, G. , Gravitz, R. , Severinghaus, R. , Waite, E. , Swenson, S. , Feinberg, J. , Cooley, T. , Gordon, S. , Oswalt, I. , 2008, “ Metrics for Modeling and Simulation (M&S) Investments,” NAVAIR, Report No. TJ-042608-RP013.
Oswalt, I. , Cooley, T. , Waite, W. , Waite, E. , Gordon, S. , Severinghaus, R. , Feinberg, J. , Lightner, G. , 2015, Calculating Return on Investment for U.S. Department of Defense Modeling and Simulation, Defense ARJ and Defense AT&L Publications.
Gibson, R. , Medeiros, D. J. , Sudar, A. , Waite, B. , and Rohrer, M. W. , 2003, “ Increasing Return on Investment From Simulation,” 2003 Winter Simulation Conference, S. Chick , P. J. Sánchez , D. Ferrin , and D. J. Morrice , eds., pp. 2027–2032.
Carter, J. R., III , 2001, “ A Business Case for Modeling and Simulation,” Aviation and Missile Research, Development, and Engineering Center, Special Report No. RD-AS-01-02.
Brown, C. D. , Grant, G. , Kotchman, D. , Reyenga, R. , and Szanto, T. , 2000, “ Building a Business Case for Modeling and Simulation,” Acquis. Rev. Q., pp. 311–328.
Dabney, J. B. , Barber, G. , and Ohi, D. , 2005, “ Computing Return on Investment of Risk-Reducing Systems Engineering Disciplines,” Space Systems Engineering and Risk Management Conference, Los Angeles, CA.
Dabney, J. B. , Barber, G. , and Ohi, D. , 2004, “ Estimating Direct Return on Investment of Independent Verification and Validation,” Eighth IASTED International Conference, Cambridge, MA.
Lederer, P. J. , and Rhee, S.-K. , 1995, “ Economics of Total Quality Management,” J. Oper. Manage., 12(3–4), pp. 353–367. [CrossRef]
Abdel-Hamid, T. K. , 1988, “ The Economics of Software Quality Assurance: A Simulation-Based Case Study,” MIS Q., 12(3), pp. 395–411. [CrossRef]
Shreve, C. M. , and Kelman, I. , 2014, “ Does Mitigation Save? Reviewing Cost-Benefit Analyses of Disaster Risk Reduction,” Int. J. Disaster Risk Reduct., 10, pp. 213–235. [CrossRef]
Wethli, K. , 2014, " Benefit-Cost Analysis for Risk Management: Summary of Selected Examples," World Development Report 2014, The World Bank, Washington, DC.
Hu, K. T. , and Orient, G. E. , 2016, “ The 2014 Sandia V&V Challenge: Problem Statement,” ASME J. Verif., Validation, Uncertainty Quantif., 1(1), p. 011001. [CrossRef]
Hu, K. T. , 2013, “ 2014 V&V Challenge: Problem Statement,” Sandia National Laboratories, Report No. SAND2013-10486P.
Porter, M. E. , 1985, Competitive Advantage: Creating and Sustaining Superior Performance, Simon and Schuster, New York.
Paté-Cornell, M. E. , and Dillon, R. L. , 2006, “ The Respective Roles of Risk and Decision Analyses in Decision Support,” Decis. Anal., 3(4), pp. 220–232. [CrossRef]
Clemen, R. T. , 1996, Making Hard Decisions: An Introduction to Decision Analysis, Duxbury Press, Boston, MA.
Berger, J. O. , 1985, Statistical Decision Theory and Bayesian Analysis, 2nd ed., Springer-Verlag, Berlin.
Artzner, P. , Delbaen, F. , Eber, J.-M. , and Heath, D. , 1999, “ Coherent Measures of Risk,” Math. Finance, 9(3), pp. 203–228. [CrossRef]
Adeyemo, A. M. , 2013, “ Stochastic Dominance for Project Screening and Selection Under Uncertainty,” Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA.
Bertsekas, D. P. , and Tsitsiklis, J. N. , 2002, Introduction to Probability, Athena Scientific, Belmont, MA.
Lee, J. R. , 1998, “ Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software,” Sandia National Laboratories, Report No. SAND98-2420.
Sandia, 1998, “ Strategic Computing & Simulation Validation & Verification Program: Program Plan,” Sandia National Laboratories, http://www.sandia.gov/asc/pubs_pres/pubs/vnvprogplan_FY98.html (Last accessed Nov. 26, 2011).
Klein, R. , Doebling, S. , Graziani, F. , Pilch, M. , and Trucano, T. G. , 2006, “ ASC Predictive Science Academic Alliance Program Verification and Validation Whitepaper,” Lawrence Livermore National Laboratories, Los Alamos National Laboratories, Sandia National Laboratories, Report No. UCRL-TR-220711.
Schwitters, R. , 2003, “ Requirements for ASCI,” MITRE, FSR-03-330.
Hodges, A. , Froehlich, G. , Peercy, D. , Pilch, M. , Meza, J. , Peterson, M. , LaGrange, J. , Cox, L. , Koch, K. , Storch, N. , Nitta, C. , and Dube, E. , 2001, “ ASCI Software Quality Engineering, Goals, Principles, and Guidelines,” DOE/DP/ASC-SQE-2000-FDRFR-VERS2.

Figures

Fig. 1 A notional value chain focusing on V&V

Fig. 2 A sample decision tree for the challenge problem's mitigation decision

Fig. 3 A sequential decision tree, with uncertain outcomes from three V&V strategies

Fig. 4 Analysis of V&V strategy 1
