Research Papers

Applicability Analysis of Validation Evidence for Biomedical Computational Models

Pras Pathmanathan

Office of Science and Engineering
Laboratories (OSEL),
Center for Devices and Radiological
Health (CDRH),
U.S. Food and Drug Administration (FDA),
Silver Spring, MD 20993
e-mail: pras.pathmanathan@fda.hhs.gov

Richard A. Gray, Tina M. Morrison

Office of Science and Engineering
Laboratories (OSEL),
Center for Devices and Radiological
Health (CDRH),
U.S. Food and Drug Administration (FDA),
Silver Spring, MD 20993

Vicente J. Romero

Sandia National Laboratories,
Albuquerque, NM 87185

1Corresponding author.

Manuscript received March 1, 2017; final manuscript received August 1, 2017; published online September 7, 2017. Assoc. Editor: Marc Horner. This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States. Approved for public release; distribution is unlimited.

J. Verif. Valid. Uncert 2(2), 021005 (Sep 07, 2017) (11 pages) Paper No: VVUQ-17-1006; doi: 10.1115/1.4037671 History: Received March 01, 2017; Revised August 01, 2017

Computational modeling has the potential to revolutionize medicine the way it transformed engineering. However, despite decades of work, there has been only limited progress in successfully translating modeling research to patient care. One major difficulty that often occurs with biomedical computational models is an inability to perform validation in a setting that closely resembles how the model will be used. For example, for a biomedical model that makes in vivo clinically relevant predictions, direct validation of predictions may be impossible for ethical, technological, or financial reasons. Unavoidable limitations inherent to the validation process lead to challenges in evaluating the credibility of biomedical model predictions. Therefore, when evaluating biomedical models, it is critical to rigorously assess applicability, that is, the relevance of the computational model and its validation evidence to the proposed context of use (COU). However, there are no well-established methods for assessing applicability. Here, we present a novel framework for performing applicability analysis and demonstrate its use with a medical device computational model. The framework provides a systematic, step-by-step method for breaking down the broad question of applicability into a series of focused questions, which may be addressed using supporting evidence and subject matter expertise. The framework can be used for model justification, model assessment, and validation planning. While motivated by biomedical models, it is relevant to a broad range of disciplines and underlying physics. The proposed applicability framework could help overcome some of the barriers inherent to validation of, and aid clinical implementation of, biomedical models.

In the 20th century, computational modeling and simulation revolutionized physics and engineering. In the 21st century, arguably the most profound potential impact of computational modeling is a transformation of medicine and patient care. In silico methods are currently being developed to augment in vitro and in vivo evaluation methods for pharmaceutical products, medical devices, and biological products [1]. A grand promise lies in reducing the number and size of clinical trials by augmenting them with data from in silico trials [2,3] and realizing precision medicine through simulation-based individualized diagnosis, therapy, and clinical guidance [4–7]. Despite decades of research, however, progress in translating computational models to clinical care has been limited. Numerous challenges remain, such as reproduction of biological mechanisms across different scales [4], the inability to accurately describe or measure in vivo model parameters and boundary conditions, and the difficulty in characterizing and quantifying the variability inherent to biological systems.

In addition to these fundamental scientific challenges, a major challenge lies in demonstrating the reliability of predictions from computational approaches. It is crucial to rigorously evaluate the credibility of a computational model, defined as the belief in the model's predictive capability for a specified context of use (COU). The COU is defined as the specific role and scope of the computational model and the simulation results used to inform a decision [8]. There are numerous elements to credibility assessment, including verification, validation, sensitivity analysis, and uncertainty quantification [8–11], which we collectively refer to as VVUQ. The questions these methods address are provided in Table 1. One aspect of credibility assessment for which relatively little guidance is available is the assessment of applicability, defined as the relevance of the validation evidence to support using the model for a specific COU. Applicability assessment is closely related to the overall validation process, but it is distinct from the process of running validation experiments and simulations. Applicability concerns the question: would favorable validation results lead to trustworthy predictions in the COU? In biomedical modeling, there may be significant differences between how a model is validated and how it will be used. Therefore, in a rigorous evaluation of model credibility, it is critical that applicability be carefully assessed. However, it is also common in biomedical modeling that the relevance of validation evidence to the proposed COU is not fully assessed or is left implicit (see Table 2). The aim of this paper is to propose a novel framework for applicability analysis, that is, for systematically assessing the relevance of the validation evidence for the proposed COU. The proposed framework enables the systematic generation of a body of evidence which (i) explicitly documents differences between validation and COU and (ii) provides rationale for why the model may or may not be trusted despite/given those differences.

One contributor to the success of computational modeling in engineering applications is the ability to perform a validation study using a carefully designed comparator (e.g., an experimental setup) that closely matches the setting of the COU. For example, for a computational model of an automobile crash, validation can be performed by comparing model predictions to physical crash test results. If the COU is crash simulation for a new automobile design, the setting of the COU is very similar, although not identical, to the validation setting. Often however, and especially for biomedical models, closely matching the validation and COU settings is not possible. For biomedical applications, possible reasons include ethical concerns (e.g., validation of the model would require human experimentation), technological difficulties (e.g., the model predicts a physiological quantity of interest (QOI) for which in vivo measurement is not possible), or financial limitations. In such instances, a validation study must involve a comparator with significant differences relative to the setting of the COU. For models with clinical COUs, comparators might involve animals, cadavers, in vitro specimens, bench-top systems, or phantoms (synthetic tissue-mimicking objects). Often, many types of such validation experiments are performed and collected together as evidence supporting the use of the model. Even when clinical data are used for validating a biomedical model with a clinical COU, there might be major differences between validation and COU settings, for example, healthy versus diseased state or adult versus pediatric subjects.

If there are major differences between the validation and COU settings, questions can be raised regarding the applicability of the computational model and the validation results to the COU. Specifically, one should ask: given the level of agreement between the outputs from the model and comparator in the validation setting(s), can we (or: why can we) be confident in the model predictions for the COU? Answering this question requires careful consideration of the computational model, the COU, and the available evidence. Usually, a subjective decision is made using scientific judgment, based on all the available evidence and subject matter expertise.

Currently, there is no well-established method for assessing applicability. The main concept that seems to drive decision-making with computational modeling in many disciplines is predictive capability, which involves quantitative metrics that can guide decision makers [12–14]. The Predictive Capability Maturity Model (PCMM) can be used to assess the level of maturity and adequacy of computational modeling efforts [15,16]. PCMM was developed by researchers from the engineering community, and its utility for biomedical models and applications has not yet been evaluated. One concept utilized in applicability assessment is the notion of the “domain of applicability” or “validation domain,” and the related notions of interpolation and extrapolation [9,12,17]. These concepts, illustrated in Fig. 1, are useful in many cases. For example, when the validation and COU settings are sufficiently similar, quantitative or statistical methods (e.g., see Refs. [17,19,20]) based on the simple parametric differences between validation and COU conditions implied by Fig. 1 are potentially very powerful. However, methods based on the idealized case in Fig. 1 have several practical limitations with respect to the assessment of biomedical and other models. First, they might not be legitimately applicable if the validation experiments differ significantly from the reality of interest in the COU (e.g., if validation uses ex vivo experiments but the COU setting is the in vivo environment), because such differences might violate the assumptions of the quantitative parametric interpolation or extrapolation methods. Moreover, Fig. 1 only conveys parametric differences between the simulations used for validation and those used for the COU. In general, there are several other ways in which validation simulations might differ from COU simulations: the COU simulations might require changes to the underlying mathematical model; they might require changes to more complex inputs (e.g., geometrical information, time series, scalar or vector fields); or the COU quantity of interest (the model output analyzed to answer questions about the COU) might differ from the “validated” QOIs.

To address credibility of biomedical models for COUs where the validation setting is not obviously similar to the COU setting, and to foster greater confidence in biomedical computational models, we believe it is crucial that applicability is explicitly and rigorously addressed as part of the assessment or model justification process. However, we believe that current approaches for assessing applicability are not sufficiently well developed to be relevant to the broad range of models, applications, and feasible validation settings that occur with biomedical models. Therefore, we propose in this paper a novel framework for performing applicability analysis. The aim of our applicability framework is not to quantify applicability of the model to the COU. Instead, it is to enable the practitioner to systematically convert the broad question of applicability into a series of focused questions for evaluating the trustworthiness of a model for the COU. The focused questions can then be answered by referencing multiple sources of evidence and subject matter expertise. This framework uses a novel structure and approach as compared to the existing body of work in this area.

Several key concepts used in the framework are illustrated in Fig. 2. Evidence regarding trustworthiness of model predictions for a COU can come in a variety of forms, such as multiple sources of validation evidence, historical validation data regarding related models, and subject matter expertise. These are represented in the right-hand box in Fig. 2. For the framework, we assume that there exists a set of validation evidence that provides the most relevant evidence for the COU, which we call the “primary validation evidence.” The lower left box in Fig. 2 represents the setting of the primary validation evidence, of which there are two elements: the “reality” element, which describes the physical experimental setting (denoted as R-VAL), and the model element, which describes the computational model and simulations used for validation (denoted as M-VAL). The upper left part of Fig. 2 represents the COU. As with the primary validation evidence, there is a reality and a model element. We define the reality element as the actual or envisioned real world setting that the computational model will be used to make a decision/address questions about (denoted as R-COU). The model element is the computational model and the simulations that will be used to make this decision/address these questions (denoted as M-COU).

Additionally, we denote the differences between the two reality settings (R-VAL and R-COU) by ΔR and the model settings (M-VAL and M-COU) by ΔM. ΔR represents all the differences between the validation experimental setting and the reality of interest. ΔM represents all modifications made to the model that was used for validation to apply it to the COU. (See earlier discussion of limitations of Fig. 1, for examples). Ideally, both ΔR and ΔM will be kept small by designing validation experiments that are similar to the COU. However, as discussed earlier, this is often not possible. Our approach to assess the applicability of the model for the COU involves considering differences listed in ΔR and the modifications listed in ΔM, in turn. Step-by-step instructions are provided in the following. The first seven steps are descriptive steps that involve describing the different components in Fig. 2. The remaining five steps are the assessment steps. An example is provided in Sec. 4; we recommend reviewing it alongside the instructions.
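The descriptive elements of Fig. 2 can also be captured in a structured, machine-readable form. The following is a minimal Python sketch of one possible way to record the outputs of steps 1–7; the class and field names, and the abbreviated stent entries, are our own illustrative choices and are not part of the framework itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Setting:
    """One box of Fig. 2: a 'reality' description paired with its model description."""
    reality: str  # R-COU or R-VAL: the physical/clinical setting
    model: str    # M-COU or M-VAL: the computational model and simulations

@dataclass
class ApplicabilityRecord:
    """Container for the descriptive steps (steps 1-7) of the applicability framework."""
    aim: str                              # step 1: aim of the computational modeling
    cou: Setting                          # step 2: R-COU and M-COU
    validation_sources: List[str]         # step 3: sources of validation evidence
    primary_validation: Setting           # step 4: R-VAL and M-VAL
    identical_model_aspects: List[str] = field(default_factory=list)  # step 5
    delta_M: List[str] = field(default_factory=list)                  # step 6: model modifications
    delta_R: List[str] = field(default_factory=list)                  # step 7: reality differences

# Hypothetical, abbreviated entries loosely based on the stent example of Sec. 4.
record = ApplicabilityRecord(
    aim="Assess whether in vivo stent strains remain below the endurance limit",
    cou=Setting(reality="Clinical trial population", model="FEA of stents in virtual vessels"),
    validation_sources=["Quasi-static bench test (primary)", "Marketed-stent data", "Constitutive model tests"],
    primary_validation=Setting(reality="Radial force bench test", model="FEA of radial compression"),
)
```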

Step 1: Describe the Aim of the Computational Modeling.

Briefly describe the aim of the computational modeling, including the question of interest or the decision to be made based on model predictions.

Step 2: Describe the Reality and Model Elements of the COU.

R-COU: Summarize the actual or envisioned real world setting that the computational model will be used to answer questions about or provide information regarding a decision. Include the phenomena that will be modeled and the aspects of the COU that will affect the decision. Describe sources of natural variability and uncertainty that may be relevant to the decision.

M-COU: Describe the computational model that will be used to replicate reality and inform the decision. Include details regarding the model form/structure (i.e., the governing equations or rules underlying the mathematical and computational model), the required model inputs (i.e., quantities that are needed for the computational model, such as geometry, parameter values, or boundary/loading/initial conditions), and values of the inputs. Describe the specific simulations (i.e., specific “runs” of the computational model with specified inputs) that will be performed, and the specific quantities of interest (QOIs) that will be extracted from the simulation results to inform the decision. If one computational model is solved to obtain certain results, and then a different model is used for postprocessing to obtain the desired QOIs, then the two models should be considered as two distinct components or submodels of M-COU; describe them both. If multiple models are used, then describe them all.

Step 3: Describe the Sources of Validation Evidence.

Describe the different types of validation results available and other sources of evidence that could be used to support trustworthiness of the predictions from the computational model. Sources of evidence might include data from validation experiments or historical validation data regarding the computational model at hand or related models. Denote one of the available sources as the primary validation evidence. This is likely to be the source of evidence that was collected in a setting that is the most relevant to the COU. The other (nonprimary) sources of evidence can be used as supporting material for the assessment steps of the framework.

Step 4: Describe the Reality and Model Elements of the Primary Validation Evidence.

R-VAL: Summarize the physical setting in which the primary validation evidence was collected. The validation evidence is typically gathered from laboratory experiments. Therefore, describe the phenomena that were captured, the laboratory setup, the sources of natural and controlled variability (and uncertainty), the method employed to collect the evidence, and the range of the samples and test conditions. Include the measured QOI(s) that were used for the validation comparison.

M-VAL: Describe the computational model that was used to replicate the validation experiments R-VAL. Include details regarding the model form/structure (e.g., governing equations/rules), the required model inputs (e.g., geometry, parameters, boundary/loading/initial conditions), and the values of the inputs. Describe the simulations that were run and the specific QOIs extracted from the simulation results for the output comparison. Describe any validation comparison metrics and results. If one computational model is solved to obtain certain results, and then a different model is used in a postprocessing stage to obtain the desired QOIs, then describe them both. If multiple models are used, then describe them all.

Step 5: Describe the Aspects of the Computational Model That Are Identical in M-VAL and M-COU.

While the computational model that was validated using the primary validation evidence (M-VAL) will likely not be identical to the computational model that is used for predictions (M-COU), there will be many aspects that are common between M-VAL and M-COU. For example, the model form might remain the same, but the values of some of the inputs might change. Therefore, describe the aspects of the computational model that are identical between M-COU and M-VAL. (These are called “traveling” aspects of the model in other VVUQ literature [21,22].) Be specific when describing the model form, the inputs and the values of the inputs. Additionally, while the verification process itself should be addressed outside this framework, numerical solver aspects (e.g., mesh discretization, numerical solver settings) could be included here.

Step 6: Describe the Aspects of the Computational Model That Are Different Between M-VAL and M-COU.

ΔM: Describe all of the ways that M-VAL was modified to obtain M-COU. Such modifications could be fundamental, such as changes to the governing equations, or they could merely be modified input parameter values. If the simulations for M-COU are exactly identical to M-VAL (i.e., exactly the same numerical results are expected), then there are no modifications and this section should be left blank. Be sure to consider all of the following potential modifications: new submodels introduced, model form, input types, and/or values. Also, consider whether the QOIs used in M-VAL and M-COU are different from one another. (Here, we are referring to differences in the type of QOI, e.g., displacement as compared to force, not differences in values.) For example, it might not have been possible to physically measure the quantity being predicted in the COU, resulting in a different QOI being validated. If so, include this difference here. Finally, while the verification process itself should be addressed outside this framework, numerical solver modifications (e.g., mesh discretization, numerical solver settings) could be included here.

Step 7: Describe the Relevant Differences Between R-VAL and R-COU.

ΔR: The experimental setting of the validation evidence will not be identical to the reality of interest described in R-COU. Therefore, it is important to characterize the fundamental differences between R-COU and R-VAL. However, unlike with ΔM, it will be impossible to comprehensively describe all of the differences. Therefore, describe differences that could affect the QOI for the COU. (Note that the phenomena identification ranking table (PIRT) methodology [23] might be helpful for determining which differences to include in ΔR). Consider how representative R-VAL is of R-COU, and for what reasons R-VAL might not be representative of R-COU. Usually, a modification of the computational model presented in ΔM will have a corresponding difference in ΔR, because the modifications in ΔM are likely driven by ΔR (it can be helpful to list these first). However, there might be differences in ΔR for which there are no corresponding modifications in ΔM (because, for example, the difference was difficult to model or not considered important). Also include sources of variability in R-COU that were not present in R-VAL.

The previous seven steps are descriptive steps. The next steps are justification/assessment steps. They involve consideration of model aspects that are the same in M-VAL and M-COU (step 8), then model aspects that are different (steps 9 and 10), followed by M-COU in its entirety (step 11).

Step 8: Is It Appropriate to Use the Model Aspects Listed in Step 5 to Make Predictions About R-COU? Provide Rationale, Evidence, or Discussion. Assume That These Model Aspects Are Appropriate for R-VAL (or Refer to the Validation Results) and Then Consider Each of the Differences in ΔR (Listed in Step 7).

The validation evidence will provide information regarding how well the computational model (M-VAL) reproduces the validation experiments (R-VAL). The aim of this step is to ask: if the validation comparison is deemed adequate, can we be confident that the aspects that are identical in M-VAL and M-COU, as listed in step 5, are appropriate for the COU? Differences between R-VAL and R-COU, listed in step 7, might mean that this is not so. For example, if the “model form” is identical in M-VAL and M-COU, but validation was performed under ex vivo conditions and the COU is in vivo, we should ask: is it acceptable to use the same model form given the difference “ex vivo to in vivo”? Therefore, for each aspect listed in step 5, provide rationale, evidence, or discussion on whether the model aspect is appropriate for the COU, by considering each of the relevant differences presented in step 7.

A table can be used to consider each model aspect from step 5 versus each difference listed in ΔR, as illustrated in Table 3. The two left-hand columns should be populated with the specific aspects of R-VAL and R-COU (taken from step 2 and step 4) that are related. Populate the top row with model aspects listed in step 5. For each entry, ask the question: “is it acceptable to use this model aspect (associated column) for making predictions about R-COU, given this difference (row)?” For many entries, the computational model aspect (column) will be completely unrelated to a difference in the reality (row), and thus, no issues are raised. If they are related, then questions might be raised regarding applicability. Note that it may be necessary to revisit steps 5 and/or 7 when performing this step.
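As one illustration of this cross-tabulation, the short Python sketch below builds a Table 3-style matrix whose rows are differences from ΔR and whose columns are shared model aspects from step 5. The aspect and difference labels, and the single recorded question, are hypothetical placeholders borrowed loosely from the stent example of Sec. 4.

```python
# Step 8 cross-tabulation (Table 3 style): rows are differences listed in delta_R,
# columns are model aspects that are identical in M-VAL and M-COU (step 5).
# Each cell holds an applicability question raised, or None if the pair is unrelated.

model_aspects = ["FEA equations", "stent geometry", "constitutive model"]              # illustrative
delta_R = ["radial-only vs. multi-mode loading", "rigid cylinder vs. in vivo vessel"]  # illustrative

table = {diff: {aspect: None for aspect in model_aspects} for diff in delta_R}

# Record a question where a shared model aspect interacts with a difference in reality.
table["radial-only vs. multi-mode loading"]["constitutive model"] = (
    "Is the constitutive model appropriate for bending and axial loading?"
)

# Report only the cells that raise applicability questions.
for diff, row in table.items():
    for aspect, question in row.items():
        if question is not None:
            print(f"[{aspect}] vs. [{diff}]: {question}")
```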

When questions are raised, the use of the computational model aspect in the COU, despite the difference in reality, could be supported by theoretical reasoning (including subject matter expertise), sensitivity analysis, and/or other supporting validation evidence (e.g., those listed in step 3).

If this step raises major concerns regarding the applicability of the model for the COU that cannot be resolved using supporting evidence or subject matter expertise, proceed directly to step 12. Otherwise, proceed to step 9.

Step 9: Do the Modifications to the Computational Model (Listed in Step 6) Result in Trustworthy Predictions for the COU? Provide Rationale, Evidence, or Discussion.

Step 8 assessed whether aspects of the model that are proposed to be common between the validation and COU settings are still appropriate given the physical differences between the two settings. If so, we can ask: for the aspects of the computational model that change from M-VAL to M-COU instantiations, are these changes adequately representative of the physical changes or differences between validation and COU settings, and can predictions be trusted given these changes? Therefore, for each aspect of the computational model that was modified to apply it to the COU (those listed as ΔM in step 6; excluding any QOI differences listed, which will be considered in step 10), provide rationale, evidence, or discussion on how the modification affects trustworthiness of predictions. Be sure to consider how appropriate the modification is given the description of R-COU in step 2. Revisit step 6 and group modifications together if convenient. Analysis of the primary validation results, other sources of validation evidence, and/or subject matter expertise could be used to support trustworthiness given a modification.

Step 10: Provide Rationale for Trustworthiness If the COU QOIs Differ From Validation QOIs.

In general, the QOIs used in M-VAL and M-COU might be different from one another, as discussed in step 6. If so, provide rationale for trustworthiness of predictions of the COU QOI. (Note: another possibility that sometimes occurs is that the QOIs from R-VAL and M-VAL are different. If so, then this should be justified as part of the reporting of validation activities).

Step 11: Consider the Overall Computational Model M-COU, in the Context of Differences Between R-VAL and R-COU.

Step 8 considered the identical computational model aspects between M-VAL and M-COU, and steps 9 and 10 considered all modified aspects from M-VAL to M-COU. Now consider the overall computational model to be used for the COU, i.e., M-COU. The aim of this step is to determine if there are other issues that were not raised in the previous steps that require supporting evidence for applicability of the model for the COU. Using Table 4, populate the two left-hand columns in the same fashion as in step 8. For this step, populate the right-hand column with any additional questions that are raised due to the overall consideration of the computational model being applied to the COU.

Step 12: Assess the Overall Applicability of the Computational Model for the COU.

Specific questions regarding the applicability of the computational model for the COU should have been raised in steps 8–11. By considering the responses to these questions, assess the overall applicability of the computational model for the COU using sound scientific (albeit subjective) judgment.

In this section, we demonstrate how to perform applicability analysis with a computational model used in a medical device application. The example is hypothetical but is based on actual regulatory submissions.

A stent is a tubular structure implanted in a blood vessel used to treat blockages and support blood flow. We suppose that a medical device company, which currently has a family of stents on the market, develops a new family of nickel-titanium (nitinol) stents. A clinical trial is needed to assess the safety and effectiveness of the new device. To support the initiation of the clinical trial, preclinical data are required to demonstrate an initial level of safety regarding the mechanical performance of the stent. In particular, it is important to assess the fatigue life, or the potential for fracture, of the nitinol stents. A wide range of mechanical bench tests are typically performed to obtain these data [24]. However, for the new stent family, there are numerous stent diameters, numerous blood vessel sizes that the stents can treat, and different types of loads that the stents can experience in the clinical environment. This results in scores of conditions under which the stent family can be evaluated. Therefore, finite element analysis (FEA) is used to simulate the different stents under various possible loading conditions. The simulation results supplement the bench test results; together they serve as evidence for demonstrating the stent's mechanical performance that is needed for initiating the clinical trial.
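To convey the scale of such a simulation campaign, the sketch below enumerates a hypothetical test matrix of stent sizes, vessel sizes, and loading modes; all values are invented for illustration and do not correspond to any particular device.

```python
from itertools import product

# Illustrative (hypothetical) ranges; an actual test matrix would come from the
# device specifications and the clinical loading conditions (cf. Ref. [25]).
stent_diameters_mm = [6.0, 7.0, 8.0]
vessel_diameters_mm = [4.5, 5.0, 5.5, 6.0]
loading_modes = ["radial", "bending", "axial"]

# Enumerate every stent/vessel/loading combination to be simulated, keeping only
# clinically sensible pairings (stent oversized relative to the vessel).
conditions = [
    (stent, vessel, mode)
    for stent, vessel, mode in product(stent_diameters_mm, vessel_diameters_mm, loading_modes)
    if stent > vessel
]
print(f"{len(conditions)} FEA conditions to simulate")
```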

The FEA model first computes the strains in the stents under the simulated in vivo conditions, and then computes fatigue safety factors (FSF) by comparing the predicted strain values to a material failure criterion. Because in vivo conditions are simulated, “ideal” validation of the computational model would require a clinical study with the new stent, which is not possible because safety has not yet been demonstrated. Therefore, validation is performed using an experimental setting with significant differences to the clinical COU. Figure 3 illustrates the COU and the validation settings.

Step 1: Describe the Aim of the Computational Modeling.

The aim of the computational modeling is to simulate loading on the stents under in vivo conditions, to address the following question: are strains exhibited by the stents under in vivo conditions less than the material endurance limit? If this is the case, the modeling results can be used as evidence to support the initiation of a clinical trial.

Step 2: Describe the Reality and Model Elements of the COU.

R-COU: The actual real-world setting that the computational model will be used to address questions about is the proposed clinical trial, which could potentially involve hundreds of participants from the patient population. For each patient, the nitinol stent is loaded onto the delivery system, tracked through the patient's arterial tree, and then deployed in the blocked blood vessel that is to be opened by the stent. There are numerous sources of variability in this clinical setting that can affect the mechanical performance:

  • a range of stent diameters and lengths;

  • various blood vessel diameters and disease states of the patients in the clinical trial;

  • a broad range of normal daily activities of the patients that translate into different loading conditions and magnitudes imposed on the stents [25]. These include radial loading due to cardiac pulsatility, and bending and axial shortening/elongation due to musculoskeletal motion.

M-COU: There are two components to the model: strain prediction and fatigue safety factor. The following is performed using models of each stent in the new stent family, deployed into virtual blood vessels of varying sizes.

  • Component 1: Strain predictions

    Finite element analysis is performed on a virtual stent prescribed with material behavior and properties (i.e., constitutive model) derived from testing of the nitinol material at 37 °C to characterize its stress/strain behavior at body temperature. Simulations involve the stent inside a virtual vessel of fixed compliance. Radial, bending, and axial loading are all simulated. For each of the loading conditions, the stent is subjected to three simulation steps: the stent is loaded onto a virtual delivery system and tracked through virtual tortuous vessels, then deployed into the destination vessel, and then subjected to one of the three loads. To simulate radial loading, a rigid cylinder is used to apply a compressive radial load. To simulate bending, the ends of the vessel are deformed to a U-shape of a specific radius. To simulate axial loading, the vessel is axially compressed between two rigid plates. For each type of loading condition, strains are predicted at two load magnitudes, and the alternating strain (difference between the two strain states) is calculated.

  • Component 2: Fatigue safety factor

    The fatigue safety factor is computed as the ratio of the endurance limit, which is determined from fatigue testing of the processed nitinol material, to the peak alternating strain. If FSF > 1 at all peak strain locations along the stent, the stent is not expected to fracture under the simulated loads (see the computational sketch after this list).
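The two postprocessing components described above amount to a simple calculation once the FEA strain fields are available. The following sketch uses placeholder strain values and a hypothetical endurance limit; it follows the text's definition of alternating strain as the difference between the two simulated strain states (some fatigue analyses use half of this range instead).

```python
import numpy as np

def alternating_strain(strain_load_a: np.ndarray, strain_load_b: np.ndarray) -> np.ndarray:
    """Component 1 postprocessing: alternating strain at each location, taken here
    (per the text) as the difference between the two simulated strain states."""
    return np.abs(strain_load_a - strain_load_b)

def fatigue_safety_factor(endurance_limit: float, alt_strain: np.ndarray) -> np.ndarray:
    """Component 2: FSF = endurance limit / alternating strain, evaluated pointwise."""
    return endurance_limit / alt_strain

# Placeholder data standing in for strains extracted from the two FEA load steps.
strain_a = np.array([0.0020, 0.0040, 0.0035])
strain_b = np.array([0.0015, 0.0028, 0.0030])
endurance_limit = 0.004  # hypothetical endurance limit of the processed nitinol (strain)

fsf = fatigue_safety_factor(endurance_limit, alternating_strain(strain_a, strain_b))
print("Fracture not expected under simulated loads" if np.all(fsf > 1.0)
      else "FSF <= 1 at one or more locations")
```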

Step 3: Describe the Sources of Validation Evidence.

We assume the following sources of validation evidence are available.

  1. Quasi-static bench testing to determine the relationship between stent diameter and radial force (force–diameter curves) under radial loading, for different stents from the stent family in question. See Fig. 3. The observed force–diameter curves are compared to model predictions.
  2. Durability test data, clinical trial results, and simulation results, for other stents currently marketed by the company.
  3. Validation of the constitutive model: after samples of processed nitinol that represent the final device are mechanically tested to determine the constitutive model to be used in simulation, the company then validates the constitutive model by simulating the mechanical behavior of the nitinol material under uniform tensile loading.

The quasi-static bench testing will be used as the primary validation evidence because it involves testing of the new stent; we consider this the most relevant aspect regarding the COU.

Step 4: Describe the Reality and Model Elements of the Primary Validation Evidence.

R-VAL: The primary validation setting is a bench test used to determine the force–diameter curves for the stents in question. Testing is performed on the entire new stent family. The test apparatus radially compresses the stent to its delivery system diameter, then slowly releases the stent to its fully deployed state, and recompresses it back to its delivery system diameter. During the test, the apparatus measures the outward radial force of the stent at incremental diameters. Test stents are “preconditioned” by loading them onto the delivery system, tracking them through a tortuous mock vascular anatomy, and deploying them into the apparatus. The test is conducted in warm air at 37 °C to mimic body temperature. The quantity of interest from this test is the force–diameter curve; see the bottom-left inset in Fig. 3.

M-VAL: The computational model is of the stent family in question, with diameters matching the stents that were tested in R-VAL. Material properties of nitinol at 37 °C are imposed. The stent is preconditioned with tracking in a tortuous virtual vessel and then deployed in a virtual rigid cylinder. The rigid cylinder then compresses the stent to its delivery system diameter and slowly releases the stent to its fully deployed state. The simulation predicts a force–diameter curve analogous to the one generated in R-VAL.
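A validation comparison for this primary evidence would contrast the measured and predicted force–diameter curves using some agreed metric. The sketch below uses invented data and a normalized root-mean-square error purely as an example of such a metric; the actual comparison metric and acceptance criterion would be specified by the analyst.

```python
import numpy as np

# Hypothetical force-diameter data: measured (R-VAL bench test) vs. predicted (M-VAL),
# sampled at the same incremental stent diameters during unloading.
diameters_mm = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
force_measured_N = np.array([12.1, 9.8, 7.4, 5.1, 2.9])
force_predicted_N = np.array([11.5, 9.5, 7.8, 5.4, 3.2])

# One possible validation comparison metric: root-mean-square error normalized by
# the range of the measured force over the tested diameters.
rmse = np.sqrt(np.mean((force_predicted_N - force_measured_N) ** 2))
nrmse = rmse / (force_measured_N.max() - force_measured_N.min())
print(f"RMSE = {rmse:.2f} N, normalized RMSE = {nrmse:.1%}")
```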

Step 5: Describe the Aspects of the Computational Model That Are Identical in M-VAL and M-COU.

The following model aspects are identical in M-COU and M-VAL:

  1. Finite element analysis equations for stent deformation;
  2. Stent geometrical models, including specific stent diameters used;
  3. Constitutive model for the stent's mechanical behavior;
  4. Preconditioning method: loading the stent onto the virtual delivery system and tracking through a virtual tortuous anatomy;
  5. Boundary and loading conditions during radial loading simulations (loads applied using a rigid cylinder).

Step 6: Describe the Aspects of the Computational Model That Are Different Between M-VAL and M-COU.

ΔM:

  1. In M-VAL, there is no virtual vessel, whereas in M-COU vessels of various diameters are simulated;
  2. In M-VAL, only radial loads are applied, whereas in M-COU the loads are radial, bending, and axial shortening. Loading is applied through the mock vessel for the bending and axial shortening conditions.
  3. In M-VAL, the QOI is the force–diameter relationship, whereas in M-COU the QOI is FSF.

Remark: modifications should be grouped together as appropriate for later justification in step 9.

Step 7: Describe the Relevant Differences Between R-VAL and R-COU.

ΔR:

  1. R-VAL loads the stent under uniform radial loading conditions, whereas R-COU involves a range of loading conditions (radial, bending, axial loading, and combinations of these) that represent the patients' daily activities.
  2. In R-VAL, there is no blood vessel, just a rigid cylinder (range of diameters), whereas R-COU is the actual in vivo clinical environment with variability in disease state and vessel diameter.
  3. R-VAL is static, whereas R-COU is a dynamic cyclic environment.
  4. R-VAL is a short-term test, whereas the stent implanted in the patient is permanent.
  5. R-VAL is conducted in air, whereas in R-COU the stent is enveloped by blood and tissue.

Step 8: Is It Appropriate to Use the Model Aspects Listed in Step 5 to Make Predictions About R-COU? Provide Rationale, Evidence, or Discussion. Assume That These Model Aspects Are Appropriate for R-VAL (or Refer to the Validation Results) and Then Consider Each of the Differences in ΔR (Listed in Step 7).

Table 5 considers each model aspect that is identical in M-VAL and M-COU (those listed in step 5; columns), in light of each difference identified in ΔR (rows). For each entry, we ask: is it acceptable to use this model aspect (column) for making predictions about R-COU, given this difference (row)? Some questions that are raised are provided in the table. The reader may raise additional questions. It should be understood that the table is a tool to raise specific questions regarding applicability. Each model aspect is discussed, in turn, in the following. The discussions emphasize the questions raised, rather than responses to these questions.

Stent FEA Equations.

We can ask: “Is it appropriate to use the stent FEA equations for making predictions about R-COU?” Agreement between experiment and simulation in the primary validation results provides confidence that the FEA equations used are appropriate for simulating R-VAL (only). Table 5, column 3 considers the stent FEA equations versus each of the differences between R-VAL and R-COU, in turn. One specific question is raised: is it acceptable to use these FEA equations for the stent when simulating bending and axial loading? This would depend on the specific FEA model chosen.

Stent Geometrical Model.

We can ask: “Is it appropriate to use the stent geometrical model for making predictions about R-COU?” Agreement between experiment and simulation in the primary validation results provides confidence that the fidelity of the geometrical representation of the stent is appropriate for simulating R-VAL (only). None of the differences between R-VAL and R-COU raise any questions about the stent geometrical model (Table 5, column 4), suggesting that the fidelity of the geometrical model is also appropriate for the COU.

Stent Constitutive Model.

We can ask: “Is it appropriate to use the constitutive model for making predictions about R-COU?” The primary validation evidence only provides confidence that the constitutive model captures the mechanical behavior of the stent under radial loading. The supporting validation evidence (step 3, item 3) might additionally demonstrate that the constitutive model has captured the mechanical behavior of nitinol under tensile loading. Table 5, column 5 considers each difference between the primary validation setting and R-COU, in turn. Several specific questions are raised. One question immediately raised is whether the constitutive model is appropriate for bending and axial loading. It can be noted that radial loading has both compressive and tensile loads, which are important features of the loading state for the other complex loading modes. However, as reflected by the questions in Table 5, column 5, there are several other aspects of the constitutive model that might come into question, based on how the environment and cyclic loading could affect the material behavior in vivo. Research has shown that the chemistry and temperature can affect nitinol behavior [26]. It has also been shown that cyclic loading can affect the uniaxial behavior of nitinol [27]. Therefore, justification will be needed to demonstrate that the constitutive model captures all these relevant phenomena or that their effect can be safely neglected.

Preconditioning Method.

Table 5, column 6 raises one question to be considered regarding use of the same preconditioning method in M-VAL and M-COU: does the wet versus dry environment affect preconditioning? If so, it may be that the preconditioning method that was used in M-VAL should have been altered in M-COU to accurately reproduce R-COU.

Boundary Conditions for Radial Loading Simulations.

The method of applying the radial loading in M-VAL and M-COU is the same. The questions raised in Table 5, column 7 capture additional considerations regarding the environment of R-COU as compared to R-VAL. For example, all radial loading simulations (M-VAL and M-COU) involve uniform loading, but how uniform is pulsatile loading in vivo, given different potential disease states? In particular, the stent will be deployed onto plaques whose stiffness varies both longitudinally and radially, imposing heterogeneous loads on the stent. In addition, in the R-COU setting, the stent is loaded in the lubricious environment of blood. Therefore, additional justification is needed regarding the trustworthiness of the COU radial loading simulations despite these factors.

Step 9: Do the Modifications to the Computational Model (Listed in Step 6) Result in Trustworthy Predictions for the COU? Provide Rationale, Evidence, or Discussion.

Virtual Vessel.

This step asks whether predictions can be trusted given the inclusion of the virtual vessel in the COU simulations. More specifically, we can ask how realistic the deformation and stresses generated in the vessel are, since these will influence the loads imparted to the stent. Various questions could be raised regarding the material properties prescribed to the vessel and its geometric fidelity. Justification that these are appropriate given the COU could be based on separate (prior) validation evidence involving previous stent models (step 3, item 2), if that evidence used the same vessel model.

Introduction of Bending and Axial Shortening Loading in M-COU.

The primary validation evidence will provide confidence (or not) that the computational model in the validation setting can predict the behavior of the stent under radial loading only. However, M-COU involves bending and axial loading, in addition to the radial loading. Justification is needed for why the bending and axial simulations can be trusted, given that there is no validation evidence involving these modes. One approach might be to argue that the radial loading mode captures effects similar to those the stent experiences under bending and axial shortening. For example, it could be argued that when the stent deforms under bending and axial shortening, the struts will experience tension and compression strain states similar to those that the stent will experience under radial loading. However, justification will be needed to address this, because different parts of the stent (e.g., the stent apex versus the stent bridge) might experience these tension and compression strain states at different magnitudes. An alternative approach might be to refer to data for the previously marketed stent family (step 3, item 2), assuming such data included a comparison of simulation and physical testing under bending and axial shortening.

The third modification listed in ΔM concerns the different QOIs. As described in the instructions, these are considered in step 10, not step 9.

Remark: Let us briefly consider simple parameter modifications from M-VAL to M-COU, since this is very common but did not occur in this example. Suppose the COU simulations involved stents that had different radii and lengths from the stents used in the validation simulations and experiments but were otherwise identical. In this case, “modified values of stent radius and length” would be listed in step 6 under ΔM, and rationale for trustworthiness given this modification would be required in step 9. Assuming several different radii and lengths were used in the validation, rationale for trustworthiness could be based on concepts of the validation domain and parametric interpolation/extrapolation and related work [17,19,20]; see Fig. 1 and the earlier discussion. However, if a parameter takes only one value in validation simulations and a different value in COU simulations (e.g., room temperature validation experiments and simulations, COU simulations using 37 °C), then a different type of rationale is required.
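The interpolation/extrapolation reasoning in this remark can be made concrete with a simple range check on the parameters that differ. The sketch below uses hypothetical stent radius and length values and a crude per-parameter "box" approximation of the validation domain; the more rigorous roll-up and extrapolation methods of Refs. [17,19,20] go well beyond this.

```python
# Hypothetical validation cases: (stent radius [mm], stent length [mm]) pairs used
# in the validation experiments and simulations.
validated_points = [(3.0, 20.0), (3.5, 40.0), (4.0, 60.0), (4.5, 80.0)]

def within_validation_ranges(point, validated):
    """Crude per-parameter check: is each coordinate of the COU point inside the
    range spanned by the validation cases? (A simple box approximation of the
    validation domain of Fig. 1.)"""
    return all(
        min(v[i] for v in validated) <= point[i] <= max(v[i] for v in validated)
        for i in range(len(point))
    )

cou_point = (3.8, 55.0)  # hypothetical COU stent radius and length
print("interpolation" if within_validation_ranges(cou_point, validated_points)
      else "extrapolation")
```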

Step 10: Provide Rationale for Trustworthiness If the COU QOIs Differ From Validation QOIs.

The quantity of interest in M-VAL is the force required to deform the stent to various diameters (displacement is applied and force is calculated). For M-COU, the QOI is the fatigue safety factor, with an intermediate QOI of alternating strain. Trustworthiness of the alternating strain predictions could be argued on the basis of the close relationship between force–displacement and stress–strain. Justification is needed, however, for why local strain predictions can be trusted, given that only global force–displacement behavior was validated. Finally, the validation evidence does not directly address the trustworthiness of the FSF predictions, which are a function of local strains. Rationale is required, which could refer to general validation evidence regarding the FSF methodology for predicting fracture. Additionally, referring back to validation evidence item 2 (step 3), if there were no fractures of the previously marketed stent under durability testing, and the FEA predicted no fractures under the same conditions, then this could potentially be used to argue for the same methodology being applied to the new stent family.

Step 11: Consider the Overall Computational Model M-COU, in the Context of Differences Between R-VAL and R-COU.

Table 6 considers M-COU given each of the differences listed in ΔR and presents several additional issues that may be raised.

Step 12: Assess the Overall Applicability of the Computational Model for the COU.

This final step asks for a conclusion about the overall applicability of the computational model and the validation evidence to the COU. This requires careful consideration of each of the specific questions raised in steps 8–11, and how well they were addressed using the supporting evidence or subject matter expertise. Because this example involves a hypothetical device, and we only discussed how the questions might be addressed, we do not include a final summary or concluding remarks. However, we hope that it is clear how the applicability framework provides a methodological approach to break down the question of applicability into a series of tractable questions for assessing the use of the computational model for the COU.

In this paper, we have proposed a novel framework for assessing the applicability of a computational model and its validation evidence for a specific COU. We have demonstrated why this approach is relevant to computational models with biomedical applications, although we believe applicability analysis and this framework may have broad utility across a wide range of disciplines. In this section, we discuss the structure, limitations, and utility of the proposed framework.

Our proposed applicability framework is a conceptual approach based on describing the differences between the primary validation setting and the COU, for both the model (ΔM) and “reality” (ΔR). This approach can be considered more general than, while complementary to, quantitative methods based on parametric interpolation/extrapolation (loosely depicted in Fig. 1) and other related work [17,19,20]. Specifically, parametric modifications to a model should be listed in step 6 of the framework. Rationale for trustworthiness given a parametric modification, which is required in step 9, could utilize those methods or other rationale. Our approach shares some similarities with PCMM and PIRT, although PCMM and PIRT do not break down the assessment of the model in the same way as our steps. Our applicability assessment framework may, therefore, be useful in PIRT and PCMM assessments. It should be noted that the proposed framework is just one of many possible within this conceptual structure. Different assessment steps could be developed with the same descriptive steps (steps 1–7). We expect that the present framework could be improved or refined in the future, and domain-specific frameworks could even be developed. Also, note that performing only the descriptive steps can, by itself, provide considerable clarity regarding the use of a computational model.

One limitation of the proposed framework is that it requires a set of validation evidence that can serve as the “primary validation evidence.” If no obvious choice exists—for example, in the case of multiple sets of validation evidence that test different components of a complex model, but no validation of the system as a whole—then it might not be clear to the user how to apply the framework. Another limitation is that, while the framework has delineated steps, addressing each step may require multiple iterations. One reason for this is that, as the user addresses each step, more questions arise, which may trigger an additional response to an already “completed” step. Also, the content, phrasing, and grouping of items listed in steps 5–7 might require modifications to facilitate the response to later steps. Finally, the same or highly related questions can be raised in multiple steps; this repetition can cause confusion.

Despite these limitations, we believe that the proposed framework (and potential future variants of it) is extremely useful and powerful, for several reasons. First, it applies to many models and COUs for which current methods for assessing applicability are not immediately relevant, such as the example in Sec. 4. Applicability is rarely explicitly discussed in biomedical modeling, where it is common practice to perform validation and then modify a model for the COU, sometimes significantly, without explicitly explaining why the COU predictions should be trusted given the modifications. We hope that by providing a framework for performing applicability analysis, we can help reduce this practice. Second, the framework provides a structured reasoning approach for assessing applicability and generating a series of specific questions that should be asked, as evidenced by the series of questions systematically raised in the example in Sec. 4. Therefore, the framework provides a method for making transparent the limitations in computational modeling, and for identifying potential “leaps of faith” that might otherwise have been overlooked. Third, the framework provides a potential method for demonstrating that validation evidence, together with other available evidence, supports the use of a computational model for a COU. It could, therefore, prove a useful tool for communicating that the available scientific evidence provides confidence that a model is applicable to a COU, when this is the case. Alternatively, it can provide rational reasoning for why it does not, and might elucidate a clear path forward. Finally, the framework could also be invaluable in the design of validation experiments, by applying it in advance of performing the validation. By assessing the applicability of proposed validation evidence to a proposed COU, the framework can identify knowledge gaps that would cast doubt on the applicability of the model for the COU, even if favorable validation results were obtained.

Ultimately, we regard applicability analysis as an important complement to the current VVUQ paradigm. We believe that this framework, and the ideas motivating it, will provide a powerful tool for rigorously assessing computational models with biomedical applications—and potentially models from other disciplines—by helping to identify potential gaps in knowledge, justify why modeling results are credible, and communicate and demonstrate the trustworthiness, or lack thereof, of a computational model.

The authors would like to thank the following for their input: Andrew Baumann (FDA), Jeff Bischoff (Zimmer Biomet, Inc.), Mehul Dharia (Zimmer Biomet, Inc.), Finn Donaldson (FDA), David Gavaghan (Oxford University), Lealem Mulugeta (formerly NASA), Ryan Ortega (FDA), and Christopher Scully (FDA).

References

FDA, 2011, “ Advancing Regulatory Science at FDA: A Strategic Plan,” U.S. Food and Drug Administration, Silver Spring, MD. https://www.fda.gov/scienceresearch/specialtopics/regulatoryscience/ucm267719.htm
Viceconti, M. , Henney, A. , and Morley-Fletcher, E. , 2016, “ In Silico Clinical Trials: How Computer Simulation Will Transform the Biomedical Industry,” Int. J. Clin. Trials, 3(2), pp. 37–46.
Haddad, T. , Himes, A. , Thompson, L. , Irony, T. , Nair, R. , and MDIC Computer Modeling and Simulation Working Group Participants, 2017, “ Incorporation of Stochastic Engineering Models as Prior Information in Bayesian Medical Device Trials,” J. Biopharm. Stat., epub.

References

[1] FDA, 2011, "Advancing Regulatory Science at FDA: A Strategic Plan," U.S. Food and Drug Administration, Silver Spring, MD, https://www.fda.gov/scienceresearch/specialtopics/regulatoryscience/ucm267719.htm
[2] Viceconti, M., Henney, A., and Morley-Fletcher, E., 2016, "In Silico Clinical Trials: How Computer Simulation Will Transform the Biomedical Industry," Int. J. Clin. Trials, 3(2), pp. 37–46.
[3] Haddad, T., Himes, A., Thompson, L., Irony, T., Nair, R., and MDIC Computer Modeling and Simulation Working Group Participants, 2017, "Incorporation of Stochastic Engineering Models as Prior Information in Bayesian Medical Device Trials," J. Biopharm. Stat., epub.
[4] Winslow, R. L., Trayanova, N., Geman, D., and Miller, M. I., 2012, "Computational Medicine: Translating Models to Clinical Care," Sci. Transl. Med., 4(158), p. 158rv111.
[5] Kitano, H., 2002, "Computational Systems Biology," Nature, 420(6912), pp. 206–210.
[6] Taylor, C. A., Draney, M. T., Ku, J. P., Parker, D., Steele, B. N., Wang, K., and Zarins, C. K., 1999, "Predictive Medicine: Computational Techniques in Therapeutic Decision-Making," Comput. Aided Surg., 4(5), pp. 231–247.
[7] Metaxas, D. N., 2012, Physics-Based Deformable Models: Applications to Computer Vision, Graphics and Medical Imaging, Springer Science & Business Media, New York.
[8] ASME, 2016, "Draft V&V 40 - Standard for Verification and Validation in Computational Methods for Medical Devices," American Society of Mechanical Engineers, New York.
[9] Oberkampf, W. L., and Roy, C. J., 2010, Verification and Validation in Scientific Computing, Cambridge University Press, New York.
[10] National Research Council, 2012, Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification, National Academies Press, Washington, DC.
[11] Pathmanathan, P., and Gray, R. A., 2013, "Ensuring Reliability of Safety-Critical Clinical Applications of Computational Cardiac Models," Front. Physiol., 4, p. 358.
[12] Hemez, F., Atamturktur, H. S., and Unal, C., 2010, "Defining Predictive Maturity for Validated Numerical Simulations," Comput. Struct., 88(7), pp. 497–505.
[13] Elele, J., and Smith, J., 2010, "Risk-Based Verification, Validation, and Accreditation Process," Proc. SPIE, 7705, p. 77050E.
[14] Oberkampf, W. L., Trucano, T. G., and Hirsch, C., 2004, "Verification, Validation, and Predictive Capability in Computational Engineering and Physics," ASME Appl. Mech. Rev., 57(5), pp. 345–384.
[15] Oberkampf, W. L., Pilch, M., and Trucano, T. G., 2007, "Predictive Capability Maturity Model for Computational Modeling and Simulation," Sandia National Laboratories, Albuquerque, NM, Report No. SAND2007-5948, https://cfwebprod.sandia.gov/cfdocs/CompResearch/docs/Oberkampf-Pilch-Trucano-SAND2007-5948.pdf
[16] Beghini, L. L., and Hough, P. D., 2016, "Sandia Verification and Validation Challenge Problem: A PCMM-Based Approach to Assessing Prediction Credibility," ASME J. Verif. Validation Uncertainty Quantif., 1(1), p. 011002.
[17] Trucano, T. G., Swiler, L. P., Igusa, T., Oberkampf, W. L., and Pilch, M., 2006, "Calibration, Validation, and Sensitivity Analysis: What's What," Reliab. Eng. Syst. Saf., 91(10–11), pp. 1331–1357.
[18] Thacker, B. H., Doebling, S. W., Hemez, F. M., Anderson, M. C., Pepin, J. E., and Rodriguez, E. A., 2004, "Concepts of Model Verification and Validation," Los Alamos National Laboratory, Los Alamos, NM, Technical Report No. LA-14167, https://inis.iaea.org/search/search.aspx?orig_q=RN:36030870
[19] Kennedy, M. C., and O'Hagan, A., 2001, "Bayesian Calibration of Computer Models," J. R. Stat. Soc. B, 63(3), pp. 425–450.
[20] Hills, R. G., 2013, "Roll-Up of Validation Results to a Target Application," Sandia National Laboratories, Albuquerque, NM, Report No. SAND2013-7424, http://prod.sandia.gov/techlib/access-control.cgi/2013/137424.pdf
[21] Romero, V., 2016, "An Introduction to Some Model Validation Concepts and Paradigms and the Real Space Approach to Model Validation," Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification, Joint Army/Navy/NASA/Air Force (JANNAF) and NASA, Washington, DC.
[22] Romero, V. J., 2008, "Type X and Y Errors and Data & Model Conditioning for Systematic Uncertainty in Model Calibration, Validation, and Extrapolation," SAE Paper No. 0148-7191.
[23] Diamond, D. J., 2006, "Experience Using Phenomena Identification and Ranking Technique (PIRT) for Nuclear Analysis," PHYSOR-2006 Topical Meeting, Vancouver, BC, Canada, Sept. 10–14, Paper No. BNL-76750-2006-CP, https://www.bnl.gov/isd/documents/32315.pdf
[24] FDA, 2010, "Non-Clinical Engineering Tests and Recommended Labeling for Intravascular Stents," Food and Drug Administration, Silver Spring, MD, https://www.fda.gov/MedicalDevices/ucm071863.htm
[25] Ansari, F., Pack, L. K., Brooks, S. S., and Morrison, T. M., 2013, "Design Considerations for Studies of the Biomechanical Environment of the Femoropopliteal Arteries," J. Vasc. Surg., 58(3), pp. 804–813.
[26] Trépanier, C., and Pelton, A. R., 2004, "Effect of Temperature and pH on the Corrosion Resistance of Nitinol," International Conference on Shape Memory and Superelastic Technology (SMST), Baden-Baden, Germany, Oct. 3–7, https://www.researchgate.net/publication/242400171_EFFECT_OF_TEMPERATURE_AND_pH_ON_THE_CORROSION_RESISTANCE_OF_PASSIVATED_NITINOL_AND_STAINLESS_STEEL
[27] Schlun, M., Zipse, A., Dreher, G., and Rebelo, N., 2011, "Effects of Cyclic Loading on the Uniaxial Behavior of Nitinol," J. Mater. Eng. Perform., 20(4–5), pp. 684–687.

Figures

Fig. 1

Left: illustration of a "validation domain," defined by the input parameter values at which validation was performed, and possible COU parameter values lying inside ("interpolation") or outside ("extrapolation") the validation domain. Right: an alternative conceptual approach that relates the confidence of predictions (denoted by different shades of gray) to the distance from the validation points (see Ref. [18]).
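
The left panel's classification lends itself to a simple computational check. The sketch below is our illustration, not part of the paper; it assumes the validation domain is the convex hull of the validation points and that the inputs have already been scaled to comparable units, so that Euclidean distance is meaningful. It labels a candidate COU input point as interpolative or extrapolative and reports its distance to the nearest validation point, the quantity that the distance-based view in the right panel would map to a confidence level.

```python
# Illustrative sketch only (not from the paper): classify a candidate COU
# input point as "interpolation" or "extrapolation" relative to the convex
# hull of the validation points (Fig. 1, left), and report its distance to
# the nearest validation point, which a distance-based view (Fig. 1, right)
# would translate into a confidence level. Assumes inputs are already scaled
# to comparable units and that the validation points span the input space.
import numpy as np
from scipy.spatial import Delaunay


def classify_cou_point(validation_points, cou_point):
    """Return (label, distance to the nearest validation point)."""
    validation_points = np.asarray(validation_points, dtype=float)
    cou_point = np.asarray(cou_point, dtype=float)

    # Inside the convex hull of the validation points -> interpolation.
    hull = Delaunay(validation_points)
    label = "interpolation" if hull.find_simplex(cou_point) >= 0 else "extrapolation"

    # Euclidean distance from the COU point to the closest validation point.
    nearest = float(np.min(np.linalg.norm(validation_points - cou_point, axis=1)))
    return label, nearest


# Example: two model inputs, four validation conditions, two candidate COU conditions.
validation = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(classify_cou_point(validation, [0.5, 0.5]))  # ('interpolation', ~0.71)
print(classify_cou_point(validation, [2.0, 2.0]))  # ('extrapolation', ~1.41)
```

A purely geometric check of this kind captures only the input-space relationship between the validation conditions and the COU; it says nothing about differences in the quantities of interest, the system being modeled, or the model itself, which is the broader question applicability analysis is intended to address.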

Fig. 2

The major concepts of the applicability framework. See text for discussion.

Fig. 3

The four settings R-COU, M-COU, R-VAL, and M-VAL for the stent example. Images are samples provided by Confluent Medical Technologies, Inc., Fremont, CA. Note that pinching and torsion are possible in vivo loading modes, as illustrated (but not labeled) in the R-COU image; for simplicity, these modes were not considered in the example.

Tables

Table 1 The questions asked by verification, validation, sensitivity analysis, uncertainty quantification, and the focus of this paper, applicability analysis. Credibility assessment involves all of these stages.

Table 2 Some common current practices regarding validation of biomedical models, why such practices may be suboptimal, and the advantages of using the proposed framework to assess the validation evidence.

Table 3 An example table to support the assessment in step 8.

Table 4 An example table to support the assessment in step 11.

Table 5 Questions raised for step 8. Each entry is populated by asking: "Is it acceptable to use this model aspect (column) for making predictions about R-COU, given this difference (row)?" (An illustrative sketch of this enumeration appears after the table captions.)

Table 6 Examples of questions raised in step 11.
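
As a rough illustration of how the step-8 entries are generated, the sketch below (ours, not taken from the paper) enumerates the question from the Table 5 caption for every combination of a validation-to-COU difference (row) and a model aspect (column). The example differences and model aspects are hypothetical placeholders, not the entries of the actual table.

```python
# Hypothetical placeholders: the rows and columns of the actual Table 5 come
# from the stent example in the paper and are not reproduced here.
differences = [
    "bench-top loading conditions vs. in vivo loading environment",
    "test specimen geometry vs. marketed device geometry",
]
model_aspects = [
    "material model",
    "boundary conditions",
    "geometry representation",
]

# Step 8 poses one focused question per (difference, model aspect) pair.
for difference in differences:
    for aspect in model_aspects:
        print(
            f"Is it acceptable to use this model aspect ({aspect}) for making "
            f"predictions about R-COU, given this difference ({difference})?"
        )
```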
