Author: James J. Filliben (results 1–15 of 15)


Proceedings Papers

*Proc. ASME*. PVP2019, Volume 6A: Materials and Fabrication, V06AT06A058, July 14–19, 2019

Paper No: PVP2019-93502

Abstract

When a small crack is detected in a pressure vessel or piping, the fatigue life of the component can be estimated by applying the classical law of fracture mechanics for crack growth, provided that the crack growth exponent is correct and the crack geometry is a simple plane. Unfortunately, for an ageing vessel or piping, degradation will in practice change not only the crack growth exponent but also the crack shape, from a simple plane to a zig-zag pattern. To validate the crack growth exponent for an ageing vessel or piping, we present the design of an Intelligent PYTHON (IP) code that converts the geometry of a growing crack, initially detected and then continuously monitored over a period of time, into a series of re-meshed finite-element models, so that the analysis can use the realistic zig-zag crack geometry to find the correct crack growth exponent. Using a numerical example, we show that such an IP-assisted continuous monitoring program, with PYTHON as the management tool, TRUEGRID as the topological crack-meshing tool, and two finite-element analysis codes for verifiable stress analysis, is feasible for more accurately predicting the fatigue life of a cracked vessel or piping, because the material model then has a field-validated crack growth exponent. The significance and limitations of this IP-assisted approach are discussed.
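As a hedged sketch of the kind of crack-growth-exponent fitting the abstract describes (not the authors' IP code): given crack lengths monitored over load cycles, the Paris-law exponent m in da/dN = C(ΔK)^m can be recovered by a log-log linear least squares fit. The function name, the constant geometry factor, and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def fit_paris_exponent(a, N, delta_sigma, Y=1.0):
    """Estimate the Paris-law exponent m from monitored crack sizes.

    a           : crack lengths (m) recorded at cycle counts N
    delta_sigma : cyclic stress range (MPa)
    Y           : geometry factor (assumed constant here)
    Fits log(da/dN) = log C + m log(dK) by linear least squares.
    """
    a = np.asarray(a, float)
    N = np.asarray(N, float)
    # finite-difference estimate of the growth rate da/dN
    dadN = np.gradient(a, N)
    # stress-intensity range: dK = Y * dsigma * sqrt(pi * a)
    dK = Y * delta_sigma * np.sqrt(np.pi * a)
    m, logC = np.polyfit(np.log(dK), np.log(dadN), 1)
    return m, np.exp(logC)

# synthetic check: integrate da/dN = C * dK^m forward, then refit
C_true, m_true, ds = 1e-12, 3.0, 100.0
a = [0.001]
for _ in range(200):
    dK = ds * np.sqrt(np.pi * a[-1])
    a.append(a[-1] + C_true * dK**m_true * 500.0)   # 500 cycles per step
N = np.arange(len(a)) * 500.0
m_est, C_est = fit_paris_exponent(a, N, ds)
print(round(m_est, 2))
```

The fit recovers the exponent used to generate the data; with real monitoring data the scatter in the differenced growth rates would dominate the uncertainty of m.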

Proceedings Papers

*Proc. ASME*. PVP2018, Volume 6B: Materials and Fabrication, V06BT06A074, July 15–20, 2018

Paper No: PVP2018-84730

Abstract

In the aerospace industry, open-hole specimens of composite laminates have been used in standardized tests to generate design allowables. Using the finite element method (FEM) based tool MicMac/FEA with an ABAQUS code interface and a statistical design of experiments, Shah et al. in 2010 [11] studied the average-property-based failure envelope, with uncertainty estimates, of an open-hole specimen with a quasi-isotropic carbon fiber-epoxy laminate. Their FEM model, however, is deterministic, without uncertainty analysis. In this paper, building on Shah's FEM model, we develop an FEM model of the uniaxial strength test of holed composite laminates using ABAQUS with a series of quadrilateral S4R and triangular S3R shell-element mesh designs. The mesh density ranges from the original 8 × 8 (very coarse) to 48 × 48 (very fine). For each mesh, we compute the failure strength from the Hashin failure criteria. We then use a nonlinear least squares fit with a 4-parameter logistic function to obtain an estimate of the failure strength at infinite degrees of freedom (d.o.f.), its uncertainty at 50,000 d.o.f., and the relative-error convergence rates. Our results are then compared with Shah's, with the additional advantage that our results carry an uncertainty quantification that can be compared with experimental data. The significance and limitations of our method for the uncertainty quantification of an FEM model of the uniaxial strength test of holed composite laminates are discussed.

Proceedings Papers

*Proc. ASME*. PVP2018, Volume 6B: Materials and Fabrication, V06BT06A075, July 15–20, 2018

Paper No: PVP2018-84739

Abstract

A large number of fatigue life models for engineering materials such as concrete and steel are simply a linear or nonlinear relationship between the cyclic stress amplitude, σ_a, and the log of the number of cycles to failure, N_f. In the linear case, the relationship is a power-law relation between σ_a and N_f, with two constants determined by a linear least squares fit algorithm. The disadvantage of this simple linear fit of fatigue test data is that it fails to predict the existence of an endurance limit, defined as the cyclic stress amplitude at which the number of cycles to failure is infinite. In this paper, we introduce a nonlinear least squares fit based on a 4-parameter logistic function, where the curve of the y vs. x plot has two horizontal asymptotes, namely y_0 at the left infinity and y_1 at the right infinity, with y_1 < y_0, to simulate a fatigue model in which y decreases as x increases. In addition, a third parameter, k, denotes the slope of the curve as it traverses from the upper-left horizontal asymptote to the lower-right horizontal asymptote, and a fourth parameter, x_0, denotes the center of the curve, where it crosses a horizontal line halfway between y_0 and y_1. In this paper, the 4-parameter logistic function is simplified to a 3-parameter function when we apply it to model the fatigue stress-life relationship, because in a stress vs. log(life) plot the upper-left horizontal asymptote, y_0, can be assumed to be a constant equal to the static ultimate strength of the material, U_0. This simplification reduces the logistic function to the following form: y = U_0 − (U_0 − y_1) / (1 + exp(−k (x − x_0))), where y = σ_a and x = log(N_f). The fit algorithm allows us to quantify the uncertainty of the model and to estimate the endurance limit, which is the parameter y_1. An application of this nonlinear modeling technique to fatigue data of plain concrete in the literature gives excellent results. The significance and limitations of this new fit algorithm for the interpretation of fatigue stress-life data are presented and discussed.
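The simplified 3-parameter logistic form quoted in the abstract can be fitted with an off-the-shelf nonlinear least squares routine. This is an illustrative sketch on synthetic data, not the authors' code; the value of U_0, the parameter values, and the noise level are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

U0 = 40.0  # assumed static ultimate strength (MPa), held fixed

def sn_model(x, y1, k, x0):
    """3-parameter logistic S-N curve: x = log10(Nf), y = stress amplitude."""
    return U0 - (U0 - y1) / (1.0 + np.exp(-k * (x - x0)))

# synthetic fatigue data generated around a known endurance limit y1 = 12 MPa
rng = np.random.default_rng(1)
x = np.linspace(2, 8, 25)                      # log10 cycles to failure
y = sn_model(x, 12.0, 1.5, 5.0) + rng.normal(0, 0.3, x.size)

popt, pcov = curve_fit(sn_model, x, y, p0=[10.0, 1.0, 4.0])
y1_hat, k_hat, x0_hat = popt
y1_err = 1.96 * np.sqrt(pcov[0, 0])            # approximate 95% half-width
print(f"endurance limit ~ {y1_hat:.1f} +/- {y1_err:.1f} MPa")
```

The covariance matrix returned by `curve_fit` is what makes the endurance-limit estimate come with an uncertainty rather than as a bare point value.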

Proceedings Papers

*Proc. ASME*. PVP2018, Volume 1A: Codes and Standards, V01AT01A007, July 15–20, 2018

Paper No: PVP2018-84771

Abstract

The ASME Boiler & Pressure Vessel Code Section XI Committee is currently developing a new Division 2 nuclear code entitled the “Reliability and Integrity Management (RIM) program,” with which one can arrive at a risk-informed, NDE-based engineering maintenance decision by estimating and managing all uncertainties over the entire life cycle, including design, material selection, degradation processes, operation, and non-destructive examination (NDE). This paper focuses on the uncertainty of the NDE methods employed for preservice and inservice inspections, which arises from a large number of factors such as the NDE equipment type and age, the operator's level and years of experience, the probe angle, the flaw type, etc. We describe three approaches with which uncertainty in NDE-risk-informed decision making can be quantified: (1) a regression-model approach for analyzing round-robin experimental data such as the 1981–82 Piping Inspection Round Robin (PIRR), the 1986 Mini-Round Robin (MRR) on intergranular stress corrosion cracking (IGSCC) detection and sizing, and the 1989–90 international Programme for the Inspection of Steel Components III-Austenitic Steel Testing (PISC-AST); (2) a statistical design-of-experiments approach; and (3) an expert knowledge elicitation approach. Based on a 2003 Pacific Northwest National Laboratory (PNNL) report by Heasler and Doctor (NUREG/CR-6795), we observe that the first approach utilized round-robin studies that characterized the uncertainty of the NDE technology employed from the early 1980s to the early 1990s; this approach is very time-consuming and expensive to implement.
The second approach is based on a design of experiments (DEX) of eight field inspection exercises for finding the length of a subsurface crack in a pressure vessel head using ultrasonic testing (UT), where five factors (operator's service experience, UT machine age, cable length, probe angle, and plastic shim thickness) were chosen to quantify the sizing uncertainty of the UT method. The DEX approach is also time-consuming and costly, but has the advantage that it can be tailored to a specific defect-detection and defect-sizing problem. The third approach, using an expert panel, is the most efficient and least costly. Using the crack-length results of the second approach, we show how the expert panel approach can be implemented with a software package named the Sheffield Elicitation Framework (SHELF). The crack-length estimates with uncertainty from the three approaches are compared and discussed, along with the significance and limitations of the three uncertainty quantification approaches for risk assessment of NDE-based engineering decisions.
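A minimal sketch of the second (DEX) approach: an eight-run, two-level fractional factorial for the five listed UT factors, with main effects estimated as contrast averages. The generator choice (D = AB, E = AC), the factor short-names, and the response values are hypothetical, not the paper's data.

```python
import itertools
import numpy as np

# 2^(5-2) resolution-III design: 8 runs for 5 two-level factors
# (generators D = A*B, E = A*C are an illustrative choice)
base = np.array(list(itertools.product([-1, 1], repeat=3)))  # A, B, C
A, B, C = base.T
design = np.column_stack([A, B, C, A * B, A * C])            # add D, E
factors = ["experience", "machine_age", "cable_len", "probe_angle", "shim"]

# hypothetical measured crack-length sizing errors (mm) for the 8 runs
y = np.array([1.9, 2.4, 1.7, 2.6, 2.0, 2.5, 1.8, 2.7])

# main effect of each factor = mean(y at +1) - mean(y at -1)
for name, col in zip(factors, design.T):
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"{name:12s} effect = {effect:+.3f}")
```

With only eight runs the main effects are aliased with two-factor interactions (resolution III), which is the usual price of a fractional design; a full 2^5 design would need 32 inspection exercises.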

Proceedings Papers

*Proc. ASME*. VVS2018, ASME 2018 Verification and Validation Symposium, V001T12A001, May 16–18, 2018

Paper No: VVS2018-9320

Abstract

Errors and uncertainties in finite element method (FEM) computing can come from eight sources, the first four being FEM-method-specific and the last four model-specific: (1) the computing platform, such as ABAQUS, ANSYS, COMSOL, LS-DYNA, etc.; (2) the choice of element types in designing a mesh; (3) the choice of mean element density or degrees of freedom (d.o.f.) in the same mesh design; (4) the choice of a percent relative error (PRE) or the rate of PRE per d.o.f. on a log-log plot to assure solution convergence; (5) uncertainty in the geometric parameters of the model; (6) uncertainty in the physical and material property parameters of the model; (7) uncertainty in the loading parameters of the model; and (8) uncertainty in the choice of the model itself. By considering every FEM solution as the result of a numerical experiment for a fixed model, a purely mathematical problem, i.e., solution verification, can be addressed by first quantifying the errors and uncertainties due to the first four sources listed above, and then developing numerical algorithms and easy-to-use metrics to assess the solution accuracy of all candidate solutions. In this paper, we present a new approach to FEM verification by applying three mathematical methods and formulating three metrics for solution-accuracy assessment. The three methods are: (1) a 4-parameter logistic function to find an asymptotic solution of FEM simulations; (2) the nonlinear least squares method, in combination with the logistic function, to estimate the 95% confidence bounds of the asymptotic solution; and (3) the definition of the Jacobian of a single finite element, in order to compute the Jacobians of all elements in an FEM mesh. Using those three methods, we develop numerical tools to estimate (a) the uncertainty of an FEM solution at one billion d.o.f., (b) the gain in the rate of PRE per d.o.f. as the asymptotic solution approaches very large d.o.f., and (c) the estimated mean of the Jacobian distribution (mJ) of a given mesh design. Those three quantities are shown to be useful metrics for assessing the accuracy of candidate solutions in order to arrive at a so-called “best” estimate with uncertainty quantification. Our results include calibration of those three metrics using problems with known analytical solutions, and application of the metrics to sample problems for which no theoretical solution is known to exist.
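The third metric rests on computing the Jacobian of every element in a mesh. A minimal sketch for a 4-node bilinear quadrilateral, one common element type (not necessarily the authors' formulation):

```python
import numpy as np

def quad4_jacobian(coords, xi, eta):
    """Jacobian determinant of a 4-node bilinear quad at (xi, eta).

    coords: (4, 2) nodal coordinates, counter-clockwise order.
    """
    # derivatives of the bilinear shape functions w.r.t. (xi, eta)
    dN = 0.25 * np.array([
        [-(1 - eta), -(1 - xi)],
        [ (1 - eta), -(1 + xi)],
        [ (1 + eta),  (1 + xi)],
        [-(1 + eta),  (1 - xi)],
    ])
    J = dN.T @ np.asarray(coords, float)   # 2x2 Jacobian matrix
    return np.linalg.det(J)

# unit square: detJ = 0.25 everywhere (physical area / reference area = 1/4)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
g = 1 / np.sqrt(3)   # 2x2 Gauss-point locations
dets = [quad4_jacobian(square, xi, eta) for xi in (-g, g) for eta in (-g, g)]
print(np.mean(dets))
```

Looping this over all elements and averaging the determinants gives a mean-Jacobian statistic of the kind the abstract calls mJ; distorted elements show up as small or negative determinants.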

Proceedings Papers

*Proc. ASME*. ETAM2018, ASME 2018 Symposium on Elevated Temperature Application of Materials for Fossil, Nuclear, and Petrochemical Industries, V001T04A002, April 3–5, 2018

Paper No: ETAM2018-6711

Abstract

Uncertainty in modeling the creep rupture life of a full-scale component using experimental data at the microscopic (Level 1), specimen (Level 2), and full-size (Level 3) scales is addressed by applying the statistical theory of prediction intervals and that of tolerance intervals based on the concept of coverage, p. Using a nonlinear least squares fit algorithm and the physical assumption that the one-sided lower tolerance limit (LTL), at 95% confidence level, of the creep rupture life, i.e., the minimum time-to-failure, minTf, of a full-scale component cannot be negative as the lack or “failure” of coverage (Fp), defined as 1 − p, approaches zero, we develop a new creep rupture life model in which the minimum time-to-failure, minTf, at extremely low failure of coverage, Fp, can be estimated. Since the concept of coverage is closely related to that of an inspection strategy, and if one assumes that the predominant cause of failure of a full-size component is the failure of inspection or coverage, it is reasonable to equate the quantity Fp to a failure probability, FP, thereby leading to a new approach for estimating the frequency of in-service inspection of a full-size component. To illustrate this approach, we include a numerical example using the published creep rupture time data of an API 579-1/ASME FFS-1 Grade 91 steel at 571.1 °C (1060 °F) (API-STD-530, 2007) and a linear least squares fit to generate the necessary uncertainties for ultimately performing a dynamic risk analysis, where a graphical plot of the estimated risk with uncertainty vs. the predicted most likely date of a high-consequence failure event due to creep rupture becomes available for a risk-informed inspection strategy for energy-generation or chemical-processing plant equipment.
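A hedged sketch of the one-sided lower tolerance limit used above, following the standard normal-based construction via the noncentral t distribution (not necessarily the authors' implementation); the log rupture-time values are hypothetical.

```python
import numpy as np
from scipy import stats

def lower_tolerance_limit(x, coverage=0.90, conf=0.95):
    """One-sided lower tolerance limit for approximately normal data.

    Bounds, with confidence `conf`, the value exceeded by a fraction
    `coverage` of the population; k comes from the noncentral t.
    """
    x = np.asarray(x, float)
    n = x.size
    delta = stats.norm.ppf(coverage) * np.sqrt(n)     # noncentrality
    k = stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)
    return x.mean() - k * x.std(ddof=1)

# illustrative log10 rupture times (hours) from 10 creep tests
logTf = np.array([3.9, 4.1, 4.0, 4.3, 3.8, 4.2, 4.0, 4.1, 3.9, 4.2])
ltl = lower_tolerance_limit(logTf)
print(f"95%/90% LTL of log10 Tf: {ltl:.2f}")
```

Working in log time keeps the limit physically sensible: a lower bound on log Tf can never imply a negative rupture time, which mirrors the non-negativity assumption the abstract invokes.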

Proceedings Papers

*Proc. ASME*. PVP2016, Volume 1B: Codes and Standards, V01BT01A055, July 17–21, 2016

Paper No: PVP2016-63350

Abstract

Recent experimental results on creep-fracture damage, with minimum time to failure (minTTF) varying as the 9th power of stress, and the theoretical consequence that the coefficient of variation (CV) of minTTF is necessarily 9 times the CV of the stress, have created a new engineering requirement: the finite element analysis of pressure vessel and piping systems in power generation and chemical plants must be more accurate, with an allowable error of no more than 2%, when dealing with a leak-before-break scenario. This requirement becomes more critical, for example, when one finds a small leak in the vicinity of a hot steam piping weldment next to an elbow. To illustrate the critical nature of this creep and creep-fatigue interaction problem in engineering design and operational decision making, we present the analysis of a typical steam-piping maintenance problem, where 10 experimental data points on creep rupture time vs. stress (83 to 131 MPa) for an API Grade 91 steel at 571.1 °C (1060 °F) are fitted with a straight line using the linear least squares (LLSQ) method. The LLSQ fit yields not only a two-parameter model but also an estimate of the 95% confidence upper and lower limits of the rupture time as a basis for a statistical design against creep and creep-fatigue. In addition, we show that when the error in the stress estimate is 2% or more, the 95% confidence lower limit of the rupture time is reduced from the minimum by as much as 40%.
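A minimal illustration, on synthetic data, of the log-log LLSQ fit and of why a 9th-power stress dependence makes small stress errors so punishing; the intercept, noise level, and random seed are assumptions, not values from the paper.

```python
import numpy as np

# synthetic log-log creep data: log10(Tf) = b0 + b1 * log10(sigma), b1 ~ -9
rng = np.random.default_rng(3)
sigma = np.linspace(83, 131, 10)                  # MPa, range as in the paper
logT = 23.0 - 9.0 * np.log10(sigma) + rng.normal(0, 0.05, 10)

b1, b0 = np.polyfit(np.log10(sigma), logT, 1)     # linear least squares
print(f"fitted stress exponent: {b1:.1f}")

# sensitivity: with Tf ~ sigma^b1, a 2% stress over-estimate shifts Tf by
shift = 1.0 - 1.02 ** b1
print(f"2% stress error -> {100 * shift:.0f}% drop in predicted Tf")
```

The point estimate alone drops roughly 16% for a 2% stress error; the 40% figure quoted in the abstract refers to the 95% confidence lower limit, which compounds this slope sensitivity with the fit uncertainty.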

Proceedings Papers

Jeffrey T. Fong, Stephen R. Gosselin, Pedro V. Marcal, James J. Filliben, N. Alan Heckert, Robert E. Chapman

*Proc. ASME*. PVP2010, ASME 2010 Pressure Vessels and Piping Conference: Volume 6, Parts A and B, 1065-1089, July 18–22, 2010

Paper No: PVP2010-25168

Abstract

This paper is a continuation of a recent ASME conference paper entitled “Design of a Python-Based Plug-in for Benchmarking Probabilistic Fracture Mechanics Computer Codes with Failure Event Data” (PVP2009-77974). In that paper, co-authored by Fong, deWit, Marcal, Filliben, Heckert, and Gosselin, we designed a probability-uncertainty plug-in to automate the estimation of leakage probability, with uncertainty bounds, due to variability in a large number of factors. The estimation algorithm was based on a two-level full or fractional factorial design of experiments, such that the total number of simulations is small compared with a Monte Carlo method. This feature is attractive when the simulations are based on finite element analysis with a large number of nodes and elements. In this paper, we go one step further and derive a risk-uncertainty formula by computing separately the probability-uncertainty and the consequence-uncertainty of a given failure event, and then using the classical theory of error propagation to compute the risk-uncertainty within the domain of validity of that theory. The estimation of the consequence-uncertainty is accomplished using a public-domain software package entitled “Cost-Effectiveness Tool for Capital Asset Protection, version 4.0, 2008” (http://www.bfrl.nist.gov/oae/, or NIST Report NISTIR-7524), and is more fully described in a companion paper entitled “An Economics-based Intelligence (EI) Tool for Pressure Vessels & Piping (PVP) Failure Consequence Estimation” (PVP2010-25226, Session MF-23.4 of this conference). A numerical example applying the risk-uncertainty formula to a 16-year historical database of probability and consequence for main steam and hot reheat piping systems is presented. The implication of this risk-uncertainty estimation tool for the design of a risk-informed in-service inspection program is discussed.
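The classical error-propagation step for a risk defined as probability times consequence can be sketched as follows, assuming independent uncertainties and first-order propagation only; the numerical values are hypothetical.

```python
import math

def risk_uncertainty(P, uP, C, uC):
    """First-order propagation of error for R = P * C, with
    independent uncertainties uP and uC in probability and consequence:
    (uR/R)^2 = (uP/P)^2 + (uC/C)^2."""
    R = P * C
    uR = R * math.sqrt((uP / P) ** 2 + (uC / C) ** 2)
    return R, uR

# hypothetical: leak probability 1e-4/yr +/- 2e-5, consequence $5M +/- $1M
R, uR = risk_uncertainty(1e-4, 2e-5, 5e6, 1e6)
print(f"risk = {R:.0f} +/- {uR:.0f} $/yr")
```

The relative-variance form makes the domain of validity visible: it holds only when uP and uC are small relative to P and C and the two are uncorrelated, which is the restriction the abstract alludes to.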

Proceedings Papers

Robert E. Chapman, Jeffrey T. Fong, David T. Butry, Douglas S. Thomas, James J. Filliben, N. Alan Heckert

*Proc. ASME*. PVP2010, ASME 2010 Pressure Vessels and Piping Conference: Volume 6, Parts A and B, 1091-1105, July 18–22, 2010

Paper No: PVP2010-25226

Abstract

This paper is built around ASTM E 2506, Standard Guide for Developing a Cost-Effective Risk Mitigation Plan for New and Existing Constructed Facilities. E 2506 establishes a three-step protocol: perform a risk assessment, specify combinations of risk mitigation strategies for evaluation, and perform an economic evaluation, so as to ensure that the decision maker is provided the requisite information to choose the most cost-effective combination of risk mitigation strategies. Because decisions associated with low-probability, high-consequence events involve uncertainty, both in the appropriate evaluation procedures and in the event-related measures of likelihood and consequence, NIST developed a Risk Mitigation Toolkit. This paper uses (a) a data center undergoing renovation for improved security and (b) a PVP-related failure event to illustrate how to perform the E 2506 three-step protocol, with particular emphasis on the third step, the economic evaluation. The third step is built around the Cost-Effectiveness Tool for Capital Asset Protection (CET), developed by NIST. Version 4.0 of CET is used to analyze the security- or failure-related event, with a focus on consequence estimation and consequence assessment via Monte Carlo techniques. CET 4.0 includes detailed analysis and reporting features designed to identify key cost drivers, measure their impacts, and deliver estimated consequence parameters with uncertainty bounds. The significance of this economics-based intelligence (EI) tool for security- or failure-consequence estimation and for risk assessment of the failure of critical structures or components is presented and discussed.

Proceedings Papers

Jeffrey T. Fong, Roland deWit, Pedro V. Marcal, James J. Filliben, N. Alan Heckert, Stephen R. Gosselin

*Proc. ASME*. PVP2009, Volume 6: Materials and Fabrication, Parts A and B, 1651-1693, July 26–30, 2009

Paper No: PVP2009-77974

Abstract

In a 2007 paper entitled “Application of Failure Event Data to Benchmark Probabilistic Fracture Mechanics (PFM) Computer Codes” (Simonen, F. A., Gosselin, S. R., Lydell, B. O. Y., Rudland, D. L., and Wilkowski, G. M., Proc. ASME PVP Conf., San Antonio, TX, Paper PVP2007-26373), it was reported that the two benchmarked PFM models, PRO-LOCA and PRAISE, predicted failure probabilities of cracking significantly higher than those derived from field data in three PWR cases and one BWR case, by a factor ranging from 30 to 10,000. To explain this large discrepancy, the authors listed ten sources of uncertainty: (1) welding residual stresses; (2) crack initiation predictions; (3) crack growth rates; (4) circumferential stress variation; (5) operating temperatures that differ from design temperatures; (6) the temperature factor in the actual vs. assumed activation energy; (7) under-reporting of field data due to NDE limitations; (8) uncertainty in modeling the initiation, growth, and linking of multiple cracks around the circumference of a weld; (9) correlation of crack initiation times and growth rates; and (10) insights from NUREG/CR-6674 (2000) fatigue crack growth models using conservative inputs for cyclic strain rates and environmental parameters such as oxygen content. In this paper, we design a Python-based plug-in that allows a user to address those ten sources of uncertainty. The approach is based on the statistical theory of design of experiments with a 2-level factorial design, where a small number of runs suffices to estimate the uncertainties in the predictions of PFM models due to some combination of the source uncertainties listed by Simonen et al. (PVP2007-26373).

Proceedings Papers

*Proc. ASME*. PVP2008, Volume 6: Materials and Fabrication, Parts A and B, 1475-1501, July 27–31, 2008

Paper No: PVP2008-61565

Abstract

Scatter in laboratory data with duplicates on Charpy impact tests is analyzed by identifying several sources of variability, such as temperature, manganese sulfide, initial strain, mis-orientation, and notch radius, in order to estimate the predictive 95% confidence intervals of the mean absorbed energy at each specific test temperature. Using a combination of real and virtual data on a high-strength pressure-vessel-grade steel (ASTM A517) over a range of temperatures from −40 °C (−40 °F) to 182 °C (360 °F), together with the concept of a statistical design of experiments, we present an uncertainty estimation methodology using a public-domain statistical analysis software package named DATAPLOT. A numerical example for estimating the mean, standard deviation, and predictive intervals of the Charpy energy at 48.9 °C (120 °F) is included. To illustrate the application potential of this methodology, we extend it with error-propagation formulas to estimate the mean, standard deviation, and predictive intervals of the associated static crack initiation toughness, K_Ic. A discussion of the significance and limitations of the proposed methodology, and a concluding remark, are given at the end of this paper.
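A minimal sketch of the predictive-interval calculation described above, assuming approximate normality of the duplicate test results; the Charpy energy values are hypothetical, not the paper's data.

```python
import numpy as np
from scipy import stats

def prediction_interval(x, conf=0.95):
    """Two-sided prediction interval for ONE future observation,
    assuming the sample is approximately normal:
    mean +/- t * s * sqrt(1 + 1/n)."""
    x = np.asarray(x, float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)
    t = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    half = t * s * np.sqrt(1.0 + 1.0 / n)
    return m - half, m + half

# illustrative Charpy energies (J) from duplicate tests at one temperature
cv = np.array([68.0, 72.0, 75.0, 70.0, 74.0, 69.0])
lo, hi = prediction_interval(cv)
print(f"95% prediction interval: ({lo:.1f}, {hi:.1f}) J")
```

Note the sqrt(1 + 1/n) factor: a prediction interval for the next test is wider than a confidence interval on the mean, which is exactly the distinction that matters when screening individual specimens.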

Proceedings Papers

*Proc. ASME*. PVP2008, Volume 6: Materials and Fabrication, Parts A and B, 1537-1564, July 27–31, 2008

Paper No: PVP2008-61602

Abstract

To advance the state of the art of engineering design, we introduce a new concept of the “robustness” of a structure, measured by its ability to sustain the sudden loss of a part without immediate collapse. The concept is based on the premise that most structures have built-in redundancy, such that when the loss of a single part leads to a load redistribution, the “crippled” structure tends to seek a new stable configuration rather than collapse immediately. This property of a “robust” structure, when coupled with a continuous or periodic inspection program using nondestructive evaluation (NDE) techniques, is useful in failure prevention, because such a structure is expected to display measurable signs of weakening long before the onset of catastrophic failure. To quantify this robustness concept, we use a large number of simulations to develop a metric named the “Robustness Index” (RBI). To illustrate its application, we present two examples: (1) the design of a simple square grillage supporting a water tank, and (2) a classroom model of a 3-span double-Pratt-truss bridge. The first example is a “toy” problem, which turned out to be a good vehicle for testing the feasibility of the RBI concept. The second example is taken from a textbook on bridge design (Tall, L., Structural Steel Bridge, 2nd ed., p. 99, Fig. 4.3(b), Ronald Press, New York, NY, 1974). It is not a case study in failure analysis, but a useful classroom exercise for an engineering design course. The significance and limitations of this new approach to avoiding catastrophic failure through “robust” design are discussed.

Proceedings Papers

*Proc. ASME*. PVP2008, Volume 6: Materials and Fabrication, Parts A and B, 1615-1653, July 27–31, 2008

Paper No: PVP2008-61612

Abstract

Recent advances in computer technology, internet communication networks, and finite element modeling and analysis capability have made it feasible for engineers to accelerate the feedback loop between the field inspectors who examine a structure or component for critical flaws by nondestructive evaluation (NDE) and the office engineers who perform the damage assessment and recommend field actions to prevent failure. For example, field inspection data on critical flaws can be transmitted to the office instantly via the internet, and the office engineer, with a computer database of equipment geometry, material properties, past loading and deformation histories, and potential future loadings, can process the NDE data as input to a damage assessment model to simulate the equipment's performance under a variety of loading conditions until failure. Results of such simulations can be combined with engineering judgment to produce a specific recommendation for field action, which can likewise be transmitted to the field via the internet. In this paper, we describe a web-based NDE data analysis methodology for estimating the reliability of weld-flaw detection, location, and sizing, using a public-domain statistical data analysis software package named DATAPLOT and a ten-step sensitivity analysis of NDE data from a two-level fractional factorial orthogonal experimental design. A numerical example using the 1968 ultrasonic examination data of the weld seam in PVRC test block 251J, and the 1984 sectioning data of 251J containing 15 implanted flaws, is presented and discussed.

Proceedings Papers

*Proc. ASME*. PVP2006-ICPVT-11, Volume 6: Materials and Fabrication, 991-1013, July 23–27, 2006

Paper No: PVP2006-ICPVT-11-93927

Abstract

Using an example from a recent study of finite element method (FEM) solutions of the natural frequencies of single-crystal silicon cantilevers in atomic force microscopy (AFM), we present the results of an analysis using two powerful tools of engineering statistics, namely (a) stochastic FEM and (b) design of experiments. The analysis of the FEM results using ABAQUS, ANSYS, and LS-DYNA anisotropic elastic element types yields conclusions that engineers can use to justify decisions with a quantitative measure of uncertainty. For PVP engineers, we show with an example that this methodology is equally applicable to their decision-making process and the appropriate risk assessment.

Journal Articles

Jeffrey T. Fong, James J. Filliben, Roland deWit, Richard J. Fields, Barry Bernstein, Pedro V. Marcal

Article Type: Research Papers

*J. Pressure Vessel Technol*. February 2006, 128(1): 140–147. Published Online: October 23, 2005

Abstract

In this paper, we first review the impact of the powerful finite element method (FEM) in structural engineering, and then address the shortcomings of FEM as a tool for risk-based decision making and incomplete-data-based failure analysis. To illustrate the main shortcoming of FEM, namely that the computational results are point estimates based on “deterministic” models whose equations contain mean values of material properties and prescribed loadings, we present the FEM solutions of two classical problems as reference benchmarks: (RB-101) the bending of a thin elastic cantilever beam due to a point load at its free end, and (RB-301) the bending of a uniformly loaded square, thin, elastic plate resting on a grillage consisting of 44 columns with ultimate strengths estimated from 5 tests. Using known solutions of those two classical problems in the literature, we first estimate the absolute errors of the results of four commercially available FEM codes (ABAQUS, ANSYS, LSDYNA, and MPAVE) by comparing the known results with the FEM results for two specific parameters, namely (a) the maximum displacement and (b) the peak stress in a coarse-meshed geometry. We then vary the mesh size and element type for each code to obtain grid convergence and to answer two questions on FEM and failure analysis in general: (Q-1) Given the results of two or more FEM solutions, how do we express the uncertainty of each solution and of the combination? (Q-2) Given a complex structure with a small number of tests on material properties, how do we simulate a failure scenario and predict the time to collapse with confidence bounds? To answer the first question, we propose an easy-to-implement metrology-based approach, where each FEM simulation in a grid-convergence sequence is considered a “numerical experiment” and a quantitative uncertainty is calculated for each sequence.
To answer the second question, we propose a progressively weakening model based on a small number (e.g., 5) of tests of ultimate strength, such that the failure of the weakest column of the grillage causes a load redistribution, and collapse occurs only when the load redistribution leads to instability. This model satisfies the requirement of a metrology-based approach, in which the time to failure is given a quantitative expression of uncertainty. We conclude that in today's computing environment, and with a precomputational “design of numerical experiments,” it is feasible to quantify uncertainty in FEM modeling and progressive failure analysis.
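A complementary, standard way to express uncertainty from a grid-convergence sequence is Richardson extrapolation with an observed order of convergence. This is not the authors' metrology-based approach, just a common baseline it can be compared against; the sampled function is synthetic.

```python
import math

def richardson(f1, f2, f3, r=2.0):
    """Observed order p and extrapolated value from three solutions:
    f1 (fine), f2 (medium), f3 (coarse), with mesh refinement ratio r.
    p = ln((f3 - f2)/(f2 - f1)) / ln(r);  f_exact ~ f1 + (f1 - f2)/(r^p - 1).
    """
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    f_exact = f1 + (f1 - f2) / (r ** p - 1.0)
    return p, f_exact

# synthetic second-order behavior: f(h) = 1.0 + 0.5*h^2 at h = 0.1, 0.2, 0.4
p, f0 = richardson(1.0 + 0.5 * 0.01, 1.0 + 0.5 * 0.04, 1.0 + 0.5 * 0.16)
print(f"observed order p = {p:.2f}, extrapolated value = {f0:.4f}")
```

The gap between the finest-grid solution and the extrapolated value then serves as a discretization-error estimate for that "numerical experiment" sequence.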