In 1987, the Coordinating Group for Fluid Measurements (CGFM) of the Fluids Engineering Division (FED) was asked by Dr. Frank White, the Technical Editor of the Journal of Fluids Engineering (JFE) at that time, to prepare a set of guidelines on estimating experimental uncertainty. The purpose was to alert the authors of the Journal to the fact that estimates of experimental uncertainty enhance the value of the information reported. It was also felt that the publication of such guidelines would improve the uniformity with which experimental data are presented in the pages of the Journal. Many members of the Committee felt at that time that other reasons justified the publication of such guidelines, for example, the need for authors to differentiate between bias and precision errors and the need to handle single-sample experiments correctly.

The CGFM reviewed existing standards, including PTC 19.1 and the material presented in a collection of papers from JFE in 1985. There is no question that the basic information on how to handle uncertainty has already been published. However, it is not written in a practical format, as evidenced by its usage (or the lack thereof). The existing information is in articles that are too long, depend too heavily on multiple-sample analysis, offer little guidance on how to handle bias errors, and give the impression that uncertainty analysis requires disproportionate attention. The current statement in JFE refers authors to those articles but leaves the actual reporting format up to each author, without stringent requirements.

Over the past two years the CGFM has struggled to reach a consensus agreement on this statement. A consensus has evolved that three steps are necessary to develop good practices in reporting uncertainty estimates. First, a broad outline of policy must be introduced that recognizes bias and precision errors and the limits for the uncertainty band. Second, terminology must be standardized. Much of the problem in communicating information about uncertainty lies in the language; in particular, single-sample and multiple-sample experiments must be differentiated while recognizing that they are but endpoints on a continuum. This seems simple enough, but it is incredibly difficult to accomplish. Third, procedures for handling error, and especially bias error, need to be standardized. So far this seems possible only through examples.

The CGFM intends to continue with the steps outlined above and considers the first step to have been completed with the publication of the following guidelines. These guidelines were arrived at after long discussions and exchanges of arguments among the CGFM, some technical associate editors of the Journal, some reviewers, and the Technical Editor. Special appreciation is extended to H. W. Coleman and W. G. Steele, the principal authors of the adopted statement.

Journal of Fluids Engineering Policy on Reporting Uncertainties in Experimental Measurements and Results

Guidelines

An uncertainty analysis of experimental measurements is necessary for the results to be used to their fullest value. Authors submitting papers for publication to this Journal are expected to describe the uncertainties in their experimental measurements and in the results calculated from those measurements.

The presentation of experimental data should include the following information:

(1) The precision limit, P. The ±P interval about a result (single or averaged) is the experimenter's 95 percent confidence estimate of the band within which the mean of many such results would fall, if the experiment were repeated many times under the same conditions and using the same equipment. The precision limit is thus an estimate of the scatter (or lack of repeatability) caused by random errors and unsteadiness.

(2) The bias limit, B. The bias limit is an estimate of the magnitude of the fixed, constant error. When the true bias error in a result is defined as β, the quantity B is the experimenter's 95 percent confidence estimate such that |β| ≤ B.

(3) The uncertainty, U. The ±U interval about the result is the band within which the experimenter is 95 percent confident the true value of the result lies. The 95 percent confidence uncertainty is calculated from

U = [B^2 + P^2]^{1/2}    (1)

(a short numerical illustration of these quantities is given after the lists below).
(4) A brief description of, or reference to, the methods used for the uncertainty analysis. (If estimates are made at a confidence level other than 95 percent, adequate explanation of the techniques used must be provided.)

The estimates of precision limits and bias limits should be made corresponding to a time interval appropriate to the experiment.

It is preferred that the following additional information also be included:

(1) The precision limits and bias limits for the variables and parameters used in calculating each result.

(2) A statement comparing the observed scatter in results on repeated trials (if performed) with the expected scatter (±P) based on the uncertainty analysis.

Although it is natural in any experimental paper to discuss sources of experimental error in the body of the text, this alone does not satisfy our requirement. All reported data must show uncertainty estimates. All tables should carry estimates. All figures reporting new data should contain uncertainty estimates either on the figure itself or in the caption.

A list of references on the topic, many of which appeared in the pages of this Journal, is provided below in alphabetical order.

Example

Consider an experiment in which the pressure drop characteristics for fully developed flow conditions in a particular type of circular pipe are determined over a range of water flow rates. The outcome of this experiment might be presented by plotting one result—the Fanning friction factor, f, versus another result, the Reynolds number, Re. To obtain each “data point” that would be plotted on such a figure, the values of f and Re could be calculated from
f = \frac{\pi^2}{32} \frac{D^5}{\rho Q^2} \frac{p_1 - p_2}{x_2 - x_1}    (2)
and
Re = \frac{4 \rho Q}{\pi \mu D}    (3)

where Q is the volumetric flow rate of the water with density ρ and dynamic viscosity μ, D is the pipe diameter, p is the static pressure, x is the axial position along the pipe, and the subscripts 1 and 2 refer to the upstream and downstream pressure tap locations, respectively.
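As a concrete illustration, the following sketch evaluates Eqs. (2) and (3) for one hypothetical data point. All numerical values are assumptions chosen only to make the calculation runnable; they are not taken from any particular experiment.

```python
import math

# Hypothetical measured values for one data point (SI units; illustrative only).
Q   = 2.0e-4       # volumetric flow rate, m^3/s
D   = 0.025        # pipe diameter, m
p1  = 101_525.0    # static pressure at the upstream tap, Pa
p2  = 101_325.0    # static pressure at the downstream tap, Pa
x1  = 1.0          # upstream tap location, m
x2  = 3.0          # downstream tap location, m
rho = 998.0        # water density, kg/m^3
mu  = 1.0e-3       # water dynamic viscosity, Pa*s

# Eq. (2): Fanning friction factor from the measured pressure gradient.
f = (math.pi ** 2 / 32.0) * D ** 5 * (p1 - p2) / (rho * Q ** 2 * (x2 - x1))

# Eq. (3): Reynolds number based on the pipe diameter.
Re = 4.0 * rho * Q / (math.pi * mu * D)

print(f"f = {f:.5f}, Re = {Re:.0f}")
```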

The measured variables (Q, D, p1, p2, x1, x2) and the parameters found from reference property data (ρ, μ) contain bias errors and precision errors. For example, calibrating pressure transducers under static conditions may later introduce bias errors if the measured field involves dynamic motions. Other bias errors arise from calibration of the measurement systems for p and Q against imperfect standards and from using property values originally determined in imperfect experiments. Precision errors could arise, for example, from sensitivity of the pressure transducer, flowmeter and data acquisition system to variations in ambient temperature and humidity. Inability to hold flow rate exactly constant during a period of data acquisition could also appear as a variation in the pressure measurements.

Errors in these quantities will propagate through Eqs. (2) and (3) to produce bias and precision errors in the results f and Re. The techniques of uncertainty analysis described in the references can be used to obtain estimates of the bias limits and precision limits for the variables and parameters and the bias limit, B, the precision limit, P, and the uncertainty, U, in the quantities f and Re.
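The propagation itself can be sketched as follows, assuming for the moment that all error sources are independent and using the common root-sum-square (Taylor-series) propagation described in the references. The limits assigned to each variable and the use of finite-difference sensitivity coefficients are assumptions made only for illustration; the nominal values reuse those of the previous sketch.

```python
import math

def fanning_f(v):
    """Eq. (2): Fanning friction factor from the measured variables and properties."""
    return (math.pi ** 2 / 32.0) * v["D"] ** 5 * (v["p1"] - v["p2"]) / (
        v["rho"] * v["Q"] ** 2 * (v["x2"] - v["x1"]))

def rss_propagate(result_fn, values, limits):
    """Root-sum-square propagation of per-variable limits (bias or precision) through
    a data-reduction equation, assuming independent error sources.  Sensitivity
    coefficients are approximated by central finite differences."""
    total = 0.0
    for name, limit in limits.items():
        step = 1e-6 * abs(values[name]) or 1e-9
        hi, lo = dict(values), dict(values)
        hi[name] += step
        lo[name] -= step
        sens = (result_fn(hi) - result_fn(lo)) / (2.0 * step)
        total += (sens * limit) ** 2
    return math.sqrt(total)

# Nominal values from the previous sketch and hypothetical 95 percent limits for
# each measured variable and property (all numbers are assumptions for illustration).
nominal = {"Q": 2.0e-4, "D": 0.025, "p1": 101_525.0, "p2": 101_325.0,
           "x1": 1.0, "x2": 3.0, "rho": 998.0}
bias_limits = {"Q": 2.0e-6, "D": 1.0e-4, "p1": 20.0, "p2": 20.0,
               "x1": 1.0e-3, "x2": 1.0e-3, "rho": 1.0}
precision_limits = {"Q": 1.0e-6, "D": 0.0, "p1": 10.0, "p2": 10.0,
                    "x1": 0.0, "x2": 0.0, "rho": 0.0}

B_f = rss_propagate(fanning_f, nominal, bias_limits)       # bias limit of f
P_f = rss_propagate(fanning_f, nominal, precision_limits)  # precision limit of f
U_f = math.sqrt(B_f ** 2 + P_f ** 2)                       # Eq. (1)

print(f"f = {fanning_f(nominal):.5f}, B = {B_f:.5f}, P = {P_f:.5f}, U = {U_f:.5f}")
```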

If the two pressures, p1 and p2, are measured successively using the same absolute pressure transducer, the bias errors in the measurements of the two variables will not be independent of each other. This phenomenon of correlated bias errors occurs fairly often in the fluid and thermal sciences, usually when variables are measured using the same transducer or using different transducers that have been calibrated against the same standard. These effects must be taken into account in the uncertainty analysis. A method for doing this is shown in one example in ANSI/ASME PTC 19.1 and is derived and discussed in detail in Chapter 4 of Coleman and Steele (1989).
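Continuing the sketch above (and reusing its fanning_f, nominal, bias_limits, B_f, and P_f), one way to illustrate that correction is to add the correlated-bias cross term in the form given in the references, assuming, purely for illustration, that the shared-transducer portion of the p1 and p2 bias limits is fully correlated.

```python
# Correlated-bias cross term:
#   B_f^2 = sum_i (theta_i * B_i)^2 + 2 * theta_p1 * theta_p2 * B_p1p2,
# where B_p1p2 estimates the portion of the p1 and p2 bias limits shared because both
# pressures are measured with the same transducer (assumed fully correlated here).

def sensitivity(result_fn, values, name, rel_step=1e-6):
    """Central-difference estimate of the sensitivity coefficient d(result)/d(name)."""
    step = rel_step * abs(values[name]) or 1e-9
    hi, lo = dict(values), dict(values)
    hi[name] += step
    lo[name] -= step
    return (result_fn(hi) - result_fn(lo)) / (2.0 * step)

theta_p1 = sensitivity(fanning_f, nominal, "p1")
theta_p2 = sensitivity(fanning_f, nominal, "p2")

B_p1p2 = bias_limits["p1"] * bias_limits["p2"]   # assumed fully correlated shared bias

B_f_corr = math.sqrt(max(B_f ** 2 + 2.0 * theta_p1 * theta_p2 * B_p1p2, 0.0))
U_f_corr = math.sqrt(B_f_corr ** 2 + P_f ** 2)

# Because f depends on the difference p1 - p2, the correlated portion of the two
# pressure bias errors largely cancels, reducing the bias limit of f.
print(f"bias limit of f: {B_f:.5f} (independent) vs {B_f_corr:.5f} (correlated)")
print(f"uncertainty of f with correlated bias accounted for: {U_f_corr:.5f}")
```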

References

1. Abernethy, R. B., Benedict, R. P., and Dowdell, R. B., 1985, "ASME Measurement Uncertainty," ASME J. Fluids Eng., 107(2), pp. 161-164.

2. Coleman, H. W., and Steele, W. G., 1989, Experimentation and Uncertainty Analysis for Engineers, John Wiley & Sons, New York.

3. Kline, S. J., and McClintock, F. A., 1953, "Describing Uncertainties in Single-Sample Experiments," Mechanical Engineering, 75, pp. 3-8.

4. Kline, S. J., 1985, "1983 Symposium on Uncertainty Analysis Closure," ASME J. Fluids Eng., 107(2), pp. 181-182.

5. Kline, S. J., 1985, "The Purposes of Uncertainty Analysis," ASME J. Fluids Eng., 107(2), pp. 153-160.

6. Lassahn, G. D., 1985, "Uncertainty Definition," ASME J. Fluids Eng., 107(2), p. 179.

7. Measurement Uncertainty, ANSI/ASME PTC 19.1-1985 Part 1, 1986.

8. Moffat, R. J., 1982, "Contributions to the Theory of Single-Sample Uncertainty Analysis," ASME J. Fluids Eng., 104(2), pp. 250-260.

9. Moffat, R. J., 1985, "Using Uncertainty Analysis in the Planning of an Experiment," ASME J. Fluids Eng., 107(2), pp. 173-178.

10. Moffat, R. J., 1988, "Describing the Uncertainties in Experimental Results," Exp. Therm. Fluid Sci., 1(1), pp. 3-17.

11. Smith, R. E., Jr., and Wehofer, S., 1985, "From Measurement Uncertainty to Measurement Communications, Credibility, and Cost Control in Propulsion Ground Test Facilities," ASME J. Fluids Eng., 107(2), pp. 165-172.