A very brief summary of uncertainty analysis¶
Tip
Have a look at "Measurements and their Uncertainties" by Ifan G. Hughes and Thomas P.A. Hase to get a very approachable introduction to uncertainties and their propagation. Also consider "An introduction to error analysis" by John R. Taylor.
In practice one can never measure something exactly. The goal of uncertainty analysis is to determine an estimate x from a set of measurements and to give an uncertainty \Delta x. We specify a measurement of x with associated uncertainty \Delta x as x\pm\Delta x, which means that, with some degree of confidence, the true value lies in the interval x-\Delta x \leq x_\text{true} \leq x + \Delta x.
Error analysis, or uncertainty analysis, allows us to answer questions like:
- Do measurements agree with theoretical predictions?
- Are the measurements reproducible?
We have to distinguish between random uncertainties, systematic uncertainties, and mistakes. Historically, little distinction has been made between "uncertainty" and "error", but in this lab we understand uncertainties as specifying our degree of confidence. An error, in contrast, denotes the result of a measurement minus the true value of what is being measured. For measurements where the true value cannot be known this does not make much sense, but consider the approximation \cos(x) \simeq 1 - x^2/2 \text{ for } x\ll 1. We can evaluate both the exact function and its approximation, and the difference we compute is a definite error, rather than an uncertainty that indicates a degree of confidence.
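To make the distinction concrete, the error of the small-angle approximation can be computed exactly; a minimal sketch (the value x = 0.1 is a hypothetical example, not from the text):

```python
import math

# Hypothetical example value, small enough that x << 1 holds
x = 0.1

exact = math.cos(x)          # exact value of cos(x)
approx = 1 - x**2 / 2        # small-angle approximation
error = approx - exact       # a definite, calculable error, not an uncertainty

print(f"cos({x}) = {exact:.8f}, approximation = {approx:.8f}, error = {error:.2e}")
```

The error here is a fixed, reproducible number (of order x^4/24, the next term of the Taylor series), which is exactly what distinguishes it from an uncertainty expressing a degree of confidence.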
To understand the difference between random and systematic uncertainties, consider the following image that visualizes the difference between accuracy and precision:

Translated to a measurement this means: a precise measurement is one where the individual data points have a small spread, either relative to the average or in absolute magnitude. An accurate measurement is one where the data points agree with some "accepted" value, visualized by points clustering around the bullseye in the image above. An accurate measurement need not be precise, and a precise measurement need not be accurate.
Based on the terminology of accuracy and precision we distinguish between:
- random uncertainties, also called statistical uncertainties, as uncertainties that influence the precision,
- systematic uncertainties, as uncertainties that influence the accuracy,
- mistakes or errors, as bad data points or mistakes of the experimenter (misreading scales, malfunctioning apparatuses, unit confusions, etc.).
Examples of systematic uncertainties are:
- A ruler/tape measure does not have the exactly stated length, or its divisions are wrong (calibration uncertainty). Two apparently identical rulers could give different answers due to such manufacturing defects.
- The voltage of a power supply, or the value of a resistor, ... , is systematically too large or too small (calibration uncertainty).
Random uncertainties:
- When repeating a measurement results in slight variations, this is usually a random uncertainty. It can be due to fluctuations in the experimental setup or environment, or friction in mechanical measurement devices (e.g. the needle of an analog voltmeter).
Random uncertainties can be improved through repeated measurement until the intrinsic limitation of the apparatus (a systematic uncertainty) is reached.
In these labs we are interested in uncertainties from various sources, both statistical and systematic; when applicable, we quote them separately as x\pm \Delta x_\text{syst.}\pm \Delta x_\text{stat.}.
Note
While random uncertainties can be reduced/improved by repeating the measurement, this is not the case with systematic uncertainties, which require a different/improved measurement procedure or apparatus.
Determining the random/statistical uncertainty through multiple measurements¶
To combine multiple measurements l_i for i=1,\ldots,N into an improved estimate, we take the arithmetic average: $$ \overline l = \frac{1}{N} \sum_{i=1}^N{l_i}\,. $$
This average is the best estimate of the true measurement value that we have.
Next, we define the standard deviation (SD) \sigma_l, which is a measure of the average uncertainty of the individual measurements l_i: $$ \sigma_l = \sqrt{\frac{1}{N-1} \sum_{i=1}^N ( l_i - \overline l)^2}\,. $$
With more and more measurements the value of \sigma_l does not necessarily decrease, but it will stabilize at a certain fixed value. This makes it clear that \sigma_l is indeed a measure of the reliability of our individual measurements.
For our results we will always quote the best estimate, the average \overline l, which has an associated uncertainty better than \sigma_l. The standard deviation of the mean (SDOM) $$ \sigma_\overline{l} = \frac{\sigma_l }{ \sqrt{N} } $$ is an estimate of this. This is the statistical uncertainty that we quote in results. With more and more measurements \sigma_\overline{l} decreases, as we would indeed expect the average to become more precise with more measurements included.
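The three quantities above can be computed directly; a minimal sketch with NumPy (the measurement values are hypothetical example data):

```python
import numpy as np

# Hypothetical repeated length measurements l_i (in cm)
l = np.array([10.2, 10.4, 10.1, 10.3, 10.2, 10.5])
N = len(l)

mean = l.mean()            # best estimate, the average \overline{l}
sd = l.std(ddof=1)         # standard deviation sigma_l (note the N-1 denominator)
sdom = sd / np.sqrt(N)     # standard deviation of the mean (SDOM)

print(f"l = {mean:.2f} ± {sdom:.2f} cm")  # → l = 10.28 ± 0.06 cm
```

Note the `ddof=1` argument: NumPy's default is the 1/N (population) formula, so `ddof=1` is needed to reproduce the 1/(N-1) sample standard deviation defined above.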
Note that the decrease is proportional to 1/\sqrt{N}, so to improve the random uncertainty by a factor of 2 (10), you need to perform 4 (100) times more measurements. Taking more measurements has to be balanced against the systematic uncertainties, which cannot be reduced this way. If the systematic uncertainty is already larger than the statistical one, it is better to improve the measurement procedure or apparatus instead: most effort should go into reducing the dominant uncertainty source.
Five golden rules for reporting a measured parameter, following Hughes & Hase
- The reported best estimate is the mean.
- The reported uncertainty is the standard deviation of the mean (SDOM).
- Quote the uncertainty to one significant figure (or two if the first significant figure is 1). (This is because the uncertainty of the uncertainty decreases only very slowly: more than 5000 measurements are necessary to achieve a 1\% uncertainty of the uncertainty!)
- Match the number of decimal places in the mean to the SDOM, but carry all digits through to the final result before rounding to avoid rounding errors.
- Include units!
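The rounding rules above can be automated; the following is a sketch of a hypothetical helper (the function name `report` and the example values are illustrative, not part of the rules themselves):

```python
import math

def report(mean, sdom, unit=""):
    """Format mean ± SDOM following the rules above: round the uncertainty
    to one significant figure (two if its first digit is 1), then match
    the number of decimal places in the mean. Hypothetical helper."""
    exponent = math.floor(math.log10(abs(sdom)))      # order of magnitude of the uncertainty
    first_digit = int(abs(sdom) / 10**exponent)       # leading significant digit
    sig_figs = 2 if first_digit == 1 else 1
    decimals = max(0, sig_figs - 1 - exponent)        # decimal places to display
    u = round(sdom, sig_figs - 1 - exponent)
    m = round(mean, sig_figs - 1 - exponent)
    return f"{m:.{decimals}f} ± {u:.{decimals}f} {unit}".strip()

print(report(10.28333, 0.06009, "cm"))  # → 10.28 ± 0.06 cm
print(report(10.28333, 0.14720, "cm"))  # → 10.28 ± 0.15 cm (two figures: leading 1)
```

The intermediate values are kept at full precision and rounding happens only at the formatting step, in line with rule 4.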
Uncertainty propagation¶
Often we measure individual quantities with associated uncertainties, but really want to compute a derived quantity given by a mathematical function. For example, the density as a function of mass m and volume V is given by \rho(m,V) = m/V.
For a function f(x) of a single measured quantity x with uncertainty \Delta x, the uncertainty in f can be computed as \Delta f_x = |f(x+\Delta x) - f(x)|. For multiple variables, as for the density above, we add the individual contributions in quadrature: $$ \Delta\rho = \sqrt{(\Delta\rho_m)^2 + (\Delta\rho_V)^2} \equiv \sqrt{|\rho(m+\Delta m,V)-\rho(m,V)|^2 + |\rho(m,V+\Delta V)-\rho(m,V)|^2}\,. $$
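This functional approach translates directly into code; a minimal sketch for the density example (the measured values and uncertainties are hypothetical):

```python
import math

def rho(m, V):
    """Density as a function of mass and volume."""
    return m / V

# Hypothetical measured values with uncertainties
m, dm = 25.3, 0.2   # mass in g
V, dV = 9.5, 0.3    # volume in cm^3

# Shift one variable at a time by its uncertainty
drho_m = abs(rho(m + dm, V) - rho(m, V))
drho_V = abs(rho(m, V + dV) - rho(m, V))

# Combine the independent contributions in quadrature
drho = math.sqrt(drho_m**2 + drho_V**2)

print(f"rho = {rho(m, V):.2f} ± {drho:.2f} g/cm^3")  # → rho = 2.66 ± 0.08 g/cm^3
```

Comparing `drho_m` and `drho_V` also shows immediately which input dominates the combined uncertainty, i.e. where improvement effort is best spent.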