Consider a random variable uniformly distributed between limits a and b, with a probability distribution function f(x) equal to the constant 1/(b − a) for a ≤ x ≤ b and zero elsewhere.
It is straightforward to derive its mean and variance as:

    μ = (a + b)/2,    σ² = (b − a)²/12
Further, if we form a second random variable by summing N of these uniformly distributed variables together independently, then it is a property of statistically independent variables that the mean and variance of the sum will be:

    μ_N = N μ = N (a + b)/2,    σ²_N = N σ² = N (b − a)²/12
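These expressions are easy to check numerically. The short Python sketch below (an illustrative addition, not part of the original experiment) estimates the mean and variance of a single uniform variable on [−1, 1] and of a sum of N of them:

```python
# Numerical check of the mean/variance formulas for a uniform
# variable on [a, b] and for a sum of N independent copies.
import random
import statistics

random.seed(1)
a, b = -1.0, 1.0

# One uniform variable: expect mean (a + b)/2 = 0 and
# variance (b - a)^2/12 = 1/3.
xs = [random.uniform(a, b) for _ in range(200_000)]
print(statistics.fmean(xs), statistics.pvariance(xs))

# A sum of N = 10,000 such variables over 100 trials:
# expect mean 0 and variance N/3, about 3333.
N, trials = 10_000, 100
sums = [sum(random.uniform(a, b) for _ in range(N)) for _ in range(trials)]
print(statistics.fmean(sums), statistics.pvariance(sums))
```

With only 100 trials the sample variance of the sums scatters around N/3, as discussed below for the actual experiment.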
Now suppose we run the following segment of pseudocode:

    integer i; real r, s1, s2;
    s1 = 0; s2 = 0;
    for i = 1 to 10000 {
        r = uniform pseudorandom number between -1 and 1;
        s1 = s1 + r;
        s2 = round(s2 + r, 3 decimal digits);
    }
    report s1, s2, and the accumulated error s2 - s1;
Since we see a pseudorandom number being generated with the values a = −1 and b = 1, we expect the mean and variance of the uniform variable r to be 0 and 1/3, respectively. The pseudocode accumulates the sum of N = 10,000 such variables in each of the two sum variables s1 and s2, so we would expect both sums to have a mean of 0 and a variance of N/3 ≈ 3333. The difference between the two sum variables is that the second variable, s2, is subjected to a rounding error after only three decimal digits of precision on every addition. We use the example of 10,000 summations here to represent the effects of a long series of numeric calculations that may be necessary for the solution of some quantitative problem. The error that has accumulated in the rounded sum is calculated after the loop has terminated.
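The pseudocode translates directly into Python. This sketch (an illustrative addition, with Python's built-in round() standing in for the three-digit rounding) runs one trial of the experiment:

```python
# One trial of the rounding-error experiment: accumulate 10,000
# uniform variables on [-1, 1] into s1 at full double precision,
# and into s2 with a rounding to three decimal places after each add.
import random

random.seed(2)
N = 10_000
s1 = 0.0
s2 = 0.0
for _ in range(N):
    r = random.uniform(-1.0, 1.0)
    s1 = s1 + r
    s2 = round(s2 + r, 3)   # rounding error injected at every step

print(s1, s2, s2 - s1)      # s2 - s1 is the accumulated rounding error
```

Repeating this over many trials and taking the mean and variance of s1, s2, and s2 − s1 reproduces the statistics discussed next.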
You should see the mean and variance of the two sum variables reported in the neighborhood of 0 and 3333, respectively. You should also see the variance of the cumulative rounding error come out to about 0.0008333 after each set of 100 trials. Why?
We can model the error committed each time a number is rounded to, for example, three decimal places as the addition of another uniform pseudorandom variable distributed between −0.0005 and +0.0005. This means a variance of (0.001)²/12 ≈ 8.3333333×10⁻⁸. Over a sum of 10,000 additions of this pseudorandom rounding error, this gives an expected variance of 8.3333333×10⁻⁴. Of course, an experimental value for the mean and variance derived from only 100 trials will exhibit some pseudorandom variation, but these are the typical values observed.
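A quick check of this arithmetic (again an illustrative Python addition): the per-rounding variance is (0.001)²/12, and simulating the rounding error directly as a uniform variable over 100 trials of 10,000 additions gives a sample variance near 8.33×10⁻⁴:

```python
# Model each rounding to three decimals as adding a uniform error
# on [-0.0005, +0.0005], and check the predicted variance of the
# error accumulated over 10,000 additions.
import random
import statistics

per_round_var = 0.001**2 / 12
print(per_round_var)                 # about 8.333e-08
print(10_000 * per_round_var)        # about 8.333e-04

random.seed(3)
errors = [sum(random.uniform(-0.0005, 0.0005) for _ in range(10_000))
          for _ in range(100)]
print(statistics.pvariance(errors))  # in the neighborhood of 8.33e-04
```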
This simple statistical experiment demonstrates that over many thousands of arithmetic operations, typical for a numerical solution to a complex physics problem, considerable numerical rounding error may accumulate. This is why we use the 64-bit double-precision floating-point format, or in extreme cases even the 80-bit long double format, in practical numerical computation. These formats introduce rounding errors with a statistical variance so small that, even accumulated over many thousands of operations, there is a vanishingly small probability of significant numerical error for most applications.
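The same experiment makes the point about machine formats. The sketch below (an illustrative addition, using the standard struct module to round each partial sum to IEEE 754 single precision) compares a 32-bit running sum against the native 64-bit double sum over the same additions:

```python
# Compare rounding-error accumulation in 32-bit single precision
# against 64-bit double precision over the same 10,000 additions.
import random
import struct

def to_f32(x):
    # Round a Python double to the nearest 32-bit float and back.
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(4)
s64 = 0.0    # native Python float: IEEE 754 double precision
s32 = 0.0    # rounded to single precision after every addition
for _ in range(10_000):
    r = random.uniform(-1.0, 1.0)
    s64 = s64 + r
    s32 = to_f32(s32 + r)

print(s64, s32, s32 - s64)   # drift from single-precision rounding
```

The single-precision sum drifts measurably away from the double-precision one, while the double's own rounding variance is smaller by many further orders of magnitude.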