In R I have written this function
ifun <- function(m) {
  o = c()
  for (k in 1:m) {
    o[k] = prod(1:k) / prod(2 * (1:k) + 1)
  }
  o_sum = 2 * (1 + sum(o))  # Final result
  print(o_sum)
}
This function approximates the constant pi. However, for m > 50 the approximation gets stuck, i.e., it keeps returning the same value and does not get any better. How can I fix this? Thanks.
Let's look at what happens inside:

There is no further improvement after m = 51, because the later terms are too small compared with the leading terms: they fall below what machine precision can register. So eventually you are just adding zeros.
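This is easy to verify numerically. Here is a minimal sketch (reusing the same o as in your function) that tracks the running approximation and finds the first term whose addition no longer changes it:

```r
# Recompute the terms from the question and watch the running sum saturate
m <- 100
o <- numeric(m)
for (k in 1:m) o[k] <- prod(1:k) / prod(2 * (1:k) + 1)

approx <- 2 * (1 + cumsum(o))         # approximation after each term
first_stuck <- which(diff(approx) == 0)[1] + 1
first_stuck                           # somewhere around k = 52

# the terms shrink monotonically, so once one rounds away, all later
# ones do too: adding the remaining ~50 terms changes nothing
identical(approx[m], approx[first_stuck - 1])
```

The terms o[k] decrease roughly geometrically (the ratio o[k+1] / o[k] tends to 1/2), so by k of about 52 they are smaller than half a unit in the last place of the accumulated sum, and each addition rounds back to the same value.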
In this case, the summation over o has numerically converged, and so has your approximation to pi.

More thoughts
The IEEE 754 standard for double-precision floating-point format states that on a 64-bit machine, 1 bit is used for the sign, 11 bits for the exponent, and 52 bits for the significand (53 significant bits counting the implicit leading bit). This gives the machine precision: 1 / (2 ^ 52) = 2.2204e-16.
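You can confirm this value directly in R, with no extra packages:

```r
# Machine epsilon for doubles: the gap between 1 and the next
# representable number above it
.Machine$double.eps            # 2.220446e-16
.Machine$double.eps == 2^-52   # TRUE

# equivalently: adding half an epsilon to 1 rounds straight back to 1
(1 + .Machine$double.eps / 2) == 1   # TRUE
```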
In other words, a double-precision floating-point number has at most 16 valid significant digits. The R function print can display up to 22 digits, while sprintf can display even more, but remember: any digits beyond the 16th are invalid garbage values. Have a look at the constant pi in R. If you compare it with How to print 1000 decimals places of pi value?, you will see that only the first 16 digits are truly correct.
No. There have been many clever algorithms that can compute a shockingly great number of digits of pi, but you cannot modify your approach to get more valid significant digits.

At first I was thinking about computing sum(o[1:51]) and sum(o[52:100]) separately, as both of them give 16 valid significant digits. But we can't just concatenate them to get 32 digits. Because for sum(o[1:51]), the true digits beyond the 16th are not zeros, the 16 digits of sum(o[52:100]) are not the 17th ~ 32nd digits of sum(o[1:100]).
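A quick sketch of the scales involved (same o as above) shows why: the entire second partial sum is smaller than the last representable digit of the first, so adding it can at best wiggle that final digit.

```r
m <- 100
o <- numeric(m)
for (k in 1:m) o[k] <- prod(1:k) / prod(2 * (1:k) + 1)

a <- sum(o[1:51])    # ~0.5708..., carries 16 valid significant digits
b <- sum(o[52:100])  # tiny: on the order of 1e-16 or smaller
c(a, b)

# b is comparable to one ulp of a, so a + b can only nudge the very
# last representable digit of a -- it cannot supply digits 17 through 32
# of the true infinite-precision sum
```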