I am currently writing a program that calculates the digits of pi, and I have run into a problem: after three iterations, the number of correct digits exceeds the precision available in a double.
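For context, a double carries a 53-bit mantissa, which is roughly 15–17 significant decimal digits; anything smaller than about half a ulp is silently rounded away. A quick illustration of the limit I am hitting:

```csharp
using System;

class DoublePrecisionDemo
{
    static void Main()
    {
        // Math.PI already fills the full precision of a double
        // (about 16 significant decimal digits).
        Console.WriteLine(Math.PI);

        // Near pi, one ulp is about 4.4e-16, so adding 1e-16 is
        // below half a ulp and is lost entirely.
        Console.WriteLine(Math.PI + 1e-16 == Math.PI); // True
    }
}
```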
I have heard of the System.Numerics.BigInteger type (in System.Numerics.dll), but I need floating-point numbers, and I do not understand the algorithm well enough to rewrite it with integers. It would be great if a version of BigInteger existed that supported a decimal point. Below is my C# code:
// Gauss–Legendre iteration: a, b, t, p as in the standard formulation.
var a = 1.0;
var b = 1 / Math.Sqrt(2);
var t = 0.25;
var p = 1.0;

for (int i = 1; i <= accuracy; i++)
{
    // a_{n+1} = (a_n + b_n) / 2,  b_{n+1} = sqrt(a_n * b_n)
    double anext = (a + b) / 2;
    double bnext = Math.Sqrt(a * b);
    // t_{n+1} = t_n - p_n * (a_n - a_{n+1})^2,  p_{n+1} = 2 * p_n
    double tnext = t - p * (a - anext) * (a - anext);
    double pnext = 2 * p;

    a = anext;
    b = bnext;
    t = tnext;
    p = pnext;

    // pi is approximated by (a + b)^2 / (4t)
    var pi = (a + b) * (a + b) / (4 * t);
    Console.WriteLine("Iteration = " + i);
    Console.WriteLine("Pi = " + pi + "\n");
}
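In case it helps: my understanding is that, absent a built-in BigDecimal, the usual workaround is to emulate a fixed decimal point by scaling a BigInteger by a power of ten, so that arithmetic stays in integers. A minimal sketch of the idea, just for the square root I need (the `ISqrt`/`SqrtScaled` names are my own, not library calls):

```csharp
using System;
using System.Numerics;

class FixedPointSketch
{
    // Every value is stored multiplied by Scale = 10^Digits,
    // i.e. Scale plays the role of the decimal point.
    const int Digits = 30;
    static readonly BigInteger Scale = BigInteger.Pow(10, Digits);

    // Integer square root (floor) via Newton's method.
    static BigInteger ISqrt(BigInteger n)
    {
        if (n < 2) return n;
        BigInteger x = n, y = (x + 1) / 2;
        while (y < x)
        {
            x = y;
            y = (x + n / x) / 2;
        }
        return x;
    }

    // sqrt of a scaled value: if v represents x * Scale, then
    // sqrt(x) * Scale == floor(sqrt(v * Scale)).
    static BigInteger SqrtScaled(BigInteger v) => ISqrt(v * Scale);

    static void Main()
    {
        BigInteger two = 2 * Scale;        // represents 2.000...0
        BigInteger root = SqrtScaled(two); // digits of sqrt(2), scaled by 10^30
        Console.WriteLine(root);
    }
}
```

Multiplication and division would need the same kind of rescaling (divide by `Scale` after multiplying, multiply by `Scale` before dividing), which is exactly the bookkeeping I was hoping a ready-made type would handle.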