I implemented the Madhava–Leibniz series (pi = 4/1 - 4/3 + 4/5 - 4/7 + ...) to calculate pi in Python, and then in Cython to improve the speed. The Python version:
```python
from __future__ import division
pi = 0
l = 1
x = True
while True:
    if x:
        pi += 4/l
    else:
        pi -= 4/l
    x = not x
    l += 2
    print str(pi)
```
The Cython version:
```cython
cdef float pi = 0.0
cdef float l = 1.0
cdef unsigned short x = True
while True:
    if x:
        pi += 4.0/l
    else:
        pi -= 4.0/l
    x = not x
    l += 2
    print str(pi)
```
When I stopped the Python version, it had correctly calculated pi to 3.141592. The Cython version eventually ended up at 3.141597, followed by some more digits that I don't remember (my terminal crashed), but they were incorrect. Why are the Cython version's calculations incorrect?
How do you know when it's finished? Have you considered that the value for `pi` oscillates about the true value? If you stop the code at some arbitrary point, you could have a value that is too high (or too low).

You are using `float` in the Cython version -- that's single precision! Use `double` instead, which corresponds to Python's `float` (funnily enough). The C type `float` only has about 8 significant decimal digits, whereas `double` or Python's `float` has about 16 digits.
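You can see the difference from plain Python by round-tripping a value through C single precision with the standard `struct` module (a quick sketch; the variable names here are just for illustration):

```python
import struct

pi_ref = 3.14159265358979   # a Python float, i.e. a C double
# Pack as a C float ('f'), then unpack: the extra precision is discarded.
pi_single, = struct.unpack('f', struct.pack('f', pi_ref))
print pi_single             # ~3.1415927410125732: wrong after 7 digits
print pi_ref                # 3.14159265358979: the double survives intact
```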
If you want to increase speed, note that you can simplify the logic by unrolling your loop once, like so:
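For example, something like this sketch (it already switches to `double` as recommended above; the fixed term count is a hypothetical choice added here so the loop terminates):

```cython
cdef double pi = 0.0
cdef double l = 1.0
cdef long i
for i in range(10000000):    # hypothetical fixed number of term pairs
    pi += 4.0 / l            # the positive term: 4/1, 4/5, 4/9, ...
    pi -= 4.0 / (l + 2.0)    # the following negative term: 4/3, 4/7, ...
    l += 4.0                 # advance past both denominators
print str(pi)                # print once, after the loop
```

Handling two terms per pass removes the `x` flag and the branch from the loop body entirely.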
Also note that you don't have to call `print` inside the loop - it is probably taking ten times longer than the rest of the calculation.