I was benchmarking some Python code when I noticed something strange. I used the following function to measure how long it takes to iterate through an empty for loop:
import time

def f(n):
    t1 = time.time()
    for i in range(n):
        pass
    print(time.time() - t1)
f(10**6) prints about 0.035, f(10**7) about 0.35, f(10**8) about 3.5, and f(10**9) about 35. But f(10**10)? Well over 2000. That's certainly unexpected. Why would it take over 60 times as long to iterate through 10 times as many elements? What is it about Python's for loops that causes this? Is this Python-specific, or does this occur in a lot of languages?
Some more accurate timings using timeit show that the times actually increase roughly in line with the input size, so your timings seem to be quite a ways off. Using xrange and Python 2, the ratio is roughly the same; Python 2 is much faster overall because Python 3's int is what used to be the arbitrary-precision long.
The actual difference in run time seems to be related more to the size of Windows' C long than to Python 3 itself. The difference is marginal on Unix, where a C long is typically 64 bits rather than Windows' 32, so this is a platform-specific issue at least as much as a Python one.
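If you want to confirm what your platform uses for a C long, a quick check like this (illustrative, not part of the original answer) works:

import struct

# 4 bytes (32 bits) on Windows, typically 8 bytes (64 bits) on 64-bit Unix.
print(struct.calcsize("l") * 8, "bit C long")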
When you get much above 10^9 you run out of 32-bit integer range (the limit is 2^31 - 1, about 2.1 x 10^9). Python 3 then transparently moves you onto arbitrary-precision integers, which are much slower to allocate and work with. In general, working with such big numbers is one of the areas where Python 3 is a lot slower than Python 2 (which at least had fast 64-bit integers on many systems). On the plus side, it makes Python easier to use, with fewer overflow-type errors.