When iterating over a large array with a range expression, should I use Python's built-in range function, or numpy's arange
to get the best performance?
My reasoning so far:
arange probably resorts to a native implementation and might therefore be faster. On the other hand, arange returns a full array, which occupies memory, so there might be an overhead. Python 3's range, in contrast, is evaluated lazily and does not hold all the values in memory.
For large arrays numpy should be the faster solution.
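To illustrate the memory point, a small sketch (the sizes in the comments are what one would expect on a 64-bit CPython with numpy's default integer dtype, not figures measured for this question):

    import sys
    import numpy as np

    n = 1_000_000

    r = range(n)      # lazy: stores only start, stop and step
    a = np.arange(n)  # materializes all n integers up front

    print(sys.getsizeof(r))  # a few dozen bytes, independent of n
    print(a.nbytes)          # roughly n * 8 bytes for a 64-bit integer dtype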
In numpy you should use combinations of vectorized calculations, ufuncs and indexing to solve your problems, as these run at C speed. Looping over numpy arrays is inefficient compared to this. (About the worst thing you could do would be to iterate over the array with an index created with range or np.arange, as the first sentence of your question suggests, but I'm not sure if you really mean that.) So for this case numpy is about 4 times faster than using xrange, if you do it right. Depending on your problem, numpy can give a much bigger speed-up than a factor of 4 or 5. The answers to this question explain some more advantages of using numpy arrays instead of Python lists for large data sets.
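As a rough sketch of the kind of comparison meant here (the concrete task, summing the squares of a large array, is my own choice of example, not the original benchmark):

    import timeit
    import numpy as np

    a = np.arange(1_000_000, dtype=np.float64)

    def loop_sum(a):
        # Explicit Python loop with an index: the pattern to avoid.
        total = 0.0
        for i in range(len(a)):
            total += a[i] * a[i]
        return total

    def vector_sum(a):
        # Vectorized ufunc expression: the work happens in compiled numpy code.
        return np.sum(a * a)

    print(timeit.timeit(lambda: loop_sum(a), number=3))
    print(timeit.timeit(lambda: vector_sum(a), number=3))

On a typical machine the vectorized version wins by a wide margin, and the gap grows with the size of the array.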
First of all, as written by @bmu, you should use combinations of vectorized calculations, ufuncs and indexing. There are indeed some cases where explicit looping is required, but those are really rare.
If an explicit loop is needed: with Python 2.6 and 2.7 you should use xrange (see below). From what you say, in Python 3 range does the same job as Python 2's xrange (it is evaluated lazily rather than building a list), so range should be just as good for you.
Now, you should try it yourself (using timeit; in IPython the %timeit "magic function" makes this easy).
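A comparison along these lines (the empty loop body and the size of one million are illustrative choices on my part) can be run directly in IPython:

    In [1]: import numpy as np

    In [2]: %timeit for i in range(1000000): pass

    In [3]: %timeit for i in np.arange(1000000): pass

Compare the reported times on your machine; iterating over np.arange also pays for building the full array and boxing each element into a Python object, so a plain element-by-element loop rarely benefits from it.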
Again, as mentioned above, most of the time it is possible to use a numpy vector/array formula (or ufuncs, etc.), which runs at C speed and is much faster. This is what we could call "vector programming". It makes the program easier to implement (and more readable) than C, but almost as fast in the end.
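For a concrete illustration of such a vector formula (the polynomial below is just an arbitrary example computation I picked):

    import numpy as np

    x = np.linspace(0.0, 1.0, 1_000_000)

    # Explicit loop: one Python-level iteration per element.
    y_loop = np.empty_like(x)
    for i in range(len(x)):
        y_loop[i] = 3.0 * x[i] ** 2 + 2.0 * x[i] + 1.0

    # Vector formula: the whole expression is evaluated element-wise in C.
    y_vec = 3.0 * x ** 2 + 2.0 * x + 1.0

    assert np.allclose(y_loop, y_vec)

Both versions compute the same result, but the one-line array expression reads almost like the mathematical formula and is far faster than the indexed loop.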