I need to get an overview of the performance one can expect from using Cython in high-performance numerical code. One of the things I am interested in is finding out whether an optimizing C compiler can vectorize code generated by Cython. So I decided to write the following small example:
import numpy as np
cimport numpy as np
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef int f(np.ndarray[int, ndim = 1] f):
    cdef int array_length = f.shape[0]
    cdef int sum = 0
    cdef int k
    for k in range(array_length):
        sum += f[k]
    return sum
I know that there are NumPy functions that do the job, but I would like to keep the code simple in order to understand what is possible with Cython. It turns out that the extension built with:
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules = cythonize("sum.pyx"))
and called with:
python setup.py build_ext --inplace
produces C code that looks like this for the loop:
for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_1; __pyx_t_2 += 1) {
    __pyx_v_sum = (__pyx_v_sum + (*(int *)((char *)
        __pyx_pybuffernd_f.rcbuffer->pybuffer.buf +
        __pyx_t_2 * __pyx_pybuffernd_f.diminfo[0].strides)));
}
The main problem with this code is that the compiler does not know at compile time that __pyx_pybuffernd_f.diminfo[0].strides
is such that the elements of the array are close together in memory. Without that information, the compiler cannot vectorize efficiently.
Is there a way to do such a thing from Cython?
You have two problems in your code (use the option -a to produce an annotated HTML file that makes them visible); both come down to typing: the loop only compiles down to pure C if every variable in it is a C type, so the int in cdef int sum = 0 is essential (an untyped cdef sum = 0 leaves the accumulator a Python object and every addition goes through the interpreter), and the same goes for the index k.
Taking this into account we get:
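A sketch of the fully typed kernel (assuming the fix amounts to what your snippet already shows, with every variable declared as a C int):

import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)   # no bounds checks inside the loop
@cython.wraparound(False)    # no negative-index handling
cpdef int f(np.ndarray[int, ndim=1] f):
    cdef int array_length = f.shape[0]
    cdef int sum = 0         # a C int, not a Python object
    cdef int k
    for k in range(array_length):
        sum += f[k]
    return sum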
For the loop, this produces C code along the following lines:
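This is a reconstruction of what a recent Cython emits here (the exact temporary names vary between Cython versions); __Pyx_BufPtrStrided1d is Cython's helper macro, which expands to the same strided pointer arithmetic shown in your question:

for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_1; __pyx_t_2 += 1) {
    __pyx_v_sum = (__pyx_v_sum + (*__Pyx_BufPtrStrided1d(int *,
        __pyx_pybuffernd_f.rcbuffer->pybuffer.buf,
        __pyx_t_2, __pyx_pybuffernd_f.diminfo[0].strides)));
}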
This is not that bad, but it is not as easy for the optimizer as the equivalent loop written by hand. As you have already pointed out, __pyx_pybuffernd_f.diminfo[0].strides isn't known at compile time, and this prevents vectorization. However, you would get better results when using typed memoryviews, i.e.:
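For example (a sketch; only the argument type changes, the body stays the same):

cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef int f(int[::1] f):     # int[::1]: one-dimensional, C-contiguous
    cdef int array_length = f.shape[0]
    cdef int sum = 0
    cdef int k
    for k in range(array_length):
        sum += f[k]
    return sum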
which leads to less opaque C code, one that at least my compiler can optimize better:
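Reconstructed from what Cython typically generates for a contiguous memoryview (again, temporary names vary by version), the loop becomes plain stride-1 pointer arithmetic:

for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_1; __pyx_t_2 += 1) {
    __pyx_v_sum = (__pyx_v_sum +
        (*((int *) ((char *) (((int *) __pyx_v_f.data) + __pyx_t_2)))));
}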
The most crucial thing here is that we make it clear to Cython that the memory is contiguous, i.e. int[::1] compared to int[:], which is the generic form used for numpy arrays where a possible stride != 1 must be taken into account. In this case, the Cython-generated C code results in the same assembler as the code I would have written by hand. As crisb has pointed out, adding -march=native would lead to vectorization, but in that case the assembler of the two functions would be slightly different again.

However, in my experience compilers quite often have problems optimizing loops created by Cython, and/or it is easy to miss a detail which prevents the generation of really good C code. So my strategy for workhorse loops is to write them in plain C and use Cython for wrapping/accessing them - often it is somewhat faster, because one can also use dedicated compiler flags for this code snippet without affecting the whole Cython module.
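A minimal sketch of that pattern, with illustrative file names (sum_impl.c, sum_impl.h, sum_wrap.pyx) and GCC-style flags:

/* sum_impl.c - plain C workhorse; a loop the compiler can easily vectorize */
int sum_ints(const int *data, long n) {
    int s = 0;
    for (long i = 0; i < n; i++)
        s += data[i];
    return s;
}

# sum_wrap.pyx - thin Cython wrapper; sum_impl.h is assumed to declare sum_ints
cdef extern from "sum_impl.h":
    int sum_ints(const int *data, long n)

cpdef int f(int[::1] arr):
    if arr.shape[0] == 0:    # &arr[0] would be invalid for an empty view
        return 0
    return sum_ints(&arr[0], arr.shape[0])

# setup.py - dedicated flags for this extension only
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

ext = Extension("sum_wrap",
                sources=["sum_wrap.pyx", "sum_impl.c"],
                extra_compile_args=["-O3", "-march=native"])
setup(ext_modules=cythonize([ext]))

The extra_compile_args apply only to this extension, so the rest of the project is built with its usual flags.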