Fast dot product using SSE/AVX intrinsics

Posted 2019-01-25 14:19

Question:

I am looking for a fast way to calculate the dot product of vectors with 3 or 4 components. I tried several things, but most examples online use an array of floats while our data structure is different.

We use structs that are 16-byte aligned. Code excerpt (simplified):

struct float3 {
    float x, y, z, w; // 4th component unused here; keeps the struct 16-byte aligned
};

struct float4 {
    float x, y, z, w;
};

In previous tests (using the SSE4 dot-product intrinsic or FMA) I could not get a speedup compared to the following regular C++ code.

float dot(const float3 a, const float3 b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

Tests were done with gcc and clang on Intel Ivy Bridge / Haswell. It seems that the time spent loading the data into the SIMD registers and pulling it out again kills all the benefits.
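
For reference, the SSE4 variant I tested looked roughly like this (_mm_dp_ps with mask 0x71 multiplies the x, y, z lanes and writes the sum to the low lane):

#include <smmintrin.h> // SSE4.1

float dot_sse4(const float3 &a, const float3 &b) {
    __m128 va = _mm_load_ps(&a.x);       // aligned load of x y z w (w unused)
    __m128 vb = _mm_load_ps(&b.x);
    __m128 d  = _mm_dp_ps(va, vb, 0x71); // a.x*b.x + a.y*b.y + a.z*b.z in lane 0
    return _mm_cvtss_f32(d);             // move the result back to a scalar
}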

I would appreciate some help and ideas on how the dot product can be calculated efficiently using our float3/float4 data structures. SSE4, AVX, or even AVX2 is fine.

Thanks in advance.

Answer 1:

Algebraically, efficient SIMD code looks almost identical to scalar code. So the right way to do the dot product is to operate on four float vectors at once with SSE (eight with AVX).

Consider constructing your code like this:

#include <x86intrin.h>

struct float4 {
    __m128 xmm;
    float4() {}
    float4 (__m128 const & x) { xmm = x; }
    float4 & operator = (__m128 const & x) { xmm = x; return *this; }
    float4 & load(float const * p) { xmm = _mm_loadu_ps(p); return *this; }
    operator __m128() const { return xmm; }
};

static inline float4 operator + (float4 const & a, float4 const & b) {
    return _mm_add_ps(a, b);
}
static inline float4 operator * (float4 const & a, float4 const & b) {
    return _mm_mul_ps(a, b);
}

struct block3 {
    float4 x, y, z;
};

struct block4 {
    float4 x, y, z, w;
};

static inline float4 dot(block3 const & a, block3 const & b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

static inline float4 dot(block4 const & a, block4 const & b) {
    return a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
}

Notice that the last two functions look almost identical to your scalar dot function, except that float becomes float4 and float3/float4 become block3 and block4. This computes four dot products at once, which uses the SIMD lanes most efficiently.
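
If your data currently sits in arrays of float3 (AoS), you can still feed it to dot by transposing four structs at a time. A minimal sketch, assuming the 16-byte-aligned float3 layout from the question; the helper name load_block3 is made up for illustration:

static inline block3 load_block3(float3 const * p) {
    __m128 r0 = _mm_load_ps(&p[0].x);  // x0 y0 z0 w0
    __m128 r1 = _mm_load_ps(&p[1].x);  // x1 y1 z1 w1
    __m128 r2 = _mm_load_ps(&p[2].x);  // x2 y2 z2 w2
    __m128 r3 = _mm_load_ps(&p[3].x);  // x3 y3 z3 w3
    _MM_TRANSPOSE4_PS(r0, r1, r2, r3); // now r0 = x0..x3, r1 = y0..y3, r2 = z0..z3
    block3 b;
    b.x = r0; b.y = r1; b.z = r2;
    return b;
}

// Four dot products in one call:
// float4 d = dot(load_block3(&a[0]), load_block3(&b[0]));

The transpose costs a few shuffles per block, so if you can store the data in SoA layout to begin with, you avoid that overhead entirely.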



Answer 2:

To get the best out of AVX intrinsics, you have to think in a different dimension. Instead of doing one dot product, do 8 dot products in a single go.

Look up the difference between SoA and AoS. If your vectors are in SoA (structure of arrays) format, your data looks like this in memory:

// eight 3d vectors, called a.
float ax[8];
float ay[8];
float az[8];

// eight 3d vectors, called b.
float bx[8];
float by[8];
float bz[8];

Then, to multiply all 8 a vectors with all 8 b vectors, you use three SIMD multiplications, one for each of x, y, and z.

For a dot product you still need to add afterwards, of course, which is a little trickier. But multiplication, subtraction, and addition of vectors using SoA are easy and really fast. When AVX-512 is available, you can do 16 3D vector multiplications in just three instructions.
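
A minimal sketch of the 8-at-a-time dot product with AVX, assuming the SoA arrays above (the name dot8 is illustrative; _mm256_fmadd_ps requires FMA, i.e. Haswell or later, otherwise use _mm256_mul_ps plus _mm256_add_ps):

#include <immintrin.h>

// out[i] = ax[i]*bx[i] + ay[i]*by[i] + az[i]*bz[i] for i = 0..7
void dot8(float const *ax, float const *ay, float const *az,
          float const *bx, float const *by, float const *bz,
          float *out) {
    __m256 r = _mm256_mul_ps(_mm256_loadu_ps(ax), _mm256_loadu_ps(bx));
    r = _mm256_fmadd_ps(_mm256_loadu_ps(ay), _mm256_loadu_ps(by), r);
    r = _mm256_fmadd_ps(_mm256_loadu_ps(az), _mm256_loadu_ps(bz), r);
    _mm256_storeu_ps(out, r);
}

Note that all the adds here are vertical: no horizontal adds or shuffles are needed, which is exactly the overhead that kills the single-vector approach from the question.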



Tags: c++ gcc clang simd