SSE matrix-matrix multiplication

Published 2019-06-14 11:28

Question:

I'm having trouble doing matrix-matrix multiplication with SSE in C.

Here is what I have so far:

#include <smmintrin.h>  // SSE4.1 intrinsics (also pulls in SSE2/SSSE3)

#define N 1000

void matmulSSE(int mat1[N][N], int mat2[N][N], int result[N][N]) {
  int i, j, k;
  __m128i vA, vB, vR;

  for(i = 0; i < N; ++i) {
    for(j = 0; j < N; ++j) {
        vR = _mm_setzero_si128();
        for(k = 0; k < N; k += 4) {
            //result[i][j] += mat1[i][k] * mat2[k][j];
            vA = _mm_loadu_si128((__m128i*)&mat1[i][k]);
            vB = _mm_loadu_si128((__m128i*)&mat2[k][j]); //how well does the k += 4 work here? Should it be unrolled?
            vR = _mm_add_epi32(vR, _mm_mul_epi32(vA, vB));
        }
        vR = _mm_hadd_epi32(vR, vR);
        vR = _mm_hadd_epi32(vR, vR);
        result[i][j] += _mm_extract_epi32(vR, 0);
    }
  }
}

I can't seem to make it give the correct results. Am I missing something? And searching doesn't seem to help much - every result is either only doing 4x4 matrices, matrix-vector, or some special magic that's not very readable and hard to understand...

Update: Woohoo! I finally figured it out. Besides the errors in my logic (thanks for the help, Peter Cordes) there was also the issue of _mm_mul_epi32() not working as I thought it did - it's a widening multiply of only the even elements, so I should've been using _mm_mullo_epi32() instead!

I know this is not the most efficient code, but it was made to work properly first - now I can move on to optimizing it.

void matmulSSE(int mat1[N][N], int mat2[N][N], int result[N][N]) {
    int i, j, k;
    __m128i vA, vB, vR;

    for(i = 0; i < N; ++i) {
        for(j = 0; j < N; ++j) {
            vR = _mm_setzero_si128();
            for(k = 0; k < N; k += 4) {
                //result[i][j] += mat1[i][k] * mat2[k][j];
                vA = _mm_loadu_si128((__m128i*)&mat1[i][k]);
                vB = _mm_insert_epi32(vB, mat2[k][j], 0);
                vB = _mm_insert_epi32(vB, mat2[k + 1][j], 1);
                vB = _mm_insert_epi32(vB, mat2[k + 2][j], 2);
                vB = _mm_insert_epi32(vB, mat2[k + 3][j], 3);
                vR = _mm_mullo_epi32(vA, vB);
                vR = _mm_hadd_epi32(vR, vR);
                vR = _mm_hadd_epi32(vR, vR);
                result[i][j] += _mm_extract_epi32(vR, 0);

                //DEBUG
                //printf("vA: %d, %d, %d, %d\n", vA.m128i_i32[0], vA.m128i_i32[1], vA.m128i_i32[2], vA.m128i_i32[3]);
                //printf("vB: %d, %d, %d, %d\n", vB.m128i_i32[0], vB.m128i_i32[1], vB.m128i_i32[2], vB.m128i_i32[3]);
                //printf("vR: %d, %d, %d, %d\n", vR.m128i_i32[0], vR.m128i_i32[1], vR.m128i_i32[2], vR.m128i_i32[3]);
                //printf("\n");
            }
        }
    }
}
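
For reference, here's a minimal standalone demo of the difference between the two intrinsics (assuming SSE4.1): _mm_mul_epi32 is PMULDQ, a widening signed multiply that only uses the even-indexed (0 and 2) 32-bit elements and produces two 64-bit products, while _mm_mullo_epi32 is PMULLD, which keeps the low 32 bits of all four products.

#include <stdio.h>
#include <smmintrin.h>  // SSE4.1

int main(void) {
    __m128i a = _mm_setr_epi32(1, 2, 3, 4);
    __m128i b = _mm_setr_epi32(10, 20, 30, 40);

    long long wide[2];  // PMULDQ: elements 0 and 2 only, widened to 64-bit
    int low[4];         // PMULLD: low 32 bits of all four products
    _mm_storeu_si128((__m128i*)wide, _mm_mul_epi32(a, b));
    _mm_storeu_si128((__m128i*)low, _mm_mullo_epi32(a, b));

    printf("_mm_mul_epi32:   %lld %lld\n", wide[0], wide[1]);  // 10 90
    printf("_mm_mullo_epi32: %d %d %d %d\n",
           low[0], low[1], low[2], low[3]);                    // 10 40 90 160
    return 0;
}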

Update 2: I converted Peter's example to an i-k-j loop-order version. This required an extra load for vR and moving the store into the inner loop, but setting vA could be hoisted up a loop level. It turned out faster.

void matmulSSE_2(int mat1[N][N], int mat2[N][N], int result[N][N]) {
    int i, j, k;
    __m128i vA, vB, vR;

    for(i = 0; i < N; ++i) {
        for(k = 0; k < N; ++k) {
            vA = _mm_set1_epi32(mat1[i][k]);
            for(j = 0; j < N; j += 4) {
                //result[i][j] += mat1[i][k] * mat2[k][j];
                vB = _mm_loadu_si128((__m128i*)&mat2[k][j]);
                vR = _mm_loadu_si128((__m128i*)&result[i][j]);
                vR = _mm_add_epi32(vR, _mm_mullo_epi32(vA, vB));
                _mm_storeu_si128((__m128i*)&result[i][j], vR);

                //DEBUG
                //printf("vA: %d, %d, %d, %d\n", vA.m128i_i32[0], vA.m128i_i32[1], vA.m128i_i32[2], vA.m128i_i32[3]);
                //printf("vB: %d, %d, %d, %d\n", vB.m128i_i32[0], vB.m128i_i32[1], vB.m128i_i32[2], vB.m128i_i32[3]);
                //printf("vR: %d, %d, %d, %d\n", vR.m128i_i32[0], vR.m128i_i32[1], vR.m128i_i32[2], vR.m128i_i32[3]);

                //printf("\n");
            }
        }
    }
}

Answer 1:

You're right, your vB is the problem. You're loading 4 consecutive integers, but mat2[k+0..3][j] aren't contiguous. You're actually getting mat2[k][j+0..3].


I forget exactly what works best for matmul, but one approach that often pays off is producing 4 results in parallel, instead of doing a horizontal sum for every result.

Transposing one of your input matrices works well, and costs O(N^2). It's worth it because it means the O(N^3) matmul can use sequential accesses, and your current loop structure becomes SIMD-friendly.
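
A minimal sketch of what that could look like (assuming, like the question's code, that N is a multiple of 4 and result is zero-initialized; mat2T is a scratch array introduced just for this illustration):

static int mat2T[N][N];  // 4 MB scratch here; a real version might allocate it

void matmulSSE_transpose(int mat1[N][N], int mat2[N][N], int result[N][N]) {
    for (int i = 0; i < N; ++i)            // O(N^2) transpose up front
        for (int j = 0; j < N; ++j)
            mat2T[j][i] = mat2[i][j];

    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            __m128i vSum = _mm_setzero_si128();
            for (int k = 0; k < N; k += 4) {
                __m128i vA = _mm_loadu_si128((__m128i*)&mat1[i][k]);
                __m128i vB = _mm_loadu_si128((__m128i*)&mat2T[j][k]); // contiguous now
                vSum = _mm_add_epi32(vSum, _mm_mullo_epi32(vA, vB));
            }
            vSum = _mm_hadd_epi32(vSum, vSum);  // one horizontal sum per element,
            vSum = _mm_hadd_epi32(vSum, vSum);  // hoisted out of the k loop
            result[i][j] += _mm_extract_epi32(vSum, 0);
        }
    }
}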

There are even better ways, transposing small blocks right before use, so they're still hot in L1 cache when you read them again. Cache blocking, aka loop tiling, is one key to good matmul performance.
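
One possible shape for that, layered on the i-k-j version above (TILE is a tuning parameter, not a recommendation; 40 is just a guess chosen so it divides N=1000 and is a multiple of 4):

#define TILE 40  // 40*40 ints per mat2 block = ~6 KB, comfortably inside L1

void matmulSSE_tiled(int mat1[N][N], int mat2[N][N], int result[N][N]) {
    // Tile over k and j so a TILE x TILE block of mat2 stays hot in cache
    // while it's reused across all N rows of mat1. Assumes result starts zeroed.
    for (int kk = 0; kk < N; kk += TILE) {
        for (int jj = 0; jj < N; jj += TILE) {
            for (int i = 0; i < N; ++i) {
                for (int k = kk; k < kk + TILE; ++k) {
                    __m128i vA = _mm_set1_epi32(mat1[i][k]);
                    for (int j = jj; j < jj + TILE; j += 4) {
                        __m128i vB = _mm_loadu_si128((__m128i*)&mat2[k][j]);
                        __m128i vR = _mm_loadu_si128((__m128i*)&result[i][j]);
                        vR = _mm_add_epi32(vR, _mm_mullo_epi32(vA, vB));
                        _mm_storeu_si128((__m128i*)&result[i][j], vR);
                    }
                }
            }
        }
    }
}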

Much has been written about optimizing matrix multiplies, with SIMD and with cache-blocking. I suggest you google it. Most of it is probably talking about FP, but it all applies to integer as well.

(Except that SSE/AVX only has FMA for FP, not for 32-bit integers, and the 8 and 16-bit input PMADD instructions do horizontal adds of pairs.)
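
A tiny demo of that PMADDWD pairwise behavior, in case it's unfamiliar: _mm_madd_epi16 multiplies eight pairs of 16-bit elements and sums adjacent products into four 32-bit results.

#include <stdio.h>
#include <emmintrin.h>  // SSE2

int main(void) {
    __m128i a = _mm_setr_epi16(1, 2, 3, 4, 5, 6, 7, 8);
    __m128i b = _mm_setr_epi16(1, 1, 1, 1, 2, 2, 2, 2);
    int r[4];
    // PMADDWD: r[n] = a[2n]*b[2n] + a[2n+1]*b[2n+1]
    _mm_storeu_si128((__m128i*)r, _mm_madd_epi16(a, b));
    printf("%d %d %d %d\n", r[0], r[1], r[2], r[3]);  // 3 7 22 30
    return 0;
}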


Actually I think you can produce 4 results in parallel here:

void matmulSSE(int mat1[N][N], int mat2[N][N], int result[N][N]) {

  for(int i = 0; i < N; ++i) {
    for(int j = 0; j < N; j+=4) {   // vectorize over this loop
        __m128i vR = _mm_setzero_si128();
        for(int k = 0; k < N; k++) {   // not this loop
            //result[i][j] += mat1[i][k] * mat2[k][j];
            __m128i vA = _mm_set1_epi32(mat1[i][k]);  // load+broadcast is much cheaper than MOVD + 3 inserts (or especially 4x insert, which your new code is doing)
            __m128i vB = _mm_loadu_si128((__m128i*)&mat2[k][j]);  // mat2[k][j+0..3]
            vR = _mm_add_epi32(vR, _mm_mullo_epi32(vA, vB));
        }
        _mm_storeu_si128((__m128i*)&result[i][j], vR);
    }
  }
}

A broadcast-load (or separate load+broadcast without AVX) is still much cheaper than a gather.

Your current code does the gather with 4 inserts, instead of breaking the dependency chain on the previous iteration's value by using a MOVD for the first element, so that's even worse. But even the best gather of 4 scattered elements is pretty bad compared to a load + PSHUFD. Not to mention that the insert gather makes your code require SSE4.1; although it needs that anyway, for _mm_mullo_epi32 instead of the widening PMULDQ (_mm_mul_epi32).
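
To spell out that load + PSHUFD broadcast with intrinsics (this is roughly what _mm_set1_epi32 from a memory operand compiles to without AVX):

// MOVD zero-extends the scalar into lane 0 without depending on the
// register's old value; PSHUFD then broadcasts lane 0 to all four lanes.
__m128i v = _mm_cvtsi32_si128(mat1[i][k]);           // MOVD
v = _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 0, 0, 0));   // PSHUFD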