Problem
I have been studying HPC, specifically using matrix multiplication as my project (see my other posts in profile). I achieve good performance in those, but not good enough. I am taking a step back to see how well I can do with a dot product calculation.
Dot Product vs. Matrix Multiplication
The dot product is simpler, and will allow me to test HPC concepts without dealing with packing and other related issues. Cache blocking is still an issue, which forms my second question.
Algorithm
Multiply n corresponding elements in two double arrays A and B and sum them. A double dot product in assembly is just a series of movapd, mulpd, addpd. Unrolled and arranged in a clever way, it is possible to have groups of movapd/mulpd/addpd that operate on different xmm registers and are thus independent, optimizing pipelining. Of course, it turns out that this does not matter so much as my CPU has out-of-order execution. Also note that the re-arrangement requires peeling off the last iteration.
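As a C-level reference point, a minimal intrinsics sketch of that pattern might look like the following (a hypothetical two-accumulator version for illustration only, not the tuned kernel shown below; it assumes n is a multiple of 4 and that A and B are 16-byte aligned):

#include <x86intrin.h>

// Two independent accumulators so the mulpd/addpd chains can overlap.
// Assumes n % 4 == 0 and that A, B are 16-byte aligned.
static double ddot_sse_sketch(int n, const double* A, const double* B)
{
    __m128d acc0 = _mm_setzero_pd();
    __m128d acc1 = _mm_setzero_pd();
    for (int i = 0; i < n; i += 4) {
        acc0 = _mm_add_pd(acc0, _mm_mul_pd(_mm_load_pd(A+i  ), _mm_load_pd(B+i  )));
        acc1 = _mm_add_pd(acc1, _mm_mul_pd(_mm_load_pd(A+i+2), _mm_load_pd(B+i+2)));
    }
    __m128d acc = _mm_add_pd(acc0, acc1);
    acc = _mm_hadd_pd(acc, acc);   // SSE3 horizontal add of the two lanes
    return _mm_cvtsd_f64(acc);
}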
Other Assumptions
I am not writing the code for general dot products. The code is for specific sizes and I am not handling fringe cases. This is just to test HPC concepts and to see what type of CPU usage I can attain.
Results
Compiled with gcc -std=c99 -O2 -m32 -mincoming-stack-boundary=2 -msse3 -mfpmath=sse,387 -masm=intel. I am on a different computer than usual. This computer has an i5-540M, which can reach 2.8 GHz * 4 FLOPS/cycle/core = 11.2 GFLOPS/s per core after a two-step Intel Turbo Boost (both cores are on right now, so it only gets a 2-step boost; a 4-step boost is possible if I turn off one core). 32-bit LINPACK gets around 9.5 GFLOPS/s when set to run with one thread.
       N   Total Gflops/s        Residual
     256         5.580521   1.421085e-014
     384         5.734344  -2.842171e-014
     512         5.791168   0.000000e+000
     640         5.821629   0.000000e+000
     768         5.814255   2.842171e-014
     896         5.807132   0.000000e+000
    1024         5.817208  -1.421085e-013
    1152         5.805388   0.000000e+000
    1280         5.830746  -5.684342e-014
    1408         5.881937  -5.684342e-014
    1536         5.872159  -1.705303e-013
    1664         5.881536   5.684342e-014
    1792         5.906261  -2.842171e-013
    1920         5.477966   2.273737e-013
    2048         5.620931   0.000000e+000
    2176         3.998713   1.136868e-013
    2304         3.370095  -3.410605e-013
    2432         3.371386  -3.410605e-013
Question 1
How can I do better than this? I am not even coming close to the peak performance. I have optimized the assembly code to high heaven. Further unrolling might boost it just a little more, but less unrolling seems to degrade performance.
Question 2
When n > 2048, you can see a drop in performance. This is because my L1 cache is 32KB, and when n = 2048 and A and B are double, together they take up the entire cache (2 arrays * 2048 * 8 bytes = 32KB). Any bigger and they are streamed from memory.
I tried cache blocking (not shown in source), but maybe I did it wrong. Can anyone provide some code or explain how to block a dot product for a cache?
Source Code
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include <x86intrin.h>
#include <math.h>
#include <omp.h>
#include <stdint.h>
#include <windows.h>
// computes 8 dot products
#define KERNEL(address) \
"movapd xmm4, XMMWORD PTR [eax+"#address"] \n\t" \
"mulpd xmm7, XMMWORD PTR [edx+48+"#address"] \n\t" \
"addpd xmm2, xmm6 \n\t" \
"movapd xmm5, XMMWORD PTR [eax+16+"#address"] \n\t" \
"mulpd xmm4, XMMWORD PTR [edx+"#address"] \n\t" \
"addpd xmm3, xmm7 \n\t" \
"movapd xmm6, XMMWORD PTR [eax+96+"#address"] \n\t" \
"mulpd xmm5, XMMWORD PTR [edx+16+"#address"] \n\t" \
"addpd xmm0, xmm4 \n\t" \
"movapd xmm7, XMMWORD PTR [eax+112+"#address"] \n\t" \
"mulpd xmm6, XMMWORD PTR [edx+96+"#address"] \n\t" \
"addpd xmm1, xmm5 \n\t"
#define PEELED(address) \
"movapd xmm4, XMMWORD PTR [eax+"#address"] \n\t" \
"mulpd xmm7, [edx+48+"#address"] \n\t" \
"addpd xmm2, xmm6 \n\t" \
"movapd xmm5, XMMWORD PTR [eax+16+"#address"] \n\t" \
"mulpd xmm4, XMMWORD PTR [edx+"#address"] \n\t" \
"addpd xmm3, xmm7 \n\t" \
"mulpd xmm5, XMMWORD PTR [edx+16+"#address"] \n\t" \
"addpd xmm0, xmm4 \n\t" \
"addpd xmm1, xmm5 \n\t"
inline double
__attribute__ ((gnu_inline))
__attribute__ ((aligned(64))) ddot_ref(
int n,
const double* restrict A,
const double* restrict B)
{
double sum0 = 0.0;
double sum1 = 0.0;
double sum2 = 0.0;
double sum3 = 0.0;
double sum;
for(int i = 0; i < n; i+=4) {
sum0 += *(A + i ) * *(B + i );
sum1 += *(A + i+1) * *(B + i+1);
sum2 += *(A + i+2) * *(B + i+2);
sum3 += *(A + i+3) * *(B + i+3);
}
sum = sum0+sum1+sum2+sum3;
return(sum);
}
inline double
__attribute__ ((gnu_inline))
__attribute__ ((aligned(64))) ddot_asm
( int n,
const double* restrict A,
const double* restrict B)
{
double sum;
__asm__ __volatile__
(
"mov eax, %[A] \n\t"
"mov edx, %[B] \n\t"
"mov ecx, %[n] \n\t"
"pxor xmm0, xmm0 \n\t"
"pxor xmm1, xmm1 \n\t"
"pxor xmm2, xmm2 \n\t"
"pxor xmm3, xmm3 \n\t"
"movapd xmm6, XMMWORD PTR [eax+32] \n\t"
"movapd xmm7, XMMWORD PTR [eax+48] \n\t"
"mulpd xmm6, XMMWORD PTR [edx+32] \n\t"
"sar ecx, 7 \n\t"
"sub ecx, 1 \n\t" // peel
"L%=: \n\t"
KERNEL(64 * 0)
KERNEL(64 * 1)
KERNEL(64 * 2)
KERNEL(64 * 3)
KERNEL(64 * 4)
KERNEL(64 * 5)
KERNEL(64 * 6)
KERNEL(64 * 7)
KERNEL(64 * 8)
KERNEL(64 * 9)
KERNEL(64 * 10)
KERNEL(64 * 11)
KERNEL(64 * 12)
KERNEL(64 * 13)
KERNEL(64 * 14)
KERNEL(64 * 15)
"lea eax, [eax+1024] \n\t"
"lea edx, [edx+1024] \n\t"
" \n\t"
"dec ecx \n\t"
"jnz L%= \n\t" // end loop
" \n\t"
KERNEL(64 * 0)
KERNEL(64 * 1)
KERNEL(64 * 2)
KERNEL(64 * 3)
KERNEL(64 * 4)
KERNEL(64 * 5)
KERNEL(64 * 6)
KERNEL(64 * 7)
KERNEL(64 * 8)
KERNEL(64 * 9)
KERNEL(64 * 10)
KERNEL(64 * 11)
KERNEL(64 * 12)
KERNEL(64 * 13)
KERNEL(64 * 14)
PEELED(64 * 15)
" \n\t"
"addpd xmm0, xmm1 \n\t" // summing result
"addpd xmm2, xmm3 \n\t"
"addpd xmm0, xmm2 \n\t" // cascading add
"movapd xmm1, xmm0 \n\t" // copy xmm0
"shufpd xmm1, xmm0, 0x03 \n\t" // shuffle
"addsd xmm0, xmm1 \n\t" // add low qword
"movsd %[sum], xmm0 \n\t" // mov low qw to sum
: // outputs
[sum] "=m" (sum)
: // inputs
[A] "m" (A),
[B] "m" (B),
[n] "m" (n)
: //register clobber
"memory",
"eax","ecx","edx","edi",
"xmm0","xmm1","xmm2","xmm3","xmm4","xmm5","xmm6","xmm7"
);
return(sum);
}
int main()
{
// timers
LARGE_INTEGER frequency, time1, time2;
double time3;
QueryPerformanceFrequency(&frequency);
// clock_t time1, time2;
double gflops;
int nmax = 4096;
int trials = 1e7;
double sum, residual;
FILE *f = fopen("soddot.txt","w+");
printf("%16s %16s %16s\n","N","Total Gflops/s","Residual");
fprintf(f,"%16s %16s %16s\n","N","Total Gflops/s","Residual");
for(int n = 256; n <= nmax; n += 128 ) {
double* A = NULL;
double* B = NULL;
A = _mm_malloc(n*sizeof(*A), 64); if (!A) {printf("A failed\n"); return(1);}
B = _mm_malloc(n*sizeof(*B), 64); if (!B) {printf("B failed\n"); return(1);}
srand(time(NULL));
// create arrays
for(int i = 0; i < n; ++i) {
*(A + i) = (double) rand()/RAND_MAX;
*(B + i) = (double) rand()/RAND_MAX;
}
// warmup
sum = ddot_asm(n,A,B);
QueryPerformanceCounter(&time1);
// time1 = clock();
for (int count = 0; count < trials; count++){
// sum = ddot_ref(n,A,B);
sum = ddot_asm(n,A,B);
}
QueryPerformanceCounter(&time2);
time3 = (double)(time2.QuadPart - time1.QuadPart) / frequency.QuadPart;
// time3 = (double) (clock() - time1)/CLOCKS_PER_SEC;
gflops = (double) (2.0*n*trials)/time3/1.0e9;
residual = ddot_ref(n,A,B) - sum;
printf("%16d %16f %16e\n",n,gflops,residual);
fprintf(f,"%16d %16f %16e\n",n,gflops,residual);
_mm_free(A);
_mm_free(B);
}
fclose(f);
return(0); // successful completion
}
EDIT: explanation of assembly
A dot product is just a repeated sum of products of two numbers: sum += a[i]*b[i]. sum must be initialized to 0 before the first iteration. Vectorized, you can do 2 sums at a time, which must be summed at the end: [sum0 sum1] = [a[i] a[i+1]]*[b[i] b[i+1]], sum = sum0 + sum1. In (Intel) assembly, this is 3 steps (after the initialization):
pxor xmm0, xmm0 // accumulator [sum0 sum1] = [0 0]
movapd xmm1, XMMWORD PTR [eax] // load [a[i] a[i+1]] into xmm1
mulpd xmm1, XMMWORD PTR [edx] // xmm1 = xmm1 * [b[i] b[i+1]]
addpd xmm0, xmm1 // xmm0 = xmm0 + xmm1
At this point you have nothing special; the compiler can come up with this. You can usually get better performance by unrolling the code enough times to use all the xmm registers available to you (8 registers in 32-bit mode). So if you unroll it 4 times, that allows you to utilize all 8 registers xmm0 through xmm7. You will have 4 accumulators and 4 registers for storing the results of movapd and mulpd. Again, the compiler can come up with this. The real thinking part is trying to come up with a way to pipeline the code, i.e., make each instruction in the group of MOV/MUL/ADD operate on different registers so that all 3 instructions execute at the same time (usually the case on most CPUs). That's how you beat the compiler. So you have to pattern the 4x unrolled code to do just that, which may require loading vectors ahead of time and peeling off the first or last iteration. This is what KERNEL(address) is. I made a macro of the 4x unrolled, pipelined code for convenience. That way I can easily unroll it in multiples of 4 by just changing address. Each KERNEL computes 8 dot products.
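To make the "load ahead of time and peel" idea concrete, here is a hypothetical scalar sketch of the same structure (my illustration only, not the actual kernel, which does this with xmm registers on 2-wide vectors):

// Scalar illustration of software pipelining: the loads for iteration
// i+1 are issued before the multiply/add of iteration i, and the last
// multiply/add is peeled off after the loop. Assumes n >= 1.
double ddot_pipelined(int n, const double* A, const double* B)
{
    double sum = 0.0;
    double a = A[0], b = B[0];      // preload the first iteration
    for (int i = 0; i < n-1; i++) {
        double p = a * b;           // work on the current values
        a = A[i+1];                 // load the next iteration early
        b = B[i+1];
        sum += p;
    }
    sum += a * b;                   // peeled-off last iteration
    return sum;
}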
To answer your overall question: you can't achieve peak performance with the dot product.
The problem is that your CPU can do one 128-bit load per clock cycle, and to do the dot product you need two 128-bit loads per clock cycle.
But it's worse than that for large n. The answer to your second question is that the dot product is memory bound and not compute bound, and so it cannot be parallelized efficiently for large n with fast cores. This is explained better here: why-vectorizing-the-loop-does-not-have-performance-improvement. This is a big problem with parallelization on fast cores. It took me a while to figure this out, but it's very important to learn.
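To put rough numbers on it: each mulpd/addpd pair (4 flops) needs two 128-bit loads, so with one 128-bit load per cycle the kernel is capped at about 2 flops/cycle, i.e. roughly 2.8 GHz * 2 = 5.6 GFLOPS/s, which is right where the in-cache entries in the table above sit. Once the arrays spill out of cache, the dot product needs 16 bytes of memory traffic for every 2 flops, and main memory cannot supply anywhere near that rate, hence the further drop after n = 2048.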
There are actually few basic algorithms that can fully benefit from parallelization on fast cores. In terms of BLAS algorithms, it's only the Level-3 algorithms (O(n^3)), such as matrix multiplication, that really benefit from parallelization. The situation is better on slow cores, e.g. GPUs and the Xeon Phi, because the discrepancy between memory speed and core speed is much smaller.
If you want to find an algorithm which can get close to peak flops for small n, try e.g. scalar * vector or the sum of scalar * vector. The first case should do one load, one multiply, and one store every clock cycle, and the second case one multiply, one add, and one load every clock cycle.
I tested the following code on a Core 2 Duo P9600 @ 2.67GHz in Knoppix 7.3 32-bit. I get about 75% of the peak for the scalar product and 75% of the peak for the sum of the scalar product. The flops/cycle for the scalar product is 2 and for the sum of the scalar product it's 4.
Compiled with
g++ -msse2 -O3 -fopenmp foo.cpp -ffast-math
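As an illustration of those two kernels, a minimal sketch in C (my own hypothetical scale/scaled_sum functions, not the original benchmark code, which is not reproduced here):

// scalar * vector: one load, one multiply, one store per element
void scale(int n, double k, const double* x, double* y)
{
    for (int i = 0; i < n; i++)
        y[i] = k * x[i];
}

// sum of scalar * vector: one load, one multiply, one add per element
double scaled_sum(int n, double k, const double* x)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += k * x[i];
    return s;
}

With -O3 -msse2 -ffast-math the compiler can vectorize both loops, and neither needs more than one 128-bit load per SIMD multiply/add, which is why they can get much closer to peak than the dot product.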