Effects of loop unrolling on memory-bound data

Posted 2019-06-01 11:15

Question:

I have been working on a piece of code that is heavily memory bound. I am trying to optimize it within a single core by manually implementing cache blocking, software prefetching, loop unrolling, etc. Cache blocking gives a significant performance improvement; however, when I introduce loop unrolling I get tremendous performance degradation.

I am compiling with Intel icc using the flags -O2 and -ipo in all my test cases.

My code is similar to this (3D 25-point stencil):

    void stencil_baseline(double *V, double *U, int dx, int dy, int dz,
                          double c0, double c1, double c2, double c3, double c4)
    {
        int i, j, k;

        for (k = 4; k < dz - 4; k++)
        {
            for (j = 4; j < dy - 4; j++)
            {
                // x-direction
                for (i = 4; i < dx - 4; i++)
                {
                    U[k*dy*dx+j*dx+i] = (c0 * (V[k*dy*dx+j*dx+i]) // center
                        + c1 * (V[k*dy*dx+j*dx+(i-1)] + V[k*dy*dx+j*dx+(i+1)])
                        + c2 * (V[k*dy*dx+j*dx+(i-2)] + V[k*dy*dx+j*dx+(i+2)])
                        + c3 * (V[k*dy*dx+j*dx+(i-3)] + V[k*dy*dx+j*dx+(i+3)])
                        + c4 * (V[k*dy*dx+j*dx+(i-4)] + V[k*dy*dx+j*dx+(i+4)]));
                }

                // y-direction
                for (i = 4; i < dx - 4; i++)
                {
                    U[k*dy*dx+j*dx+i] += (c1 * (V[k*dy*dx+(j-1)*dx+i] + V[k*dy*dx+(j+1)*dx+i])
                        + c2 * (V[k*dy*dx+(j-2)*dx+i] + V[k*dy*dx+(j+2)*dx+i])
                        + c3 * (V[k*dy*dx+(j-3)*dx+i] + V[k*dy*dx+(j+3)*dx+i])
                        + c4 * (V[k*dy*dx+(j-4)*dx+i] + V[k*dy*dx+(j+4)*dx+i]));
                }

                // z-direction
                for (i = 4; i < dx - 4; i++)
                {
                    U[k*dy*dx+j*dx+i] += (c1 * (V[(k-1)*dy*dx+j*dx+i] + V[(k+1)*dy*dx+j*dx+i])
                        + c2 * (V[(k-2)*dy*dx+j*dx+i] + V[(k+2)*dy*dx+j*dx+i])
                        + c3 * (V[(k-3)*dy*dx+j*dx+i] + V[(k+3)*dy*dx+j*dx+i])
                        + c4 * (V[(k-4)*dy*dx+j*dx+i] + V[(k+4)*dy*dx+j*dx+i]));
                }
            }
        }
    }
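
Since cache blocking comes up above, here is a minimal sketch of what tiling the j and k loops might look like. TJ and TK are illustrative tile sizes (not values from the original), and the three direction loops are fused into one pass for brevity, which produces the same result up to floating-point reassociation:

    void stencil_blocked(double *V, double *U, int dx, int dy, int dz,
                         double c0, double c1, double c2, double c3, double c4)
    {
        const int TJ = 32, TK = 8; // illustrative tile sizes
        int i, j, k, jj, kk;

        for (kk = 4; kk < dz - 4; kk += TK)
        for (jj = 4; jj < dy - 4; jj += TJ)
        for (k = kk; k < kk + TK && k < dz - 4; k++)
        for (j = jj; j < jj + TJ && j < dy - 4; j++)
        for (i = 4; i < dx - 4; i++)
        {
            int c = k*dy*dx + j*dx + i; // center; x stride 1, y stride dx, z stride dy*dx
            U[c] = c0 * V[c]
                 + c1 * (V[c-1] + V[c+1] + V[c-dx]   + V[c+dx]   + V[c-dy*dx]   + V[c+dy*dx])
                 + c2 * (V[c-2] + V[c+2] + V[c-2*dx] + V[c+2*dx] + V[c-2*dy*dx] + V[c+2*dy*dx])
                 + c3 * (V[c-3] + V[c+3] + V[c-3*dx] + V[c+3*dx] + V[c-3*dy*dx] + V[c+3*dy*dx])
                 + c4 * (V[c-4] + V[c+4] + V[c-4*dx] + V[c+4*dx] + V[c-4*dy*dx] + V[c+4*dy*dx]);
        }
    }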

When I unroll the innermost loop (dimension i) in the x, y, and z direction loops separately, with unroll factors of 2, 4, and 8, I get performance degradation in all nine cases: unroll by 2 on direction x, unroll by 2 on direction y, unroll by 2 on direction z, unroll by 4 on direction x, and so on. But when I unroll the outermost loop (dimension k) by a factor of 8 (and also 2 and 4), I get a very good performance improvement, which is even better than cache blocking.
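
For concreteness, unrolling the x-direction loop by a factor of 2 means something like the sketch below. The enclosing k and j loops stay as in the baseline; factoring the center index into c is shorthand here, not part of the original code:

    // x-direction, manually unrolled by 2
    for (i = 4; i + 1 < dx - 4; i += 2)
    {
        int c = k*dy*dx + j*dx + i;
        U[c]   = c0*V[c]   + c1*(V[c-1] + V[c+1]) + c2*(V[c-2] + V[c+2])
               + c3*(V[c-3] + V[c+3]) + c4*(V[c-4] + V[c+4]);
        U[c+1] = c0*V[c+1] + c1*(V[c]   + V[c+2]) + c2*(V[c-1] + V[c+3])
               + c3*(V[c-2] + V[c+4]) + c4*(V[c-3] + V[c+5]);
    }
    for (; i < dx - 4; i++) // remainder iteration when the trip count is odd
    {
        int c = k*dy*dx + j*dx + i;
        U[c] = c0*V[c] + c1*(V[c-1] + V[c+1]) + c2*(V[c-2] + V[c+2])
             + c3*(V[c-3] + V[c+3]) + c4*(V[c-4] + V[c+4]);
    }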

I even tried profiling my code with Intel VTune. The bottlenecks seemed to be mainly due to 1. LLC Misses and 2. LLC Load Misses serviced by Remote DRAM.

I am unable to understand why unrolling the innermost, fastest loop gives performance degradation, whereas unrolling the outermost, slowest dimension gives a performance improvement. Moreover, the improvement in the latter case appears when I compile with icc using -O2 and -ipo.

I am not sure how to interpret these statistics. Can someone help shed some light on this?

Answer 1:

This strongly suggests that the unrolling is causing instruction cache misses, which is typical. On modern hardware, unrolling no longer automatically means faster code: if each inner loop fits in a cache line, you will get better performance.

You may be able to unroll manually, to limit the size of the generated code, but this will require examining the generated machine-language instructions -- and their position -- to ensure that your loop is within a single cache line. Cache lines are typically 64 bytes long, and aligned on 64-byte boundaries.

Outer loops do not have the same effect; they will likely be outside of the instruction cache regardless of the unroll level. Unrolling them results in fewer branches, which is why you get better performance.
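
For illustration, unrolling the outermost k loop by 2 amounts to the sketch below; process_plane is a hypothetical helper standing in for the entire j/i loop nest of the baseline (a real unroll duplicates that nest inline):

    // outermost k loop unrolled by 2: each iteration now covers two planes,
    // halving the number of k-loop branches executed
    for (k = 4; k + 1 < dz - 4; k += 2)
    {
        process_plane(V, U, dx, dy, k,     c0, c1, c2, c3, c4);
        process_plane(V, U, dx, dy, k + 1, c0, c1, c2, c3, c4);
    }
    for (; k < dz - 4; k++) // remainder plane
        process_plane(V, U, dx, dy, k, c0, c1, c2, c3, c4);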

"Load misses serviced by remote DRAM" means that you allocated memory on one NUMA node, but now you are running on the other. Setting process or thread affinity based on NUMA is the answer.

Remote DRAM takes almost twice as long to read as local DRAM on the Intel machines that I have used.