Multithreaded sparse matrix multiplication in Matlab

Posted 2020-03-13 04:18

Question:

I am performing several matrix multiplications of an NxN sparse matrix (~1-2% non-zero), let's call it B, with an NxM dense matrix, let's call it A (where M < N). Both N and M are large, on the order of several thousand. I am running Matlab 2013a.

Now, usually, matrix multiplications and most other matrix operations are implicitly parallelized in Matlab, i.e. they make use of multiple threads automatically. This appears NOT to be the case if either of the matrices is sparse (see e.g. this StackOverflow discussion - with no answer to the intended question - and this largely unanswered MathWorks thread). This was a rather unhappy surprise for me.

We can verify that multithreading has no effect on sparse matrix operations with the following code:

clc; clear all; 

N = 5000;         % set matrix sizes
M = 3000;       
A = randn(N,M);   % create dense random matrices
B = sprand(N,N,0.015); % create sparse random matrix
Bf = full(B);     % create a dense form of the otherwise sparse matrix B

for i=1:3 % test for 1, 2, and 4 threads
  m(i) = 2^(i-1);
  maxNumCompThreads(m(i)); % set the thread count available to Matlab
  tic                      % starts timer
    y = B*A; 
  walltime(i) = toc;       % wall clock time
  speedup(i) = walltime(1)/walltime(i);
end

% display number of threads vs. speed up relative to just a single thread
[m',speedup']

This produces the following output, which illustrates that there is no difference between using 1, 2, and 4 threads for sparse operations:

threads   speedup
1.0000    1.0000
2.0000    0.9950
4.0000    1.0155

If, on the other hand, I replace B by its dense form, referred to as Bf above, I get a significant speedup:

threads   speedup
1.0000    1.0000
2.0000    1.8894
4.0000    3.4841

(illustrating that matrix operations for dense matrices in Matlab are indeed implicitly parallelized)

So, my question: is there any way at all to access a parallelized/threaded version of matrix operations for sparse matrices in Matlab, without converting them to dense form? I found one old suggestion involving .mex files at MathWorks, but the links seem to be dead and the approach is not well documented, with no feedback. Any alternatives?

This seems to be a rather severe restriction of the implicit-parallelism functionality, since sparse matrices abound in computationally heavy problems, and multithreaded performance is highly desirable in those cases.

Answer 1:

MATLAB already uses SuiteSparse by Tim Davis for many of its operations on sparse matrices (for example, see here), but to my knowledge those routines are not multithreaded.

Computations on sparse matrices are usually memory-bound rather than CPU-bound. So even if you use a multithreaded library, I doubt you will see huge performance benefits, at least nothing comparable to what libraries specialized in dense matrices achieve...

After all, sparse matrix formats are designed with different goals in mind than regular dense matrices: efficient memory storage is often more important than raw speed.


I did a quick search online, and found a few implementations out there:

  • sparse BLAS, spBLAS, PSBLAS; for instance, Intel MKL and AMD ACML have some support for sparse matrices
  • cuSPARSE, CUSP, VexCL, ViennaCL, etc., which run on the GPU (a MATLAB-side sketch of the GPU route follows this list)
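
From within MATLAB itself, the GPU route in the last bullet can be reached through gpuArray. The following is only a minimal sketch and assumes a MATLAB release considerably newer than the R2013a used in the question (one in which gpuArray supports sparse matrices), plus the Parallel Computing Toolbox and a supported GPU:

Bg = gpuArray(B);   % sparse gpuArray, stored on the GPU
Ag = gpuArray(A);   % dense gpuArray
yg = Bg * Ag;       % the product is computed on the GPU
y  = gather(yg);    % copy the result back to host memory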


Answer 2:

I ended up writing my own mex file with OpenMP for multithreading. The code follows; don't forget to use the -largeArrayDims and /openmp (or -fopenmp) flags when compiling (a compile and usage example is given after the code).

#include <omp.h>
#include "mex.h"
#include "matrix.h"

#define ll long long

/* C = S*B, where S is an m-by-p sparse matrix given in compressed sparse
 * column (CSC) form: nonzero values A, row indices irs, column pointers jcs.
 * B is a p-by-n dense matrix. For every nonzero a = S(i,j) we accumulate
 * C(i,:) += a * B(j,:). Row indices within a column are distinct, so the
 * parallel threads write to disjoint entries of C. */
void omp_smm(double* A, double* B, double* C, ll m, ll p, ll n, ll* irs, ll* jcs)
{
    for (ll j=0; j<p; ++j)
    {
        ll istart = jcs[j];      /* nonzeros of column j occupy [istart, iend) */
        ll iend = jcs[j+1];
        #pragma omp parallel for
        for (ll ii=istart; ii<iend; ++ii)
        {
            ll i = irs[ii];      /* row index of this nonzero */
            double aa = A[ii];   /* value of this nonzero */
            for (ll k=0; k<n; ++k)
            {
                C[i+k*m] += B[j+k*p]*aa;   /* column-major access into B and C */
            }
        }
    }
}


void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    double *A, *B, *C; /* pointers to input & output matrices*/
    size_t m,n,p;      /* matrix dimensions */

    A = mxGetPr(prhs[0]); /* first sparse matrix */
    B = mxGetPr(prhs[1]); /* second full matrix */

    mwIndex * irs = mxGetIr(prhs[0]);
    mwIndex * jcs = mxGetJc(prhs[0]);

    m = mxGetM(prhs[0]);  
    p = mxGetN(prhs[0]);
    n = mxGetN(prhs[1]);

    /* create output matrix C */
    plhs[0] = mxCreateDoubleMatrix(m, n, mxREAL);
    C = mxGetPr(plhs[0]);

    /* mwIndex is 64-bit when compiled with -largeArrayDims, so
       reinterpreting it as long long is safe on 64-bit platforms */
    omp_smm(A,B,C, m, p, n, (ll*)irs, (ll*)jcs);
}
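
To compile and call it (only a sketch: the file name omp_smm.c is hypothetical, and the exact flag syntax varies with the compiler and MATLAB version):

% Windows (MSVC):
%   mex -largeArrayDims COMPFLAGS="$COMPFLAGS /openmp" omp_smm.c
% Linux/macOS (gcc):
mex -largeArrayDims CFLAGS="\$CFLAGS -fopenmp" LDFLAGS="\$LDFLAGS -fopenmp" omp_smm.c

C = omp_smm(B, A);      % sparse B times dense A, multithreaded
norm(C - B*A, 'fro')    % sanity check against the built-in product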


Answer 3:

The same question was asked on MATLAB Central, and this answer was given:

I believe the sparse matrix code is implemented by a few specialized TMW engineers rather than an external library like BLAS/LAPACK/LINPACK/etc... 

Which basically means that you are out of luck.


However, I can think of some tricks to achieve faster computations:

  1. If you need to do several multiplications: run the independent multiplications in parallel, e.g. each on its own worker.
  2. If you just want to do one multiplication: cut the matrix into pieces (for example the top half and the bottom half of the rows), compute the parts in parallel, and combine the results afterwards (see the sketch after this list).
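
A minimal sketch of the second trick, assuming the Parallel Computing Toolbox is available (the block count of 4 is arbitrary):

nblocks = 4;                                 % number of row blocks
edges   = round(linspace(0, size(B,1), nblocks+1));
Cparts  = cell(nblocks, 1);
parfor k = 1:nblocks
  rows      = edges(k)+1 : edges(k+1);       % rows handled by this block
  Cparts{k} = B(rows, :) * A;                % each worker multiplies one block
end
C = vertcat(Cparts{:});                      % stack the blocks: C equals B*A

The same parfor pattern also covers the first trick: loop over the independent multiplications instead of over row blocks.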

These solutions will probably not be as fast as properly implemented multithreading, but hopefully you can still get a speedup.