pseudo inverse of sparse matrix in python

Posted 2019-04-07 07:51

I am working with neuroimaging data, and because of the large amount of data I would like to use sparse matrices in my code (scipy.sparse.lil_matrix or csr_matrix).

In particular, I will need to compute the pseudo-inverse of my matrix to solve a least-squares problem. I have found the method scipy.sparse.linalg.lsqr, but it is not very efficient. Is there a method to compute the Moore-Penrose pseudo-inverse (the equivalent of pinv for dense matrices)?
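For reference, this is roughly the baseline I am comparing against, as a small self-contained sketch (the random matrix, sizes and tolerances here just stand in for my real data):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# Small random stand-in for my voxel-by-bundle matrix: the real one is
# ~600,000 x 2,000 with at most 4 non-zeros per row.
rng = np.random.default_rng(0)
A = sp.random(6000, 200, density=4/200, format='csr', random_state=0)
x_true = rng.uniform(0.3, 1.0, size=200)
b = A @ x_true                      # dense measurement vector, one entry per "voxel"

result = lsqr(A, b, atol=1e-10, btol=1e-10)
x_hat = result[0]                   # least-squares estimate of the bundle weights
print(np.max(np.abs(x_hat - x_true)))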

The size of my matrix A is about 600,000 x 2,000, and each row has from 0 up to 4 non-zero values. The dimensions of A are voxels x fiber bundles (white-matter fiber tracts), and we expect at most 4 tracts to cross in a voxel. In most white-matter voxels we expect to have at least 1 tract, but around 20% of the rows could be all zeros.

The vector b is not sparse: it contains the measurement for each voxel, which is in general non-zero.

I need to minimize the error, but there are also some conditions on the vector x. When I tried the model on smaller matrices, I never needed to constrain the system in order to satisfy these conditions (in general 0 …).

Is that of any help? Is there a way to avoid taking the pseudo-inverse of A?

Thanks

Update, 1st June: thanks again for the help. I can't really show you anything about my data, because the code in Python gives me some problems. However, in order to understand how I could choose a good k, I've tried to create a test script in Matlab.

The code is as follows:

% Build a 100000 x 1000 test matrix F with (at most) 150000 non-zero entries.
F = zeros(100000, 1000);

for k = 1:150000
    p = rand(1);
    a = 0;
    b = 0;
    while a <= 0 || b <= 0              % redraw until both indices are valid
        a = random('Binomial', 100000, p);
        b = random('Binomial', 1000, p);
    end
    F(a, b) = rand(1);
end

% Known solution, synthetic measurements, and a truncated-SVD pseudo-inverse.
solution = repmat([0.5,0.5,0.8,0.7,0.9,0.4,0.7,0.7,0.9,0.6], 1, 100);
size(solution)
solution = solution';
measure = F*solution;
%check = pinvF*measure;

k = 250;                      % number of singular values/vectors to keep
F = sparse(F);
[U, S, V] = svds(F, k);
s = diag(S);                  % the k singular values (no need for a second svds call)
plot(s)
max(max(U*S*V' - F))          % quality of the rank-k approximation

for j = 1:k                   % invert the non-zero singular values in place
    if S(j,j) ~= 0
        S(j,j) = 1/S(j,j);
    end
end

pinvF = V*S'*U';              % truncated pseudo-inverse ('inv' would shadow a Matlab built-in)
pinvF*measure
max(pinvF*measure - solution)

Do you have any idea what k should be compared to the size of F? I've taken 250 (out of 1000) and the results are not satisfactory (the waiting time is acceptable, but not short). In this test I can compare the results with the known solution, but how could one choose k in general? I've also attached the plot of the 250 singular values that I get and their normalized squares. I don't know exactly how to do a proper scree plot in Matlab. I'm now proceeding with a bigger k to see if at some point the singular values suddenly become much smaller.

Thanks again, Jennifer

[Image: the 250 computed singular values, squared and normalized]
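As a sketch of how the same check could look in Python with scipy (the 95% energy cut-off below is only one possible heuristic, not something I've validated):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

F = sp.random(2000, 400, density=0.01, format='csr', random_state=0)  # toy stand-in for F

k_max = 200                                  # largest rank I am willing to try
u, s, vt = svds(F, k=k_max)
s = np.sort(s)[::-1]                         # svds does not guarantee ordering

# Fraction of the energy (sum of squared singular values) captured by the
# leading singular values, relative to the k_max values computed here.
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1   # smallest k reaching ~95% of that energy
print(k, energy[k - 1])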

2 answers
别忘想泡老子
#2 · 2019-04-07 08:09

Regardless of the answer to my comment, I would think you could accomplish this fairly easily using the SVD representation of the Moore-Penrose pseudo-inverse: find the truncated SVD with scipy.sparse.linalg.svds, replace Sigma by its pseudo-inverse, and then multiply V*Sigma_pinv*U' to get the pseudo-inverse of your original matrix.
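A minimal sketch of that recipe (the helper name and the values of k and tol are just placeholders):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def truncated_pinv(A, k, tol=1e-12):
    """Rank-k Moore-Penrose pseudo-inverse of a sparse matrix A (returns a dense array)."""
    u, s, vt = svds(A, k=k)                  # A ~= u @ diag(s) @ vt
    s_inv = np.where(s > tol, 1.0 / s, 0.0)  # pseudo-inverse of Sigma
    return vt.T @ (s_inv[:, None] * u.T)     # V * Sigma^+ * U'

# Illustrative usage on a small random matrix:
A = sp.random(2000, 200, density=0.02, format='csr', random_state=0)
b = np.random.default_rng(0).standard_normal(2000)
x_hat = truncated_pinv(A, k=50) @ b

For a matrix as large as yours it may be better not to form the dense pseudo-inverse at all (it would be 2,000 x 600,000) and instead apply it to b directly, e.g. x_hat = vt.T @ (s_inv * (u.T @ b)).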

Luminary・发光体
#3 · 2019-04-07 08:15

You could look further into the alternatives offered in scipy.sparse.linalg.

Anyway, please note that the pseudo-inverse of a sparse matrix is most likely a (very) dense one, so it's not really a fruitful avenue to follow (in general) when solving sparse linear systems. For your 600,000 x 2,000 A, a dense pseudo-inverse would have 1.2 billion entries, roughly 9.6 GB in double precision.

You may like to describe your particular problem (dot(A, x) = b + e) in a slightly more detailed manner. At least specify:

  • 'typical' size of A
  • 'typical' percentage of nonzero entries in A
  • least-squares implies that norm(e) is minimized, but please indicate whether your main interest is on x_hat or on b_hat, where e= b- b_hat and b_hat= dot(A, x_hat)

Update: If you have some idea of the rank of A (and it's much smaller than the number of columns), you could try the total least squares (TLS) method. Here is a simple implementation, where k is the number of leading singular values and vectors to use (i.e. the 'effective' rank).

from scipy.sparse import hstack
from scipy.sparse.linalg import svds

def tls(A, b, k=6):
    """A total least squares solution of Ax = b, for sparse A.

    b must be a column (shape (m, 1)) so that hstack can append it to A;
    k is the number of leading singular values/vectors to use.
    """
    u, s, v = svds(hstack([A, b]), k)   # v has shape (k, n + 1)
    return v[-1, :-1] / -v[-1, -1]
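A hypothetical usage sketch (sizes and k are illustrative; b is made a sparse column so that hstack accepts it):

import numpy as np
import scipy.sparse as sp

A = sp.random(2000, 200, density=0.02, format='csr', random_state=0)
x_true = np.random.default_rng(0).uniform(0.3, 1.0, size=200)
b = sp.csr_matrix((A @ x_true).reshape(-1, 1))   # column vector, kept sparse for hstack
x_hat = tls(A, b, k=20)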