I have a badly conditioned matrix, whose rcond() is close to zero, and therefore the inverse of that matrix does not come out correct. I have tried using pinv(), but that does not solve the problem. This is how I am computing the solution:
X = (A)\(b);
I looked for a solution to this problem and found this link (last solution) for improving the conditioning of the matrix. The solution there suggests using this:
A_new = A_old + c*eye(size(A_old));
where c > 0. So far, employing this technique works in making the matrix A better conditioned, and the resultant solution looks better. However, I investigated different values of c and found that the resultant solution depends on the chosen value of c.
Other than manually experimenting with the value of c, is there an automatic way to find the value of c for which I get the best solution?
Adding small values to the diagonal of A is roughly equivalent to introducing an L2-norm regularization term into the least squares problem Ax = b. That is, one seeks to minimize the residual as well as an added constraint:

minimize ||A*x - b||^2 + lambda*||x||^2

where lambda controls the weight given to minimizing the constraint versus minimizing the residual norm. Usually this parameter is chosen using some kind of cross-validation technique.
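In NumPy terms, a minimal sketch of this idea might look like the following (the helper names, the candidate grid, and the test problem are all illustrative, not from the original post): solve the regularized normal equations (A'A + lambda*I) x = A'b, and pick lambda by simple k-fold cross-validation on the rows of A.

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Tikhonov-regularized solution: minimizes ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def pick_lambda(A, b, candidates, n_splits=5, seed=0):
    """Pick lambda by k-fold cross-validation on the rows of A."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(b))
    folds = np.array_split(idx, n_splits)
    best_lam, best_err = None, np.inf
    for lam in candidates:
        err = 0.0
        for k in range(n_splits):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
            x = ridge_solve(A[train], b[train], lam)
            err += np.sum((A[test] @ x - b[test]) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Ill-conditioned test problem: two nearly identical columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 5))
A[:, 4] = A[:, 3] + 1e-8 * rng.standard_normal(40)
x_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = A @ x_true + 0.01 * rng.standard_normal(40)

cands = [1e-6, 1e-4, 1e-2, 1.0]
lam = pick_lambda(A, b, cands)
x = ridge_solve(A, b, lam)
```

In practice you would use a log-spaced grid of candidates; ready-made implementations of this scheme also exist (e.g. scikit-learn's RidgeCV).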
As already pointed out in the comments, the answer to your question crucially depends on your application. Maybe adding a small multiple of the identity matrix is the right thing to do, maybe not. In order to determine that, you need to tell us: How did this matrix arise? And what do you need the inverse for?
Two common cases are:
If you know the matrix A exactly, e.g. because it is the design matrix in a general linear model b = A * X, then modifying it is not a good idea. In this case, the matrix defines a linear system of equations, and if the matrix is singular, this means there is no unique solution to this system. To pick one from the infinitely many possible solutions, there are different strategies: X = A \ b picks a solution with as many zero coefficients as possible, while X = pinv(A) * b picks the solution with the minimum L2 norm. See the examples in the documentation of pinv.
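The minimum-norm property of the pseudoinverse is easy to see in a small sketch (NumPy's pinv behaves like MATLAB's here; the example matrix is made up for illustration):

```python
import numpy as np

# Singular system with infinitely many solutions: the columns of A are
# identical, so any x with x[0] + x[1] == 1 satisfies A @ x = b.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([1.0, 2.0])

x_min_norm = np.linalg.pinv(A) @ b     # minimum-L2-norm solution: [0.5, 0.5]
x_other = np.array([1.0, 0.0])         # another valid solution, with larger norm
```

Both vectors solve the system exactly, but pinv returns the one with the smallest norm.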
If the matrix A is estimated from data, e.g. a covariance matrix for an LDA classifier, and you have reason to believe the true value is not singular and the singularity is just due to not having enough data points for the estimate, then applying regularization or "shrinkage" by adding a small multiple of the identity matrix is a common strategy. In this case, Schäfer and Strimmer (2005) describe a way to estimate the optimal regularization coefficient from the data itself.

But I'm sure there are other cases with other answers, too.
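The shrinkage idea looks like this in NumPy (a sketch with a fixed shrinkage coefficient gamma for illustration only; the point of Schäfer and Strimmer is that gamma can be estimated from the data, and scikit-learn's LedoitWolf is one ready-made implementation of that family of estimators):

```python
import numpy as np

def shrink_covariance(X, gamma):
    """Shrink the sample covariance of the rows of X toward a scaled identity:
    S_shrunk = (1 - gamma) * S + gamma * mu * I, with mu = mean eigenvalue of S."""
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]
    return (1.0 - gamma) * S + gamma * mu * np.eye(S.shape[0])

# Fewer samples (5) than dimensions (10): the sample covariance is singular,
# but the shrunk estimate is invertible for any gamma > 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 10))
S = np.cov(X, rowvar=False)
S_shrunk = shrink_covariance(X, gamma=0.2)
```

The shrunk matrix has strictly positive eigenvalues, so it can be inverted, e.g. inside an LDA classifier.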
In discrete inverse theory, adding a small value c to the diagonal of the matrix A about to be inverted is called damping the inversion, and the small value c to be added is called the Marquardt-Levenberg coefficient. Sometimes the matrix A has zero or close-to-zero eigenvalues, due to which the matrix becomes singular; adding a small damping coefficient to the diagonal elements makes it stable. The bigger the value of c, the bigger the damping: your matrix inversion is more stable, but you are further away from the true solution. The smaller the value of c, the smaller the damping: your inverted matrix is closer to the true inverted matrix, but it might become unstable. One 'adaptive damping' technique sometimes used is: start with a test value of c, invert the matrix A, then decrease the value of c and do the inversion again, and so on. Stop when you get weird values in the inverted matrix, like really large numbers, due to A becoming singular again. I think this doesn't exactly answer your question, but it was too long to put in a comment.
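The adaptive damping loop above can be sketched in NumPy like this (the stopping rule here, a condition-number threshold, and the shrink schedule are my own assumptions; "weird values" could be detected in other ways):

```python
import numpy as np

def adaptive_damped_solve(A, b, c0=1.0, shrink=0.1, cond_limit=1e12, max_steps=50):
    """Solve (A + c*I) x = b, repeatedly shrinking the damping c,
    and stop just before the damped matrix becomes numerically singular.
    Returns the last stable solution and the damping it was computed with."""
    n = A.shape[0]
    c = c0
    x, c_used = None, None
    for _ in range(max_steps):
        M = A + c * np.eye(n)
        if np.linalg.cond(M) > cond_limit:
            break                      # next step would be unstable: keep previous result
        x, c_used = np.linalg.solve(M, b), c
        c *= shrink
    return x, c_used

# Nearly singular matrix: the second row is almost a multiple of the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0 + 1e-14]])
b = np.array([1.0, 2.0])
x, c_used = adaptive_damped_solve(A, b)
```

The loop returns a solution computed with the smallest damping that still kept the inversion stable under the chosen threshold.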