Short version of my question:
What would be the optimal way of calculating an eigenvector of a matrix A, if we already know the eigenvalue belonging to that eigenvector?
Longer explanation:
I have a large stochastic matrix A which, because it is stochastic, has a non-negative left eigenvector x (such that A^T x = x).
I'm looking for quick and efficient methods of numerically calculating this vector. (Preferably in MATLAB or numpy/scipy - since both of these wrap around ARPACK/LAPACK, either would be fine.)
I know that 1 is the largest eigenvalue of A, so I know that calling something like this Python code:
from scipy.sparse.linalg import eigs
vals, vecs = eigs(A.T, k=1)  # left eigenvector of A = right eigenvector of A^T
will result in vals being (numerically) equal to 1 and vecs being the vector I need.
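(One caveat I've noticed: ARPACK returns the eigenvector with arbitrary scaling and a complex dtype, so it has to be normalized afterwards to get a probability vector. A minimal sketch, with a small 4x4 matrix standing in for my real A:)

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigs

# Hypothetical small row-stochastic matrix standing in for the large A.
A = csr_matrix(np.array([[0.5, 0.5, 0.0, 0.0],
                         [0.2, 0.3, 0.5, 0.0],
                         [0.1, 0.4, 0.4, 0.1],
                         [0.0, 0.2, 0.3, 0.5]]))

# eigs returns a complex-typed vector with arbitrary sign/scale,
# so take the real part and normalize it to sum to 1 afterwards.
vals, vecs = eigs(A.T, k=1)
x = np.real(vecs[:, 0])
x = x / x.sum()
```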
However, what bothers me here is that calculating eigenvalues is, in general, a more difficult operation than solving a linear system. And if a matrix M has a known eigenvalue l, then finding the corresponding eigenvector is a matter of solving the equation (M - l*I)x = 0, which is, in theory at least, a simpler operation, since it only amounts to finding the nullspace of a matrix.
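(For illustration, since the vector of all ones is not in the row space of M - l*I here, the nullspace computation can be turned into an ordinary nonsingular sparse solve by replacing one of the linearly dependent equations of (A^T - I)x = 0 with the normalization sum(x) = 1. A sketch with a small stand-in matrix:)

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import spsolve

# Hypothetical small row-stochastic matrix standing in for the large A.
A = csr_matrix(np.array([[0.5, 0.5, 0.0, 0.0],
                         [0.2, 0.3, 0.5, 0.0],
                         [0.1, 0.4, 0.4, 0.1],
                         [0.0, 0.2, 0.3, 0.5]]))
n = A.shape[0]

# (A^T - I)x = 0 is singular, but its rows sum to zero, so one row is
# redundant.  Replace it with the constraint sum(x) = 1 to get a
# nonsingular system that a plain sparse solver can handle.
M = (A.T - identity(n)).tolil()
M[-1, :] = 1.0            # last equation becomes: sum(x) = 1
b = np.zeros(n)
b[-1] = 1.0
x = spsolve(M.tocsr(), b)
```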
However, I find that all methods of nullspace calculation in MATLAB rely on an svd computation, a process I cannot afford to perform on a matrix of my size. I also cannot simply call a linear solver on the equation, because solvers only find one solution, and that solution is the trivial x = 0 (which, yes, is a solution, but not the one I need).
Is there any way to avoid calls to eigs-like functions and solve my problem more quickly than by calculating the largest eigenvalue and the accompanying eigenvector?
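(One eigs-free direction I have considered is plain power iteration: since 1 is the dominant eigenvalue of A^T, repeatedly applying A^T to a probability vector converges to the left eigenvector, at the cost of one sparse matrix-vector product per step. A sketch with a small stand-in matrix, assuming the chain is irreducible and aperiodic so that the iteration converges:)

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical small row-stochastic matrix standing in for the large A.
A = csr_matrix(np.array([[0.5, 0.5, 0.0, 0.0],
                         [0.2, 0.3, 0.5, 0.0],
                         [0.1, 0.4, 0.4, 0.1],
                         [0.0, 0.2, 0.3, 0.5]]))

# Power iteration: each step is one sparse mat-vec with A^T, followed
# by renormalization to keep x a probability vector.
x = np.full(A.shape[0], 1.0 / A.shape[0])
for _ in range(1000):
    x_new = A.T @ x
    x_new /= x_new.sum()
    converged = np.linalg.norm(x_new - x, 1) < 1e-12
    x = x_new
    if converged:
        break
```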