I use Julia's linear equation solver `res = X \ b` extensively. I have to call it millions of times in my program because of parameter variation. This was working fine while I was using small dimensions (up to 30), but now that I want to analyse bigger systems, up to 1000, the linear solver is no longer efficient.
I think there can be a workaround. However, I must say that sometimes my `X` matrix is dense and sometimes it is sparse, so I need something that works well for both cases.
The `b` vector is all zeroes, except for one entry which is always 1 (actually, it is always the last entry). Moreover, I don't need the whole `res` vector, just its first entry.
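For concreteness, a minimal sketch of the pattern being described; the matrix construction and the length of the parameter sweep are purely illustrative:

```julia
using LinearAlgebra

n = 1000
b = zeros(n); b[end] = 1.0      # all zeros except the last entry, which is 1

first_entries = Float64[]
for p in 1:100                   # stands in for the millions of parameter values
    X = rand(n, n) + n * I       # illustrative: X changes with every parameter
    res = X \ b                  # full dense solve, though only res[1] is needed
    push!(first_entries, res[1])
end
```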
The best approach for efficiency would be to use JuliaMath/IterativeSolvers.jl. For `A * x = b` problems, I would recommend `x = lsmr(A, b)`.

The second-best alternative would be to give a bit more information to the solver: instead of `x = inv(A'A) * A' * b`, do `x = inv(cholfact(A'A)) * A' * b` if Cholesky decomposition works for you. Otherwise, you could try `U, S, V = svd(A)` and `x = V * diagm(1 ./ S) * (U' * b)`.

Unsure if `x = pinv(A) * b` is optimized, but it might be slightly more efficient than `x = A \ b`.
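Putting these suggestions together, a hedged sketch (IterativeSolvers.jl must be installed; `cholesky` is the modern spelling of `cholfact`, and the test matrix is illustrative):

```julia
using LinearAlgebra
using IterativeSolvers

n = 1000
A = rand(n, n) + n * I      # illustrative system matrix
b = zeros(n); b[end] = 1.0

# Iterative least-squares solver; works for dense and sparse A:
x1 = lsmr(A, b)

# Cholesky on the normal equations (A'A is symmetric positive-definite):
x2 = cholesky(Symmetric(A' * A)) \ (A' * b)

# SVD-based solve: A = U * Diagonal(S) * V', so inv(A) * b = V * ((U' * b) ./ S):
U, S, V = svd(A)
x3 = V * ((U' * b) ./ S)

# Pseudoinverse, for comparison with the generic solver:
x4 = pinv(A) * b
x5 = A \ b
```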
Usually when people talk about speeding up linear solvers `res = X \ b`, it's for multiple `b`s. But since your `b` isn't changing, and you just keep changing `X`, none of those tricks apply.

The only way to speed this up, from a mathematical perspective, seems to be to ensure that Julia is picking the fastest solver for `X \ b`, i.e., if you know `X` is positive-definite, use Cholesky, etc. Matlab's flowcharts for how it picks the solver to use for `X \ b`, for dense and sparse `X`, are available; most likely Julia implements something close to these flowcharts too, but maybe you can find some way to simplify or shortcut it.

All programming-related speedups can then be applied: multiple threads (while each individual solver is probably already multi-threaded, it may be worth running multiple solvers in parallel when each solver uses fewer threads than there are cores); `@simd`, if you're willing to dive into the solvers themselves; OpenCL/CUDA libraries; etc.
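To illustrate the solver-selection point, here is a minimal sketch; the assumption that `X` is symmetric positive-definite is purely for the example:

```julia
using LinearAlgebra

n = 1000
M = rand(n, n)
X = M' * M + n * I          # illustrative symmetric positive-definite matrix
b = zeros(n); b[end] = 1.0

# Generic solve: for a dense square X, Julia falls back to LU factorization.
x_generic = X \ b

# If you know X is symmetric positive-definite, dispatch straight to Cholesky:
x_chol = cholesky(Symmetric(X)) \ b
```

For a positive-definite `X`, the Cholesky path does roughly half the work of the generic LU path, and the `Symmetric` wrapper lets Julia dispatch to it directly.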
If your problem is of the form `(A - µI)x = b`, where `µ` is a variable parameter and `A`, `b` are fixed, you might work with diagonalization.
Let `A = PDP°`, where `P°` denotes the inverse of `P`. Then `(PDP° - µI)x = b` can be transformed to

```
P(D - µI)P°x = b
x = P((P°b) / (Dr - µ))
```

(The `/` operation denotes the division of the respective vector elements by the scalars `Dr - µ`, `Dr` being the corresponding diagonal entry of `D`.)
After you have diagonalized `A`, computing a solution for any `µ` reduces to two matrix-vector products, or a single one if you can also precompute `P°b`.
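A minimal sketch of this scheme, assuming `A` is diagonalizable (done here with `eigen`; the symmetric test matrix keeps the eigendecomposition real):

```julia
using LinearAlgebra

n = 1000
M = rand(n, n)
A = M + M'                  # illustrative fixed matrix, symmetric so eigen stays real
b = zeros(n); b[end] = 1.0

# One-time diagonalization A = P * Diagonal(D) * P°:
D, P = eigen(A)
Pb = P \ b                  # precompute P°b once

# Per-parameter solve: x = P * ((P°b) / (Dr - µ)), elementwise in the eigenbasis.
solve(µ) = P * (Pb ./ (D .- µ))

# Only the first entry of the solution is needed, so one row of P suffices
# and each solve collapses to a single dot product:
p1 = P[1, :]
first_entry(µ) = dot(p1, Pb ./ (D .- µ))
```

Each evaluation of `first_entry` is O(n), versus O(n³) for a fresh dense solve.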
Numerical instability will show up in the vicinity of the eigenvalues of `A`.