Solve a particular linear system efficiently in Julia

Posted 2019-05-28 20:48

I use Julia's linear equation solver res = X \ b extensively. I have to call it millions of times in my program because of parameter variation. This worked fine while I was using small dimensions (up to 30). Now that I want to analyse bigger systems, up to 1000, the linear solver is no longer efficient enough.

I think there might be a workaround. However, I must say that sometimes my X matrix is dense and sometimes it is sparse, so I need something that works well in both cases.

The b vector is all zeros except for one entry, which is always 1 (in fact it is always the last entry). Moreover, I don't need the whole res vector, just its first entry.
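For reference, a minimal sketch of the pattern described above; the parameter grid and the build_system function are hypothetical placeholders, just to illustrate where the time goes:

```julia
using LinearAlgebra

n = 1000
b = zeros(n); b[end] = 1.0             # all zeros except the last entry

# Hypothetical parameter sweep and system builder, only to illustrate the pattern
parameter_grid = range(0.0, 1.0, length = 100)
build_system(p) = randn(n, n) + p * I

results = Float64[]
for p in parameter_grid
    X = build_system(p)                # may be dense or sparse in practice
    res = X \ b                        # the bottleneck for n around 1000
    push!(results, res[1])             # only the first entry is actually needed
end
```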

3 Answers
孤傲高冷的网名 · 2019-05-28 21:28

The best approach for efficiency would be to use JuliaMath/IterativeSolvers.jl. For A * x = b problems, I would recommend x = lsmr(A, b).
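A minimal sketch of that suggestion, assuming IterativeSolvers.jl is installed; the example matrix here is arbitrary, and lsmr also accepts dense matrices:

```julia
using LinearAlgebra, SparseArrays, IterativeSolvers

# Example system (sparse here, but the same call works for dense A)
A = sprandn(1000, 1000, 0.01) + 10I
b = zeros(1000); b[end] = 1.0

x = lsmr(A, b)                         # iterative least-squares-style solve
```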

The second-best alternative would be to give a bit more information to the solver: instead of x = inv(A'A) * A' * b, do x = cholfact(A'A) \ (A' * b) if a Cholesky decomposition works for you. Otherwise, you could try U, S, V = svd(A) and x = V * Diagonal(1 ./ S) * (U' * b).
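A sketch of both variants in Julia 1.x syntax, where cholfact has been renamed cholesky; the Cholesky route assumes A'A is positive definite, and the matrices here are just placeholders:

```julia
using LinearAlgebra

A = randn(1000, 1000)
b = randn(1000)

# Normal equations solved through a Cholesky factorization of A'A
# (cholfact was renamed cholesky in Julia 1.0)
x_chol = cholesky(Symmetric(A' * A)) \ (A' * b)

# Pseudo-inverse via the SVD: x = V * inv(Diagonal(S)) * U' * b
U, S, V = svd(A)
x_svd = V * ((U' * b) ./ S)
```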

I am unsure whether x = pinv(A) * b is optimized, but it might be slightly more efficient than x = A \ b.

SAY GOODBYE · 2019-05-28 21:37

Usually when people talk about speeding up linear solves res = X \ b, it's for multiple right-hand sides b. But since your b isn't changing and you just keep changing X, none of those tricks apply.

The only way to speed this up, from a mathematical perspective, seems to be to make sure Julia picks the fastest solver for X \ b; i.e., if you know X is positive definite, use Cholesky, and so on. MATLAB publishes flowcharts showing how it picks the solver for X \ b for dense and sparse X; most likely Julia implements something close to these flowcharts too, but you may be able to simplify or shortcut that choice yourself.
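For instance, in Julia you can pick the factorization explicitly instead of relying on the polyalgorithm behind \. A sketch; the structure assumed for X (symmetric positive definite) is an assumption for illustration:

```julia
using LinearAlgebra

n = 1000
b = zeros(n); b[end] = 1.0

M = randn(n, n)
X = M' * M + I                        # assumed symmetric positive definite for this sketch

x1 = cholesky(Symmetric(X)) \ b       # fastest choice when X is positive definite
x2 = lu(X) \ b                        # general-purpose LU factorization
x3 = factorize(X) \ b                 # lets Julia inspect X and pick a factorization
```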

All programming-related speedups can then be applied on top: multiple threads (each individual solver is probably already multi-threaded, but it may be worth running several solvers in parallel when each one uses fewer threads than you have cores), @simd if you're willing to dive into the solvers themselves, OpenCL/CUDA libraries, and so on.
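A sketch of the "several solvers in parallel" idea, limiting BLAS to one thread per solve and spreading the parameter sweep over Julia threads; the parameter range and build_system are hypothetical:

```julia
using LinearAlgebra

BLAS.set_num_threads(1)                # one BLAS thread per solve to avoid oversubscription

n = 1000
params = range(0.1, 10.0, length = 64) # hypothetical parameter sweep
build_system(p) = randn(n, n) + p * I  # hypothetical system builder
results = zeros(length(params))

Threads.@threads for i in eachindex(params)
    X = build_system(params[i])
    b = zeros(n); b[end] = 1.0
    results[i] = (X \ b)[1]            # independent solves run on separate Julia threads
end
```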

smile是对你的礼貌 · 2019-05-28 21:40

If your problem is of the form (A - µI)x = b, where µ is a variable parameter and A, b are fixed, you might work with diagonalization.

Let A = PDP°, where P° denotes the inverse of P. Then (PDP° - µI)x = b can be transformed to

(D - µI)P°x = P°b, 
P°x = P°b / (D - µI), 
x = P(P°b / (D - µI)).

(The / operation denotes division of the respective vector elements by the scalars D_rr - µ, the diagonal entries of D - µI.)

After you have diagonalized A, computing a solution for any µ reduces to two matrix/vector products, or a single one if you can also precompute P°b.
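A sketch of this approach in Julia, using eigen for the one-time diagonalization; A and b here are placeholders, and µ is the varying parameter:

```julia
using LinearAlgebra

n = 1000
A = randn(n, n)
b = zeros(n); b[end] = 1.0

# One-time diagonalization A = P * Diagonal(D) * inv(P)
D, P = eigen(A)                        # eigenvalues D, eigenvectors P
Pinv_b = P \ b                         # precompute inv(P) * b once

# For each µ, a solve reduces to an elementwise division and one matrix-vector product
solve_mu(µ) = P * (Pinv_b ./ (D .- µ))

x = solve_mu(0.5)                      # example: (A - 0.5I) * x ≈ b
# For real nonsymmetric A the factors may be complex; real(x) recovers the real solution.
```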

Numerical instability will show up in the vicinity of the eigenvalues of A.
