Inspired by Mike Bantegui's question on constructing a matrix defined as a recurrence relation, I wonder if there is any general guidance that could be given on setting up large block matrices with the least computation time. In my experience, constructing the blocks and then putting them together can be quite inefficient (thus my answer was actually slower than Mike's original code). `Join` and possibly `ArrayFlatten` are less efficient than they could be.

Obviously if the matrix is sparse, one can use `SparseArray` constructs, but there will be times when the block matrix you are constructing is not sparse.

What is best practice for this kind of problem? I am assuming the elements of the matrix are numeric.
The code shown below is available here: http://pastebin.com/4PWWxGhB. Just copy and paste it into a notebook to test it out.
I was actually trying out several functional ways of calculating matrices, since I figured the functional style (which is typically idiomatic in Mathematica) would be more efficient.

As one example, I had this matrix, which was composed of two lists:
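(The actual code is in the pastebin notebook linked above. Since it isn't reproduced here, the snippet below is a hypothetical stand-in: two made-up lists `f` and `g`, with the block structure guessed from the `Outer`/`DiagonalMatrix` discussion that follows, built the naive way with `Do` loops.)

    (* hypothetical stand-in data: two lists that define the matrix *)
    L = 1200;
    f = RandomReal[{0, 1}, L];
    g = RandomReal[{0, 1}, L];

    (* naive version: fill the 2L x 2L matrix element by element *)
    AbsoluteTiming[
      mat = ConstantArray[0., {2 L, 2 L}];
      Do[mat[[i, i]] = g[[i]]; mat[[i + L, i + L]] = g[[i]], {i, L}];
      Do[mat[[i, j + L]] = f[[i]] f[[j]];
         mat[[j + L, i]] = f[[i]] f[[j]], {i, L}, {j, L}];
    ]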
My first step was to time everything. `DiagonalMatrix[...]` was slower than the `Do` loops, so I decided to just use `Do` loops on the last step. As you can see, using `Outer[Times, f, f]` was much faster in this case.

I then wrote the equivalent using `Outer` for the blocks in the upper right and bottom left of the matrix, and `DiagonalMatrix` for the diagonal:
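(Again, the original listing is in the pastebin; this sketch assumes the same hypothetical `f` and `g` as above.)

    (* Outer builds each off-diagonal block in one vectorized call;
       DiagonalMatrix handles the two diagonal blocks *)
    AbsoluteTiming[
      block = Outer[Times, f, f];
      mat2 = ArrayFlatten[{{DiagonalMatrix[g], block},
                           {block, DiagonalMatrix[g]}}];
    ]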
The `DiagonalMatrix` was actually slower. I could replace this with just the `Do` loops, but I kept it because it was cleaner looking.

The current tally is 9.06 seconds for the naive `Do` loop, and 1.389 seconds for my next version using `Outer` and `DiagonalMatrix`. About a 6.5 times speedup, not too bad.

Sounds a lot faster, now doesn't it? Let's try using `Compile` now.
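(A hypothetical compiled equivalent of the same construction; the signature assumes the two lists are packed real vectors.)

    cf = Compile[{{f, _Real, 1}, {g, _Real, 1}},
      Module[{L = Length[f],
              mat = Table[0., {2 Length[f]}, {2 Length[f]}]},
        (* same loops as the naive version, but now run as byte code *)
        Do[mat[[i, i]] = g[[i]]; mat[[i + L, i + L]] = g[[i]], {i, L}];
        Do[mat[[i, j + L]] = f[[i]] f[[j]];
           mat[[j + L, i]] = f[[i]] f[[j]], {i, L}, {j, L}];
        mat]];
    AbsoluteTiming[cf[f, g];]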
Now it's running 3.56 times faster than my last version, and 23.23 times faster than the first one. Next version:
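(Same hypothetical body as `cf` above; the only changes are `CompilationTarget -> "C"`, which requires a C compiler to be installed, and `RuntimeOptions -> "Speed"`.)

    cfC = Compile[{{f, _Real, 1}, {g, _Real, 1}},
      Module[{L = Length[f],
              mat = Table[0., {2 Length[f]}, {2 Length[f]}]},
        Do[mat[[i, i]] = g[[i]]; mat[[i + L, i + L]] = g[[i]], {i, L}];
        Do[mat[[i, j + L]] = f[[i]] f[[j]];
           mat[[j + L, i]] = f[[i]] f[[j]], {i, L}, {j, L}];
        mat],
      CompilationTarget -> "C", RuntimeOptions -> "Speed"];
    AbsoluteTiming[cfC[f, g];]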
Most of the speed came from `CompilationTarget -> "C"`. Here I got another 2.84 times speedup over the fastest version, and a 66.13 times speedup over the first version. But all I did was just compile it!

Now, this is a very simple example. But this is real code I'm using to solve a problem in condensed matter physics. So don't dismiss it as possibly being a "toy example."
How about another example of a technique we can use? I have a relatively simple matrix to build up: it's composed of nothing but ones from the start to some arbitrary point. The naive way may look something like this:
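(The original snippet isn't reproduced here. A plausible naive version, assuming the ones sit on the diagonal up to some hypothetical position `k` of an `n x n` matrix, consistent with the `IdentityMatrix`-based version below, is:)

    n = 6; k = 3;  (* hypothetical sizes *)
    (* naive version: test every element individually *)
    Table[If[i == j && i <= k, 1, 0], {i, n}, {j, n}]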
Instead, let's build it up using `ArrayPad` and `IdentityMatrix`:
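(A sketch under the same assumption as above:)

    (* pad a k x k identity matrix with zeros on the right and bottom *)
    ArrayPad[IdentityMatrix[k], {{0, n - k}, {0, n - k}}]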
This actually doesn't work for k = 0, but you can special-case that if you need to. Furthermore, depending on the size of k, this can be faster or slower. It's always faster than the `Table[...]` version, though.
You could even write this using `SparseArray`:
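(Again a sketch; `Band` places the ones on the diagonal up to position `k`, and `Normal` converts back to a dense matrix if that's what you need.)

    SparseArray[Band[{1, 1}, {k, k}] -> 1, {n, n}]
    (* or, as a dense matrix: *)
    Normal[SparseArray[Band[{1, 1}, {k, k}] -> 1, {n, n}]]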
I could go on about some other things, but I'm afraid that if I do, I'll make this answer unreasonably large. I've accumulated a number of techniques for forming these various matrices and lists in the time I've spent trying to optimize some code. The base code I worked with took over 6 days for one calculation to run, and now it takes only 6 hours to do the same thing.
I'll see if I can pick out the general techniques I've come up with and just stick them in a notebook to use.
TL;DR: It seems like for these cases, the functional way outperforms the procedural way. But when compiled, the procedural code outperforms the functional code.
Looking at what `Compile` does to `Do` loops is instructive. Consider this:
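(The original input isn't shown; below is a minimal reconstruction consistent with the discussion that follows: an uncompiled double `Do` loop running about n(n-1)/2 times, whose body is a chain of constant expressions.)

    n = 2*1200;
    (* one constant expression in the body *)
    AbsoluteTiming[Do[0.45 + 0.45, {i, n}, {j, i - 1}]]
    (* two constant expressions: takes roughly twice as long,
       so Do is evidently not optimizing its body *)
    AbsoluteTiming[Do[0.45 + 0.45; 0.45 + 0.45, {i, n}, {j, i - 1}]]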
First, it seems safe to assume that `Do` does not automatically compile its argument if it's over some length (as `Map`, `Nest`, etc. do): you can keep adding constants, and the derivative of time taken vs. number of constants is constant. This is further supported by the nonexistence of such an option in `SystemOptions["CompileOptions"]`.

Next, since this loops around `n(n-1)/2` times with `n = 2*L`, i.e. around 3*10^6 times for our `L = 1200`, the time taken for each addition indicates that there is a lot more going on than is necessary.

Next, let us try
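(The original compiled version isn't shown; here is a hypothetical pair, written in two superficially different but equivalent ways, standing in for the `f1` and `f2` referred to below.)

    f1 = Compile[{{L, _Integer}},
       Module[{s = 0.}, Do[s += 0.45 + 0.45, {i, 2 L}, {j, i - 1}]; s]];
    f2 = Compile[{{L, _Integer}},
       Module[{s = 0.}, Do[s = s + 0.9, {i, 2 L}, {j, i - 1}]; s]];
    AbsoluteTiming[f1[1200]]
    AbsoluteTiming[f2[1200]]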
So here things are more reasonable. Let's take a look:
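(Inspecting the byte code with the standard `CompiledFunctionTools` package:)

    Needs["CompiledFunctionTools`"];
    CompilePrint[f1]
    CompilePrint[f2]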
The two `CompilePrint`s give the same output; namely, `f1 == f2` returns `True`.

Now, do:
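(Again a hypothetical pair, chosen so that one applies `Sin` to a run-time argument and the other to a literal constant; note that `Sin[0.45]` evaluates to the 0.43496553411123023 quoted below.)

    g1 = Compile[{{x, _Real}, {n, _Integer}},
       Module[{s = 0.}, Do[s = s + Sin[x], {n}]; s]];
    g2 = Compile[{{n, _Integer}},
       Module[{s = 0.}, Do[s = s + Sin[0.45], {n}]; s]];
    CompilePrint[g1]
    CompilePrint[g2]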
I won't show the full listings, but in the first there is a line `R3 = Sin[R1]`, while in the second there is an assignment to a register, `R1 = 0.43496553411123023` (which, however, is reassigned in the innermost part of the loop by `R2 = R1`; perhaps if we output to C this will eventually be optimized by gcc).

So, in these very simple cases, uncompiled
`Do` just blindly executes the body without inspecting it, while `Compile` does do various simple optimizations (in addition to outputting byte code). While here I am choosing examples that exaggerate how literally `Do` interprets its argument, this kind of thing partly explains the large speedup after compiling.

As for the huge speedup in Mike Bantegui's question yesterday, I think the speedup in such simple problems (just looping and multiplying things) is because there is no reason that automatically produced C code can't be optimized by the compiler to get things running as fast as possible. The C code produced is too hard for me to understand, but the byte code is readable and I don't think there is anything all that wasteful in it. So it is not that shocking that it is so fast when compiled to C. Using built-in functions shouldn't be any faster than that, since there shouldn't be any difference in the algorithm (if there is, the
`Do` loop shouldn't have been written that way).

All this should be checked case by case, of course. In my experience, `Do` loops are usually the fastest way to go for this kind of operation. However, compilation has its limits: if you are producing large objects and trying to pass them around between two compiled functions (as arguments), the bottleneck can be this transfer. One solution is simply to put everything into one giant function and compile that; this ends up being harder and harder to do (you are forced to write C in mma, so to speak). Or you can try compiling the individual functions and using `CompilationOptions -> {"InlineCompiledFunctions" -> True}` in the `Compile`.
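(A minimal sketch of that idiom, with hypothetical functions; the `With` injects the inner `CompiledFunction` into the body so it can be inlined rather than called through the main evaluator.)

    inner = Compile[{{x, _Real}}, x^2 + 1.];
    outer = With[{inner = inner},
       Compile[{{x, _Real}}, 2.*inner[x],
         CompilationOptions -> {"InlineCompiledFunctions" -> True}]];
    outer[3.]  (* 20. *)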
Things can get tricky very fast, though.

But this is getting too long.