I would like to learn more about low-level code optimization, and how to take advantage of the underlying machine architecture. I am looking for good pointers on where to read about this topic.
More details:
I am interested in optimization in the context of scientific computing (which involves a lot of number crunching, but not exclusively) in low-level languages such as C/C++. I am particularly interested in optimization methods that are not obvious unless one has a good understanding of how the machine works (which I don't---yet).
For example, it's clear that a better algorithm is faster, without knowing anything about the machine it runs on. It's not at all obvious whether it matters if one loops through the columns or the rows of a matrix first. (It does: it's better to traverse the matrix so that elements stored at adjacent memory locations are read successively.)
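To make the example concrete, here is a minimal sketch of what I mean (assuming the usual row-major C/C++ layout, where each row is contiguous in memory):

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t N = 1024;

// Fast: the inner loop advances along a row, so successive accesses hit
// adjacent memory locations, making good use of each cache line fetched.
double sum_row_major(const std::vector<double>& a) {
    double sum = 0.0;
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j)
            sum += a[i * N + j];
    return sum;
}

// Slow: the inner loop jumps N doubles ahead on every step, so almost
// every access lands on a different cache line.
double sum_col_major(const std::vector<double>& a) {
    double sum = 0.0;
    for (std::size_t j = 0; j < N; ++j)
        for (std::size_t i = 0; i < N; ++i)
            sum += a[i * N + j];
    return sum;
}
```

As far as I understand it, the two functions do the same arithmetic; only the traversal order, and hence the cache behaviour, differs.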
Basic advice on the topic or pointers to articles are most welcome.
Answers
Got answers with lots of great pointers, a lot more than I'll ever have time to read. Here's a list of all of them:
- The software optimization cookbook from Intel (book)
- What every programmer should know about memory (pdf book)
- Write Great Code, Volume 2: Thinking Low-Level, Writing High-Level (book)
- Software optimization resources by Agner Fog (five detailed pdf manuals)
I'll need some time to skim them before deciding which to focus on (not having time for all).
Drepper's What Every Programmer Should Know About Memory [pdf] is a good reference on one aspect of low-level optimisation.
For Intel architectures this is priceless: The Software Optimization Cookbook, Second Edition
It's been a few years since I read it, but Write Great Code, Volume 2: Thinking Low-Level, Writing High-Level by Randall Hyde was quite good. It gives good examples of how C/C++ code translates into assembly, e.g. what really happens when you have a big switch statement.
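To give a flavour (this is my own sketch, not an example taken from the book): with a dense run of case labels, compilers typically emit a jump table, i.e. a bounds check plus a single indirect jump through an array of code addresses, rather than comparing against each case in turn.

```cpp
// A dense, contiguous set of case labels: most compilers compile this
// switch to a jump table (one bounds check + one indirect jump), not a
// chain of compare-and-branch instructions.
int dispatch(int op, int x) {
    switch (op) {
        case 0: return x + 1;
        case 1: return x - 1;
        case 2: return x * 2;
        case 3: return x / 2;
        case 4: return -x;
        default: return x;
    }
}
```

Compare the assembly output (e.g. g++ -O2 -S) with a version whose labels are sparse, like 0, 100, 5000: that one usually becomes a series of conditional branches instead.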
Also, altdevblogaday.com is focused on game development, but the programming articles might give you some ideas.
An interesting book about bit manipulation and clever ways of doing low-level things is Hacker's Delight.
It is definitely worth a read for anyone interested in low-level coding.
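To give a flavour of its contents, here are two tricks in that spirit (my own renditions, not copied from the book):

```cpp
#include <cstdint>

// Isolate the lowest set bit: in two's complement, -x flips every bit
// above the lowest 1 and preserves that 1, so the AND keeps only it.
std::uint32_t lowest_set_bit(std::uint32_t x) {
    return x & (0u - x);  // 0u - x is -x in unsigned arithmetic
}

// Count set bits without a loop: classic divide-and-conquer popcount.
std::uint32_t popcount(std::uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);                  // pairwise sums
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);  // 4-bit sums
    return (((x + (x >> 4)) & 0x0F0F0F0Fu) * 0x01010101u) >> 24;
}
```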
Check out: http://www.agner.org/optimize/
C and C++ are usually the languages used for this because of their speed (leaving aside Fortran, since you didn't mention it). One thing you can take advantage of (which the icc compiler does a lot) is the SSE instruction sets for floating-point number crunching. Another possibility is using the CUDA and Stream APIs (for Nvidia and ATI cards respectively) to do very fast floating-point operations on the graphics card while leaving the CPU free to do the rest of the work.
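As a small illustration of the SSE point, here is a minimal sketch using the standard x86 intrinsics (add_arrays is just a name I made up):

```cpp
#include <xmmintrin.h>  // SSE intrinsics
#include <cstddef>

// Adds two float arrays four elements at a time using 128-bit SSE
// registers. Assumes n is a multiple of 4; real code would handle the
// remainder (and possibly alignment) separately.
void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);  // load 4 floats (unaligned OK)
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));  // 4 adds at once
    }
}
```

Note that for a loop this simple an optimising compiler will often auto-vectorise anyway; intrinsics earn their keep when the compiler can't prove the transformation safe on its own.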
Another approach is hands-on comparison. You can get a library like Blitz++ (http://www.oonumerics.org/blitz/), which - I've been told - implements aggressive optimisations for numeric/scientific computing, then write some simple programs doing operations of interest to you (e.g. matrix multiplications). Use Blitz++ to perform them, write your own class that does the same, and if Blitz++ proves faster, start investigating its implementation until you realise why. (If yours is significantly faster, you can tell the Blitz++ developers!)
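A minimal timing harness for that kind of comparison might look like the following (my sketch; the naive triple loop plays the role of "your own class", and you would time the Blitz++ version the same way):

```cpp
#include <chrono>
#include <cstddef>
#include <iostream>
#include <vector>

// Naive O(n^3) matrix multiply: the baseline to compare a tuned
// library against.
void matmul_naive(const std::vector<double>& a, const std::vector<double>& b,
                  std::vector<double>& c, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double sum = 0.0;
            for (std::size_t k = 0; k < n; ++k)
                sum += a[i * n + k] * b[k * n + j];
            c[i * n + j] = sum;
        }
}

int main() {
    const std::size_t n = 512;
    std::vector<double> a(n * n, 1.0), b(n * n, 2.0), c(n * n);

    auto start = std::chrono::steady_clock::now();
    matmul_naive(a, b, c, n);
    auto stop = std::chrono::steady_clock::now();

    std::cout << "naive matmul: "
              << std::chrono::duration<double>(stop - start).count()
              << " s\n";
}
```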
You should end up learning about a lot of things, for example:
- memory cache access patterns
- expression templates (there are some bad links atop Google search results for expression templates - the key property you want to find discussion of is that they can encode many successive steps in a chain of operations so that they are all applied during one loop over a data set; see the sketch after this list)
- some CPU-specific instructions (though I haven't checked whether they've used such non-portable techniques)...
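As promised above, here is a stripped-down sketch of the expression-template idea (my own illustration, far simpler than what Blitz++ actually does): each '+' builds a lightweight expression object, and the single loop over the data runs only when the expression is assigned.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// An expression node representing lhs + rhs. Nothing is computed here;
// operator[] evaluates one element on demand.
template <typename L, typename R>
struct Sum {
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

struct Vec {
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i) { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning from any expression runs ONE loop over the data, no
    // matter how many '+' operations the expression chains together.
    template <typename E>
    Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

// '+' builds an expression object instead of a temporary vector.
// (A real library would constrain this template to its own types.)
template <typename L, typename R>
Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }

int main() {
    Vec a(1000, 1.0), b(1000, 2.0), c(1000, 3.0), d(1000);
    d = a + b + c;            // one fused loop, no intermediate vectors
    std::cout << d[0] << "\n";  // prints 6
}
```

The point is that d = a + b + c allocates no temporary vectors and makes a single pass over the data, however long the chain of '+' grows.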
I learned a lot from the book Inner Loops. It's ancient now, in computer terms, but it's very well written, and Rick Booth is so enthusiastic about his subject that I would still say it's worth looking at to see the kind of mindset you need to make a CPU fly.