Full Re-Write/Update for clarity (and your sanity; it was a bit too long) ... (Old Post)
For an assignment, I need to find the levels (L1, L2, ...) and the size of each cache. Given the hints and what I've found so far, I think the idea is to create arrays of different sizes, read through them, and time those accesses:
```
sizes = [1K, 4K, 256K, ...]
foreach size in sizes
    create array of `size`
    start timer
    for i = 0 to n                    // just keep accessing the array
        arr[(i * 16) % arr.length]++  // i * 16 is meant to touch a new cache line each access ... see link
    stop timer
    record/print time
```
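For concreteness, here is a minimal C sketch of that idea. It assumes `int` elements, power-of-two sizes (so `size - 1` can serve as the mask, like the `lengthMod` used later), and plain `clock()` for timing; the names and the size list are illustrative, not the assignment's required form.

```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* Illustrative sizes in bytes; powers of two so (size - 1) works as a mask. */
    size_t sizes[] = { 1024, 4096, 32768, 262144, 1048576, 4194304 };
    const long steps = 64 * 1024 * 1024;            /* total accesses per size */

    for (size_t s = 0; s < sizeof sizes / sizeof sizes[0]; s++) {
        size_t n = sizes[s] / sizeof(int);
        size_t lengthMod = n - 1;
        int *arr = calloc(n, sizeof(int));

        clock_t start = clock();
        for (long i = 0; i < steps; i++)
            arr[(i * 16) & lengthMod]++;            /* 16 ints = 64 bytes = one cache line */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        /* Printing arr[0] keeps the loop from being optimized away entirely. */
        printf("%8zu bytes: %.3f s (arr[0] = %d)\n", sizes[s], secs, arr[0]);
        free(arr);
    }
    return 0;
}
```

Plotting the time against the array size should show jumps near each cache capacity, which is the effect the assignment is after.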
UPDATED (28 Sept 6:57PM UTC+8)
See also full source
OK, following @mah's advice, I might have fixed the SNR problem, and I also found a method of timing my code (`wall_clock_time`, from a lab example).
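The lab's `wall_clock_time` isn't shown here; as an assumption, it is probably a thin wrapper over `gettimeofday()` along these lines:

```
#include <stdlib.h>
#include <sys/time.h>

/* Assumed shape of the lab's helper: current wall-clock time in seconds. */
double wall_clock_time(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}
```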
However, I seem to be getting incorrect results. I am on an Intel Core i3-2100: [SPECS]
- L1: 2 x 32K
- L2: 2 x 256K
- L3: 3MB
The results I got, plotted:

[Graph 1: lengthMod = 1KB to 512K]

The base of the 1st peak is 32K ... reasonable ... but the 2nd is 384K ... why? I'm expecting 256K.
[Graph 2: lengthMod = 512K to 4MB]

Why might this range be such a mess? I also read about prefetching and interference from other applications, so I closed as many programs as possible while the script was running. Even so, across multiple runs, the data for 1MB and above is consistently messy. Why?
To answer your question about the weird numbers above 1MB: it's fairly simple; it comes down to cache eviction policies that interact with branch prediction, and the fact that the L3 cache is shared between the cores.
A Core i3 has a very interesting cache structure; actually, any modern processor does. You should read about them on Wikipedia: there are all sorts of ways for the processor to decide "well, I probably won't need this..." and then put a line in the victim cache, or any number of other things. L1/L2/L3 cache timings can be very complex, depending on a large number of factors and on individual design decisions.

On top of that, all these decisions and more (see the Wikipedia articles on the subject) have to be synchronized between the two cores' caches. The methods for synchronizing the shared L3 cache with the separate L1 and L2 caches can be ugly; they can involve back-tracking and re-doing calculations, among other things. It's highly unlikely you'll ever have a completely free second core with nothing competing for L3 cache space, so some synchronization weirdness is almost guaranteed.
In general, if you are working on data, say convolving with a kernel, you want to make sure the working set fits within the L1 cache and shape your algorithm around that. The L3 cache isn't really meant for working on a data set the way you're doing it (though it is still better than main memory!).
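As an illustration of shaping work around L1, here is a small blocking sketch; the 32 KB figure and the per-element work are placeholder assumptions, not a recipe for any particular kernel.

```
#include <stddef.h>

#define L1_BYTES (32 * 1024)                      /* assumed L1 data-cache size */
#define BLOCK (L1_BYTES / sizeof(float) / 2)      /* half of L1, leaving room for other data */

/* Do all `passes` over one L1-sized block before moving to the next,
 * instead of sweeping the whole array once per pass. */
void smooth_blocked(float *data, size_t n, int passes) {
    for (size_t base = 0; base < n; base += BLOCK) {
        size_t end = (base + BLOCK < n) ? base + BLOCK : n;
        for (int p = 0; p < passes; p++)
            for (size_t i = base; i < end; i++)
                data[i] = data[i] * 0.5f + 0.25f; /* placeholder per-element work */
    }
}
```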
I swear if I was the one having to implement cache algorithms I'd go insane.
For benchmarking with varying strides, you could try `lat_mem_rd` from the lmbench package; it's open source: http://www.bitmover.com/lmbench/lat_mem_rd.8.html

I posted my Windows port at http://habrahabr.ru/post/111876/ -- it's rather lengthy to copy-paste here. It's from two years ago, and I haven't tested it with modern CPUs.
After 10 minutes of searching the Intel instruction manual and another 10 minutes of coding I came up with this (for Intel based processors):
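A sketch along those lines, enumerating CPUID leaf 4 ("Deterministic Cache Parameters") with the `__cpuid_count` macro from GCC/clang's `<cpuid.h>`; the bit positions follow the referenced manual, but double-check them there before relying on it.

```
#include <stdio.h>
#include <cpuid.h>   /* GCC/clang on x86: provides the __cpuid_count macro */

int main(void) {
    /* CPUID leaf 4; assumes a CPU recent enough to support it.
     * Enumerate sub-leaves until the cache-type field reads 0. */
    for (unsigned idx = 0; ; idx++) {
        unsigned eax, ebx, ecx, edx;
        __cpuid_count(4, idx, eax, ebx, ecx, edx);

        unsigned type = eax & 0x1F;  /* 0 = no more caches, 1 = data, 2 = instruction, 3 = unified */
        if (type == 0)
            break;

        unsigned level      = (eax >> 5) & 0x7;
        unsigned ways       = ((ebx >> 22) & 0x3FF) + 1;
        unsigned partitions = ((ebx >> 12) & 0x3FF) + 1;
        unsigned line_size  = (ebx & 0xFFF) + 1;
        unsigned sets       = ecx + 1;
        unsigned size       = ways * partitions * line_size * sets;

        printf("L%u %-11s %7u KB, %2u-way, %u-byte lines\n",
               level,
               type == 1 ? "data" : type == 2 ? "instruction" : "unified",
               size / 1024, ways, line_size);
    }
    return 0;
}
```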
Reference: http://download.intel.com/products/processor/manual/325462.pdf 3-166 Vol. 2A
This is much more reliable than measuring cache latencies, since it is pretty much impossible to turn off cache prefetching on a modern processor. If you need similar information for a different processor architecture, you will have to consult the respective manual.
Edit: Added cache type descriptor. Edit2: Added Cache Level indicator. Edit3: Added more documentation.
The time it takes to measure your time (that is, the time just to call the `clock()` function) is many, many (many many many....) times greater than the time it takes to perform `arr[(i*16) & lengthMod]++`. This extremely low signal-to-noise ratio (among other likely pitfalls) makes your plan unworkable. A large part of the problem is that you're trying to measure a single iteration of the loop; the sample code you linked is attempting to measure a full set of iterations (read the clock before starting the loop, read it again after emerging from the loop, and do not use `printf()` inside the loop). If your loop is large enough, you might be able to overcome the signal-to-noise problem.
As to "what element is being incremented";
arr
is an address of a 1MB buffer;arr[(i * 16) & lengthMod]++;
causes(i * 16) * lengthMod
to generate an offset from that address; that offset is the address of the int that gets incremented. You're performing a shift (i * 16 will turn into i << 4), a logical and, an addition, then either a read/add/write or a single increment, depending on your CPU).Edit: As described, your code suffers from a poor SNR (signal to noise ratio) due to the relative speeds of memory access (cache or no cache) and calling functions just to measure the time. To get the timings you're currently getting, I assume you modified the code to look something like:
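(A sketch of that shape, with an illustrative buffer size and `steps`; the exact figures don't matter, only that the two `clock()` calls bracket the whole loop.)

```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t n = 1024 * 1024 / sizeof(int);   /* 1 MB buffer, as in the question */
    const size_t lengthMod = n - 1;               /* power-of-two size, usable as a mask */
    const long steps = 64 * 1024 * 1024;          /* many accesses, timed as one block */
    int *arr = calloc(n, sizeof(int));

    clock_t start = clock();                      /* read the clock once, before the loop */
    for (long i = 0; i < steps; i++)
        arr[(i * 16) & lengthMod]++;
    clock_t end = clock();                        /* and once after; nothing is timed per access */

    printf("%ld accesses: %.3f s (arr[0] = %d)\n",
           steps, (double)(end - start) / CLOCKS_PER_SEC, arr[0]);
    free(arr);
    return 0;
}
```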
This moves the measurement outside the loop, so you're not measuring a single access (which would really be impossible) but rather measuring `steps` accesses.

You're free to increase `steps` as needed, and this will have a direct impact on your timings. Since the times you're receiving are too close together, and in some cases even inverted (your time oscillates between sizes, which is not likely caused by cache), you might try changing the value of `steps` to `256 * 1024 * 1024` or even larger.

NOTE: You can make `steps` as large as you can fit into a signed int (which should be large enough), since the logical and ensures that you wrap around in your buffer.
I know this! (In reality it is very complicated because of pre-fetching.) Changing the stride allows you to test the properties of the caches; by looking at a graph of the results you will get your answers.
Look at slides 35-42 of http://www.it.uu.se/edu/course/homepage/avdark/ht11/slides/11_Memory_and_optimization-1.pdf

Erik Hagersten is a really good teacher (and also really competent; he was lead architect at Sun at one point), so take a look at the rest of his slides for more great explanations!
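In that spirit, a stride sweep might look like this sketch: it walks a buffer much larger than the 3 MB L3 with growing strides and reports the cost per access. The buffer size and stride range are assumptions, and hardware prefetching will smooth parts of the curve.

```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t n = 16 * 1024 * 1024;            /* 16M ints = 64 MB, well beyond a 3 MB L3 */
    int *arr = malloc(n * sizeof(int));

    for (size_t i = 0; i < n; i++)                /* touch every page once so page faults */
        arr[i] = 0;                               /* don't distort the first measurement   */

    /* One pass over the buffer per stride; the cost per access typically climbs until
     * the stride reaches the cache-line size (64 bytes = 16 ints here), then levels off. */
    for (size_t stride = 1; stride <= 256; stride *= 2) {
        clock_t start = clock();
        for (size_t i = 0; i < n; i += stride)
            arr[i]++;
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("stride %3zu ints: %6.2f ns/access\n", stride, secs * 1e9 / (n / stride));
    }
    free(arr);
    return 0;
}
```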
For Windows, you can use the GetLogicalProcessorInformation function.
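Roughly like this sketch (error handling trimmed; see the Win32 documentation for the full contract):

```
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    DWORD len = 0;
    GetLogicalProcessorInformation(NULL, &len);            /* query the required buffer size */
    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (!info || !GetLogicalProcessorInformation(info, &len))
        return 1;

    for (DWORD i = 0; i < len / sizeof(*info); i++) {
        if (info[i].Relationship == RelationCache) {
            const CACHE_DESCRIPTOR *c = &info[i].Cache;
            printf("L%u cache: %lu KB, %u-byte lines\n",
                   c->Level, (unsigned long)(c->Size / 1024), c->LineSize);
        }
    }
    free(info);
    return 0;
}
```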
For Linux, you may use `sysconf()`. You can find valid arguments for `sysconf` in `/usr/include/unistd.h` or `/usr/include/bits/confname.h`.
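For example, glibc defines cache parameters that can be queried like this; the `_SC_LEVEL*` names are glibc extensions rather than POSIX, and they may report 0 or -1 where the value isn't exposed.

```
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* glibc-specific sysconf names for cache geometry. */
    printf("L1d: %ld bytes, %ld-byte lines\n",
           sysconf(_SC_LEVEL1_DCACHE_SIZE), sysconf(_SC_LEVEL1_DCACHE_LINESIZE));
    printf("L2:  %ld bytes\n", sysconf(_SC_LEVEL2_CACHE_SIZE));
    printf("L3:  %ld bytes\n", sysconf(_SC_LEVEL3_CACHE_SIZE));
    return 0;
}
```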