I got this program from this link (https://gist.github.com/jiewmeng/3787223). I have been searching the web to get a better understanding of processor caches (L1 and L2). I want to be able to write a program that would let me guess the size of the L1 and L2 caches on my new laptop (just for learning purposes; I know I could check the spec).
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define KB 1024
#define MB 1024 * 1024

int main() {
    unsigned int steps = 256 * 1024 * 1024;
    static int arr[4 * 1024 * 1024];
    int lengthMod;
    unsigned int i;
    double timeTaken;
    clock_t start;
    int sizes[] = {
        1 * KB, 4 * KB, 8 * KB, 16 * KB, 32 * KB, 64 * KB, 128 * KB, 256 * KB,
        512 * KB, 1 * MB, 1.5 * MB, 2 * MB, 2.5 * MB, 3 * MB, 3.5 * MB, 4 * MB
    };
    int results[sizeof(sizes)/sizeof(int)];
    int s;

    /* for each size to test for ... */
    for (s = 0; s < sizeof(sizes)/sizeof(int); s++)
    {
        lengthMod = sizes[s] - 1;
        start = clock();
        for (i = 0; i < steps; i++)
        {
            arr[(i * 16) & lengthMod] *= 10;
            arr[(i * 16) & lengthMod] /= 10;
        }
        timeTaken = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%d, %.8f \n", sizes[s] / 1024, timeTaken);
    }
    return 0;
}
The output of the program on my machine is as follows. How do I interpret the numbers? What does this program tell me?
1, 1.07000000
4, 1.04000000
8, 1.06000000
16, 1.13000000
32, 1.14000000
64, 1.17000000
128, 1.20000000
256, 1.21000000
512, 1.19000000
1024, 1.23000000
1536, 1.23000000
2048, 1.46000000
2560, 1.21000000
3072, 1.45000000
3584, 1.47000000
4096, 1.94000000
you need direct access to memory
I do not mean DMA transfer by this. Memory must be accessed by the CPU of course (otherwise you are not measuring the caches), but as directly as it can be ... so the measurements will probably not be very accurate on Windows/Linux, because services and other processes can mess with the caches during runtime. Measure many times and average for better results (or use the fastest time, or filter the samples; see the sketch below the list that follows). For best accuracy use DOS and asm, for example.
so you measure the memory transfer and not something else, like in your code!!!
measure the raw transfer times and plot a graph:
- the x axis is the transfer block size
- the y axis is the transfer speed
- zones with the same transfer rate are consistent with the appropriate cache layer
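As a rough, portable illustration of the advice above (time the raw copy of a block, keep only the fastest of several runs to filter out OS noise, and report the speed so it can be plotted against the block size), here is a sketch of my own rather than code from the answer:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Hypothetical helper (not from the answer): move ~256 MB through a block of
   the given size, repeat the measurement `tries` times and keep the fastest
   run to filter out OS noise, then return the speed in MByte/sec. */
static double measure_block(size_t bytes, int tries)
{
    const size_t total = 256u * 1024 * 1024;     /* data moved per run */
    size_t loops = total / bytes, l;
    char *src = malloc(bytes), *dst = malloc(bytes);
    double best = 1e30;
    int t;

    memset(src, 1, bytes);                       /* touch the memory first */
    for (t = 0; t < tries; t++) {
        clock_t start = clock();
        for (l = 0; l < loops; l++)
            memcpy(dst, src, bytes);             /* the raw transfer being timed */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        if (secs > 0.0 && secs < best) best = secs;   /* keep the fastest sample */
    }
    free(src); free(dst);
    return (double)total / best / (1024.0 * 1024.0);
}

int main(void)
{
    size_t size;
    /* sweep the block size; plot size (x axis) against MByte/sec (y axis) */
    for (size = 1024; size <= 8u * 1024 * 1024; size *= 2)
        printf("%8lu bytes  %10.1f MByte/sec\n",
               (unsigned long)size, measure_block(size, 5));
    return 0;
}
```

Plateaus in the printed speeds, separated by drops as the block size outgrows each cache level, are what the graph described above should show.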
[Edit1] I could not find my old source code for this, so I put something together right now in C++ for Windows:
Time measurement:
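The original listing is not shown here; as a stand-in, a sketch of what the tbeg/tend helpers referenced further down might look like, assuming the Windows high-resolution performance counter (the names tbeg/tend come from the answer's text, the implementation is a guess):

```c
#include <windows.h>

/* Hypothetical stand-in for the missing timing helpers, based on the
   Windows performance counter API. */
static LARGE_INTEGER t0;

void tbeg(void)                         /* start the stopwatch */
{
    QueryPerformanceCounter(&t0);
}

double tend(void)                       /* elapsed time since tbeg() in [ms] */
{
    LARGE_INTEGER t1, f;
    QueryPerformanceCounter(&t1);
    QueryPerformanceFrequency(&f);
    return 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)f.QuadPart;
}
```

The millisecond return value here is purely an assumption; the answer does not say which unit its original helpers used.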
Benchmark (32bit app):
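The original benchmark listing is likewise not shown. Judging by the array names referenced below, it timed rep movsd / rep stosd style transfers; as a rough, hypothetical stand-in, this sketch times memcpy (a movsd-like copy) and memset (a stosd-like fill) over a range of block sizes and stores the resulting speeds in pmovsd[] and pstosd[]:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void tbeg(void);          /* timing helpers from the sketch above */
double tend(void);        /* elapsed milliseconds since tbeg() */

#define N_SIZES 16

double pmovsd[N_SIZES];   /* copy (movsd-like) transfer rates [MByte/sec] */
double pstosd[N_SIZES];   /* fill (stosd-like) transfer rates [MByte/sec] */

int main(void)
{
    const size_t total = 256u * 1024 * 1024;                 /* bytes moved per test */
    const size_t max_size = (size_t)1024 << (N_SIZES - 1);   /* largest block: 32 MB */
    char *src = malloc(max_size), *dst = malloc(max_size);
    size_t size = 1024, i, loops;
    double ms;
    int s;

    memset(src, 1, max_size);
    for (s = 0; s < N_SIZES; s++, size *= 2) {
        loops = total / size;

        tbeg();                                              /* time the block copies */
        for (i = 0; i < loops; i++) memcpy(dst, src, size);
        ms = tend();
        pmovsd[s] = (double)total / (ms / 1000.0) / (1024.0 * 1024.0);

        tbeg();                                              /* time the block fills */
        for (i = 0; i < loops; i++) memset(dst, 0, size);
        ms = tend();
        pstosd[s] = (double)total / (ms / 1000.0) / (1024.0 * 1024.0);

        printf("%8lu bytes  copy %10.1f  fill %10.1f [MByte/sec]\n",
               (unsigned long)size, pmovsd[s], pstosd[s]);
    }
    free(src); free(dst);
    return 0;
}
```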
Where the arrays pmovsd[] and pstosd[] hold the measured 32bit transfer rates in [MByte/sec]. In the original code you can configure the benchmark by using or commenting out the two defines at the start of the measure function.

Graphical output: the measured transfer rates plotted as a graph.
To maximize accuracy you can change the process priority class to maximum. So create the measuring thread with max priority (I tried it but it actually messed things up) and add a critical section to it so the test will not be interrupted by the OS as often (no visible difference with and without threads). If you want to use Byte transfers, then take into account that it uses only 16bit registers, so you need to add loop and address iterations.
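As a small, hypothetical illustration of the max-priority measuring thread mentioned above (plain Windows API; the answer's own code is not shown):

```c
#include <windows.h>

/* Hypothetical illustration: run the timed loops in a thread with the highest
   scheduling priority so the OS preempts it less often. */
DWORD WINAPI measure_thread(LPVOID arg)
{
    /* ... the benchmark loops would go here ... */
    return 0;
}

void run_measurement(void)
{
    HANDLE h = CreateThread(NULL, 0, measure_thread, NULL, CREATE_SUSPENDED, NULL);
    SetThreadPriority(h, THREAD_PRIORITY_TIME_CRITICAL);   /* max thread priority */
    ResumeThread(h);                                       /* start measuring */
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
}
```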
PS. If you try this on a notebook, then you should first warm up the CPU to be sure that you measure at top CPU/Mem speed, so no Sleeps. Some stupid loops before the measurement will do it, but they should run for at least a few seconds. You can also synchronize this by measuring the CPU frequency and looping while it is still rising; stop after it saturates. The asm instruction RDTSC is best for this (but beware, its meaning has changed slightly on new architectures). If you are not under Windows, then change the functions tbeg, tend to your OS equivalents.

[Edit2] further improvements of accuracy
Well, after finally solving the problem with the VCL affecting measurement accuracy (which I discovered thanks to this question, and more about it here), to improve accuracy you can do the following prior to the benchmark:
- set the process priority class to realtime
- set the process affinity to a single CPU (so you measure just one CPU on a multi-core machine)
- flush the data and instruction caches
For example:
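The example code is missing from this copy; a hypothetical sketch of those three steps with the plain Windows API could look like the following (the cache "flush" is just a walk over a large dummy buffer, which evicts the data cache but not the instruction cache):

```c
#include <windows.h>
#include <stdlib.h>

/* Hypothetical pre-benchmark setup following the three steps above. */
void prepare_benchmark(void)
{
    /* 1. realtime priority class */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);

    /* 2. pin the process to a single CPU so only one core is measured */
    SetProcessAffinityMask(GetCurrentProcess(), 1);

    /* 3. crude data-cache flush: walk a buffer much larger than the last
          level cache so previously cached benchmark data gets evicted */
    {
        size_t bytes = 64u * 1024 * 1024;
        volatile char *junk = malloc(bytes);
        size_t i;
        for (i = 0; i < bytes; i++) junk[i] = (char)i;
        free((void *)junk);
    }
}
```

Note that REALTIME_PRIORITY_CLASS is only granted to sufficiently privileged processes; without the right privileges Windows falls back to high priority.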
With that in place, the resulting measurement is noticeably more accurate.
Your lengthMod variable doesn't do what you think it does. You want it to limit the size of your data set, but you have 2 problems there:

1. Masking with the size itself keeps only that one bit: if lengthMod is 1k (0x400), then all indices lower than 0x400 (meaning i = 1 to 63) simply map to index 0, so you'll always hit the cache. That's probably why the results are so fast. Instead, use lengthMod - 1 to create a correct mask (0x400 --> 0x3ff), which masks just the upper bits and leaves the lower ones intact.

2. Some of your sizes are not a power of 2, so lengthMod - 1 isn't going to work there, as some of the mask bits would still be zeros. Either remove them from the list, or use a modulo operation instead of the lengthMod - 1 mask altogether. See also my answer here for a similar case.

Another issue is that 16B jumps are probably not enough to skip a cache line, as most common CPUs work with 64-byte cache lines, so you get only one miss for every 4 iterations. Use (i * 64) instead.
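Putting those suggestions together, a hypothetical drop-in replacement for the question's timing loop might look like this (it reuses the variables already declared in the question's program; the modulo also covers the non-power-of-2 sizes, at the cost of a division per access):

```c
/* Hypothetical rewrite of the timing loop following the suggestions above:
   the modulo keeps every size (including the non-powers of 2) inside the
   tested range, and a stride of 64 array elements makes sure each access
   lands on a new cache line even with 64-byte lines. */
for (s = 0; s < sizeof(sizes) / sizeof(int); s++)
{
    start = clock();
    for (i = 0; i < steps; i++)
    {
        arr[(i * 64) % sizes[s]] *= 10;
        arr[(i * 64) % sizes[s]] /= 10;
    }
    timeTaken = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%d, %.8f \n", sizes[s] / 1024, timeTaken);
}
```

The division slows every iteration by roughly the same amount, so the relative jumps in the timings as the working set outgrows each cache level should still be visible.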