C Program to determine Levels & Size of Cache

Posted 2019-01-10 23:05

Full rewrite/update for clarity (and your sanity; it was a bit too long) ... (Old Post)

For an assignment, I need to find the levels (L1, L2, ...) and size of each cache. Based on hints and what I've found so far, I think the idea is to create arrays of different sizes, read them, and time those operations:

sizes = [1k, 4k, 256K, ...]
foreach size in sizes 
    create array of `size`

    start timer
    for i = 0 to n // just keep accessing the array
        arr[(i * 16) % arr.length]++ // stride of 16 ints (64 bytes) should touch a different cache line each access ... see link
    stop timer
    record/print time

UPDATED (28 Sept 6:57PM UTC+8)

See also full source

OK, now following @mah's advice, I might have fixed the SNR (signal-to-noise ratio) problem ... and also found a method of timing my code (wall_clock_time from a lab example's code).

However, I seem to be getting incorrect results. I am on an Intel Core i3-2100: [SPECS]

  • L1: 2 x 32K
  • L2: 2 x 256K
  • L3: 3MB

The results I got, in a graph:

lengthMod: 1KB to 512K


The base of the 1st peak is 32K ... reasonable ... the base of the 2nd is 384K ... why? I was expecting 256K.

lengthMod: 512k to 4MB


Why might this range be such a mess?


I also read about prefetching and interference from other applications, so I closed as many programs as possible while the script was running. Across multiple runs, the data for 1MB and above is consistently messy. Why?

Tags: c caching
6 Answers
够拽才男人
#2 · 2019-01-10 23:15

To answer your question about the weird numbers above 1MB: it comes down to cache eviction policies, hardware prefetching/prediction, and the fact that the L3 cache is shared between the cores.

A Core i3 has a very interesting cache structure. Actually, any modern processor does. You should read about them on Wikipedia; there are all sorts of ways for a cache to decide "well, I probably won't need this...", after which it can say "I'll put it in the victim cache", or any number of things. L1/L2/L3 cache timings can be very complex based on a large number of factors and the individual design decisions made.

On top of that, all these decisions and more (see the Wikipedia articles on the subject) have to be synchronized between the two cores' caches. The methods to synchronize the shared L3 cache with separate L1 and L2 caches can be ugly; they can involve back-tracking and re-doing calculations, among other methods. It's highly unlikely you'll ever have a completely free second core with nothing competing for L3 cache space and causing synchronization weirdness.

In general, if you are working on data, say convolving with a kernel, you want to make sure it fits within the L1 cache and shape your algorithm around that. The L3 cache isn't really meant for working on a data set the way you're doing it (though it is better than main memory!).

I swear if I was the one having to implement cache algorithms I'd go insane.

forever°为你锁心
#3 · 2019-01-10 23:16

For benchmarking with varying strides, you could try lat_mem_rd from the lmbench package; it's open source: http://www.bitmover.com/lmbench/lat_mem_rd.8.html

I posted my port for Windows at http://habrahabr.ru/post/111876/ -- it's too lengthy to copy-paste here. That was two years ago; I haven't tested it with modern CPUs.

【Aperson】
#4 · 2019-01-10 23:23

After 10 minutes of searching the Intel instruction manual and another 10 minutes of coding, I came up with this (for Intel-based processors):

#include <stdio.h>
#include <stdint.h>

void i386_cpuid_caches () {
    int i;
    for (i = 0; i < 32; i++) {

        // Variables to hold the contents of the 4 i386 legacy registers
        uint32_t eax, ebx, ecx, edx; 

        eax = 4; // get cache info
        ecx = i; // cache id

        __asm__ (
            "cpuid" // call i386 cpuid instruction
            : "+a" (eax) // contains the cpuid command code, 4 for cache query
            , "=b" (ebx)
            , "+c" (ecx) // contains the cache id
            , "=d" (edx)
        ); // generates output in 4 registers eax, ebx, ecx and edx 

        // taken from http://download.intel.com/products/processor/manual/325462.pdf Vol. 2A 3-149
        int cache_type = eax & 0x1F; 

        if (cache_type == 0) // end of valid cache identifiers
            break;

        const char * cache_type_string;
        switch (cache_type) {
            case 1: cache_type_string = "Data Cache"; break;
            case 2: cache_type_string = "Instruction Cache"; break;
            case 3: cache_type_string = "Unified Cache"; break;
            default: cache_type_string = "Unknown Type Cache"; break;
        }

        int cache_level = (eax >>= 5) & 0x7;

        int cache_is_self_initializing = (eax >>= 3) & 0x1; // does not need SW initialization
        int cache_is_fully_associative = (eax >>= 1) & 0x1;


        // taken from http://download.intel.com/products/processor/manual/325462.pdf 3-166 Vol. 2A
        // ebx packs 3 fields: bits 11:0 line size, 21:12 partitions, 31:22 ways (each stored as value - 1)
        unsigned int cache_sets = ecx + 1;
        unsigned int cache_coherency_line_size = (ebx & 0xFFF) + 1;
        unsigned int cache_physical_line_partitions = ((ebx >>= 12) & 0x3FF) + 1;
        unsigned int cache_ways_of_associativity = ((ebx >>= 10) & 0x3FF) + 1;

        // Total cache size is the product
        size_t cache_total_size = cache_ways_of_associativity * cache_physical_line_partitions * cache_coherency_line_size * cache_sets;

        printf(
            "Cache ID %d:\n"
            "- Level: %d\n"
            "- Type: %s\n"
            "- Sets: %d\n"
            "- System Coherency Line Size: %d bytes\n"
            "- Physical Line partitions: %d\n"
            "- Ways of associativity: %d\n"
            "- Total Size: %zu bytes (%zu kb)\n"
            "- Is fully associative: %s\n"
            "- Is Self Initializing: %s\n"
            "\n"
            , i
            , cache_level
            , cache_type_string
            , cache_sets
            , cache_coherency_line_size
            , cache_physical_line_partitions
            , cache_ways_of_associativity
            , cache_total_size, cache_total_size >> 10
            , cache_is_fully_associative ? "true" : "false"
            , cache_is_self_initializing ? "true" : "false"
        );
    }
}

Reference: http://download.intel.com/products/processor/manual/325462.pdf 3-166 Vol. 2A

This is much more reliable than measuring cache latencies, as it is pretty much impossible to turn off cache prefetching on a modern processor. If you need similar info for a different processor architecture, you will have to consult the respective manual.

Edit: Added cache type descriptor. Edit2: Added Cache Level indicator. Edit3: Added more documentation.

虎瘦雄心在
#5 · 2019-01-10 23:33

The time it takes to measure your time (that is, the time just to call the clock() function) is many many (many many many....) times greater than the time it takes to perform arr[(i*16)&lengthMod]++. This extremely low signal-to-noise ratio (among other likely pitfalls) makes your plan unworkable. A large part of the problem is that you're trying to measure a single iteration of the loop; the sample code you linked is attempting to measure a full set of iterations (read the clock before starting the loop; read it again after emerging from the loop; do not use printf() inside the loop).

If your loop is large enough you might be able to overcome the signal-to-noise ratio problem.

As to "what element is being incremented": arr is the address of a 1MB buffer; arr[(i * 16) & lengthMod]++; causes (i * 16) & lengthMod to generate an offset from that address; that offset is the address of the int that gets incremented. You're performing a shift (i * 16 will turn into i << 4), a logical and, an addition, then either a read/add/write or a single increment, depending on your CPU.

Edit: As described, your code suffers from a poor SNR (signal to noise ratio) due to the relative speeds of memory access (cache or no cache) and calling functions just to measure the time. To get the timings you're currently getting, I assume you modified the code to look something like:

#include <stdio.h>
#include <time.h>

int main() {
    int steps = 64 * 1024 * 1024;
    static int arr[1024 * 1024]; // static: a 4MB array would overflow most stacks
    int lengthMod = (1024 * 1024) - 1;
    int i;
    double timeTaken;
    clock_t start;

    start = clock();
    for (i = 0; i < steps; i++) {
        arr[(i * 16) & lengthMod]++;
    }
    timeTaken = (double)(clock() - start)/CLOCKS_PER_SEC;
    printf("Time for %d: %.12f \n", i, timeTaken);
    return 0;
}

This moves the measurement outside the loop so you're not measuring a single access (which would really be impossible) but rather you're measuring steps accesses.

You're free to increase steps as needed and this will have a direct impact on your timings. Since the times you're receiving are too close together, and in some cases even inverted (your time oscillates between sizes, which is not likely caused by cache), you might try changing the value of steps to 256 * 1024 * 1024 or even larger.

NOTE: You can make steps as large as you can fit into a signed int (which should be large enough), since the logical and ensures that you wrap around in your buffer.

SAY GOODBYE
#6 · 2019-01-10 23:38

I know this! (In reality it is very complicated because of pre-fetching)

 for (times = 0; times < Max; times++) /* repeat many times */
     for (i = 0; i < ArraySize; i = i + Stride)
           dummy = A[i]; /* touch an item in the array */

Changing stride allows you to test the properties of caches. By looking at a graph you will get your answers.

Look at slides 35-42 http://www.it.uu.se/edu/course/homepage/avdark/ht11/slides/11_Memory_and_optimization-1.pdf

Erik Hagersten is a really good teacher (and also really competent; he was a lead architect at Sun at one point), so take a look at the rest of his slides for more great explanations!

贼婆χ
#7 · 2019-01-10 23:41

For Windows, you can use the GetLogicalProcessorInformation function.

For Linux, you may use sysconf(). You can find valid arguments for sysconf in /usr/include/unistd.h or /usr/include/bits/confname.h.
