Maximum resident set size does not make sense

Published 2020-05-18 05:03

Question:

I am trying to measure the memory consumption of a running program on Linux. I wrote a small C++ program that allocates 1 GB of memory, then used time to print its "Maximum resident set size":

/usr/bin/time -f '%Uu %Ss %er %MkB %x %C' ./takeMem 1000000000

0.85u 0.81s 1.68r **3910016kB** 0 ./takeMem 1000000000

According to man time, I should interpret this as the program having a maximum resident set size of about 3.9 GB, even though it allocated only 1 GB. That does NOT make sense.

Does anybody know what causes the "Maximum resident set size" to be that high?

The code is quite simple:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    // Allocation size in bytes, taken from the command line.
    int memLength = atoi(argv[1]);
    fprintf(stderr, "Allocating %d memory...", memLength);
    unsigned char* p = new unsigned char[memLength];
    fprintf(stderr, "Done\n");

    // Write random bytes so the allocated pages actually become resident.
    while (true) {
        int i = rand() % memLength;
        char v = rand() % 256;
        p[i] = v;
    }

    return 0;
}
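
For reference, this is roughly how it was built and run (a sketch; the source file name takeMem.cpp is an assumption, and it must be compiled as C++ because of new and while (true)):

g++ -O2 -o takeMem takeMem.cpp
/usr/bin/time -f '%Uu %Ss %er %MkB %x %C' ./takeMem 1000000000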

Answer 1:

I stumbled across this a while ago. It's a bug in GNU time: the reported value is four times too large, because time assumes ru_maxrss is given in pages and converts it to kB, even though on Linux it is already in kB. Dividing your number by 4 gives 3910016 / 4 = 977504 kB, which is very close to the ~976563 kB of the 1 GB allocation plus a little runtime overhead, so the measurement itself is consistent. A small self-check program is sketched after the links below. You might want to check:

http://groups.google.com/group/gnu.utils.help/browse_thread/thread/bb530eb072f86e18/83599c4828de175b

http://forums.whirlpool.net.au/archive/1693957
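
As a cross-check (a minimal sketch, not from the original answer), the program can ask the kernel for its own peak RSS via getrusage(); on Linux, ru_maxrss is reported in kilobytes, so comparing that value with time's %M output makes the factor-of-4 discrepancy visible. The default size and the page-touching loop here are illustrative assumptions:

#include <cstdio>
#include <cstdlib>
#include <sys/resource.h>

int main(int argc, char *argv[])
{
    // Allocation size in bytes (1 GB by default; value is only illustrative).
    long memLength = (argc > 1) ? atol(argv[1]) : 1000000000L;

    unsigned char *p = new unsigned char[memLength];

    // Touch one byte per 4 kB page so the whole allocation becomes resident.
    for (long i = 0; i < memLength; i += 4096)
        p[i] = 1;

    // On Linux, getrusage() reports ru_maxrss in kilobytes (not pages).
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    fprintf(stderr, "ru_maxrss = %ld kB\n", ru.ru_maxrss);

    delete[] p;
    return 0;
}

Running this under the same /usr/bin/time invocation should show a %M value roughly four times the ru_maxrss the program prints, if the installed GNU time has the bug.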



Tags: linux memory