I would like to measure the system time it takes to execute some code. To do this, I know I should sandwich said code between two calls to getrusage(), but I am getting some unexpected results...
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    struct rusage usage;
    struct timeval start, end;
    int i, j, k = 0;

    getrusage(RUSAGE_SELF, &usage);
    start = usage.ru_stime;

    for (i = 0; i < 10000; i++) {
        /* Double loop for more interesting results. */
        for (j = 0; j < 10000; j++) {
            k += 20;
        }
    }

    getrusage(RUSAGE_SELF, &usage);
    end = usage.ru_stime;

    /* tv_usec is microseconds, so zero-pad it to six digits. */
    printf("Started at: %ld.%06lds\n", start.tv_sec, start.tv_usec);
    printf("Ended at: %ld.%06lds\n", end.tv_sec, end.tv_usec);
    return 0;
}
I would hope that this produces two different numbers, but alas! After seeing my computer think for a second or two, this is the result:
Started at: 0.001999s
Ended at: 0.001999s
Am I not using getrusage() correctly? Shouldn't these two numbers be different? If I am fundamentally wrong, is there another way to use getrusage() to measure the system time of some source code? Thank you for reading.
You should use usage.ru_utime, which is the user CPU time used, instead.

Use gprof. It will give you the time taken by each function. Install gprof and compile with -pg (note that -fprofile-arcs and -ftest-coverage are flags for gcov coverage analysis, not for gprof).
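As an illustration of the ru_utime suggestion, here is a minimal sketch that times the same kind of loop with user CPU time; the tv_to_sec helper and the loop bound are just illustrative choices, not part of the getrusage() API.

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

/* Illustrative helper: convert a struct timeval to seconds. */
static double tv_to_sec(struct timeval tv) {
    return (double)tv.tv_sec + (double)tv.tv_usec / 1e6;
}

int main(void) {
    struct rusage usage;
    volatile long k = 0;  /* volatile keeps the compiler from deleting the loop */
    long i;

    getrusage(RUSAGE_SELF, &usage);
    double start = tv_to_sec(usage.ru_utime);  /* user time, not ru_stime */

    for (i = 0; i < 100000000L; i++) {
        k += 20;
    }

    getrusage(RUSAGE_SELF, &usage);
    double end = tv_to_sec(usage.ru_utime);

    printf("User CPU time: %.6f s\n", end - start);
    return 0;
}

Unlike ru_stime, this number should grow with the loop, because the arithmetic runs entirely in user mode.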
You're misunderstanding the difference between "user" and "system" time. Your example code executes primarily in user mode (i.e., running your application code) while you are measuring, but "system" time measures time spent executing in kernel mode (i.e., processing system calls).

ru_stime is the correct field to measure system time; your test application just happens not to accrue any such time between the two points you check.
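To see ru_stime actually move, the code being measured has to spend time in the kernel. Here is a minimal sketch, assuming repeated write() calls to /dev/null as a convenient way to generate system calls (the file and the iteration count are arbitrary choices for illustration):

#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
    struct rusage usage;
    struct timeval start, end;
    char buf[4096] = {0};
    int i;

    int fd = open("/dev/null", O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    getrusage(RUSAGE_SELF, &usage);
    start = usage.ru_stime;

    /* Each write() traps into the kernel, so this loop accrues system time. */
    for (i = 0; i < 1000000; i++) {
        if (write(fd, buf, sizeof buf) < 0)
            break;
    }

    getrusage(RUSAGE_SELF, &usage);
    end = usage.ru_stime;

    printf("System time used: %.6f s\n",
           (double)(end.tv_sec - start.tv_sec) +
           (double)(end.tv_usec - start.tv_usec) / 1e6);

    close(fd);
    return 0;
}

This version should print a clearly nonzero system time, while the pure arithmetic loop in the question does not.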