I'm trying to time some activity in C (matrix multiplication) and read that I should do something like this:
clock_t start = clock();
sleep(3);
clock_t end = clock();
double elapsed_time = (end - start)/(double)CLOCKS_PER_SEC;
printf("Elapsed time: %.2f.\n", elapsed_time);
The output is:
Elapsed time: 0.00.
Why is this happening?
clock estimates the CPU time used by your program; that's the time the CPU has been busy executing instructions belonging to your program. sleep doesn't perform any work, so it consumes no noticeable CPU time (even though it takes wall-clock time).
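To see clock register something, replace the sleep with real CPU work. A minimal sketch, where the loop is an arbitrary stand-in for your matrix multiplication:
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start = clock();

    /* Busy work: the CPU actually executes instructions here, unlike
       during sleep(). volatile keeps the loop from being optimized away. */
    volatile double x = 0.0;
    for (long i = 0; i < 100000000L; i++)
        x += i * 0.5;

    clock_t end = clock();
    printf("CPU time: %.2f s\n", (end - start) / (double)CLOCKS_PER_SEC);
    return 0;
}
This should print a nonzero CPU time, whereas the sleep version prints 0.00.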
If you want to measure wall-clock time, use time:
time_t start = time(NULL);
sleep(3);
printf("%.2f\n", difftime(time(NULL), start)); /* difftime returns seconds as a double */
will print a number close to three.
As a side note, if you want to measure execution time more precisely (in milliseconds), time is not fine-grained enough. You can use gettimeofday instead:
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void) {
    long start, end;
    struct timeval timecheck;

    /* Combine the seconds and microseconds fields into milliseconds. */
    gettimeofday(&timecheck, NULL);
    start = (long)timecheck.tv_sec * 1000 + (long)timecheck.tv_usec / 1000;

    usleep(200000); // 200 ms, standing in for the work being measured

    gettimeofday(&timecheck, NULL);
    end = (long)timecheck.tv_sec * 1000 + (long)timecheck.tv_usec / 1000;

    printf("%ld milliseconds elapsed\n", (end - start));
    return 0;
}
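As an aside, POSIX also offers clock_gettime with CLOCK_MONOTONIC, which is immune to adjustments of the system wall clock (on older glibc you may need to link with -lrt). A minimal sketch:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    usleep(200000); /* 200 ms, standing in for the measured work */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Combine the seconds and nanoseconds fields into milliseconds. */
    long ms = (t1.tv_sec - t0.tv_sec) * 1000
            + (t1.tv_nsec - t0.tv_nsec) / 1000000;
    printf("%ld milliseconds elapsed\n", ms);
    return 0;
}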
Note that time returns a time_t, not a clock_t, so you must use time_t start = time(NULL); and time_t end = time(NULL); to get the correct values.
Use QueryPerformanceFrequency() as described in Orwell's answer, or use the GetSystemTimeAsFileTime() function. The latter has 100 ns granularity but does not increment at that rate; its increment depends on the underlying hardware and the multimedia timer resolution setting.
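A minimal sketch of timing with GetSystemTimeAsFileTime(), assuming a Windows build environment; the Sleep call stands in for the work being measured:
#include <stdio.h>
#include <windows.h>

int main(void) {
    FILETIME ft;
    ULARGE_INTEGER t0, t1;

    GetSystemTimeAsFileTime(&ft);   /* 100 ns units since Jan 1, 1601 */
    t0.LowPart  = ft.dwLowDateTime;
    t0.HighPart = ft.dwHighDateTime;

    Sleep(200);                     /* 200 ms stand-in for the measured work */

    GetSystemTimeAsFileTime(&ft);
    t1.LowPart  = ft.dwLowDateTime;
    t1.HighPart = ft.dwHighDateTime;

    /* Convert 100 ns ticks to milliseconds. */
    printf("%llu milliseconds elapsed\n",
           (unsigned long long)((t1.QuadPart - t0.QuadPart) / 10000));
    return 0;
}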
Keep in mind that the frequency returned by QueryPerformanceFrequency() is treated as a constant. However, since it is generated by hardware, it has an offset and drifts over time. As a result, periods measured with QueryPerformanceCounter() are typically accompanied by errors of many microseconds per second.
I've given this and this answer about similar matters.
If you don't mind being tied to Windows, you can try QueryPerformanceCounter(), a high-resolution timer. It is a lot more precise than time(), which has a granularity of only one second because it returns Unix time in whole seconds.
#include <iostream>
#include <windows.h>

int main() {
    __int64 countspersec = 0;
    __int64 starttime = 0;
    __int64 curtime = 0;

    // Get current time, and determine how fast the clock ticks
    QueryPerformanceCounter((LARGE_INTEGER*)&starttime);
    QueryPerformanceFrequency((LARGE_INTEGER*)&countspersec);
    double secpercount = 1.0 / (double)countspersec;

    /* calculate something */

    // Standard end-start stuff, account for clock speed
    QueryPerformanceCounter((LARGE_INTEGER*)&curtime);
    std::cout << "Time needed: " << (curtime - starttime) * secpercount << " sec\n";
    return 0;
}