I had to write a very simple console program for university that measures the time a user needs to make an input. For that I called clock() before and after an fgets() call. On my Windows computer it worked perfectly, but on my friend's MacBook and Linux PC it gave extremely small results (only a few microseconds).
I tried the following code on all 3 operating systems:
#include <stdio.h>
#include <time.h>
#include <unistd.h>
int main(void)
{
    clock_t t;

    printf("Sleeping for a bit\n");
    t = clock();
    // Alternatively some fgets(...)
    usleep(999999);
    t = clock() - t;
    printf("Processor time spent: %lf\n", ((double)t) / CLOCKS_PER_SEC);
    return 0;
}
On Windows the output shows 1 second (or the amount of time you spent typing when using fgets()); on the other two operating systems it shows not much more than 0 seconds.
Now my question is why the implementation of clock() differs so much between these operating systems. On Windows the clock seems to keep ticking while the thread is sleeping/waiting, but on Linux and Mac it doesn't?
Edit:
Thank you for the answers so far, guys. So it's really just Microsoft's faulty implementation.
Could anyone please answer my last question: is there a way to measure what I wanted to measure on all 3 systems using only the C standard library, since clock() only seems to work this way on Windows?
If we look at the source code for clock() on Mac OS X, we see that it is implemented using getrusage, and reads ru_utime + ru_stime. These two fields measure CPU time used by the process (or by the system, on behalf of the process). This means that if usleep (or fgets) causes the OS to swap in a different program for execution until something happens, then any amount of real time (also called "wall time", as in "wall clock") that elapses does not count towards the value that clock() returns on Mac OS X. You could probably dig in and find something similar in Linux.
On Windows, however, clock() returns the amount of wall time elapsed since the start of the process.
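You can see the effect directly with a small sketch of my own (not code taken from the CRT or libc sources) that reads ru_utime and ru_stime via getrusage around a usleep call; on Mac OS X and Linux the reported CPU time stays close to zero even though a full second of wall time passes:

#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <unistd.h>

// CPU time (user + system) consumed so far by this process, in seconds.
static double cpu_seconds(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec)
         + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
}

int main(void)
{
    double before = cpu_seconds();
    usleep(999999);               // the process sleeps, it does not compute
    double after = cpu_seconds();

    // Prints a value close to 0: sleeping consumes (almost) no CPU time.
    printf("CPU time spent: %f s\n", after - before);
    return 0;
}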
In pure C, I am not aware of a function available on OS X, Linux and Windows that returns wall time with sub-second precision (time.h being fairly limited). On Windows you have GetSystemTimeAsFileTime, which returns time in slices of 100 ns, and on BSD-derived systems you have gettimeofday, which returns time with microsecond precision.
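For example, a minimal sketch timing an fgets call with gettimeofday (assuming a POSIX system that provides sys/time.h) could look like this:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    char line[256];
    struct timeval start, end;

    printf("Type something and press enter: ");
    gettimeofday(&start, NULL);
    if (fgets(line, sizeof line, stdin) == NULL)
        return 1;
    gettimeofday(&end, NULL);

    // Wall time elapsed, built from the seconds and microseconds fields.
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("Wall time spent: %f s\n", elapsed);
    return 0;
}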
If one-second precision is acceptable to you, you could use time(NULL).
If C++ is an option, you could use one of the clocks from std::chrono to get time to the desired precision.
You're encountering a known bug in Microsoft's C Runtime. Even though the behavior is not conforming to any ISO C standard, it won't be fixed. From the bug report:
However, we have opted to avoid reimplementing clock() in such a way that it might return time values advancing faster than one second per physical second, as this change would silently break programs depending on the previous behavior (and we expect there are many such programs).
On Linux, you should read time(7). It suggests using the POSIX.1-2001 clock_gettime, which should exist on recent Mac OS X (and on Linux). On not-too-old hardware (e.g. a laptop or desktop less than 6 years old), clock_gettime gives good accuracy on Linux, typically dozens of microseconds or better. It reports time in seconds and nanoseconds (in a struct timespec), but I don't expect the nanosecond figure to be very accurate.
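Here is a minimal sketch using clock_gettime with CLOCK_MONOTONIC to time an fgets call (assuming a POSIX system; very old glibc versions may additionally require linking with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    char line[256];
    struct timespec start, end;

    printf("Type something and press enter: ");
    clock_gettime(CLOCK_MONOTONIC, &start);   // monotonic wall clock, unaffected by system time changes
    if (fgets(line, sizeof line, stdin) == NULL)
        return 1;
    clock_gettime(CLOCK_MONOTONIC, &end);

    // Elapsed wall time from the seconds and nanoseconds fields of struct timespec.
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("Wall time spent: %f s\n", elapsed);
    return 0;
}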
Indeed, clock(3) is documented as conforming to...
C89, C99, POSIX.1-2001. POSIX requires that CLOCKS_PER_SEC equals
1000000 independent of the actual resolution.
Finally, several framework libraries provide functions (wrapping target-specific system calls) to measure time. Look into POCO (in C++) or GLib (from GTK & GNOME, in C).