Since Mac OS X doesn't support clock_gettime, I found this gist with a portable way to call the clock_gettime function:
#include <time.h>
#ifdef __MACH__
#include <mach/clock.h>
#include <mach/mach.h>
#endif

void current_utc_time(struct timespec *ts) {
#ifdef __MACH__ // OS X does not have clock_gettime, use clock_get_time
    clock_serv_t cclock;
    mach_timespec_t mts;
    host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &cclock);
    clock_get_time(cclock, &mts);
    mach_port_deallocate(mach_task_self(), cclock);
    ts->tv_sec = mts.tv_sec;
    ts->tv_nsec = mts.tv_nsec;
#else
    clock_gettime(CLOCK_REALTIME, ts);
#endif
}
I'm using it like this:
struct timespec requestStart;
current_utc_time(&requestStart);
printf("start: s: %ld\n", (long)requestStart.tv_sec);
printf("start: ns: %ld\n", (long)requestStart.tv_nsec);
start: s: 1435988139
start: ns: 202015000
I am trying to get the seconds and milliseconds values from this code. Is this the correct way to get milliseconds from the nanoseconds value?
printf("total: ms: %lu\n", (requestStart.tv_nsec - requestEnd.tv_nsec) / (unsigned long)1000000);
If so, how do I get the seconds value? If not, how can I get the milliseconds and seconds values?
Edit: In response to the first comment, I am mainly looking for a way to keep the code as portable as possible.
If you want portability, just use gettimeofday. That should work everywhere. clock_gettime appears to be a relatively recent POSIX introduction.

On converting: struct timeval (used by gettimeofday and other older functions) uses microseconds. The newer struct timespec uses nanoseconds. So to convert back and forth, you would multiply or divide by 1000.

When computing differences, you have to worry about carry/overflow, so your expression is incorrect for two reasons: the difference might come out negative (that is, if the time went from 123.456 to 124.321), and you're scaling it by too much.
When I'm timing some operation, I usually do it something like this:
If I then wanted a struct timespec (using nanoseconds), I'd multiply by 1000. To convert from a struct timespec back to a struct timeval, I'd divide by 1000. Another useful thing is to pull out seconds and subseconds as floating point:
(It turns out that there can be subtleties when writing this sort of code, depending on the actual types of tv_sec and tv_usec or tv_nsec, and on the "helpful" warnings your compiler might choose to give you. See this question.)