When invoking clock_gettime(), may the returned tv_nsec exceed one second?

Published 2019-04-06 12:14

Question:

When you invoke clock_gettime() it returns a timespec structure.

       struct timespec {
           time_t   tv_sec;        /* seconds */
           long     tv_nsec;       /* nanoseconds */
       };

I can't find in the man page a guarantee that tv_nsec won't exceed one second. Does that guarantee actually exist? Could it depend on the library (glibc?) implementation on Linux?

The key question is: do I need to 'normalize' any result coming from the clock_gettime() function?

Answer 1:

According to the Open Group Base Specifications:

The tv_nsec member is only valid if greater than or equal to zero, and less than the number of nanoseconds in a second (1000 million). The time interval described by this structure is (tv_sec * 10^9 + tv_nsec) nanoseconds.

So according to the Open Group, it is official that tv_nsec must be less than one second's worth of nanoseconds.



Answer 2:

I am fairly certain the answer is always going to be "no".

clock_gettime() won't return with tv_nsec >= 10^9. clock_settime() and clock_nanosleep() both place this restriction on their inputs, so I've always assumed clock_gettime() was consistent with that.

Also on Linux + glibc, if you dig deep enough into glibc, you'll see code like this:

Excerpt from glibc/nptl/pthread_clock_gettime.c:

/* Compute the seconds.  */
tp->tv_sec = tsc / freq;

/* And the nanoseconds.  This computation should be stable until
   we get machines with about 16GHz frequency.  */
tp->tv_nsec = ((tsc % freq) * 1000000000ull) / freq;

This also occurs in glibc/sysdeps/unix/clock_gettime.c.

But you're right, the man pages don't say. At least not in my Linux distro or on opengroup.org. So the implementation is technically subject to change without warning.

If you're writing for Linux + glibc, I'd say you're safe. You can check alternate open-source libc libraries yourself, e.g. Android's bionic, or the scaled-down newlib.

If you're targeting some other closed-source POSIX system, you or your client are probably paying for support, so ask the vendor if it's not documented.

If you're trying to be as portable as possible and are feeling paranoid, wrap clock_gettime() in a "normalizing" function like this:

int my_gettime( struct timespec * ts )
{
   int ret;
   /* SOME_CLOCK stands in for whichever clock ID you use. */
   if( 0 == (ret = clock_gettime(SOME_CLOCK, ts)) )
   {
      while ( ts->tv_nsec >= 1000000000 )
      {
         ts->tv_nsec -= 1000000000;
         ts->tv_sec += 1;
      }
   }
   return ret;
}


Answer 3:

No, you don't need to normalize the result. You can trust the nanoseconds field to be within 0 and 999999999, inclusive.

The POSIX specification for clock_gettime() explicitly states that clock_settime() will fail with EINVAL if tv_nsec < 0 || tv_nsec >= 1000000000.

Standards pedants may argue, but simple symmetry alone tells us we can expect the same from clock_gettime(). Technically, 1000000000 ns is one second, and since the standard consistently uses the term "seconds and nanoseconds", the logical conclusion is that the nanoseconds field is supposed to be normalized. Besides, a lot of programs would glitch out in interesting and fascinating ways if clock_gettime() returned results with the nanoseconds field out of bounds.



Tags: c linux glibc libc