Computing time in Linux: granularity and precision

Published 2019-01-19 07:27

**********************Original edit**********************


I am using different kinds of clocks to get the time in Linux systems:

rdtsc, gettimeofday, clock_gettime

and have already read various threads like these:

What's the best timing resolution can i get on Linux

How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?

How do I measure a time interval in C?

faster equivalent of gettimeofday

Granularity in time function

Why is clock_gettime so erratic?

But I am a little confused:

What is the difference between granularity, resolution, precision, and accuracy?

Granularity (or resolution or precision) and accuracy are not the same thing (if I am right...).

For example, when using clock_gettime the precision is 10 ms, as I get with:

struct timespec res;
clock_getres(CLOCK_REALTIME, &res);

and the granularity (which is defined as ticks per second) is 100 Hz (or 10 ms), as I get when executing:

long ticks_per_sec = sysconf(_SC_CLK_TCK);
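A minimal, self-contained sketch combining the two calls above (clock_getres reports the clock's resolution, while _SC_CLK_TCK is the kernel's scheduler tick rate, a separate notion):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec res;

    /* Resolution the kernel reports for CLOCK_REALTIME (often 1 ns on
       modern kernels, which says nothing about actual accuracy). */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);

    /* Scheduler ticks per second (typically 100). */
    long ticks_per_sec = sysconf(_SC_CLK_TCK);
    printf("_SC_CLK_TCK: %ld ticks per second\n", ticks_per_sec);

    return 0;
}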

Accuracy is in nanoseconds, as the code below suggests:

struct timespec gettime_now;

clock_gettime(CLOCK_REALTIME, &gettime_now);
time_difference = gettime_now.tv_nsec - start_time;
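Note that subtracting tv_nsec fields alone wraps around whenever a second boundary is crossed. A minimal sketch of a safer interval measurement, assuming you keep the full start timespec rather than only its nanosecond field (it uses CLOCK_MONOTONIC, which is not disturbed by wall-clock adjustments):

#include <stdio.h>
#include <time.h>

/* Elapsed nanoseconds between two timespecs; combines tv_sec and tv_nsec
   so the result stays correct across second boundaries. */
static long long elapsed_ns(struct timespec start, struct timespec end)
{
    return (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
           + (end.tv_nsec - start.tv_nsec);
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... code being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("elapsed: %lld ns\n", elapsed_ns(start, end));
    return 0;
}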

In the link below, I saw that this is the Linux global definition of granularity and that it's better not to change it:

http://wwwagss.informatik.uni-kl.de/Projekte/Squirrel/da/node5.html#fig:clock:hw

So my question is whether the remarks above are right, and also:

a) Can we see the granularity of rdtsc and gettimeofday (with a command)? (A rough empirical probe is sketched below.)

b) Can we change it (in any way)?
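As an illustration rather than a definitive method, one way to probe a clock's effective granularity empirically is to call it in a tight loop and record the smallest nonzero step it ever reports:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval prev, cur;
    long min_step = -1;

    gettimeofday(&prev, NULL);
    for (int i = 0; i < 1000000; i++) {
        gettimeofday(&cur, NULL);
        /* Step between consecutive readings, in microseconds. */
        long step = (cur.tv_sec - prev.tv_sec) * 1000000L
                    + (cur.tv_usec - prev.tv_usec);
        if (step > 0 && (min_step < 0 || step < min_step))
            min_step = step;
        prev = cur;
    }
    printf("smallest observed gettimeofday step: %ld us\n", min_step);
    return 0;
}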

Thanks in advance


**********************Edit no2**********************

Hi all, I have tested some new clocks and I would like to share some info:

a) On the page below, David Terei wrote a fine program that compares various clocks and their performance:

https://github.com/dterei/Scraps/tree/master/c/time

b) I have also tested omp_get_wtime, as Raxman suggested, and I found nanosecond precision, but not really better than clock_gettime (as they did on this website):

http://msdn.microsoft.com/en-us/library/t3282fe5.aspx

I think it's a Windows-oriented time function.
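For reference, a minimal sketch of using it (omp_get_wtick() reports the timer's resolution in seconds; compile with an OpenMP flag such as gcc's -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Resolution of the OpenMP wall-clock timer, in seconds. */
    printf("omp_get_wtick: %e s\n", omp_get_wtick());

    double t0 = omp_get_wtime();
    /* ... code being timed ... */
    double t1 = omp_get_wtime();

    printf("elapsed: %e s\n", t1 - t0);
    return 0;
}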

Better results are given by clock_gettime with CLOCK_MONOTONIC than with CLOCK_REALTIME. That makes sense, because CLOCK_MONOTONIC measures elapsed time from an arbitrary fixed point and is not affected by system clock adjustments, while CLOCK_REALTIME follows the wall clock, which can jump.

c) I have also found the Intel function ippGetCpuClocks, but I have not tested it because it's mandatory to register first:

http://software.intel.com/en-us/articles/ipp-downloads-registration-and-licensing/

... or you may use a trial version.

Thanks to all for your replies!


Tags: c linux time
1 Answer
姐就是有狂的资本
#2 · 2019-01-19 08:24
  • Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2m, 1.8m, 1.83m, 1.8322m tall. All those measurements are accurate, but increasingly precise.)

  • Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70m tall" is more precise than "1.8m", but not actually accurate.)

  • Granularity or resolution is about the smallest time interval that the timer can measure. For example, if you have 1ms granularity, there's little point reporting the result with nanosecond precision, since it cannot possibly be accurate to that level of precision.

On Linux, the available timers with increasing granularity are (a small comparison sketch follows this list):

  • clock() from <time.h> (20ms or 10ms resolution?)

  • gettimeofday() from Posix <sys/time.h> (microseconds)

  • clock_gettime() on Posix (nanoseconds?)
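A small sketch calling all three side by side, to make the different units visible (assuming glibc; older glibc versions needed -lrt to link clock_gettime):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    /* clock(): CPU time in coarse units of CLOCKS_PER_SEC. */
    printf("clock(): %ld (CLOCKS_PER_SEC = %ld)\n",
           (long)clock(), (long)CLOCKS_PER_SEC);

    /* gettimeofday(): wall-clock time with microsecond fields. */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    printf("gettimeofday(): %ld s %ld us\n", (long)tv.tv_sec, (long)tv.tv_usec);

    /* clock_gettime(): wall-clock time with nanosecond fields. */
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    printf("clock_gettime(): %ld s %ld ns\n", (long)ts.tv_sec, ts.tv_nsec);

    return 0;
}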

In C++, the <chrono> header offers a certain amount of abstraction around this, and std::chrono::high_resolution_clock attempts to give you the best possible clock.
