Using only ANSI C, is there any way to measure time with milliseconds precision or more? I was browsing time.h but I only found second precision functions.
Implementing a portable solution
As it was already mentioned here, there is no proper ANSI solution with sufficient precision for the time measurement problem, so I want to describe how to get a portable and, if possible, high-resolution time measurement solution.
Monotonic clock vs. time stamps
Generally speaking, there are two approaches to time measurement:
The first one uses a monotonic clock counter (sometimes called a tick counter) which counts ticks at a predefined frequency, so if you have a tick value and the frequency is known, you can easily convert ticks to elapsed time. A monotonic clock is not guaranteed to reflect the current system time in any way; it may instead count ticks since system startup. But it does guarantee that the counter only ever increases, regardless of the system state. Usually the frequency is bound to a high-resolution hardware source, which is why it provides high accuracy (this depends on the hardware, but most modern hardware has no problems with high-resolution clock sources).
The second way provides a (date)time value based on the current system clock value. It may also have a high resolution, but it has one major drawback: this kind of time value can be affected by different system time adjustments, e.g. a time zone change, a daylight saving time (DST) change, an NTP server update, system hibernation and so on. In some circumstances you can get a negative elapsed time value, which can lead to undefined behavior. Clearly this kind of time source is less reliable than the first one.
So the first rule of time interval measurement is to use a monotonic clock if possible: it usually has high precision, and it is reliable by design.
Fallback strategy
When implementing a portable solution it is worth considering a fallback strategy: use a monotonic clock if available, and fall back to the time stamp approach if there is no monotonic clock in the system.
Windows
There is a great article called Acquiring high-resolution time stamps on MSDN about time measurement on Windows which describes all the details you may need to know about software and hardware support. To acquire a high precision time stamp on Windows you should:
query a timer frequency (ticks per second) with QueryPerformanceFrequency:
The timer frequency is fixed on the system boot so you need to get it only once.
query the current ticks value with QueryPerformanceCounter:
scale the ticks to elapsed time, i.e. to microseconds:
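Putting the three steps together, a minimal sketch might look like this. The helper names are mine, and the non-Windows branch is only a stand-in so the snippet compiles anywhere; the real work is done by QueryPerformanceFrequency and QueryPerformanceCounter:

```c
#include <assert.h>

#ifdef _WIN32
#include <windows.h>

/* Elapsed microseconds between two QueryPerformanceCounter readings.
 * Multiply before dividing to avoid losing precision. */
long long qpc_elapsed_us(long long start_ticks, long long end_ticks)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq); /* ticks per second, fixed at boot */
    return (end_ticks - start_ticks) * 1000000LL / freq.QuadPart;
}

long long measure_demo(void)
{
    LARGE_INTEGER start, end;
    QueryPerformanceCounter(&start);
    /* ... code under measurement ... */
    QueryPerformanceCounter(&end);
    return qpc_elapsed_us(start.QuadPart, end.QuadPart);
}
#else
/* Non-Windows stand-in so the sketch compiles anywhere. */
long long measure_demo(void)
{
    return 0;
}
#endif
```

Note that multiplying first can itself overflow 64 bits for very long intervals; splitting the difference into whole seconds plus a remainder avoids that.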
According to Microsoft, you should not have any problems with this approach on Windows XP and later versions in most cases. But you can also use two fallback solutions on Windows: GetTickCount, which returns the uptime in milliseconds but wraps around to zero roughly every 49.7 days, and GetTickCount64, which solves the wrap-around problem but is only available starting from Windows Vista and above.

OS X (macOS)
OS X (macOS) has its own Mach absolute time units which represent a monotonic clock. The best place to start is Apple's article Technical Q&A QA1398: Mach Absolute Time Units, which describes (with code examples) how to use the Mach-specific API to get monotonic ticks. There is also a local question about it called clock_gettime alternative in Mac OS X, which in the end may leave you a bit confused about what to do with possible value overflow, because the counter frequency is used in the form of a numerator and denominator. So, here is a short example of how to get elapsed time:
get the clock frequency numerator and denominator:
You need to do that only once.
query the current tick value with mach_absolute_time:

scale the ticks to elapsed time, i.e. to microseconds, using the previously queried numerator and denominator:
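The steps above can be sketched as follows; the helper name is mine, the timebase is cached after the first query, and the non-macOS branch is only a stand-in so the snippet compiles anywhere:

```c
#include <assert.h>

#ifdef __APPLE__
#include <mach/mach_time.h>

/* Elapsed microseconds between two mach_absolute_time() readings.
 * The timebase (numerator/denominator) is queried once and cached. */
unsigned long long mach_elapsed_us(unsigned long long start_ticks,
                                   unsigned long long end_ticks)
{
    static mach_timebase_info_data_t tb;
    if (tb.denom == 0)
        mach_timebase_info(&tb);
    /* Scale down to microseconds first to reduce the risk of overflow,
     * then apply the numerator/denominator ratio (ticks -> nanoseconds). */
    return (end_ticks - start_ticks) / 1000 * tb.numer / tb.denom;
}
#else
/* Non-macOS stand-in so the sketch compiles anywhere: assume a 1:1
 * nanosecond timebase. */
unsigned long long mach_elapsed_us(unsigned long long start_ticks,
                                   unsigned long long end_ticks)
{
    return (end_ticks - start_ticks) / 1000;
}
#endif
```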
The main idea to prevent an overflow is to scale the ticks down to the desired accuracy before using the numerator and denominator. As the initial timer resolution is in nanoseconds, we divide by 1000 to get microseconds. You can find the same approach used in Chromium's time_mac.c. If you really need nanosecond accuracy, consider reading How can I use mach_absolute_time without overflowing?.

Linux and UNIX
The clock_gettime call is your best bet on any POSIX-friendly system. It can query time from different clock sources, and the one we need is CLOCK_MONOTONIC. Not all systems which have clock_gettime support CLOCK_MONOTONIC, so the first thing you need to do is check its availability:

- if _POSIX_MONOTONIC_CLOCK is defined to a value greater than 0, CLOCK_MONOTONIC is available;
- if _POSIX_MONOTONIC_CLOCK is defined to 0, you should additionally check whether it works at runtime; I suggest using sysconf:
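The availability check might be sketched like this (the helper name monotonic_available is mine, not part of POSIX):

```c
#include <assert.h>
#include <unistd.h>

/* Returns 1 if CLOCK_MONOTONIC is usable, 0 otherwise: a compile-time
 * check of _POSIX_MONOTONIC_CLOCK plus a runtime sysconf() check when
 * the macro is defined to 0. */
int monotonic_available(void)
{
#if defined(_POSIX_MONOTONIC_CLOCK) && (_POSIX_MONOTONIC_CLOCK > 0)
    /* Guaranteed available at compile time. */
    return 1;
#elif defined(_POSIX_MONOTONIC_CLOCK) && (_POSIX_MONOTONIC_CLOCK == 0)
    /* Must be checked at runtime. */
    return sysconf(_SC_MONOTONIC_CLOCK) > 0;
#else
    /* Undefined or -1: not supported. */
    return 0;
#endif
}
```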
Usage of clock_gettime is pretty straightforward:

get the time value:

I've scaled the time down to microseconds here.

calculate the difference with a previous time value received the same way:
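Both steps together, as a sketch (the helper names are mine):

```c
#define _POSIX_C_SOURCE 199309L
#include <assert.h>
#include <time.h>

/* Current CLOCK_MONOTONIC reading scaled down to microseconds. */
unsigned long long monotonic_now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (unsigned long long) ts.tv_sec * 1000000ULL
         + (unsigned long long) ts.tv_nsec / 1000ULL;
}

/* Elapsed time is just the difference of two such readings. */
unsigned long long elapsed_us(unsigned long long start, unsigned long long end)
{
    return end - start;
}
```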
The best fallback strategy is to use the gettimeofday call: it is not monotonic, but it provides quite good resolution. The idea is the same as with clock_gettime, but to get the time value you call gettimeofday instead. Again, the time value is scaled down to microseconds.
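A sketch of the fallback (the helper name is mine):

```c
#define _XOPEN_SOURCE 600
#include <assert.h>
#include <stddef.h>
#include <sys/time.h>

/* Wall-clock time scaled down to microseconds. Not monotonic: system
 * time adjustments can move this value backwards. */
unsigned long long walltime_now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (unsigned long long) tv.tv_sec * 1000000ULL
         + (unsigned long long) tv.tv_usec;
}
```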
SGI IRIX
IRIX has the clock_gettime call, but it lacks CLOCK_MONOTONIC. Instead it has its own monotonic clock source, defined as CLOCK_SGI_CYCLE, which you should use instead of CLOCK_MONOTONIC with clock_gettime.

Solaris and HP-UX
Solaris has its own high-resolution timer interface, gethrtime, which returns the current timer value in nanoseconds. Though newer versions of Solaris may have clock_gettime, you can stick to gethrtime if you need to support old Solaris versions.

Usage is simple:
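A sketch of the usage; the now_ns helper is mine, and the non-Solaris branch is only a crude clock()-based stand-in (CPU time, far coarser than the real gethrtime) so the snippet compiles anywhere:

```c
#include <assert.h>

#ifdef __sun
#include <sys/time.h>
/* gethrtime() returns a monotonic nanosecond counter on Solaris. */
static long long now_ns(void) { return (long long) gethrtime(); }
#else
/* Stand-in for other systems so the sketch compiles anywhere. */
#include <time.h>
static long long now_ns(void)
{
    return (long long) clock() * (1000000000LL / CLOCKS_PER_SEC);
}
#endif

/* Measure an interval in microseconds. */
long long measure_interval_us(void)
{
    long long start = now_ns();
    /* ... code under measurement ... */
    long long end = now_ns();
    return (end - start) / 1000; /* nanoseconds -> microseconds */
}
```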
HP-UX lacks clock_gettime, but it supports gethrtime, which you should use in the same way as on Solaris.

BeOS
BeOS also has its own high-resolution timer interface, system_time, which returns the number of microseconds that have elapsed since the computer was booted.

Example usage:
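A sketch; system_time is the documented BeOS/Haiku call, while the helper names and the non-BeOS branch are mine (a clock()-based stand-in so the snippet compiles anywhere):

```c
#include <assert.h>

#if defined(__HAIKU__) || defined(__BEOS__)
#include <OS.h>
/* system_time() returns microseconds since boot. */
static long long now_us(void) { return (long long) system_time(); }
#else
/* Stand-in for other systems so the sketch compiles anywhere. */
#include <time.h>
static long long now_us(void)
{
    return (long long) clock() * (1000000LL / CLOCKS_PER_SEC);
}
#endif

/* Since the counter is already in microseconds, elapsed time is a
 * plain difference of two readings. */
long long elapsed_us_demo(void)
{
    long long start = now_us();
    /* ... code under measurement ... */
    long long end = now_us();
    return end - start;
}
```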
OS/2
OS/2 has its own API to retrieve high-precision time stamps:
query a timer frequency (ticks per unit) with DosTmrQueryFreq (for the GCC compiler):

query the current ticks value with DosTmrQueryTime:

scale the ticks to elapsed time, i.e. to microseconds:
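A sketch of all three steps, assuming (as in the OS/2 toolkit headers) that DosTmrQueryTime fills a QWORD split into ulLo/ulHi halves and that DosTmrQueryFreq reports ticks per second; the helper name and the non-OS/2 stand-in are mine:

```c
#include <assert.h>

#ifdef __OS2__
#define INCL_DOSPROFILE
#include <os2.h>

unsigned long long os2_elapsed_us(void)
{
    ULONG freq;        /* ticks per second */
    QWORD start, end;  /* 64-bit tick values as two 32-bit halves */
    unsigned long long s, e;

    DosTmrQueryFreq(&freq); /* fixed, so a real program queries it once */
    DosTmrQueryTime(&start);
    /* ... code under measurement ... */
    DosTmrQueryTime(&end);

    /* Combine the halves into 64 bits, then scale to microseconds. */
    s = ((unsigned long long) start.ulHi << 32) | start.ulLo;
    e = ((unsigned long long) end.ulHi << 32) | end.ulLo;
    return (e - s) * 1000000ULL / freq;
}
#else
/* Stand-in so the sketch compiles on non-OS/2 systems. */
unsigned long long os2_elapsed_us(void) { return 0; }
#endif
```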
Example implementation
You can take a look at the plibsys library, which implements all the strategies described above (see ptimeprofiler*.c for details).