How to measure time in milliseconds using ANSI C?

Published 2019-01-01 04:02

Question:

Using only ANSI C, is there any way to measure time with millisecond precision or better? I was browsing time.h, but I only found functions with one-second precision.

Answer 1:

There is no ANSI C function that provides better than one-second time resolution, but the POSIX function gettimeofday provides microsecond resolution. The clock function only measures the amount of CPU time that a process has spent executing, and it is not accurate on many systems.

You can use it like this:

#include <stdio.h>
#include <unistd.h>     /* sleep() */
#include <sys/time.h>   /* gettimeofday(); timersub() is a BSD/glibc extension */

struct timeval tval_before, tval_after, tval_result;

gettimeofday(&tval_before, NULL);

// Some code you want to time, for example:
sleep(1);

gettimeofday(&tval_after, NULL);

timersub(&tval_after, &tval_before, &tval_result);

printf("Time elapsed: %ld.%06ld\n", (long int)tval_result.tv_sec, (long int)tval_result.tv_usec);

This returns Time elapsed: 1.000870 on my machine.



Answer 2:

#include <time.h>

/* CPU time consumed by the process so far, scaled to milliseconds.
   Caveats: clock() measures CPU time, not wall time, and the integer
   division breaks if CLOCKS_PER_SEC is less than 1000. */
clock_t uptime = clock() / (CLOCKS_PER_SEC / 1000);
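
For interval measurement the usual pattern is two readings around the work; a minimal sketch (the busy loop is a placeholder workload, since clock() does not advance while the process sleeps):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* Placeholder workload: burn some CPU time */
    volatile unsigned long i;
    for (i = 0; i < 100000000UL; i++)
        ;

    clock_t end = clock();

    double elapsed_ms = (double)(end - start) * 1000.0 / CLOCKS_PER_SEC;
    printf("CPU time: %.3f ms\n", elapsed_ms);
    return 0;
}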


Answer 3:

I always use the clock_gettime() function, returning time from the CLOCK_MONOTONIC clock. The time returned is the amount of time, in seconds and nanoseconds, since some unspecified point in the past, such as system startup or the Epoch.

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int64_t timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
{
  /* Cast tv_sec to int64_t before multiplying; the product overflows
     narrower integer types. */
  return ((int64_t)timeA_p->tv_sec * 1000000000 + timeA_p->tv_nsec) -
         ((int64_t)timeB_p->tv_sec * 1000000000 + timeB_p->tv_nsec);
}

int main(int argc, char **argv)
{
  struct timespec start, end;
  clock_gettime(CLOCK_MONOTONIC, &start);

  // Some code I am interested in measuring

  clock_gettime(CLOCK_MONOTONIC, &end);

  int64_t timeElapsed = timespecDiff(&end, &start);
  printf("Elapsed: %lld ns\n", (long long)timeElapsed);
  return 0;
}


Answer 4:

Implementing a portable solution

As already mentioned, there is no ANSI C solution with sufficient precision for the time measurement problem, so I want to describe how to build a portable and, where possible, high-resolution time measurement solution.

Monotonic clock vs. time stamps

Generally speaking there are two ways of time measurement:

  • monotonic clock;
  • current (date)time stamp.

The first one uses a monotonic clock counter (sometimes called a tick counter) which counts ticks at a predefined frequency, so if you have a tick value and the frequency is known, you can easily convert ticks to elapsed time. A monotonic clock is not guaranteed to reflect the current system time in any way; it may instead count ticks since system startup. But it does guarantee that the clock only ever advances, regardless of the system state. The frequency is usually bound to a high-resolution hardware source, which is why it provides high accuracy (this depends on the hardware, but most modern hardware has no problems with high-resolution clock sources).

The second way provides a (date)time value based on the current system clock value. It may also have a high resolution, but it has one major drawback: this kind of time value can be affected by system time adjustments, e.g. a time zone change, a daylight saving time (DST) change, an NTP server update, system hibernation and so on. In some circumstances you can even get a negative elapsed-time value, which can lead to undefined behavior. So this kind of time source is less reliable than the first one.

So the first rule of time-interval measurement is: use a monotonic clock if possible. It usually has high precision, and it is reliable by design.
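
As an illustration, here is a minimal POSIX sketch that reads both kinds of sources; between two runs the CLOCK_MONOTONIC value can only grow, while the CLOCK_REALTIME value may jump if the system time is adjusted:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec mono, real;

    /* Monotonic clock: ticks since some unspecified fixed point */
    clock_gettime(CLOCK_MONOTONIC, &mono);

    /* Wall clock: current system time, subject to adjustments */
    clock_gettime(CLOCK_REALTIME, &real);

    printf("monotonic: %ld.%09ld\n", (long)mono.tv_sec, mono.tv_nsec);
    printf("realtime:  %ld.%09ld\n", (long)real.tv_sec, real.tv_nsec);
    return 0;
}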

Fallback strategy

When implementing a portable solution it is worth considering a fallback strategy: use a monotonic clock if available, and fall back to the time stamp approach if there is no monotonic clock in the system.

Windows

There is a great article called Acquiring high-resolution time stamps on MSDN about time measurement on Windows which describes all the details you may need to know about software and hardware support. To acquire a high precision time stamp on Windows you should:

  • query a timer frequency (ticks per second) with QueryPerformanceFrequency:

    LARGE_INTEGER tcounter;
    LONGLONG      freq = 0;

    if (QueryPerformanceFrequency (&tcounter) != 0)
        freq = tcounter.QuadPart;
    

    The timer frequency is fixed at system boot, so you need to query it only once.

  • query the current ticks value with QueryPerformanceCounter:

    LARGE_INTEGER tcounter;
    LONGLONG      tick_value = 0;

    if (QueryPerformanceCounter (&tcounter) != 0)
        tick_value = tcounter.QuadPart;
    
  • scale the ticks to elapsed time, i.e. to microseconds:

    /* Multiply before dividing to avoid losing precision */
    LONGLONG usecs = (tick_value - prev_tick_value) * 1000000 / freq;
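
    Putting the three steps together, a minimal sketch (Sleep(100) stands in for the code being measured):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, start, end;

        QueryPerformanceFrequency(&freq);   /* fixed at boot; query once */
        QueryPerformanceCounter(&start);

        Sleep(100);                         /* code being measured */

        QueryPerformanceCounter(&end);

        /* Multiply before dividing to avoid losing precision */
        LONGLONG usecs = (end.QuadPart - start.QuadPart) * 1000000 / freq.QuadPart;
        printf("Elapsed: %lld us\n", usecs);
        return 0;
    }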
    

According to Microsoft you should not have any problems with this approach on Windows XP and later versions in most cases. But you can also use two fallback solutions on Windows:

  • GetTickCount provides the number of milliseconds that have elapsed since the system was started. It wraps around every 49.7 days, so be careful when measuring longer intervals.
  • GetTickCount64 is a 64-bit version of GetTickCount, but it is only available on Windows Vista and later.
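
A minimal sketch of the GetTickCount64 fallback (Vista and later; the variable names are just illustrative):

#include <windows.h>

ULONGLONG start_ms = GetTickCount64();

/* ... code being measured ... */

ULONGLONG elapsed_ms = GetTickCount64() - start_ms;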

OS X (macOS)

OS X (macOS) has its own Mach absolute time units, which represent a monotonic clock. The best place to start is Apple's article Technical Q&A QA1398: Mach Absolute Time Units, which describes (with code examples) how to use the Mach-specific API to get monotonic ticks. There is also a local question about it called clock_gettime alternative in Mac OS X, which may leave you a bit confused about what to do with possible value overflow, because the counter frequency is given as a numerator and denominator. So, a short example of how to get elapsed time:

  • get the clock frequency numerator and denominator:

    #include <mach/mach_time.h>
    #include <stdint.h>
    
    static uint64_t freq_num   = 0;
    static uint64_t freq_denom = 0;
    
    void init_clock_frequency ()
    {
        mach_timebase_info_data_t tb;
    
        if (mach_timebase_info (&tb) == KERN_SUCCESS && tb.denom != 0) {
            freq_num   = (uint64_t) tb.numer;
            freq_denom = (uint64_t) tb.denom;
        }
    }
    

    You need to do that only once.

  • query the current tick value with mach_absolute_time:

    uint64_t tick_value = mach_absolute_time ();
    
  • scale the ticks to elapsed time, i.e. to microseconds, using previously queried numerator and denominator:

    uint64_t value_diff = tick_value - prev_tick_value;
    
    /* To prevent overflow */
    value_diff /= 1000;
    
    value_diff *= freq_num;
    value_diff /= freq_denom;
    

    The main idea for preventing overflow is to scale the ticks down to the desired accuracy before applying the numerator and denominator. Since the initial timer resolution is in nanoseconds, we divide by 1000 to get microseconds. You can find the same approach used in Chromium's time_mac.c. If you really need nanosecond accuracy, consider reading How can I use mach_absolute_time without overflowing?.
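
    Putting the pieces together, a minimal sketch (sleep(1) stands in for the code being measured):

    #include <mach/mach_time.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>

    int main(void)
    {
        mach_timebase_info_data_t tb;
        uint64_t start, end, diff;

        mach_timebase_info(&tb);        /* numerator/denominator; query once */

        start = mach_absolute_time();
        sleep(1);                       /* code being measured */
        end = mach_absolute_time();

        /* Scale ticks down to the desired accuracy first to avoid overflow,
           then apply the numerator and denominator. */
        diff = (end - start) / 1000;
        diff = diff * tb.numer / tb.denom;

        printf("Elapsed: %llu us\n", (unsigned long long)diff);
        return 0;
    }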

Linux and UNIX

The clock_gettime call is your best bet on any POSIX-friendly system. It can query time from different clock sources, and the one we need is CLOCK_MONOTONIC. Not all systems that have clock_gettime support CLOCK_MONOTONIC, so the first thing you need to do is check its availability:

  • if _POSIX_MONOTONIC_CLOCK is defined to a value greater than 0, it means that CLOCK_MONOTONIC is available;
  • if _POSIX_MONOTONIC_CLOCK is defined to 0, it means that you should additionally check whether it works at runtime; I suggest using sysconf:

    #include <unistd.h>
    
    #ifdef _SC_MONOTONIC_CLOCK
    if (sysconf (_SC_MONOTONIC_CLOCK) > 0) {
        /* A monotonic clock is present */
    }
    #endif
    
  • otherwise a monotonic clock is not supported and you should use a fallback strategy (see below).
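
These checks can be folded into a single helper; a minimal sketch (the name has_monotonic_clock is just illustrative):

#include <unistd.h>

/* Returns non-zero if CLOCK_MONOTONIC can be used on this system */
static int has_monotonic_clock (void)
{
#if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK > 0
    return 1;                                  /* guaranteed at compile time */
#elif defined(_POSIX_MONOTONIC_CLOCK) && defined(_SC_MONOTONIC_CLOCK)
    return sysconf (_SC_MONOTONIC_CLOCK) > 0;  /* check at runtime */
#else
    return 0;                                  /* use the fallback strategy */
#endif
}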

Usage of clock_gettime is pretty straightforward:

  • get the time value:

    #include <time.h>
    #include <sys/time.h>
    #include <stdint.h>
    
    uint64_t get_posix_clock_time ()
    {
        struct timespec ts;
    
        if (clock_gettime (CLOCK_MONOTONIC, &ts) == 0)
            /* Cast tv_sec before multiplying so the product cannot overflow */
            return (uint64_t) ts.tv_sec * 1000000 + (uint64_t) (ts.tv_nsec / 1000);
        else
            return 0;
    }
    

    I've scaled down the time to microseconds here.

  • calculate the difference with the previous time value received the same way:

    uint64_t prev_time_value, time_value;
    uint64_t time_diff;
    
    /* Initial time */
    prev_time_value = get_posix_clock_time ();
    
    /* Do some work here */
    
    /* Final time */
    time_value = get_posix_clock_time ();
    
    /* Time difference */
    time_diff = time_value - prev_time_value;
    

The best fallback strategy is to use the gettimeofday call: it is not monotonic, but it provides quite good resolution. The idea is the same as with clock_gettime, but to get a time value you should:

#include <time.h>
#include <sys/time.h>
#include <stdint.h>

uint64_t get_gtod_clock_time ()
{
    struct timeval tv;

    if (gettimeofday (&tv, NULL) == 0)
        /* Cast tv_sec before multiplying so the product cannot overflow */
        return (uint64_t) tv.tv_sec * 1000000 + (uint64_t) tv.tv_usec;
    else
        return 0;
}

Again, the time value is scaled down to microseconds.
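
With both helpers in place, the fallback itself can be chosen at compile time; a minimal sketch (get_clock_time is just an illustrative name, it relies on the two helpers above, and the runtime sysconf check is skipped for brevity):

#include <unistd.h>
#include <stdint.h>

uint64_t get_clock_time ()
{
#if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK > 0
    return get_posix_clock_time ();  /* monotonic clock available */
#else
    return get_gtod_clock_time ();   /* fall back to gettimeofday */
#endif
}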

SGI IRIX

IRIX has the clock_gettime call, but it lacks CLOCK_MONOTONIC. Instead it has its own monotonic clock source, CLOCK_SGI_CYCLE, which you should pass to clock_gettime in place of CLOCK_MONOTONIC.

Solaris and HP-UX

Solaris has its own high-resolution timer interface, gethrtime, which returns the current timer value in nanoseconds. Though newer versions of Solaris may have clock_gettime, you can stick to gethrtime if you need to support old Solaris versions.

Usage is simple:

#include <sys/time.h>

void time_measure_example ()
{
    hrtime_t prev_time_value, time_value;
    hrtime_t time_diff;

    /* Initial time */
    prev_time_value = gethrtime ();

    /* Do some work here */

    /* Final time */
    time_value = gethrtime ();

    /* Time difference */
    time_diff = time_value - prev_time_value;
}

HP-UX lacks clock_gettime, but it supports gethrtime, which you should use in the same way as on Solaris.

BeOS

BeOS also has its own high-resolution timer interface, system_time, which returns the number of microseconds that have elapsed since the computer was booted.

Example usage:

#include <kernel/OS.h>

void time_measure_example ()
{
    bigtime_t prev_time_value, time_value;
    bigtime_t time_diff;

    /* Initial time */
    prev_time_value = system_time ();

    /* Do some work here */

    /* Final time */
    time_value = system_time ();

    /* Time difference */
    time_diff = time_value - prev_time_value;
}

OS/2

OS/2 has its own API to retrieve high-precision time stamps:

  • query a timer frequency (ticks per unit) with DosTmrQueryFreq (the example below is for the GCC compiler):

    #define INCL_DOSPROFILE
    #define INCL_DOSERRORS
    #include <os2.h>
    #include <stdint.h>
    
    ULONG freq;
    
    DosTmrQueryFreq (&freq);
    
  • query the current ticks value with DosTmrQueryTime:

    QWORD    tcounter;
    uint64_t time_low;
    uint64_t time_high;
    uint64_t timestamp;

    if (DosTmrQueryTime (&tcounter) == NO_ERROR) {
        time_low  = (uint64_t) tcounter.ulLo;
        time_high = (uint64_t) tcounter.ulHi;

        timestamp = (time_high << 32) | time_low;
    }
    
  • scale the ticks to elapsed time, i.e. to microseconds:

    /* Multiply before dividing to avoid losing precision */
    uint64_t usecs = (timestamp - prev_timestamp) * 1000000 / freq;
    

Example implementation

You can take a look at the plibsys library, which implements all of the strategies described above (see ptimeprofiler*.c for details).



Answer 5:

timespec_get from C11

It returns up to nanosecond precision, rounded to the resolution of the implementation.

It looks like an ANSI ripoff of POSIX's clock_gettime.

Example: a printf is done every 100ms on Ubuntu 15.10:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static long get_nanos(void) {
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (long)ts.tv_sec * 1000000000L + ts.tv_nsec;
}

int main(void) {
    long nanos;
    long last_nanos;
    long start;
    nanos = get_nanos();
    last_nanos = nanos;
    start = nanos;
    while (1) {
        nanos = get_nanos();
        if (nanos - last_nanos > 100000000L) {
            printf(\"current nanos: %ld\\n\", nanos - start);
            last_nanos = nanos;
        }
    }
    return EXIT_SUCCESS;
}

The C11 N1570 standard draft 7.27.2.5 "The timespec_get function" says:

If base is TIME_UTC, the tv_sec member is set to the number of seconds since an implementation defined epoch, truncated to a whole value and the tv_nsec member is set to the integral number of nanoseconds, rounded to the resolution of the system clock. (321)

321) Although a struct timespec object describes times with nanosecond resolution, the available resolution is system dependent and may even be greater than 1 second.

C++11 also got std::chrono::high_resolution_clock: C++ Cross-Platform High-Resolution Timer

glibc 2.21 implementation

Can be found under sysdeps/posix/timespec_get.c as:

int
timespec_get (struct timespec *ts, int base)
{
  switch (base)
    {
    case TIME_UTC:
      if (__clock_gettime (CLOCK_REALTIME, ts) < 0)
        return 0;
      break;

    default:
      return 0;
    }

  return base;
}

so clearly:

  • only TIME_UTC is currently supported

  • it forwards to __clock_gettime (CLOCK_REALTIME, ts), which is a POSIX API: http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html

    Linux x86-64 has a clock_gettime system call.

    Note that this is not a fail-proof micro-benchmarking method because:

    • man clock_gettime says that this measure may have discontinuities if you change some system time setting while your program runs. This should be a rare event of course, and you might be able to ignore it.

    • this measures wall time, so if the scheduler decides to forget about your task, it will appear to run for longer.

    For those reasons getrusage() might be a better POSIX benchmarking tool, despite its lower maximum precision of one microsecond; a short sketch follows below.

    More information at: Measure time in Linux - time vs clock vs getrusage vs clock_gettime vs gettimeofday vs timespec_get?
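
    A minimal sketch of the getrusage() alternative mentioned above (user CPU time only; system CPU time lives in ru_stime, and the helper name is just illustrative):

    #include <stdint.h>
    #include <sys/resource.h>

    /* User CPU time consumed by the calling process, in microseconds */
    uint64_t get_cpu_usecs(void) {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) != 0)
            return 0;
        return (uint64_t)ru.ru_utime.tv_sec * 1000000 + (uint64_t)ru.ru_utime.tv_usec;
    }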



Answer 6:

The best precision you can possibly get is through the x86-only "rdtsc" instruction, which can provide clock-level resolution (one must of course take into account the cost of the rdtsc call itself, which can be measured easily at application startup).

The main catch here is measuring the number of clocks per second, which shouldn't be too hard.
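
On GCC and Clang the instruction is exposed as the __rdtsc() intrinsic; a minimal sketch (tick-to-seconds calibration is left out):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang; MSVC has it in <intrin.h> */

int main(void)
{
    uint64_t start = __rdtsc();

    /* ... code being measured ... */

    uint64_t end = __rdtsc();

    /* The difference is in TSC ticks; converting to seconds requires the
       TSC frequency, which must be calibrated separately (e.g. by counting
       ticks across a known clock_gettime interval). */
    printf("Elapsed: %llu ticks\n", (unsigned long long)(end - start));
    return 0;
}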



Answer 7:

The accepted answer is good enough, but my solution is simpler. I just tested it on Linux, using gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0.

It also uses gettimeofday; tv_sec is the seconds part, and tv_usec is in microseconds, not milliseconds.

#include <stdio.h>
#include <unistd.h>     /* sleep() */
#include <sys/time.h>   /* gettimeofday() */

long currentTimeMillis() {
  struct timeval time;
  gettimeofday(&time, NULL);

  /* Note: the result only fits in a long on LP64 systems (e.g. 64-bit Linux) */
  return time.tv_sec * 1000 + time.tv_usec / 1000;
}

int main() {
  printf("%ld\n", currentTimeMillis());
  // wait 1 second
  sleep(1);
  printf("%ld\n", currentTimeMillis());
  return 0;
}

It prints:

1522139691342
1522139692342

exactly one second apart.



Answer 8:

Under Windows:

#include <windows.h>
#include <wchar.h>

SYSTEMTIME t;
wchar_t buff[32];

GetLocalTime(&t);
/* In C, swprintf_s takes the destination size as its second argument */
swprintf_s(buff, 32, L"[%02d:%02d:%02d:%d]\t", t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);