Timer accuracy: C clock() vs. WinAPI's QPC or timeGetTime

Published 2019-04-08 20:38

Question:

I'd like to characterize the accuracy of a software timer. I'm not concerned so much about HOW accurate it is, but do need to know WHAT the accuracy is.

I've investigated the C function clock() and the WinAPI functions QPC (QueryPerformanceCounter) and timeGetTime(), and I know that they're all hardware dependent.

I'm measuring a process that could take around 5-10 seconds, and my requirements are simple: I only need 0.1 second precision (resolution). But I do need to know what the accuracy is, worst-case.

While more accuracy would be preferred, I would rather know that the accuracy was poor (500 ms) and account for it, than believe that the accuracy was better (1 ms) but not be able to document it.

Does anyone have suggestions on how to characterize software clock accuracy?

Thanks

Answer 1:

You'll need to distinguish accuracy, resolution and latency.

clock(), GetTickCount() and timeGetTime() are derived from a calibrated hardware clock. Resolution is not great: they are driven by the clock tick interrupt, which by default ticks 64 times per second, or once every 15.625 msec. You can use timeBeginPeriod() to drive that down to 1.0 msec. Accuracy is very good; the clock is calibrated from an NTP server, and you can usually count on it not being off by more than a second over a month.
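As a rough sketch of that (my addition, not part of the original answer), requesting the 1 ms period and pairing it with timeEndPeriod() might look like this; it assumes you link against winmm.lib:

 #include <windows.h>
 #include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod / timeGetTime
 #include <iostream>
 #pragma comment(lib, "winmm.lib")

 int main()
 {
     // Ask for a 1 ms tick period instead of the default 15.625 ms.
     if (timeBeginPeriod(1) == TIMERR_NOERROR)
     {
         DWORD start = timeGetTime();   // now updates roughly every 1 ms
         Sleep(50);
         std::cout << "elapsed: " << (timeGetTime() - start) << " ms\n";

         timeEndPeriod(1);              // always pair with timeBeginPeriod
     }
     return 0;
 }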

QPC has a much higher resolution, always better than one microsecond and as little as half a nanosecond on some machines. However, it has poor accuracy: the clock source is a frequency picked up from the chipset somewhere. It is not calibrated and has typical electronic tolerances. Use it only to time short intervals.
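A minimal sketch of the usual QPC pattern (my addition, not part of the original answer): QueryPerformanceFrequency() gives the counts-per-second needed to turn raw counter deltas into seconds.

 #include <windows.h>
 #include <iostream>

 int main()
 {
     LARGE_INTEGER freq, begin, end;
     QueryPerformanceFrequency(&freq);     // counts per second, fixed at boot
     QueryPerformanceCounter(&begin);

     Sleep(100);                           // the interval being measured

     QueryPerformanceCounter(&end);
     double seconds = double(end.QuadPart - begin.QuadPart) / double(freq.QuadPart);
     std::cout << "elapsed: " << seconds << " s\n";
     return 0;
 }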

Latency is the most important factor when you deal with timing. You have no use for a highly accurate timing source if you can't read it fast enough. And that's always an issue when you run code in user mode on a protected-mode operating system, which always has code that runs with higher priority than your code. Device drivers are particular trouble-makers, video and audio drivers especially. Your code is also subject to being swapped out of RAM, requiring a page fault to get loaded back. On a heavily loaded machine, not being able to run your code for hundreds of milliseconds is not unusual. You'll need to factor this failure mode into your design. If you need guaranteed sub-millisecond accuracy, then only a kernel thread with real-time priority can give you that.
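From user mode you can only reduce, not eliminate, that scheduling latency; a sketch of raising the process and thread priority (my illustration, not a guarantee of sub-millisecond behaviour) would be:

 #include <windows.h>

 int main()
 {
     // Ask the scheduler to preempt us less often. This does NOT make the
     // code real-time; drivers and kernel work can still delay it.
     SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
     SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

     // ... time-sensitive measurement here ...

     return 0;
 }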

A pretty decent timer is the multimedia timer you get from timeSetEvent(). It was designed to provide good service for the kind of programs that require a reliable timer. You can make it tick at 1 msec, and it will catch up with delays when possible. Do note that it is an asynchronous timer: the callback is made on a separate worker thread, so you have to take care of proper thread synchronization.
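A minimal sketch of such a 1 ms periodic timer might look like the following (my example, assuming winmm.lib is linked; note the callback really does run on a worker thread, so shared state needs synchronization):

 #include <windows.h>
 #include <mmsystem.h>   // timeSetEvent / timeKillEvent, link with winmm.lib
 #include <iostream>
 #pragma comment(lib, "winmm.lib")

 static volatile LONG g_ticks = 0;

 // Invoked on a worker thread owned by the multimedia timer, roughly every 1 ms.
 void CALLBACK TimerProc(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
 {
     InterlockedIncrement(&g_ticks);
 }

 int main()
 {
     timeBeginPeriod(1);                                   // request 1 ms resolution
     MMRESULT id = timeSetEvent(1, 1, TimerProc, 0, TIME_PERIODIC);

     Sleep(1000);                                          // let it run for ~1 second

     timeKillEvent(id);
     timeEndPeriod(1);
     std::cout << "callbacks in one second: " << g_ticks << "\n";
     return 0;
 }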



Answer 2:

Since you've asked for hard facts, here they are:

A typical frequency device controlling HPETs is the CB3LV-3I-14M31818, which specifies a frequency stability of +/- 50 ppm between -40 °C and +85 °C. A cheaper chip is the CB3LV-3I-66M6660. This device has a frequency stability of +/- 100 ppm between -20 °C and +70 °C.

As you can see, 50 to 100 ppm will result in a drift of 50 to 100 µs/s, 180 to 360 ms/hour, or 4.32 to 8.64 s/day!
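The conversion behind those numbers is just the fractional error multiplied by the time base; a tiny check (my addition) would be:

 #include <iostream>

 int main()
 {
     double ppm = 100.0;                       // worst case from the data sheet
     double drift_per_second = ppm * 1e-6;     // 100 us of drift per second
     std::cout << drift_per_second * 3600.0  << " s/hour\n"   // 0.36 s
               << drift_per_second * 86400.0 << " s/day\n";   // 8.64 s
     return 0;
 }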

Devices controlling the RTC are typically somewhat better: the RV-8564-C2 RTC module provides tolerances of +/- 10 to 20 ppm. Tighter tolerances are typically available in military versions or on request. The deviation of this source is a factor of 5 less than that of the HPET. However, it is still 0.86 s/day.

All of the above values are maximum values as specified in the data sheets. Typical values may be considerably less; as mentioned in my comment, they are in the few-ppm range.

These frequency values are also subject to thermal drift. The result of QueryPerformanceCounter() may be heavily influenced by thermal drift on systems operating with the ACPI Power Management Timer chip (example).

More information about timers: Clock and Timer Circuits.



Answer 3:

For QPC, you can call QueryPerformanceFrequency to get the rate it updates at. Unless you are using time(), you will get better than 0.5 s timing accuracy anyway, but clock() isn't all that accurate - quite often 10 ms granularity [although apparently CLOCKS_PER_SEC is standardized at 1 million, making the numbers APPEAR more accurate].

If you do something along these lines, you can figure out how small a gap you can measure [although at REALLY high frequency you may not be able to notice how small, e.g. timestamp counter that updates every clock-cycle, and reading it takes 20-40 clockcycles]:

 #include <ctime>
 #include <iostream>
 using std::cout; using std::endl;

 int main()
 {
     time_t t, t1;

     t = time(nullptr);
     // Wait for the next "second" to tick over so we start on a boundary.
     while (t == (t1 = time(nullptr)))  /* do nothing */ ;

     clock_t old = 0;
     clock_t min_diff = 1000000000;
     clock_t start, end;
     start = clock();
     int count = 0;
     // For one full second, count how often clock() changes and track the
     // smallest step it takes - that step is its effective resolution.
     while (t1 == time(nullptr))
     {
         clock_t c = clock();
         if (old != 0 && c != old)
         {
             count++;
             clock_t diff = c - old;
             if (min_diff > diff) min_diff = diff;
         }
         old = c;
     }
     end = clock();
     cout << "Clock changed " << count << " times" << endl;
     cout << "Smallest difference " << min_diff << " ticks" << endl;
     cout << "One second ~= " << end - start << " ticks" << endl;
     return 0;
 }

Obviously, you can apply the same principle to other time sources.

(Not compile-tested, but hopefully not too full of typos and mistakes)

Edit: So, if you are measuring times in the range of 10 seconds, a timer that runs at 100 Hz would give you 1000 "ticks". But it could be 999 or 1001, depending on your luck and whether you catch it just right or just wrong, so that's 2000 ppm there - then the clock input may vary too, but that's a much smaller variation, ~100 ppm at most. On Linux, clock() is updated at 100 Hz (the actual timer that runs the OS may run at a higher frequency, but clock() in Linux will update at 100 Hz or 10 ms intervals), and it only counts when the CPU is being used, so sitting 5 seconds waiting for user input is 0 time.

In Windows, clock() measures the actual elapsed time, same as your wrist watch does, not just the CPU time being used, so 5 seconds waiting for user input is counted as 5 seconds of time. I'm not sure how accurate it is.
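A quick portable way (my sketch, not part of the original answer) to see which behaviour you get is to sleep without using the CPU and compare clock() against a wall-clock source; on Linux the clock() delta stays near zero, while the MSVC runtime counts the full wall time:

 #include <chrono>
 #include <ctime>
 #include <iostream>
 #include <thread>

 int main()
 {
     std::clock_t c0 = std::clock();
     auto w0 = std::chrono::steady_clock::now();

     std::this_thread::sleep_for(std::chrono::seconds(2));  // no CPU used

     double cpu  = double(std::clock() - c0) / CLOCKS_PER_SEC;
     double wall = std::chrono::duration<double>(std::chrono::steady_clock::now() - w0).count();
     std::cout << "clock(): " << cpu << " s, wall clock: " << wall << " s\n";
     return 0;
 }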

The other problem that you will find is that modern systems are not very good at repeatable timing in general - no matter what you do, the OS, the CPU and the memory all conspire to make life a misery for getting the same amount of time for two runs. CPUs these days often run with an intentionally variable clock (it's allowed to drift about 0.1-0.5%) to reduce electromagnetic radiation for EMC (electromagnetic compatibility) testing - spikes that can "sneak out" of that nicely sealed computer box.

In other words, even if you can get a very standardized clock, your test results will vary up and down a bit, depending on OTHER factors that you can't do anything about...

In summary, unless you are looking for a number to fill into a form that requires you to have a ppm number for your clock accuracy, and it's a government form that you can't NOT fill that information into, I'm not entirely convinced it's very useful to know the accuracy of the timer used to measure the time itself. Because other factors will play AT LEAST as big a role.



Tags: c++ c winapi timer