C++ How to make timer accurate in Linux

Published: 2019-08-23 12:30

Question:

Consider this code:

#include <iostream>
#include <vector>
#include <functional>
#include <map>
#include <atomic>
#include <memory>
#include <chrono>
#include <thread>
#include <cstdlib>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/asio/high_resolution_timer.hpp>

static const uint32_t FREQUENCY = 5000; // Hz
static const uint32_t MKSEC_IN_SEC = 1000000;

std::chrono::microseconds timeout(MKSEC_IN_SEC / FREQUENCY);
boost::asio::io_service ioservice;
boost::asio::high_resolution_timer timer(ioservice);

static std::chrono::high_resolution_clock::time_point lastCallTime = std::chrono::high_resolution_clock::now();
static uint64_t deviationSum = 0;
static uint64_t deviationMin = 100000000;
static uint64_t deviationMax = 0;
static uint32_t counter = 0;

// Measure the interval since the previous call, update the deviation
// statistics and re-arm the timer for the next period.
void timerCallback(const boost::system::error_code &err) {
  auto actualTimeout = std::chrono::high_resolution_clock::now() - lastCallTime;
  std::chrono::microseconds actualTimeoutMkSec = std::chrono::duration_cast<std::chrono::microseconds>(actualTimeout);
  long timeoutDeviation = actualTimeoutMkSec.count() - timeout.count();
  uint64_t deviation = static_cast<uint64_t>(std::abs(timeoutDeviation));
  deviationSum += deviation;
  if(deviation > deviationMax) {
    deviationMax = deviation;
  }
  if(deviation < deviationMin) {
    deviationMin = deviation;
  }

  ++counter;
  //std::cout << "Actual timeout: " << actualTimeoutMkSec.count() << "\t\tDeviation: " << timeoutDeviation << "\t\tCounter: " << counter << std::endl;

  timer.expires_from_now(timeout);
  timer.async_wait(timerCallback);
  lastCallTime = std::chrono::high_resolution_clock::now();
}

using namespace std::chrono_literals;

int main() {
  std::cout << "Frequency: " << FREQUENCY << " Hz" << std::endl;
  std::cout << "Callback should be called each: " << timeout.count() << " mkSec" << std::endl;
  std::cout << std::endl;

  // Arm the timer, run the io_service on its own thread and measure for one second.
  ioservice.reset();
  timer.expires_from_now(timeout);
  timer.async_wait(timerCallback);
  lastCallTime = std::chrono::high_resolution_clock::now();
  auto thread = new std::thread([&] { ioservice.run(); });
  std::this_thread::sleep_for(1s);

  std::cout << std::endl << "Messages posted: " << counter << std::endl;
  std::cout << "Frequency deviation: " << FREQUENCY - counter << std::endl;
  std::cout << "Min timeout deviation: " << deviationMin << std::endl;
  std::cout << "Max timeout deviation: " << deviationMax << std::endl;
  std::cout << "Avg timeout deviation: " << deviationSum / counter << std::endl;

  return 0;
}

It runs a timer that calls timerCallback(..) periodically at the specified frequency. In this example, the callback must be called 5000 times per second. One can play with the frequency and see that the actual (measured) call frequency differs from the desired one. In fact, the higher the frequency, the larger the deviation. I did some measurements with different frequencies; here is a summary: https://docs.google.com/spreadsheets/d/1SQtg2slNv-9VPdgS0RD4yKRnyDK1ijKrjVz7BBMSg24/edit?usp=sharing

When the desired frequency is 10000 Hz, the system misses about 10% (~1000) of the calls. When the desired frequency is 100000 Hz, the system misses about 40% (~40000) of the calls.

Question: Is it possible to achieve better accuracy in a Linux / C++ environment? How? I need it to work without significant deviation at a frequency of 500000 Hz.

P.S. My first idea was that the body of the timerCallback(..) method itself causes the delay. I measured it: it consistently takes less than 1 microsecond to execute, so it does not affect the process.
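(For reference, a measurement like that can be made by timing the body with a monotonic clock. The sketch below is only an illustration, with doCallbackWork standing in for the actual body and the iteration count chosen arbitrarily.)

#include <chrono>
#include <iostream>

// Hypothetical stand-in for the work done inside timerCallback(..).
static void doCallbackWork() {
  // ... body under test ...
}

int main() {
  constexpr int iterations = 100000;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i) {
    doCallbackWork();
  }
  auto total = std::chrono::steady_clock::now() - start;
  std::cout << "Average body time: "
            << std::chrono::duration_cast<std::chrono::nanoseconds>(total).count() / iterations
            << " ns per call" << std::endl;
  return 0;
}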

Answer 1:

I have no experience with this problem myself, but my guess (as the references below explain) is that the OS scheduler interferes with your callback somehow. So you could try the real-time scheduler and raise the priority of your task (see the sketch after the links).

Hope this gives you a direction to find your answer.

Scheduler: http://gumstix.8.x6.nabble.com/High-resolution-periodic-task-on-overo-td4968642.html

Priority: https://linux.die.net/man/3/setpriority
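A minimal sketch of the scheduling idea, assuming a Linux box where the process is allowed to use the SCHED_FIFO real-time policy (the priority value 80 and the helper name are illustrative; see sched(7) for the exact semantics and required privileges):

#include <pthread.h>
#include <sched.h>
#include <cstring>
#include <iostream>

// Ask the kernel to run the calling thread under the SCHED_FIFO real-time
// policy with an elevated static priority. Requires CAP_SYS_NICE or root.
static bool makeThreadRealtime(int priority) {
  sched_param param{};
  param.sched_priority = priority;  // 1..99 for SCHED_FIFO on Linux
  int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
  if (rc != 0) {
    std::cerr << "pthread_setschedparam failed: " << std::strerror(rc) << std::endl;
    return false;
  }
  return true;
}

int main() {
  if (makeThreadRealtime(80)) {
    std::cout << "Running with SCHED_FIFO priority 80" << std::endl;
  }
  // ... start the io_service / timer thread here ...
  return 0;
}

Note that setpriority (the second link) only changes the nice value used by the normal scheduler; for microsecond-level periods, the real-time policies are usually the more relevant knob.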



Answer 2:

If you need to achieve one call every two-microsecond interval, you had better anchor to absolute time points rather than to a delay measured from the moment each request is issued... You still run into the problem that the processing required at each timeslot could demand more CPU time than the slot itself allows.

If you have a multicore CPU, I'd divide the timeslots among the cores (in a multithreaded approach) so that each core's slot is longer. Suppose you have your requirement on a four-core CPU: then you can allow each thread to execute one call per 8 usec, which is probably more affordable. In this case you use absolute timers (an absolute timer is one that waits until the wall clock reaches a specific absolute time, not one that waits a delay from the moment you called it) and offset them by the thread number times the 2 usec slot. In this case (4 cores) you will start thread #1 at time T, thread #2 at time T + 2 usec, thread #3 at time T + 4 usec, ... and thread #N at time T + 2*(N-1) usec.

Each thread will then reschedule itself for time oldT + 8 usec (N*2 usec in general) instead of doing some kind of nanosleep()-style relative delay. This way the processing time does not accumulate into the delay, which is most probably what you are experiencing. The pthread library timed waits all take absolute times, so you can use them. I think this is the only way you'll be capable of reaching such a hard spec (and be prepared to see how the battery suffers, assuming you're in an Android environment).
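A minimal sketch of that absolute-timer scheme, assuming Linux and clock_nanosleep(2) with TIMER_ABSTIME (the 4-thread / 2 usec numbers mirror the example above and are illustrative only, as are the worker/addNs helper names):

#include <time.h>
#include <thread>
#include <vector>

// Advance an absolute timespec deadline by ns nanoseconds.
static void addNs(timespec &ts, long ns) {
  ts.tv_nsec += ns;
  while (ts.tv_nsec >= 1000000000L) {
    ts.tv_nsec -= 1000000000L;
    ts.tv_sec += 1;
  }
}

// Each worker wakes at absolute deadlines start+offset, start+offset+period, ...
// so the processing time spent in each slot does not shift later deadlines.
static void worker(timespec start, long offsetNs, long periodNs, int iterations) {
  timespec deadline = start;
  addNs(deadline, offsetNs);
  for (int i = 0; i < iterations; ++i) {
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, nullptr);
    // ... do the per-slot processing here ...
    addNs(deadline, periodNs);
  }
}

int main() {
  const int threads = 4;                    // illustrative core count
  const long slotNs = 2000;                 // 2 usec global slot
  const long periodNs = threads * slotNs;   // 8 usec period per thread
  timespec start{};
  clock_gettime(CLOCK_MONOTONIC, &start);
  addNs(start, 1000000);                    // first deadline 1 ms in the future

  std::vector<std::thread> pool;
  for (int i = 0; i < threads; ++i) {
    pool.emplace_back(worker, start, i * slotNs, periodNs, 100000);
  }
  for (auto &t : pool) {
    t.join();
  }
  return 0;
}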

NOTE

In this approach, the external bus can be a bottleneck, so even if you get it working, it would probably be better to synchronize several machines with NTP (this can be done to the usec level over actual GBit links) and use different processors running in parallel. Since you don't describe anything about the processing you have to repeat so densely, I cannot provide more help with the problem.



Tags: c++ linux timer