I'm looking to implement a simple timer mechanism in C++. The code should work in Windows and Linux. The resolution should be as precise as possible (at least millisecond accuracy). This will be used to simply track the passage of time, not to implement any kind of event-driven design. What is the best tool to accomplish this?
The first answer to C++ library questions is generally Boost: http://www.boost.org/doc/libs/1_40_0/libs/timer/timer.htm. Does this do what you want? Probably not, but it's a start.
The problem is that you want portability, and timer functions are not universal across OSes.
I am not sure about your exact requirement, but if you want to calculate a time interval, please see the thread below:
Calculating elapsed time in a C program in milliseconds
This isn't the greatest answer, but here are some conversations on a Game Development site regarding high-resolution timers:
If one is using the Qt framework in the project, the best solution is probably to use QElapsedTimer.
For C++03:

- Boost.Timer might work, but it depends on the C function clock and so may not have good enough resolution for you.
- Boost.Date_Time includes a ptime class that's been recommended on Stack Overflow before. See its docs on microsec_clock::local_time and microsec_clock::universal_time, but note its caveat that "Win32 systems often do not achieve microsecond resolution via this API."
- STLsoft provides, among other things, thin cross-platform (Windows and Linux/Unix) C++ wrappers around OS-specific APIs. Its performance library has several classes that would do what you need. (To make it cross-platform, pick a class like performance_counter that exists in both the winstl and unixstl namespaces, then use whichever namespace matches your platform.)

For C++11 and above:

The std::chrono library has this functionality built in. See this answer by @HowardHinnant for details.

Updated answer for an old question:
In C++11 you can portably get to the highest resolution timer with:
"chrono_io" is an extension to ease I/O issues with these new types and is freely available here.
There is also an implementation of <chrono> available in Boost (might still be on tip-of-trunk; not sure it has been released).

Update
This is in response to Ben's comment below that subsequent calls to std::chrono::high_resolution_clock take several milliseconds in VS11. Below is a <chrono>-compatible workaround. However, it only works on Intel hardware, you need to dip into inline assembly (the syntax for doing so varies with the compiler), and you have to hardwire the machine's clock speed into the clock.

So it isn't portable. But if you want to experiment with a high-resolution clock on your own Intel hardware, it doesn't get finer than this. Though be forewarned, today's clock speeds can dynamically change (they aren't really a compile-time constant), and with a multiprocessor machine you can even get time stamps from different processors. But still, experiments on my hardware work fairly well. If you're stuck with millisecond resolution, this could be a workaround.
This clock has a duration in terms of your CPU's clock speed (as you reported it). That is, for me, this clock ticks once every 1/2,800,000,000 of a second. If you want to, you can convert this to nanoseconds (for example) with:
The conversion will truncate fractions of a cpu cycle to form the nanosecond. Other rounding modes are possible, but that's a different topic.
For me this will return a duration as low as 18 clock ticks, which truncates to 6 nanoseconds.
I've added some "invariant checking" to the above clock, the most important of which is checking that clock::period is correct for the machine. Again, this is not portable code, but if you're using this clock, you've already committed to that. The private get_clock_speed() function shown here gets the maximum CPU frequency on OS X, and that should be the same number as the constant denominator of clock::period.

Adding this will save you a little debugging time when you port this code to your new machine and forget to update clock::period to the speed of your new machine. All of the checking is done either at compile time or at program startup time, so it won't impact the performance of clock::now() in the least.