Question:
I have to compute the execution time of a C++ code snippet in seconds. It must work on both Windows and Unix machines.
I use the following code to do this (with the appropriate header included beforehand):
clock_t startTime = clock();
// some code here
// to compute its execution duration in runtime
cout << double( clock() - startTime ) / (double)CLOCKS_PER_SEC << " seconds." << endl;
However, for small inputs or short statements such as a = a + 1, I get a "0 seconds" result. I think it must be something like 0.0000001 seconds or so.
I remember that System.nanoTime() in Java works pretty well in this case. However, I can't get the same exact functionality from the clock() function of C++.
Do you have a solution?
Answer 1:
You can use this function I wrote. You call GetTimeMs64(), and it returns the number of milliseconds elapsed since the Unix epoch using the system clock, just like time(NULL), except in milliseconds.
It works on both Windows and Linux; it is thread safe.
Note that the granularity is 15 ms on Windows; on Linux it is implementation dependent, but it is usually 15 ms as well.
#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif
/* Remove if already defined */
typedef long long int64; typedef unsigned long long uint64;
/* Returns the amount of milliseconds elapsed since the UNIX epoch. Works on both
* windows and linux. */
uint64 GetTimeMs64()
{
#ifdef _WIN32
    /* Windows */
    FILETIME ft;
    LARGE_INTEGER li;
    /* Get the number of 100-nanosecond intervals elapsed since January 1, 1601 (UTC)
     * and copy it to a LARGE_INTEGER structure. */
    GetSystemTimeAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;
    uint64 ret = li.QuadPart;
    ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
    ret /= 10000;                /* From 100-nanosecond (10^-7) to millisecond (10^-3) intervals */
    return ret;
#else
    /* Linux */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    uint64 ret = tv.tv_usec;
    /* Convert from microseconds (10^-6) to milliseconds (10^-3) */
    ret /= 1000;
    /* Add the seconds (10^0) after converting them to milliseconds (10^-3);
     * the cast avoids overflow with a 32-bit time_t. */
    ret += (uint64)tv.tv_sec * 1000;
    return ret;
#endif
}
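For reference, a minimal usage sketch (my addition, not part of the original answer), assuming the GetTimeMs64() defined above is in scope:
#include <iostream>
int main()
{
    uint64 start = GetTimeMs64();
    // ... code to be timed ...
    uint64 elapsedMs = GetTimeMs64() - start;
    std::cout << elapsedMs / 1000.0 << " seconds" << std::endl;
    return 0;
}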
Answer 2:
I have another working example that uses microseconds (UNIX, POSIX, etc).
#include <sys/time.h>
typedef unsigned long long timestamp_t;
static timestamp_t get_timestamp()
{
    struct timeval now;
    gettimeofday(&now, NULL);
    return now.tv_usec + (timestamp_t)now.tv_sec * 1000000;
}
...
timestamp_t t0 = get_timestamp();
// Process
timestamp_t t1 = get_timestamp();
double secs = (t1 - t0) / 1000000.0L;
Here's the file where we coded this:
https://github.com/arhuaco/junkcode/blob/master/emqbit-bench/bench.c
Answer 3:
Here is a simple solution in C++11 that gives you satisfying resolution.
#include <iostream>
#include <chrono>
class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const {
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count();
    }
private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};
Or, on *nix, for C++03:
#include <iostream>
#include <ctime>
class Timer
{
public:
    Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }
    double elapsed() {
        clock_gettime(CLOCK_REALTIME, &end_);
        return end_.tv_sec - beg_.tv_sec +
            (end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
    }
    void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }
private:
    timespec beg_, end_;
};
Here is the example usage:
int main()
{
    Timer tmr;
    double t = tmr.elapsed();
    std::cout << t << std::endl;
    tmr.reset();
    t = tmr.elapsed();
    std::cout << t << std::endl;
    return 0;
}
From https://gist.github.com/gongzhitaao/7062087
Answer 4:
#include <boost/progress.hpp>
using namespace boost;

int main(int argc, const char * argv[])
{
    progress_timer timer;
    // do stuff, preferably in a 100x loop to make it take longer.
    return 0;
}
When progress_timer goes out of scope, it will print out the time elapsed since its creation.
UPDATE: I made a simple standalone replacement (OSX/iOS but easy to port): https://github.com/catnapgames/TestTimerScoped
Answer 5:
Windows provides the QueryPerformanceCounter() function, and Unix has gettimeofday(). Both functions can measure at least a 1-microsecond difference.
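For illustration only (not part of the original answer), a minimal cross-platform sketch combining the two calls; the helper name GetTimeUs64 is my own:
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/time.h>
#endif

/* Returns a timestamp in microseconds. */
unsigned long long GetTimeUs64()
{
#ifdef _WIN32
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&count);    /* current tick count */
    /* Split into whole seconds and remainder to avoid 64-bit overflow. */
    return (unsigned long long)(count.QuadPart / freq.QuadPart) * 1000000ULL
         + (unsigned long long)(count.QuadPart % freq.QuadPart) * 1000000ULL / freq.QuadPart;
#else
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (unsigned long long)tv.tv_sec * 1000000ULL + tv.tv_usec;
#endif
}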
Answer 6:
In some programs I wrote, I used RDTSC for this purpose. RDTSC is not about time but about the number of cycles since processor start. You have to calibrate it on your system to get a result in seconds, but it's really handy when you want to evaluate performance; it's even better to use the number of cycles directly without trying to convert them back to seconds.
(The link above is to a French Wikipedia page, but it has C++ code samples; the English version is here.)
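As a rough sketch of the idea (my addition, not from the original answer), the cycle counter can be read with the __rdtsc() compiler intrinsic; remember the result is cycles, not seconds:
#include <cstdint>
#include <iostream>
#include <x86intrin.h>   /* GCC/Clang; with MSVC use <intrin.h> instead */

int main()
{
    uint64_t start = __rdtsc();                  /* cycle count before */
    volatile int a = 0;
    for (int i = 0; i < 1000; ++i) a = a + 1;    /* code under test */
    uint64_t cycles = __rdtsc() - start;         /* elapsed cycles, not seconds */
    std::cout << cycles << " cycles" << std::endl;
    return 0;
}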
Answer 7:
I suggest using the standard library functions for obtaining time information from the system.
If you want finer resolution, perform more execution iterations. Instead of running the code once and obtaining a single sample, run it 1000 times or more and divide the total time by the iteration count.
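A minimal sketch of that idea with std::chrono (my addition; the iteration count is an arbitrary assumption):
#include <chrono>
#include <iostream>

int main()
{
    const int N = 100000;
    auto start = std::chrono::steady_clock::now();
    volatile int a = 0;
    for (int i = 0; i < N; ++i) a = a + 1;       // the statement under test
    std::chrono::duration<double> total = std::chrono::steady_clock::now() - start;
    std::cout << "average: " << total.count() / N << " seconds per iteration" << std::endl;
    return 0;
}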
Answer 8:
It is better to run the inner loop several times with the performance timing taken only once, and average by dividing by the number of inner-loop repetitions, than to run the whole thing (loop + performance timing) several times and average. This reduces the overhead of the performance-timing code relative to the section actually being profiled.
Wrap your timer calls for the appropriate system. On Windows, QueryPerformanceCounter is pretty fast and "safe" to use.
You can use "rdtsc" on any modern x86 PC as well, but there may be issues on some multicore machines (core hopping may change the timer) or if you have SpeedStep of some sort turned on.
Answer 9:
If you want to get exact results, then as stated above, your execution time depends on thread scheduling. A complete, unfailing solution to this, which should yield exactly the same times for each test, is to compile your program to be OS independent and boot your computer so as to run it in an OS-free environment. However, a good substitute is to set the affinity of the current thread to one core and its priority to the highest; the results of doing this are very consistent. One more thing: you should turn off optimizations, which for g++ or gcc means adding -O0 to the command line, to prevent the code being tested from being optimized out. Here is an example of how I benchmark square-root functions on a Windows computer.
#include <iostream>
#include <iomanip>
#include <cstdio>
#include <chrono>
#include <cmath>
#include <windows.h>
#include <intrin.h>
#pragma intrinsic(__rdtsc)
class Timer {
public:
Timer() : beg_(clock_::now()) {}
void reset() { beg_ = clock_::now(); }
double elapsed() const {
return std::chrono::duration_cast<second_>
(clock_::now() - beg_).count(); }
private:
typedef std::chrono::high_resolution_clock clock_;
typedef std::chrono::duration<double, std::ratio<1> > second_;
std::chrono::time_point<clock_> beg_;
};
unsigned int guess_sqrt32(register unsigned int n) {
register unsigned int g = 0x8000;
if(g*g > n) {
g ^= 0x8000;
}
g |= 0x4000;
if(g*g > n) {
g ^= 0x4000;
}
g |= 0x2000;
if(g*g > n) {
g ^= 0x2000;
}
g |= 0x1000;
if(g*g > n) {
g ^= 0x1000;
}
g |= 0x0800;
if(g*g > n) {
g ^= 0x0800;
}
g |= 0x0400;
if(g*g > n) {
g ^= 0x0400;
}
g |= 0x0200;
if(g*g > n) {
g ^= 0x0200;
}
g |= 0x0100;
if(g*g > n) {
g ^= 0x0100;
}
g |= 0x0080;
if(g*g > n) {
g ^= 0x0080;
}
g |= 0x0040;
if(g*g > n) {
g ^= 0x0040;
}
g |= 0x0020;
if(g*g > n) {
g ^= 0x0020;
}
g |= 0x0010;
if(g*g > n) {
g ^= 0x0010;
}
g |= 0x0008;
if(g*g > n) {
g ^= 0x0008;
}
g |= 0x0004;
if(g*g > n) {
g ^= 0x0004;
}
g |= 0x0002;
if(g*g > n) {
g ^= 0x0002;
}
g |= 0x0001;
if(g*g > n) {
g ^= 0x0001;
}
return g;
}
unsigned int empty_function( unsigned int _input ) {
return _input;
}
unsigned long long empty_ticks=0;
double empty_seconds=0;
Timer my_time;
template<unsigned int benchmark_repetitions>
/* Note: the auto function-pointer parameter relies on a GCC extension. */
void benchmark( const char* function_name, auto (*function_to_do)( auto ) ) {
register unsigned int i=benchmark_repetitions;
register unsigned long long start=0;
my_time.reset();
start=__rdtsc();
while ( i-- ) {
(*function_to_do)( i << 7 );
}
if ( function_name == nullptr ) {
empty_ticks = (__rdtsc()-start);
empty_seconds = my_time.elapsed();
std::cout<< "Empty:\n" << empty_ticks
<< " ticks\n" << benchmark_repetitions << " repetitions\n"
<< std::setprecision(15) << empty_seconds
<< " seconds\n\n";
} else {
std::cout<< function_name << ":\n" << (__rdtsc()-start-empty_ticks)
<< " ticks\n" << benchmark_repetitions << " repetitions\n"
<< std::setprecision(15) << (my_time.elapsed()-empty_seconds)
<< " seconds\n\n";
}
}
int main( void ) {
void* Cur_Thread= GetCurrentThread();
void* Cur_Process= GetCurrentProcess();
unsigned long long Current_Affinity;
unsigned long long System_Affinity;
unsigned long long furthest_affinity;
unsigned long long nearest_affinity;
if( ! SetThreadPriority(Cur_Thread,THREAD_PRIORITY_TIME_CRITICAL) ) {
SetThreadPriority( Cur_Thread, THREAD_PRIORITY_HIGHEST );
}
if( ! SetPriorityClass(Cur_Process,REALTIME_PRIORITY_CLASS) ) {
SetPriorityClass( Cur_Process, HIGH_PRIORITY_CLASS );
}
GetProcessAffinityMask( Cur_Process, &Current_Affinity, &System_Affinity );
furthest_affinity = 0x8000000000000000ULL>>__builtin_clzll(Current_Affinity);
nearest_affinity = 0x0000000000000001ULL<<__builtin_ctzll(Current_Affinity);
SetProcessAffinityMask( Cur_Process, furthest_affinity );
SetThreadAffinityMask( Cur_Thread, furthest_affinity );
const int repetitions=524288;
benchmark<repetitions>( nullptr, empty_function );
/* standard_sqrt, original_guess_sqrt32 and new_guess_sqrt32 are defined elsewhere (not shown in this snippet). */
benchmark<repetitions>( "Standard Square Root", standard_sqrt );
benchmark<repetitions>( "Original Guess Square Root", original_guess_sqrt32 );
benchmark<repetitions>( "New Guess Square Root", new_guess_sqrt32 );
SetThreadPriority( Cur_Thread, THREAD_PRIORITY_IDLE );
SetPriorityClass( Cur_Process, IDLE_PRIORITY_CLASS );
SetProcessAffinityMask( Cur_Process, nearest_affinity );
SetThreadAffinityMask( Cur_Thread, nearest_affinity );
for (;;) { getchar(); }
return 0;
}
Also, credit to Mike Jarvis for his Timer.
And, please note (this is very important) that if you are going to be running bigger code snippets, then you REALLY must turn down the number of iterations to prevent your computer from freezing up.
Answer 10:
(Windows-specific solution)
The current (circa 2017) way to get accurate timings under Windows is to use QueryPerformanceCounter. This approach has the benefit of giving very accurate results and is recommended by MS. Just plop the code blob into a new console app to get a working sample. There is a lengthy discussion here: Acquiring high-resolution time stamps.
#include <iostream>
#include <tchar.h>
#include <windows.h>
int main()
{
constexpr int MAX_ITER{ 10000 };
constexpr __int64 us_per_hour{ 3600000000ull }; // 3.6e+09
constexpr __int64 us_per_min{ 60000000ull };
constexpr __int64 us_per_sec{ 1000000ull };
constexpr __int64 us_per_ms{ 1000ull };
// easy to work with
__int64 startTick, endTick, ticksPerSecond, totalTicks = 0ull;
QueryPerformanceFrequency((LARGE_INTEGER *)&ticksPerSecond);
for (int iter = 0; iter < MAX_ITER; ++iter) {// start looping
QueryPerformanceCounter((LARGE_INTEGER *)&startTick); // Get start tick
// code to be timed
std::cout << "cur_tick = " << iter << "\n";
QueryPerformanceCounter((LARGE_INTEGER *)&endTick); // Get end tick
totalTicks += endTick - startTick; // accumulate time taken
}
// convert to elapsed microseconds
__int64 totalMicroSeconds = (totalTicks * 1000000ull)/ ticksPerSecond;
__int64 hours = totalMicroSeconds / us_per_hour;
totalMicroSeconds %= us_per_hour;
__int64 minutes = totalMicroSeconds / us_per_min;
totalMicroSeconds %= us_per_min;
__int64 seconds = totalMicroSeconds / us_per_sec;
totalMicroSeconds %= us_per_sec;
__int64 milliseconds = totalMicroSeconds / us_per_ms;
totalMicroSeconds %= us_per_ms;
std::cout << "Total time: " << hours << "h ";
std::cout << minutes << "m " << seconds << "s " << milliseconds << "ms ";
std::cout << totalMicroSeconds << "us\n";
return 0;
}
Answer 11:
Just a simple class that benchmarks a code block:
#include <chrono>
#include <cstdio>

using namespace std::chrono;

class benchmark {
public:
    time_point<high_resolution_clock> t0, t1;
    unsigned int *d;
    benchmark(unsigned int *res) : d(res) {
        t0 = high_resolution_clock::now();
    }
    ~benchmark() {
        t1 = high_resolution_clock::now();
        milliseconds dur = duration_cast<milliseconds>(t1 - t0);
        *d = (unsigned int)dur.count();
    }
};

// simple usage
// unsigned int t;
// { // put the code in a block
//     benchmark bench(&t);
//     // ...
//     // code to benchmark
// }
// HERE t contains the time in milliseconds

// one way to use it can be:
// (the two-level concat is needed so that __LINE__ expands before token pasting)
#define BENCH_CONCAT2(a, b) a##b
#define BENCH_CONCAT(a, b) BENCH_CONCAT2(a, b)
#define BENCH(TITLE, CODEBLOCK) \
    unsigned int BENCH_CONCAT(bench_time_, __LINE__) = 0; \
    { benchmark bench(&BENCH_CONCAT(bench_time_, __LINE__)); \
        CODEBLOCK \
    } \
    printf("%s took %u ms\n", (TITLE), BENCH_CONCAT(bench_time_, __LINE__));

int main(void) {
    const int testcount = 1000000; // iteration count for the demo (assumed value)
    BENCH("TITLE", {
        for (int n = 0; n < testcount; n++) {
            int a = n % 3;
            (void)a;
        }
    });
    return 0;
}
Answer 12:
boost::timer will probably give you as much accuracy as you'll need. It's nowhere near accurate enough to tell you how long a = a + 1; will take, but what reason would you have to time something that takes a couple of nanoseconds?
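A minimal usage sketch (my addition, not from the original answer), using the classic boost::timer from <boost/timer.hpp>, which is based on clock():
#include <boost/timer.hpp>
#include <iostream>

int main()
{
    boost::timer t;                               // starts timing on construction
    // ... code to be timed ...
    std::cout << t.elapsed() << " seconds" << std::endl;
    return 0;
}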
Answer 13:
I created a lambda that calls your function N times and returns the average.
double c = BENCHMARK_CNT(25, fillVectorDeque(variable));
You can find the C++11 header here.
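The linked header is not reproduced here; purely as an illustration of the idea (my assumption, not the original implementation), a BENCHMARK_CNT-style helper could be sketched with an immediately invoked lambda:
#include <chrono>

// Hypothetical sketch: evaluate EXPR cnt times and return the average seconds per call.
#define BENCHMARK_CNT(cnt, EXPR) [&]() -> double {                \
    auto start_ = std::chrono::high_resolution_clock::now();      \
    for (int i_ = 0; i_ < (cnt); ++i_) { EXPR; }                  \
    std::chrono::duration<double> total_ =                        \
        std::chrono::high_resolution_clock::now() - start_;       \
    return total_.count() / (cnt);                                \
}()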
Answer 14:
For cases where you want to time the same stretch of code every time it gets executed (e.g. for profiling code that you think might be a bottleneck), here is a wrapper around (a slight modification to) Andreas Bonini's function that I find useful:
#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#endif
#include <iostream>
#include <string>
/*
* A simple timer class to see how long a piece of code takes.
* Usage:
*
* {
* static Timer timer("name");
*
* ...
*
* timer.start()
* [ The code you want timed ]
* timer.stop()
*
* ...
* }
*
* At the end of execution, you will get output:
*
* Time for name: XXX seconds
*/
class Timer
{
public:
Timer(std::string name, bool start_running=false) :
_name(name), _accum(0), _running(false)
{
if (start_running) start();
}
~Timer() { stop(); report(); }
void start() {
if (!_running) {
_start_time = GetTimeMicroseconds();
_running = true;
}
}
void stop() {
if (_running) {
unsigned long long stop_time = GetTimeMicroseconds();
_accum += stop_time - _start_time;
_running = false;
}
}
void report() {
std::cout << "Time for " << _name << ": " << _accum / 1.e6 << " seconds\n";
}
private:
// cf. http://stackoverflow.com/questions/1861294/how-to-calculate-execution-time-of-a-code-snippet-in-c
unsigned long long GetTimeMicroseconds()
{
#ifdef _WIN32
/* Windows */
FILETIME ft;
LARGE_INTEGER li;
/* Get the number of 100-nanosecond intervals elapsed since January 1, 1601 (UTC)
* and copy it to a LARGE_INTEGER structure. */
GetSystemTimeAsFileTime(&ft);
li.LowPart = ft.dwLowDateTime;
li.HighPart = ft.dwHighDateTime;
unsigned long long ret = li.QuadPart;
ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
ret /= 10; /* From 100 nano seconds (10^-7) to 1 microsecond (10^-6) intervals */
#else
/* Linux */
struct timeval tv;
gettimeofday(&tv, NULL);
unsigned long long ret = tv.tv_usec;
/* Add the seconds (10^0) after converting them to microseconds (10^-6);
* the cast avoids overflow with a 32-bit time_t. */
ret += (unsigned long long)tv.tv_sec * 1000000;
#endif
return ret;
}
std::string _name;
long long _accum;
unsigned long long _start_time;
bool _running;
};
Answer 15:
I created a simple utility for measuring the performance of blocks of code, using the chrono library's high_resolution_clock: https://github.com/nfergu/codetimer.
Timings can be recorded against different keys, and an aggregated view of the timings for each key can be displayed.
Usage is as follows:
#include <chrono>
#include <iostream>
#include "codetimer.h"
int main () {
auto start = std::chrono::high_resolution_clock::now();
// some code here
CodeTimer::record("mykey", start);
CodeTimer::printStats();
return 0;
}
Answer 16:
You could also look at cxx-rtimers on GitHub, which provides some header-only routines for gathering statistics on the run time of any code block in which you can create a local variable. Those timers have versions that use std::chrono on C++11, timers from the Boost library, or standard POSIX timer functions. These timers will report the average, maximum, and minimum duration spent within a function, as well as the number of times it is called. They can be used as simply as follows:
#include <rtimers/cxx11.hpp>
void expensiveFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensive");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}