How to Calculate Execution Time of a Code Snippet

Posted 2019-01-01 12:27

I have to compute the execution time of a C++ code snippet in seconds. It must work on both Windows and Unix machines.

I use the following code to do this (the necessary headers, <ctime> and <iostream>, are included beforehand):

clock_t startTime = clock();
// some code here
// to compute its execution duration in runtime
cout << double( clock() - startTime ) / (double) CLOCKS_PER_SEC << " seconds." << endl;

However, for small inputs or short statements such as a = a + 1, I get a "0 seconds" result. I think it should be something like 0.0000001 seconds.

I remember that System.nanoTime() in Java works pretty well in this case. However, I can't get the same functionality from the clock() function in C++.

Do you have a solution?

16 Answers
谁念西风独自凉
Answer 2 · 2019-01-01 12:46

boost::timer will probably give you as much accuracy as you'll need. It's nowhere near accurate enough to tell you how long a = a + 1; will take, but what reason would you have to time something that takes a couple of nanoseconds?
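For reference, a minimal sketch of the classic <boost/timer.hpp> interface (assuming Boost is installed; this legacy class wraps std::clock, so its resolution is no better than the clock() approach in the question, while the newer boost::timer::cpu_timer from <boost/timer/timer.hpp> is the higher-resolution replacement):

#include <boost/timer.hpp>
#include <iostream>

int main()
{
    boost::timer t;                            // starts timing on construction
    // code to be timed
    std::cout << t.elapsed() << " seconds\n";  // elapsed() returns seconds as a double
    t.restart();                               // reset the start point if needed
    return 0;
}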

查无此人
Answer 3 · 2019-01-01 12:46

I created a simple utility for measuring performance of blocks of code, using the chrono library's high_resolution_clock: https://github.com/nfergu/codetimer.

Timings can be recorded against different keys, and an aggregated view of the timings for each key can be displayed.

Usage is as follows:

#include <chrono>
#include <iostream>
#include "codetimer.h"

int main () {
    auto start = std::chrono::high_resolution_clock::now();
    // some code here
    CodeTimer::record("mykey", start);   // record the time elapsed since 'start' under "mykey"
    CodeTimer::printStats();             // print an aggregated view of the timings for each key
    return 0;
}
几人难应
Answer 4 · 2019-01-01 12:48

Here is a simple solution in C++11 which gives you satisfactory resolution.

#include <iostream>
#include <chrono>

class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const { 
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count(); }

private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};

Or, on *nix, for C++03 (on older glibc you may need to link with -lrt for clock_gettime):

#include <iostream>
#include <ctime>

class Timer
{
public:
    Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }

    double elapsed() {
        clock_gettime(CLOCK_REALTIME, &end_);
        return end_.tv_sec - beg_.tv_sec +
            (end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
    }

    void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }

private:
    timespec beg_, end_;
};

Here is an example usage:

int main()
{
    Timer tmr;
    double t = tmr.elapsed();
    std::cout << t << std::endl;

    tmr.reset();
    t = tmr.elapsed();
    std::cout << t << std::endl;

    return 0;
}

From https://gist.github.com/gongzhitaao/7062087

梦该遗忘
Answer 5 · 2019-01-01 12:48

(Windows-specific solution) The current (circa 2017) way to get accurate timings under Windows is to use QueryPerformanceCounter. This approach has the benefit of giving very accurate results and is recommended by Microsoft. Just drop the code below into a new console app to get a working sample. There is a lengthy discussion in Microsoft's article "Acquiring high-resolution time stamps".

#include <iostream>
#include <tchar.h>
#include <windows.h>

int main()
{
    constexpr int MAX_ITER{ 10000 };
    constexpr __int64 us_per_hour{ 3600000000ull }; // 3.6e+09
    constexpr __int64 us_per_min{ 60000000ull };
    constexpr __int64 us_per_sec{ 1000000ull };
    constexpr __int64 us_per_ms{ 1000ull };

    // easy to work with
    __int64 startTick, endTick, ticksPerSecond, totalTicks = 0ull;

    QueryPerformanceFrequency((LARGE_INTEGER *)&ticksPerSecond);

    for (int iter = 0; iter < MAX_ITER; ++iter) { // start looping
        QueryPerformanceCounter((LARGE_INTEGER *)&startTick); // get start tick
        // code to be timed
        std::cout << "cur_tick = " << iter << "\n";
        QueryPerformanceCounter((LARGE_INTEGER *)&endTick); // get end tick
        totalTicks += endTick - startTick; // accumulate time taken
    }

    // convert to elapsed microseconds
    __int64 totalMicroSeconds = (totalTicks * 1000000ull) / ticksPerSecond;

    __int64 hours = totalMicroSeconds / us_per_hour;
    totalMicroSeconds %= us_per_hour;
    __int64 minutes = totalMicroSeconds / us_per_min;
    totalMicroSeconds %= us_per_min;
    __int64 seconds = totalMicroSeconds / us_per_sec;
    totalMicroSeconds %= us_per_sec;
    __int64 milliseconds = totalMicroSeconds / us_per_ms;
    totalMicroSeconds %= us_per_ms;

    std::cout << "Total time: " << hours << "h ";
    std::cout << minutes << "m " << seconds << "s " << milliseconds << "ms ";
    std::cout << totalMicroSeconds << "us\n";

    return 0;
}
只若初见
Answer 6 · 2019-01-01 12:49
#include <boost/progress.hpp>

using namespace boost;

int main (int argc, const char * argv[])
{
  progress_timer timer;

  // do stuff, preferably in a 100x loop to make it take longer.

  return 0;
}

When the progress_timer goes out of scope, it will print out the time elapsed since its creation.

UPDATE: I made a simple standalone replacement (OSX/iOS but easy to port): https://github.com/catnapgames/TestTimerScoped
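For the flavor of the approach, a scoped timer along these lines is easy to sketch with std::chrono; the class name and output format below are illustrative, not taken from that repository:

#include <chrono>
#include <iostream>

// Illustrative RAII timer: records the start time on construction and
// prints the elapsed seconds when it goes out of scope.
class ScopedTimer
{
public:
    ScopedTimer() : start_(std::chrono::steady_clock::now()) {}
    ~ScopedTimer()
    {
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start_;
        std::cout << elapsed.count() << " s\n";
    }

private:
    std::chrono::time_point<std::chrono::steady_clock> start_;
};

int main()
{
    ScopedTimer timer;
    // do stuff; the elapsed time prints automatically when main exits
    return 0;
}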

临风纵饮
Answer 7 · 2019-01-01 12:50

Windows provides the QueryPerformanceCounter() function, and Unix has gettimeofday(). Both functions can measure a difference of at least 1 microsecond.
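For example, a minimal gettimeofday()-based measurement on Unix might look like this (the second argument, the obsolete timezone pointer, is passed as NULL):

#include <sys/time.h>
#include <iostream>

int main()
{
    timeval start, end;
    gettimeofday(&start, NULL);
    // code to be timed
    gettimeofday(&end, NULL);
    // combine the seconds and microseconds fields into one double
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1000000.0;
    std::cout << elapsed << " seconds\n";
    return 0;
}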
