I want to measure the speed of a function inside a loop. But why does my way of doing it always print "0" instead of a high-resolution time with 9 decimal digits of precision (i.e. in nano/microseconds)?
What's the correct way to do it?
#include <iomanip>
#include <iostream>
#include <ctime>

int main() {
    for (int i = 0; i < 100; i++) {
        std::clock_t startTime = std::clock();
        // a very fast function in the middle
        std::cout << "Time: " << std::setprecision(9)
                  << (std::clock() - startTime + 0.0) / CLOCKS_PER_SEC << std::endl;
    }
    return 0;
}
See a question I asked about the same thing: apparently clock()'s resolution is not guaranteed to be that high: C++ obtaining milliseconds time on Linux -- clock() doesn't seem to work properly
Try the gettimeofday function (a sketch follows below), or Boost.

You might want to look into using OpenMP.
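A minimal sketch of the gettimeofday approach, assuming a POSIX system (microsecond resolution; the function being timed is only a placeholder comment):

#include <sys/time.h>
#include <iomanip>
#include <iostream>

int main() {
    timeval start, end;
    gettimeofday(&start, nullptr);   // wall-clock time, microsecond resolution
    // the very fast function under test goes here
    gettimeofday(&end, nullptr);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    std::cout << "Time: " << std::setprecision(9) << elapsed << " s" << std::endl;
    return 0;
}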
If you need platform independence, you need to use something like ACE_High_Res_Timer (http://www.dre.vanderbilt.edu/Doxygen/5.6.8/html/ace/a00244.html)
A few pointers:

Move your time-measurement calls outside the for () { .. } statement, then divide the total execution time by the number of iterations in your testing loop (see the sketch after this list).

Note: std::clock() lacks sufficient precision for profiling. Reference.
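For example, a sketch of that averaging approach, still using std::clock() (the iteration count and the timed function are placeholders):

#include <ctime>
#include <iomanip>
#include <iostream>

int main() {
    const int iterations = 1000000;          // arbitrary; make it large enough to be measurable
    std::clock_t startTime = std::clock();
    for (int i = 0; i < iterations; i++) {
        // the very fast function under test goes here
    }
    double total = static_cast<double>(std::clock() - startTime) / CLOCKS_PER_SEC;
    std::cout << "Average time per call: " << std::setprecision(9)
              << total / iterations << " s" << std::endl;
    return 0;
}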
If you need higher resolution, the only way to get it is platform-dependent.
On Windows, check out the QueryPerformanceCounter/QueryPerformanceFrequency APIs.
On Linux, look up clock_gettime().
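A minimal sketch of the clock_gettime() approach on Linux (nanosecond resolution; older glibc versions may need linking with -lrt; the timed function is a placeholder):

#include <time.h>
#include <iomanip>
#include <iostream>

int main() {
    timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);  // monotonic clock, nanosecond resolution
    // the very fast function under test goes here
    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    std::cout << "Time: " << std::setprecision(9) << elapsed << " s" << std::endl;
    return 0;
}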