I had an interesting discussion with a friend about benchmarking C/C++ code (or code in general). We wrote a simple function that uses getrusage
to measure the CPU time taken by a given piece of code (i.e. how much CPU time it took to run a specific function). Let me give you an example:
const int iterations = 409600;
double s = measureCPU();
for (int j = 0; j < iterations; j++)
    function(args);
double e = measureCPU();
std::cout << (e - s) / iterations << " s \n";
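The measureCPU used above isn't shown; a minimal sketch of what it might look like, assuming it returns user + system CPU time in seconds via getrusage, is:

```cpp
#include <sys/resource.h>

// Hypothetical measureCPU: returns the CPU time (user + system)
// consumed so far by this process, in seconds, using getrusage.
double measureCPU()
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    return user + sys;
}
```

Note that getrusage on Linux typically has a resolution of a few milliseconds (one scheduler tick), which is one reason to time many iterations and divide, rather than timing a single call.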
We argued about whether we should divide (e-s) by the number of iterations or not. I mean, when we don't divide, the result is in an acceptable form (e.g. 3.0 s), but when we do divide, it gives us results like 2.34385e-07 s ...
So here are my questions:
- should we divide (e-s) by the number of iterations, if so, why?
- how can we print 2.34385e-07 s in a more human-readable form? (let's say, as 0.000000234385 s)?
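On the second question, one common approach is to scale the value into a convenient unit (s, ms, µs, ns) before printing, so the number never ends up in scientific notation. A sketch of such a helper (humanReadable is a hypothetical name, not part of the original code):

```cpp
#include <sstream>
#include <iomanip>
#include <string>

// Hypothetical helper: formats a duration in seconds by picking a
// unit so that e.g. 2.34385e-07 s prints as "234.385 ns".
std::string humanReadable(double seconds)
{
    const char* unit = "s";
    double value = seconds;
    if (seconds < 1e-6)      { value = seconds * 1e9; unit = "ns"; }
    else if (seconds < 1e-3) { value = seconds * 1e6; unit = "us"; }
    else if (seconds < 1.0)  { value = seconds * 1e3; unit = "ms"; }

    std::ostringstream out;
    out << std::fixed << std::setprecision(3) << value << " " << unit;
    return out.str();
}
```

Alternatively, if you just want plain decimal notation in seconds, `std::cout << std::fixed << std::setprecision(9) << (e - s) / iterations` avoids the scientific-notation output without any helper.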
- should we first call the function once (outside the timing, as a warm-up) and only then measure the CPU time over the iterations, something like this:
// first function call, don't bother with it at all
function(args);

// real benchmarking
const int iterations = 409600;
double s = measureCPU();
for (int j = 0; j < iterations; j++)
    function(args);
double e = measureCPU();
std::cout << (e - s) / iterations << " s \n";