Before you cringe at the duplicate title, the other question wasn't suited to what I ask here (IMO). So.
I really want to use virtual functions in my application to make things a hundred times easier (isn't that what OOP is all about? ;)). But I read somewhere that they come at a performance cost. Seeing nothing but the same old contrived hype about premature optimization, I decided to give it a quick whirl in a small benchmark test, using:
CProfiler.cpp
#include "CProfiler.h"
CProfiler::CProfiler(void (*func)(void), unsigned int iterations) {
    gettimeofday(&a, 0);
    for (; iterations > 0; iterations--) {
        func();
    }
    gettimeofday(&b, 0);
    // elapsed wall time in microseconds
    result = (b.tv_sec * (unsigned int)1e6 + b.tv_usec) - (a.tv_sec * (unsigned int)1e6 + a.tv_usec);
}
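CProfiler.h itself isn't shown in the question; a minimal version consistent with the .cpp above could look like this (the member names a, b and result come from the code, the rest is guesswork):

// CProfiler.h : minimal header consistent with CProfiler.cpp above (an assumption)
#ifndef CPROFILER_H
#define CPROFILER_H

#include <sys/time.h>   // gettimeofday, struct timeval

class CProfiler {
public:
    CProfiler(void (*func)(void), unsigned int iterations);

    timeval a, b;            // start and end timestamps
    unsigned long result;    // elapsed time in microseconds
};

#endif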
main.cpp
#include "CProfiler.h"
#include <iostream>
class CC {
protected:
    int width, height, area;
};

class VCC {
protected:
    int width, height, area;
public:
    virtual void set_area() {}
};

class CS : public CC {
public:
    void set_area() { area = width * height; }
};

class VCS : public VCC {
public:
    void set_area() { area = width * height; }
};

void profileNonVirtual() {
    CS *abc = new CS;
    abc->set_area();
    delete abc;
}

void profileVirtual() {
    VCS *abc = new VCS;
    abc->set_area();
    delete abc;
}

int main() {
    int iterations = 5000;

    CProfiler prf2(&profileNonVirtual, iterations);
    CProfiler prf(&profileVirtual, iterations);

    std::cout << prf.result;
    std::cout << "\n";
    std::cout << prf2.result;

    return 0;
}
At first I only did 100 and 10,000 iterations, and the results were worrying: 4 ms for the non-virtualised version and 250 ms for the virtualised one! I almost went "nooooooo" inside, but then I upped the iterations to around 500,000 and saw the results become almost completely identical (maybe 5% slower, without optimization flags enabled).
My question is: why was there such a significant difference with a low number of iterations compared to a high number? Was it purely because the virtual functions were hot in the cache at that many iterations?
Disclaimer
I understand that my 'profiling' code is not perfect, but, such as it is, it gives an estimate of things, which is all that matters at this level. Also, I am asking these questions to learn, not solely to optimize my application.
Extending Charles' answer.
The problem here is that your loop is doing more than just testing the virtual call itself (the memory allocation probably dwarfs the virtual call overhead anyway), so his suggestion is to change the code so that only the virtual call is tested.
Here the benchmark function is a template, because templates may be inlined, while calls through function pointers are unlikely to be.
Classes:
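(The answer's code block is not reproduced in this extract; what follows is a sketch of what such classes might look like, an assumption rather than the original code. The member function definitions would go in another translation unit, as noted under EDIT below, to prevent inlining.)

// Non-virtual and virtual variants; definitions live in a separate .cpp
// so that neither call can be inlined at the call site.
class CC {
protected:
    int width, height, area;
public:
    void set_area();                 // non-virtual call
};

class VCC {
protected:
    int width, height, area;
public:
    virtual void set_area();         // virtual call
};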
And the measuring:
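(Again a sketch, not the answer's exact code: the benchmark function is a template, and the object is created once, outside the timed loop, so only the calls themselves are measured.)

#include <sys/time.h>

// Template benchmark: times repeated calls to obj.set_area() only; the
// allocation stays outside the measured region.
template <typename T>
unsigned long profile(T& obj, unsigned int iterations) {
    timeval a, b;
    gettimeofday(&a, 0);
    for (; iterations > 0; --iterations)
        obj.set_area();
    gettimeofday(&b, 0);
    return (b.tv_sec * 1000000UL + b.tv_usec) - (a.tv_sec * 1000000UL + a.tv_usec);
}

// usage:
//   CC  c;  VCC v;
//   std::cout << profile(c, 500000) << "\n" << profile(v, 500000) << "\n";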
Note: this tests the overhead of a virtual call in comparison to a regular function call. The functionality is different (since you do not have runtime dispatch in the second case), but that also means it is a worst-case estimate of the overhead.
EDIT:
Results of the run (gcc 3.4.2, -O2, SLES10 quad-core server). Note: the function definitions are in another translation unit, to prevent inlining.
Not really convincing.
When using too few iterations, there is a lot of noise in the measurement. The gettimeofday function is not going to be accurate enough to give you good measurements for only a handful of iterations, not to mention that it records total wall time (which includes time spent while preempted by other threads).

Bottom line, though: you shouldn't come up with some ridiculously convoluted design to avoid virtual functions. They really don't add much overhead. If you have incredibly performance-critical code and you know that virtual functions make up most of the time, then perhaps it's something to worry about. In any practical application, though, virtual functions won't be what's making your application slow.
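As an aside (this is not from the answer above): if C++11 is available, std::chrono::steady_clock is an alternative to gettimeofday; it is monotonic (immune to system clock adjustments) and usually has finer resolution, although it still measures elapsed real time, including time spent preempted. A minimal sketch, keeping the allocation out of the timed region:

#include <chrono>

// Sketch: time repeated calls to obj.set_area() with a monotonic clock.
// T is any of the question's classes that has a public set_area().
template <typename T>
long long time_set_area(T& obj, unsigned int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (unsigned int i = 0; i < iterations; ++i)
        obj.set_area();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

It would be called as, for example, time_set_area(someCS, 500000) versus time_set_area(someVCS, 500000).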
In my opinion, when there was a low number of loop iterations, maybe there was no context switch; but when you increased the number of iterations, there is a very strong chance that context switches took place and dominated the readings. For example, if the first program takes 1 sec and the second program 3 secs, but the context switches take 10 secs, then the measured ratio is 13/11 instead of 3/1.
There is a performance impact to calling a virtual function, because it does slightly more than calling a regular function. However, the impact is likely to be completely negligible in a real-world application -- even smaller than it appears in even the most finely crafted benchmark.
In a real-world application, the alternative to a virtual function is usually going to involve hand-writing some similar system anyhow, because the behavior of calling a virtual function and calling a non-virtual function differs -- the former changes based on the runtime type of the invoking object. Your benchmark, even disregarding whatever flaws it has, doesn't measure equivalent behavior, only equivalent-ish syntax. If you were to institute a coding policy banning virtual functions, you'd either have to write some potentially very roundabout or confusing code (which might be slower), or re-implement the same kind of runtime dispatch system that the compiler uses to provide virtual function behavior (which is certainly going to be no faster than what the compiler does, in most cases).
I think that this kind of testing is pretty useless, in fact:

1) you are wasting time on the profiling itself by invoking gettimeofday();

2) you are not really testing virtual functions, and IMHO this is the worst thing.
Why? Because you use virtual functions to avoid writing things such as:
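(The code block from the original answer isn't reproduced here; the following is a sketch of the kind of manual type dispatch meant, with purely illustrative Shape/Rect/Circle names.)

// Manual dispatch that virtual functions are meant to replace:
// branch on a type tag, then call the right non-virtual member function.
enum ShapeType { RECT, CIRCLE };

struct Shape  { ShapeType type; int area; };
struct Rect   : Shape { int w, h; void set_area() { area = w * h; } };
struct Circle : Shape { int r;    void set_area() { area = 3 * r * r; } };

void set_area(Shape* s) {
    if (s->type == RECT)
        static_cast<Rect*>(s)->set_area();
    else if (s->type == CIRCLE)
        static_cast<Circle*>(s)->set_area();
    // ...one branch per concrete type, updated by hand whenever a type is added
}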
In this code, you are missing the "if ... else" block, so you don't really get the advantage of virtual functions. This is a scenario where they always "lose" against non-virtual ones.
To do proper profiling, I think you should add something like the code I've posted.
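(The referenced code isn't reproduced in this extract; as an assumed sketch, a fairer test would run the calls over a mixed collection, so the dispatch, whether virtual or hand-written, is actually exercised.)

#include <vector>

// Sketch of a fairer test: a heterogeneous collection of VCC-derived objects,
// so each call genuinely has to resolve the override at runtime.
// (Assumes more than one class derives from the question's VCC.)
void profileVirtualDispatch(const std::vector<VCC*>& objects, unsigned int iterations) {
    for (; iterations > 0; --iterations)
        for (std::vector<VCC*>::size_type i = 0; i < objects.size(); ++i)
            objects[i]->set_area();   // virtual dispatch picks the right set_area
}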
There could be several reasons for the difference in time.
The heap manager may influence the result, because sizeof(VCS) > sizeof(CS). What happens if you move the new/delete out of the loop?

Again, due to the size difference, the memory cache may indeed account for part of the difference in time.
BUT: you should really compare similar functionality. When you use virtual functions, you do so for a reason: calling a different member function depending on the object's identity. If you need this functionality and don't want to use virtual functions, you have to implement it manually, be it with a function table or even a switch statement. This comes at a cost too, and that's what you should compare against virtual functions.
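For illustration only (this code is not from the answer), a minimal sketch of what such a hand-rolled function table might look like, with a made-up Widget type:

// Hand-rolled dispatch through a function table: essentially what the
// compiler generates for you with virtual functions, maintained by hand.
struct Widget {
    int kind;                 // index into the table below
    int width, height, area;
};

void set_area_rect(Widget& w)   { w.area = w.width * w.height; }
void set_area_square(Widget& w) { w.area = w.width * w.width; }

// one entry per "type"; must be kept in sync with the kind values manually
void (*const set_area_table[])(Widget&) = { set_area_rect, set_area_square };

void set_area(Widget& w) {
    set_area_table[w.kind](w);   // indirect call, comparable to a vtable lookup
}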