I understand the mechanics of static polymorphism using the Curiously Recurring Template Pattern. I just do not understand what it is good for.
The declared motivation is:
We sacrifice some flexibility of dynamic polymorphism for speed.
But why bother with something so complicated like:
```cpp
template <class Derived>
class Base
{
public:
    void interface()
    {
        // ...
        static_cast<Derived*>(this)->implementation();
        // ...
    }
};

class Derived : public Base<Derived>
{
    // Base needs access to the private customization point.
    friend class Base<Derived>;

private:
    void implementation();
};
```
When you can just do:
```cpp
class Base
{
public:
    void interface();
};

class Derived : public Base
{
public:
    void interface();
};
```
My best guess is that there is no semantic difference in the code and that it is just a matter of good C++ style.
Herb Sutter wrote in Exceptional C++ Style, Chapter 18, that:

Prefer to make virtual functions private.

This is accompanied, of course, by a thorough explanation of why it is good style.
In the context of this guideline the first example is good, because:
The void implementation() function in the example can pretend to be virtual, since it is there to perform customization of the class. It therefore should be private.
And the second example is bad, since:
We should not meddle with the public interface to perform customization.
My question is:
- What am I missing about static polymorphism? Is it all about good C++ style?
- When should it be used? What are some guidelines?
The link you provide mentions Boost iterators as an example of static polymorphism. STL iterators also exhibit this pattern. Let's take a look at an example and consider why the authors of those types decided this pattern was appropriate:
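For concreteness, here is a small usage sketch (the variable names are made up for illustration):

```cpp
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};
    std::vector<int>::const_iterator it = v.begin();
    int x = *it;  // operator* yields the element with its real type (int), not an erased handle
    return x;
}
```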
Now, how would we implement

```cpp
int vector<int>::const_iterator::operator*() const;
```

Can we use polymorphism for this? Well, no. What would the signature of our virtual function be? void const* operator*() const? That's useless! The type has been erased (degraded from int to void*). Instead, the curiously recurring template pattern steps in to help us generate the iterator type; traditional dynamic polymorphism could not provide the implementation below. Here is a rough approximation of the iterator class we would need to implement the above:
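(A minimal sketch, loosely in the spirit of Boost's iterator_facade; the names iterator_base, dereference and increment are illustrative, not the real library implementation.)

```cpp
#include <vector>

// CRTP base: supplies the iterator interface in terms of the derived type,
// so operator* can return the real element type instead of an erased void*.
template <class Derived, class Value>
class iterator_base
{
public:
    const Value& operator*() const
    {
        // Resolved at compile time: no virtual call, no type erasure.
        return static_cast<const Derived*>(this)->dereference();
    }

    Derived& operator++()
    {
        static_cast<Derived*>(this)->increment();
        return static_cast<Derived&>(*this);
    }
};

// Rough approximation of vector<int>::const_iterator.
class const_iterator : public iterator_base<const_iterator, int>
{
public:
    explicit const_iterator(const int* p) : p_(p) {}

private:
    friend class iterator_base<const_iterator, int>;
    const int& dereference() const { return *p_; }
    void increment() { ++p_; }

    const int* p_;
};

int main()
{
    std::vector<int> v{10, 20, 30};
    const_iterator it(v.data());
    int first = *it;      // const int&, the real element type, via the CRTP base
    ++it;
    return first + *it;   // 10 + 20
}
```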
A related and important term is parametric polymorphism. It allows you to implement, in, say, Python, APIs similar to those you can build with the curiously recurring template pattern in C++. Hope this is helpful!
I think it's worth taking a stab at the source of all this complexity, and why languages like Java and C# mostly try to avoid it: type erasure! In C++ there is no useful all-containing Object type with useful information. Instead we have void*, and once you have void* you truly have nothing! If you have an interface that decays to void*, the only way to recover is by making dangerous assumptions or keeping extra type information around.

Static polymorphism and runtime polymorphism are different things and accomplish different goals. They are both technically polymorphism, in that they decide which piece of code to execute based on the type of something. Runtime polymorphism defers binding the type of something (and thus the code that runs) until runtime, while static polymorphism is completely resolved at compile time.
This results in pros and cons for each. For instance, static polymorphism can check assumptions at compile time, or select among options which would not compile otherwise. It also provides tons of information to the compiler and optimizer, which can inline knowing fully the target of calls and other information. But static polymorphism requires that implementations be available for the compiler to inspect in each translation unit, can result in binary code size bloat (templates are fancy-pants copy and paste), and doesn't allow these determinations to occur at runtime.
For instance, consider something like std::advance, which has to behave differently depending on the iterator category it is given. There's no way to get this to compile using runtime polymorphism; you have to make the decision at compile time (typically you would do this with tag dispatch, e.g.).
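A minimal sketch of that kind of tag dispatch (my_advance and my_advance_impl are made-up names; this is not the actual standard library implementation):

```cpp
#include <cstddef>
#include <iterator>
#include <list>
#include <vector>

// Random access iterators can jump in O(1).
template <class It>
void my_advance_impl(It& it, std::ptrdiff_t n, std::random_access_iterator_tag)
{
    it += n;
}

// Weaker iterators have to step one element at a time (negative n omitted for brevity).
template <class It>
void my_advance_impl(It& it, std::ptrdiff_t n, std::input_iterator_tag)
{
    for (; n > 0; --n) ++it;
}

// The overload is chosen at compile time from the iterator category; a single
// virtual function could not offer both the += body and the ++-only body.
template <class It>
void my_advance(It& it, std::ptrdiff_t n)
{
    my_advance_impl(it, n, typename std::iterator_traits<It>::iterator_category{});
}

int main()
{
    std::vector<int> v{1, 2, 3, 4};
    std::list<int>   l{1, 2, 3, 4};
    auto vi = v.begin();
    auto li = l.begin();
    my_advance(vi, 2);  // picks the += overload
    my_advance(li, 2);  // picks the step-by-step overload
    return *vi + *li;   // 3 + 3
}
```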
Similarly, there are cases where you really don't know the type at compile time. Consider:
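A sketch of the kind of function meant here, assuming DoAndLog takes a std::ostream& that is bound at runtime (the body is illustrative):

```cpp
#include <ostream>

// The concrete stream (std::cout, an std::ofstream, an std::ostringstream, ...)
// is only known at runtime; DoAndLog just sees the std::ostream interface.
void DoAndLog(std::ostream& out, int parameter)
{
    out << "Doing work with parameter " << parameter << '\n';
    // ... the actual work ...
}
```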
Here, DoAndLog doesn't know anything about the actual ostream implementation it gets -- and it may be impossible to statically determine what type will be passed in. Sure, this can be turned into a template:
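A sketch of that template version, with StreamT as the compile-time stream parameter (again, the body is illustrative):

```cpp
// Now the stream type is a template parameter instead of a runtime-bound std::ostream&.
template <class StreamT>
void DoAndLog(StreamT& out, int parameter)
{
    out << "Doing work with parameter " << parameter << '\n';
    // ... the actual work ...
}
```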
But this forces DoAndLog to be implemented in a header file, which may be impractical. It also requires that all possible implementations of StreamT are visible at compile time, which may not be true -- runtime polymorphism can work (although this is not recommended) across DLL or SO boundaries.

This is like someone coming to you and saying "when I'm writing a sentence, should I use compound sentences or simple sentences?" Or perhaps a painter saying "should I always use red paint or blue paint?" There is no right answer, and there is no set of rules that can be blindly followed here. You have to look at the pros and cons of each approach, and decide which best maps to your particular problem domain.
As for the CRTP, most use cases for that are to allow the base class to provide something in terms of the derived class; e.g. Boost's iterator_facade. The base class needs to have things like DerivedClass operator++() { /* Increment and return *this */ } inside -- specified in terms of the derived class in the member function signatures. It can be used for polymorphic purposes, but I haven't seen too many of those.
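A tiny sketch of that shape, with made-up names (incrementable, Counter) rather than Boost's actual iterator_facade:

```cpp
// CRTP base: operator++ is declared to return the *derived* type, which is
// only possible because the base knows Derived at compile time.
template <class Derived>
class incrementable
{
public:
    Derived& operator++()
    {
        auto& self = static_cast<Derived&>(*this);
        self.increment();            // customization point supplied by Derived
        return self;
    }
};

class Counter : public incrementable<Counter>
{
public:
    int  value() const { return value_; }
    void increment() { ++value_; }   // public here for simplicity

private:
    int value_ = 0;
};

int main()
{
    Counter c;
    ++(++c);            // each ++ returns Counter&, not incrementable<Counter>&
    return c.value();   // 2
}
```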
While there may be cases where static polymorphism is useful (the other answers have listed a few), I would generally see it as a bad thing. Why? Because you cannot actually use a pointer to the base class anymore; you always have to provide a template argument specifying the exact derived type. And in that case, you could just as well use the derived type directly. And, to put it bluntly, static polymorphism is not what object orientation is about.
The runtime difference between static and dynamic polymorphism is exactly two pointer dereferences (provided the compiler really inlines the dispatch method in the base class; if it doesn't for some reason, static polymorphism is slower). That's not really expensive, especially since the second lookup should virtually always hit the cache. All in all, those lookups are usually cheaper than the function call itself, and are certainly worth it to get the real flexibility provided by dynamic polymorphism.