These should probably be in different questions, but they're related, so...

Why do we need to write constexpr at all? Given a set of restrictions, couldn't a compiler evaluate code to see if it satisfies the constexpr requirements, and treat it as constexpr if it does? As a purely documentation keyword I'm not sure it holds up, because I can't think of a case where I (the user of someone else's constexpr function) should really care whether it runs at compile time or not.
Here's my logic: if it's an expensive function, then as a matter of good practice I should treat it as such regardless of whether I give it compile-time constant input or not. That might mean calling it during load time and saving off the result, instead of calling it at a critical point in the execution. The reason is that constexpr doesn't actually guarantee the function will not be executed at run time in the first place, so perhaps a new or different mechanism should do that.
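For example, here's a minimal sketch of what I mean (the square function is just a toy of mine):

```cpp
#include <cstdio>

constexpr int square(int x) { return x * x; }

int main(int argc, char**) {
    constexpr int a = square(5);  // forced compile-time evaluation
    int b = square(argc);         // perfectly legal run-time call to the same function
    std::printf("%d %d\n", a, b);
}
```

Nothing in square's declaration tells me which of those two cases I'm getting at any particular call site.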
The constexpr
restrictions seem to exclude many, if not most, functions from being compile-time evaluated which logically could be. I've read this is at least in part (or perhaps wholly?) to prevent infinite looping and hanging the compiler. But, if this is the reason, is it legitimate?
Shouldn't a compiler be able to compute if, for any given constexpr
function with the given inputs used, it loops infinitely? This is not solving the halting problem for any input. The input to a constexpr
function is compile time constant and finite, so the compiler only has to check for infinite looping for a finite set of input: the input actually used. It should be a regular compilation error if you write a compile-time infinite loop.
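In fact, when evaluation is forced, that already seems to be roughly what compilers do as far as I can tell: they give up with an error after an implementation-defined limit rather than hang (C++14 syntax, toy function of mine):

```cpp
// C++14 or later: loops are allowed inside constexpr functions
constexpr int countDown(int n) {
    while (n != 0) --n;   // never terminates if n starts out negative
    return n;
}

constexpr int a = countDown(3);     // fine: evaluates to 0 at compile time
// constexpr int b = countDown(-1); // rejected: the evaluation exceeds the compiler's
//                                  // step/iteration limit instead of hanging it
```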
I asked a very similar question, Why do we need to mark functions as constexpr?
When I pressed Richard Smith, a Clang author, he explained:
The constexpr keyword does have utility.
It affects when a function template specialization is instantiated (constexpr function template specializations may need to be instantiated if they're called in unevaluated contexts; the same is not true for non-constexpr functions since a call to one can never be part of a constant expression). If we removed the meaning of the keyword, we'd have to instantiate a bunch more specializations early, just in case the call happens to be a constant expression.
It reduces compilation time, by limiting the set of function calls that implementations are required to try evaluating during translation. (This matters for contexts where implementations are required to try constant expression evaluation, but it's not an error if such evaluation fails -- in particular, the initializers of objects of static storage duration.)
This all didn't seem convincing at first, but if you work through the details, things do unravel without constexpr. A function need not be instantiated until it is ODR-used, which essentially means used at runtime. What is special about constexpr functions is that they can violate this rule and require instantiation anyway.
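Here's a small sketch of that difference (the template names are made up for illustration):

```cpp
template <class T>
constexpr T twice(T x) { return x + x; }

template <class T>
T thrice(T x) { return x + x + x; }

int a[twice(3)];      // twice<int>'s definition must be instantiated during translation,
                      // just to evaluate the array bound
// int b[thrice(3)];  // ill-formed: a call to a non-constexpr function can never be a
                      // constant expression, so there is no reason to instantiate thrice<int> early
```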
Function instantiation is a recursive procedure. Instantiating a function results in instantiation of the functions and classes it uses, regardless of the arguments to any particular call.
If something went wrong while instantiating this dependency tree (potentially at significant expense), it would be difficult to swallow the error. Furthermore, class template instantiation can have runtime side-effects.
Given an argument-dependent compile-time function call in a function signature, overload resolution may trigger instantiation of function definitions that are merely auxiliary to the ones in the overload set, including functions that don't even get called. Such instantiations may have side effects, including ill-formedness and runtime behavior.

It's a corner case to be sure, but bad things can happen if you don't require people to opt in to constexpr functions.
As for constexpr objects, certain types can produce core constant expressions which are usable in constant-expression contexts without having been declared constexpr. But you don't really want the compiler to try evaluating every single expression at compile time; that's what constant propagation is for. On the other hand, it seems pretty essential to document when something needs to happen at compile time.
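The classic case of such an object is a const integral variable; a quick sketch (names are mine):

```cpp
const int n = 10;                  // not declared constexpr
int buf[n];                        // OK: a const integral object initialized with a constant
                                   // expression is usable in constant expressions anyway

const double pi = 3.14;
// constexpr double tau = 2 * pi;  // error: const (non-constexpr) floating-point objects
                                   // don't get that treatment
constexpr double pi2 = 3.14;       // here the constexpr declaration both documents and
constexpr double tau = 2 * pi2;    // guarantees the compile-time value
```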
[Note, I totally changed my answer]
To answer your second question, there are two cases for the compiler here:

Case 1: the compiler has to be able to handle any arbitrary constexpr function(s). Here you still have the halting problem, because the set of inputs is all combinations of constexpr functions and calls to them.

Case 2: the compiler only has to handle a finite set of constexpr function(s). Here the compiler can in fact determine whether some programs will result in infinite loops, while other programs will simply be uncompilable (since they aren't in the set of valid inputs).

So presumably the restrictions are in place so that case 2 can be satisfied with a reasonable amount of compiler effort.
There are both technical and ideological reasons behind this decision.

First, we don't always want constexpr by default: it can take too much compile time. Just imagine you implemented an isPrime function and you have 100 calls with big constexpr values passed in. In most cases you don't want the compiler to spend a couple of extra minutes compiling because it decided on its own that you need those values at compile time. But if that is exactly what you want, you opt in by writing the constexpr modifier yourself, as in the sketch just below.
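To make the trade-off concrete, here is a rough sketch (the isPrime name is from above; the implementation is my own toy, C++14 or later):

```cpp
// Toy trial-division primality test
constexpr bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

bool a = isPrime(1000000007LL);            // the compiler may fold this, but nothing forces it to
constexpr bool b = isPrime(1000000007LL);  // explicit opt-in: you ask for, and pay for,
                                           // compile-time evaluation
```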
And this adds the next point: backward compatibility. It's unwise to assume that every author of a C++98 program who converted it to C++11 wants constexpr.
The second point is that deciding whether a function can be constexpr would itself take compile time. If the compiler tried to do that for every possible function, it would add overhead. Even more, the compiler often couldn't decide whether a given function can be constexpr at all, so your first assumption is not correct.