Consider the following code:

#include <bitset>

template<unsigned int N> void foo(std::bitset<N> bs)
{ /* whatever */ }

int main()
{
    std::bitset<8> bar;
    foo(bar);
    return 0;
}
g++ complains about this on 64-bit because the <8> gets interpreted as an unsigned long int, which doesn't exactly match the template. If I change the template to say unsigned long int, then 32-bit compilers complain.
Obviously one way to fix this is to change bitset<8> to bitset<8ul>, but is there any way to rewrite the template part so that it will work with whatever the default interpretation of a numeric literal is?
The problem isn't whether you write 8u or 8. The problem has to do with the type of the template parameter of your function template: its type has to match the one used in the declaration of std::bitset. That's size_t according to the Standard (section 23.3.5):
namespace std {
    template<size_t N> class bitset {
    public:
        // bit reference:
        ...
    };
}
The exception is array dimensions, for which you can use any integer type (even bool, in which case the only size that can be accepted is 1, of course):

// better size_t (non-negative), but other types work too
template<int N> void f(char(&)[N]);
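For illustration, here is a minimal sketch of that exception in action (the function name f and the buffer are just placeholders): deduction from an array bound succeeds even though N is declared as int:

// The bound of a raw array may be deduced into a non-type parameter of
// any integer type, so int is fine here even though array sizes are size_t.
template<int N> void f(char (&)[N]) { /* N is the array bound */ }

int main()
{
    char buf[4];
    f(buf); // OK: N deduced as 4 despite the int/size_t mismatch
    return 0;
}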
But in other cases, the types have to match. Note that this is only true for deduced template arguments, not for explicitly given ones. The reason is that for deduced arguments, the compiler tries to figure out the best match between the actual template arguments and what it deduced from the call, and many otherwise implicit conversions are disallowed there. You have the full range of conversions available if you give the argument explicitly (ignoring the solution of using size_t for now, to make my point):
#include <bitset>

template<int N> void foo(std::bitset<N> bs)
{ /* whatever */ }

int main()
{
    std::bitset<8> bar;
    foo<8>(bar); // no deduction, but full range of conversions
    return 0;
}
Use size_t. So sayeth the MSDN, at least.
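A minimal sketch of that fix, assuming the only goal is to make deduction succeed on both 32-bit and 64-bit targets:

#include <bitset>
#include <cstddef>

// Declaring N as std::size_t matches the parameter type in std::bitset's
// own declaration, so deduction from bitset<8> works everywhere.
template<std::size_t N> void foo(std::bitset<N> bs)
{ /* whatever */ }

int main()
{
    std::bitset<8> bar;
    foo(bar); // N deduced as 8
    return 0;
}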
A numeric literal should be interpreted as an int, no matter the platform.