C++20 introduced the std::ssize() free function, declared as follows:

    template <class C>
    constexpr auto ssize(const C& c)
        -> std::common_type_t<std::ptrdiff_t,
                              std::make_signed_t<decltype(c.size())>>;
A possible implementation would simply use static_cast to convert the return value of the size() member function of class C into its signed counterpart.

Since the size() member function of C always returns a non-negative value, why would anyone want to store it in a signed variable? And if one really wanted to, it would be a matter of a simple static_cast, as in the sketch below.
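Here is a minimal sketch of such an implementation (the name my_ssize is mine; only the signature is taken from the standard declaration above):

```cpp
#include <cstddef>     // std::ptrdiff_t
#include <type_traits> // std::common_type_t, std::make_signed_t

// A possible implementation: cast the unsigned result of c.size()
// to a signed type at least as wide as std::ptrdiff_t.
template <class C>
constexpr auto my_ssize(const C& c)
    -> std::common_type_t<std::ptrdiff_t,
                          std::make_signed_t<decltype(c.size())>>
{
    using R = std::common_type_t<std::ptrdiff_t,
                                 std::make_signed_t<decltype(c.size())>>;
    // size() is non-negative and fits in R on any reasonable
    // implementation, so the cast is value-preserving.
    return static_cast<R>(c.size());
}
```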
Why was std::ssize() introduced in C++20?
The rationale is described in the paper P1227. A quote:
When span was adopted into C++17, it used a signed integer both as an index and a size. Partly this was to allow for the use of "-1" as a sentinel value to indicate a type whose size was not known at compile time. But having an STL container whose size() function returned a signed value was problematic, so P1089 was introduced to "fix" the problem. It received majority support, but not the 2-to-1 margin needed for consensus.
This paper, P1227, was a proposal to add non-member std::ssize and member ssize() functions. The inclusion of these would make certain code much more straightforward and allow for the avoidance of unwanted unsigned-ness in size computations. The idea was that the resistance to P1089 would decrease if ssize() were made available for all containers, both through std::ssize() and as member functions.
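To illustrate the "unwanted unsigned-ness in size computations" the paper refers to, consider a backwards index loop (my example, not the paper's): it is correct with the signed std::ssize() but subtly broken with the unsigned size().

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;  // empty on purpose

    // Signed index from std::ssize(): for an empty vector the initial
    // index is -1, so i >= 0 is false and the body is skipped.
    for (auto i = std::ssize(v) - 1; i >= 0; --i)
        std::printf("%d\n", v[i]);

    // Unsigned index from size(): v.size() - 1 wraps around to a huge
    // value for an empty vector, and i >= 0 is always true anyway.
    // for (auto i = v.size() - 1; i >= 0; --i)  // bug: out-of-bounds reads
    //     std::printf("%d\n", v[i]);
}
```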
Gratuitously stolen from Eric Niebler:
'Unsigned types signal that a negative index/size is not sane' was the prevailing wisdom when the STL was first designed. But logically,
a count of things need not be positive. I may want to keep a count in
a signed integer to denote the number of elements either added to or
removed from a collection. Then I would want to combine that with the
size of the collection. If the size of the collection is unsigned, now
I'm forced to mix signed and unsigned arithmetic, which is a bug farm.
Compilers warn about this, but because the design of the STL pretty
much forces programmers into this situation, the warning is so common
that most people turn it off. That's a shame because this hides real
bugs.
Use of unsigned ints in interfaces isn't the boon many people think it
is. If by accident a user passes a slightly negative number to the
API, it suddenly becomes a huge positive number. Had the API taken the
number as signed, then it can detect the situation by asserting the
number is greater than or equal to zero.
If we restrict our use of unsigned ints to bit twiddling (e.g., masks)
and use signed ints everywhere else, bugs are less likely to occur,
and easier to detect when they do occur.
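A concrete sketch of that "bug farm" (my illustration, not Niebler's): combining a signed count with the unsigned result of size() makes the subtraction wrap around, whereas std::ssize() keeps the whole computation signed and checkable.

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    int removed = 5;  // e.g. elements scheduled for removal

    // Mixed signed/unsigned arithmetic: `removed` is converted to
    // std::size_t, so 3 - 5 wraps instead of yielding -2.
    auto mixed = v.size() - removed;
    std::printf("%zu\n", mixed);  // 18446744073709551614 with 64-bit size_t

    // All-signed arithmetic via std::ssize() (C++20): the result is
    // the expected -2, and a sanity check like `result >= 0` works.
    auto wanted = std::ssize(v) - removed;
    std::printf("%td\n", wanted);  // -2
}
```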