Question:
A company's internal C++ coding standards document states that even for basic data types like int, char, etc. one should define one's own typedefs, like "typedef int Int". This is justified by the advantage of code portability. However, are there general considerations or advice about when (i.e., for which types of projects) this really makes sense? Thanks in advance.
Answer 1:
Typedefing int to Int offers almost no advantage at all (it provides no semantic benefit, and leads to absurdities like typedef long Int on other platforms to remain compatible).
However, typedefing int to e.g. int32_t (along with long to int64_t, etc.) does offer an advantage, because you are now free to choose the data type with the relevant width in a self-documenting way, and it will be portable (just switch the typedefs on a different platform).
In fact, most compilers offer a stdint.h which contains all of these definitions already.
Answer 2:
That depends. The example you cite:
typedef int Int;
is just plain dumb. It's a bit like defining a constant:
const int five = 5;
Just as there is zero chance of the variable five ever becoming a different number, the typedef Int can only possibly refer to the primitive type int.
OTOH, a typedef like this:
typedef unsigned char byte;
makes life easier on the fingers (though it has no portability benefits), and one like this:
typedef unsigned long long uint64;
is both easier to type and more portable, since, on Windows, you would write this instead (I think):
typedef unsigned __int64 uint64;
Answer 3:
Rubbish.
"Portability" is nonsense, because int is always an int. If they think they want something like an integer type that's 32 bits, then the typedef should be typedef int int32_t; because then you are naming a real invariant, and can actually ensure that this invariant holds, via the preprocessor etc.
But this is, of course, a waste of time, because you can use <cstdint>, either in C++0x, or by extensions, or use Boost's implementation of it anyway.
Answer 4:
Typedefs can help describe the semantics of the data type. For instance, if you typedef float distance_t; you're letting the developer in on how the values of distance_t will be interpreted. For instance, you might be saying that the values may never be negative. What is -1.23 kilometers? In this scenario, negative distances might just not make sense.
Of course, typedefs do not in any way constrain the domain of the values. They are just a way to make code more readable (or at least they should be), and to convey extra information.
The portability issue your workplace seems to mention arises when you want to ensure that a particular data type is always the same size, no matter what compiler is used. For instance:
#ifdef TURBO_C_COMPILER
typedef long int32;
#elif defined(MSVC_32_BIT_COMPILER)
typedef int int32;
#else
...
#endif
Answer 5:
typedef int Int is a dreadful idea... people will wonder if they're looking at C++, it's hard to type, visually distracting, and the only vaguely imaginable rationalisation for it is flawed, but let's put it out there explicitly so we can knock it down:
if one day, say, a 32-bit app is being ported to 64-bit, and there's lots of stupid code that only works for 32-bit ints, then at least the typedef can be changed to keep Int at 32 bits.
Critique: if the system is littered with code that's so badly written (i.e. not using an explicitly 32-bit type from cstdint), it's overwhelmingly likely to have other parts of the code that will now need 64-bit ints but will get stuck at 32 bits via the typedef. Code that interacts with library/system APIs using ints is likely to be given Ints, resulting in truncated handles that work until they happen to fall outside the 32-bit range, etc. The code will need a complete reexamination before being trustworthy anyway. Having this justification floating around in people's minds can only discourage them from using explicitly-sized types where they are actually useful ("what are you doing that for?" "portability?" "but Int's for portability, just use that").
That said, the coding rules might be meant to encourage typedefs for things that are logically distinct types, such as temperatures, prices, speeds, distances etc. In that case, typedefs can be vaguely useful in that they allow an easy way to recompile the program to, say, upgrade from float precision to double, downgrade from a real type to an integral one, or substitute a user-defined type with some special behaviours. It's quite handy for containers too, so that there's less work and less client impact if the container is changed, although such changes are usually a little painful anyway: the container APIs are designed to be a bit incompatible so that the important parts must be reexamined rather than compiling but not working, or silently performing dramatically worse than before.
It's essential to remember, though, that a typedef is only an "alias" to the actual underlying type; it doesn't actually create a new distinct type, so people can pass any value of that same type without getting any kind of compiler warning about type mismatches. This can be worked around with a template such as:
template <typename T, int N>   // N distinguishes otherwise-identical aliases
struct Distinct
{
    Distinct(const T& t) : t_(t) { }
    operator T&() { return t_; }
    operator const T&() const { return t_; }
    T t_;
};

typedef Distinct<float, 42> Speed;
But it's a pain to make the values of N unique... you can perhaps have a central enum listing the distinct values, or use __LINE__ if you're dealing with one translation unit and no multiple typedefs per line, or take a const char* from __FILE__ as well, but there's no particularly elegant solution I'm aware of.
(One classic article from 10 or 15 years ago demonstrated how you could create templates for types that knew of several orthogonal units, keeping counters of the current "power" in each, and adjusting the type as multiplications, divisions etc. were performed. For example, you could declare something like Meters m; Time t; Acceleration a = m / t / t; and have it check all the units were sensible at compile time.)
Is this a good idea anyway? Most people clearly consider it overkill, as almost nobody ever does it. Still, it can be useful and I have used it on several occasions where it was easy and/or particularly dangerous if values were accidentally misassigned.
Answer 6:
I suppose the main reason is portability of your code. For example, once you assume a 32-bit integer type in the program, you need to be sure that the other platform's int is also 32 bits long. A typedef in a header helps you localize such changes to one place in your code.
Answer 7:
I would like to point out that it could also be useful for people who speak a different language. Say, for instance, you speak Spanish and your code is all in Spanish; wouldn't you want a type definition in Spanish? Just something to consider.