In C#, int and Int32 are the same thing, but I've read a number of times that int is preferred over Int32 with no reason given. Is there a reason, and should I care?
It doesn't matter. int is the language keyword and Int32 is its actual system type.
See also my answer here to a related question.
Though they are (mostly) identical (see below for the one small difference), you definitely should care, and you should use Int32.
The name for a 16-bit integer is Int16. For a 64-bit integer it's Int64, and for a 32-bit integer the intuitive choice is: int or Int32?
The question of the size of a variable of type Int16, Int32, or Int64 answers itself, but the question of the size of a variable of type int is a perfectly valid one, and questions, no matter how trivial, are distracting, lead to confusion, waste time, hinder discussion, etc. (the fact that this question exists proves the point).
Using Int32 promotes awareness that the developer is making a choice of type. How big is an int again? Oh yeah, 32. The likelihood that the size of the type will actually be considered is greater when the size is included in the name. Using Int32 also promotes knowledge of the other choices. When people aren't forced to at least recognize that there are alternatives, it becomes far too easy for int to become "THE integer type".
The class within the framework intended to interact with 32-bit integers is named Int32. Once again, which is more intuitive, less confusing, and free of an (unnecessary) translation (not a translation in the system, but a translation in the mind of the developer):

int lMax = Int32.MaxValue

or

Int32 lMax = Int32.MaxValue

?

int isn't a keyword in all .NET languages.
Although there are arguments why it's not likely to ever change, int may not always be an Int32.
The drawbacks are two extra characters to type and one compiler quirk: an enum's underlying type can only be specified with the language keyword, not the framework type name.
This won't compile (a minimal sketch of the quirk; MyEnum and AEnum are placeholder names):
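    // Fails with error CS1008 ("Type byte, sbyte, short, ushort, int, uint,
    // long, or ulong expected"): the underlying-type position accepts only
    // the language keywords.
    public enum MyEnum : Int32
    {
        AEnum = 0
    }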
But this will:
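    // Compiles: the keyword form is accepted as an enum's underlying type.
    public enum MyEnum : int
    {
        AEnum = 0
    }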
int is the C# language's shortcut for System.Int32. Whilst this does mean that Microsoft could change this mapping, a post on FogCreek's discussions stated [source]:
"On the 64 bit issue -- Microsoft is indeed working on a 64-bit version of the .NET Framework but I'm pretty sure int will NOT map to 64 bit on that system.
Reasons:
1. The C# ECMA standard specifically says that int is 32 bit and long is 64 bit.
2. Microsoft introduced additional properties & methods in Framework version 1.1 that return long values instead of int values, such as Array.GetLongLength in addition to Array.GetLength.
So I think it's safe to say that all built-in C# types will keep their current mapping."
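A quick check (a minimal sketch, not part of the quoted post) shows that int and System.Int32 are the same CLR type, and that int stays 32 bits even in a 64-bit process:

    using System;

    class AliasCheck
    {
        static void Main()
        {
            Console.WriteLine(typeof(int) == typeof(Int32)); // True: one and the same CLR type
            Console.WriteLine(sizeof(int));                  // 4 (bytes), on 32-bit and 64-bit alike
            Console.WriteLine(IntPtr.Size);                  // 8 in a 64-bit process; int does not follow
        }
    }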
Byte size for types is not too interesting when you only have to deal with a single language (and for code where you don't have to remind yourself about math overflows). The part that becomes interesting is when you bridge from one language to another, from C# to a COM object, etc., or you're doing some bit-shifting or masking and you need to remind yourself (and your code-review co-workers) of the size of the data.
In practice, I usually use Int32 just to remind myself what size it is, because I write managed C++ (to bridge to C#, for example) as well as unmanaged/native C++.
long, as you probably know, is 64 bits in C#, but in native C++ it ends up as 32 bits, and char is Unicode/16 bits in C# while in C++ it is 8 bits. But how do we know this? The answer is: because we've looked it up in the manual and it said so.
With time and experience, you will start to be more type-conscious when you write code to bridge between C# and other languages (some readers here are thinking "why would you?"), but IMHO it is the better practice because I cannot remember what I coded last week (or I don't have to specify in my API document that "this parameter is a 32-bit integer").
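As a sketch of what that looks like in practice (the native library and function names below are hypothetical), spelling out Int32/Int64 in a P/Invoke signature makes the marshaled widths visible at a glance:

    using System;
    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // Assumed C declaration: int32_t Add32(int32_t a, int32_t b);
        [DllImport("mylib")] // hypothetical native library
        public static extern Int32 Add32(Int32 a, Int32 b);

        // Assumed C declaration: int64_t Add64(int64_t a, int64_t b);
        [DllImport("mylib")]
        public static extern Int64 Add64(Int64 a, Int64 b);
    }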
In F# (although I've never used it), they define int, int32, and nativeint. The same question arises: "which one do I use?". As others have mentioned, in most cases it should not matter (it should be transparent). But I for one would choose int32 and uint32 just to remove the ambiguity.
I guess it just depends on what applications you are coding, who is using them, what coding practices you and your team follow, etc., to justify when to use Int32.
I'd recommend using Microsoft's StyleCop.
It is like FxCop, but for style-related issues. The default configuration matches Microsoft's internal style guides, but it can be customised for your project.
It can take a bit to get used to, but it definitely makes your code nicer.
You can include it in your build process to automatically check for violations.
Using the Int32 type requires a namespace reference to System, or fully qualifying it (System.Int32). I tend toward int, because it doesn't require a namespace import, therefore reducing the chance of namespace collision in some cases. When compiled to IL, there is no difference between the two.
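A minimal sketch of the import difference:

    int a = 0;          // the keyword alias needs no using directive
    System.Int32 b = 0; // the framework name, fully qualified
    // Int32 c = 0;     // compiles only when "using System;" is in scope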