Why does .NET use int instead of uint in certain classes?

Posted 2019-01-02 21:26

Question:

I always come across code that uses int for things like .Count, etc, even in the framework classes, instead of uint.

What's the reason for this?

Answer 1:

UInt32 is not CLS compliant so it might not be available in all languages that target the Common Language Specification. Int32 is CLS compliant and therefore is guaranteed to exist in all languages.



Answer 2:

int, in C, is defined to have the natural size suggested by the architecture of the execution environment, and is therefore generally the fastest type for ordinary integer arithmetic.



Answer 3:

Another reason for using int:

Say you have a for-loop like this:

for (int i = 0; i <= someList.Count - 1; i++) {
  // Stuff
}

(though you probably shouldn't do it that way)

Obviously, if someList.Count were 0 and unsigned, someList.Count - 1 would wrap around to the maximum value of the unsigned type, and the loop would run far past the end of the list.



Answer 4:

If the number is truly unsigned by its intrinsic nature then I would declare it an unsigned int. However, if I just happen to be using a number (for the time being) in the positive range then I would call it an int.

The main reasons being that:

  • It avoids having to do a lot of type-casting as most methods/functions are written to take an int and not an unsigned int.
  • It eliminates possible truncation warnings.
  • You invariably end up wishing you could assign a negative value to the number that you had originally thought would always be positive.

Those are just a few quick thoughts that came to mind.

I used to try to be very careful about choosing the proper signed/unsigned type, but I finally realized that it doesn't really produce a positive benefit; it just creates extra work. So why make things hard by mixing and matching?



Answer 5:

UInt32 isn't CLS-Compliant. http://msdn.microsoft.com/en-us/library/system.uint32.aspx

I think that over the years people have come to the conclusions that using unsigned types doesn't really offer that much benefit. The better question is what would you gain by making Count a UInt32?



Answer 6:

Unsigned types only behave like whole numbers if the sum or product of a signed and an unsigned value is a signed type large enough to hold either operand, and if the difference between two unsigned values is a signed type large enough to hold any result. Thus, code which makes significant use of UInt32 will frequently need to compute values as Int64.

Operations on signed integer types may fail to behave like whole numbers when the operands are very large, but they behave sensibly when the operands are small. Operations on unpromoted unsigned operands pose problems even when the operands are small. Given UInt32 x, for example, the inequality x - 1 < x will fail for x == 0 if the result type is UInt32, and the inequality x <= 0 || x - 1 >= 0 will fail for large x values if the result type is Int32. Only if the operation is performed on type Int64 can both inequalities be upheld.

While it is sometimes useful to define unsigned-type behavior in ways that differ from whole-number arithmetic, values which represent things like counts should generally use types that will behave like whole numbers--something unsigned types generally don't do unless they're smaller than the basic integer type.



Answer 7:

Some things use int so that they can return -1 as if it were "null" or something like that. For example, a ComboBox returns -1 for its SelectedIndex when no item is selected.



Answer 8:

Some old libraries, and even functions like InStr, use negative numbers to mean special cases. I believe it's either laziness or a genuine need for negative special values.


