Why is the row property of NSIndexPath a signed integer?
Could it ever take on a "valid" negative value?
I hadn't thought about this until today, when I set LLVM to check for sign comparison. This made the compiler spew out warnings whenever there was indexPath.row <= [someArray count] or similar.
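For illustration, here is a minimal sketch of the kind of comparison that trips Clang's -Wsign-compare warning, together with the explicit cast that silences it (someArray is just a placeholder for whatever array backs the table):

// indexPath.row is a signed NSInteger; -count returns an unsigned NSUInteger.
// Comparing the two directly triggers -Wsign-compare under LLVM.
if (indexPath.row <= (NSInteger)[someArray count]) {
    // Casting the count to NSInteger makes both operands signed,
    // which is safe as long as the count fits in NSInteger.
}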
What happens if you use negative numbers?
It isn't wise to use negative values; if you do, you'll get crazy results:
NSIndexPath* path = [NSIndexPath indexPathForRow:-2 inSection:0];
The above results in a section of 0 and a row of 4294967294 (which looks like underflow of an NSUInteger to me!); a way to observe this directly is sketched below. Be safe in the knowledge that this only occurs within the UIKit Additions category, and not within NSIndexPath itself. Looking at the concept behind NSIndexPath, it really doesn't make sense for it to hold negative values. So why?
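One way to see the wrapped value is through Foundation's indexAtPosition:, which returns the raw NSUInteger index (the logged number assumes a 32-bit NSUInteger, as on the devices of that era):

#import <UIKit/UIKit.h>

NSIndexPath *path = [NSIndexPath indexPathForRow:-2 inSection:0];
// Position 1 holds the row index. Stored as an NSUInteger, the -2 has
// wrapped around to NSUIntegerMax - 1.
// On a 32-bit platform this logs: row index: 4294967294
NSLog(@"row index: %lu", (unsigned long)[path indexAtPosition:1]);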
(Possible) Reason why it is so
The core NSIndexPath object from OS X uses NSUIntegers for its indices, but the UIKit addition uses NSInteger. The category only builds on top of the core object, and the use of NSInteger over NSUInteger doesn't provide any extra capabilities.
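The contrast is visible in the declarations themselves. Roughly, paraphrasing the Foundation and UIKit headers (exact attributes vary by SDK version):

// Foundation's core NSIndexPath API is unsigned:
+ (instancetype)indexPathWithIndex:(NSUInteger)index;
- (NSUInteger)indexAtPosition:(NSUInteger)position;

// The UIKit additions for table views are signed:
+ (NSIndexPath *)indexPathForRow:(NSInteger)row inSection:(NSInteger)section;
@property (nonatomic, readonly) NSInteger row;
@property (nonatomic, readonly) NSInteger section;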
Why it works this way, I have no idea. My guess (and I stipulate that it is a guess) is that it was a naive API slip-up when first launching iOS. When UITableView was released in iOS 2, it used NSIntegers for a variety of things (such as numberOfSections). Think about it: this conceptually doesn't make sense, since you can't have a negative number of sections. Even in iOS 6 it still uses NSInteger, so as not to break compatibility with existing table view code.
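For reference, the relevant UITableViewDataSource methods really are declared with signed types:

// From the UITableViewDataSource protocol:
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView;
- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section;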
Alongside UITableView, we have the additions to NSIndexPath, which are used in conjunction with the table view for accessing its rows and such. Because they have to work together, they need compatible types (in this case NSInteger).
To change the type to NSUInteger across the board would break a lot of things, and for safe API design everything would need to be renamed so that the NSInteger and NSUInteger counterparts could work safely side by side. Apple probably doesn't want this hassle (and neither do developers!), and as such they have kept it NSInteger.
One possible reason is that unsigned types underflow very easily. As an example, I had an NSUInteger variable for stroke width in my code. I needed to create an “envelope” around a point painted with this stroke, hence this code:
NSUInteger width = 3;
CGRect envelope = CGRectInset(CGRectZero, -width, -width);
NSLog(@"%@", NSStringFromCGRect(envelope));
With an unsigned type this outputs {{inf, inf}, {0, 0}}; with a signed integer you get {{-3, -3}, {6, 6}}. The reason is that the unary minus before the width variable causes an underflow. This might be obvious to some, but it will surprise a lot of programmers (the values below assume a 32-bit NSUInteger):
NSUInteger a = -1;
NSUInteger b = 1;
NSLog(@"a: %u, b: %u", a, -b); // a: 4294967295, b: 4294967295
So even in situations where a negative value makes no sense (a stroke width can't be negative), it can make sense to use the value in a negative context, causing an underflow. Switching to a signed type leads to fewer surprises, while still keeping the range reasonably high. Sounds like a nice compromise.
I think the UIKit additions on NSIndexPath use the NSInteger type intentionally. If a negative row were somehow passed as a parameter to any method (I see no way that could happen at the moment, though...), the implicit conversion to a huge unsigned value (the negative parameter plus NSUIntegerMax + 1) would not happen, and the receiving object would not go looking for a ridiculously large row that does not exist. Still, there are other ways to prevent this, so it might just be a matter of taste.
I, for example, would not take NSUInteger parameters in the NSIndexPath class, but rather NSInteger, check the sign, and not create the NSIndexPath at all if any parameter were negative.
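A minimal sketch of that defensive approach, assuming a hypothetical category method validIndexPathForRow:inSection: (the name and the category are illustrative, not an actual API):

#import <UIKit/UIKit.h>

@interface NSIndexPath (SafeCreation)
// Hypothetical helper: returns nil instead of building an index path
// from a negative (and thus wrap-prone) row or section.
+ (NSIndexPath *)validIndexPathForRow:(NSInteger)row inSection:(NSInteger)section;
@end

@implementation NSIndexPath (SafeCreation)
+ (NSIndexPath *)validIndexPathForRow:(NSInteger)row inSection:(NSInteger)section {
    if (row < 0 || section < 0) {
        return nil; // refuse negative input outright
    }
    return [NSIndexPath indexPathForRow:row inSection:section];
}
@end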