I was messing around with storing floats and doubles using NSUserDefaults for use in an iPhone application, and I came across some inconsistencies between how the precision actually behaves and how I understood it to work.
This works exactly as I figured:
{
NSString *key = @"OneLastKey";
[PPrefs setFloat:235.1f forKey:key];
GHAssertFalse([PPrefs getFloatForKey:key] == 235.1, @"");
[PPrefs removeObjectForKey:key];
}
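For context, the same behavior shows up without NSUserDefaults at all. This is just a standalone sanity check with NSLog (no PPrefs involved), illustrating why comparing 235.1f against the double literal 235.1 is expected to come out unequal:
{
float f = 235.1f;
double d = 235.1;
// A float keeps only about 7 significant decimal digits, so the value promoted back to double drifts.
NSLog(@"float promoted to double: %.10f", (double)f); // 235.1000061035
NSLog(@"double literal:           %.10f", d);         // 235.1000000000
}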
However, this one doesn't:
{
NSString *key = @"SomeDoubleKey";
[PPrefs setDouble:234.32 forKey:key];
GHAssertEquals([PPrefs getDoubleForKey:key], 234.32, @"");
[PPrefs removeObjectForKey:key];
}
This is the output GHUnit gives me:
'234.320007324' should be equal to '234.32'.
But, if I first cast the double to a float, and then back to a double, it works without fail:
{
NSString *key = @"SomeDoubleKey";
[PPrefs setDouble:234.32 forKey:key];
GHAssertEquals([PPrefs getDoubleForKey:key], (double)(float)234.32, @"");
[PPrefs removeObjectForKey:key];
}
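To take GHUnit and NSUserDefaults out of the picture, here is a minimal standalone check (plain NSLog, no PPrefs) showing that pushing 234.32 through a float produces exactly the value the failing assertion reported:
{
double roundTripped = (double)(float)234.32;
// The nearest float to 234.32 is 234.32000732421875, i.e. the '234.320007324' from the test output.
NSLog(@"%.12f", roundTripped); // 234.320007324219
}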
I was under the assumption that numbers entered without an 'f' at the end were already considered doubles. Is this incorrect? If so, why does casting to a float and then to a double work correctly?
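That assumption is easy to verify in isolation; the quick check below (plain C sizeof, nothing to do with PPrefs) confirms that an unsuffixed floating-point literal really is a double:
{
// An unsuffixed floating-point literal has type double in C and Objective-C.
NSLog(@"sizeof 234.32  = %lu", (unsigned long)sizeof(234.32));  // 8 (double)
NSLog(@"sizeof 234.32f = %lu", (unsigned long)sizeof(234.32f)); // 4 (float)
}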
Solved! Turns out my framework method +(void)setDouble:(double)value forKey:(NSString*)key was actually defined as +(void)setDouble:(float)value forKey:(NSString*)key. The value passed was a double but was converted to a float for use in the method. A simple copy-and-paste issue. Too bad the Objective-C compiler didn't at least throw up a warning, like it seems to do for everything else...
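For anyone who hits the same thing, here's a hypothetical reconstruction of the mismatch. This is my guess at the shape of the bug, not the actual PPrefs source, and it assumes PPrefs is a thin wrapper around NSUserDefaults; the point is that the copy-pasted parameter type silently narrows the double to a float at the call site, before it ever reaches NSUserDefaults:
#import <Foundation/Foundation.h>

@interface PPrefs : NSObject
// Copy-paste slip: the parameter should be (double)value.
+ (void)setDouble:(float)value forKey:(NSString *)key;
+ (double)getDoubleForKey:(NSString *)key;
@end

@implementation PPrefs
+ (void)setDouble:(float)value forKey:(NSString *)key {
    // By the time the value arrives here, 234.32 has already become 234.32000732421875f.
    [[NSUserDefaults standardUserDefaults] setDouble:value forKey:key];
}
+ (double)getDoubleForKey:(NSString *)key {
    return [[NSUserDefaults standardUserDefaults] doubleForKey:key];
}
@end
With the parameter type corrected to double, the original GHAssertEquals against 234.32 passes without the float cast.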