Sin(int) is broken in Xcode debugger (lldb)

Posted 2019-02-24 06:27

Question:

I have a universal iOS app targeting iOS SDK 6.1, and the compiler is set to Apple LLVM compiler 4.2. When I place a breakpoint in my code and run the following, I get weird results for sin(int).

For reference, sin(70) = 0.7739 (70 is in radians).
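
(As a sanity check of my own, not part of the original post: compiled code that includes <math.h> sees the correct prototype, so an int argument is converted properly and both calls agree with the reference value.)

#include <math.h>
#include <stdio.h>

int main(void) {
    // With <math.h> included, the compiler knows sin() takes a double,
    // so the int literal 70 is converted before the call.
    printf("sin(70)   = %f\n", sin(70));    // 0.773891
    printf("sin(70.0) = %f\n", sin(70.0));  // 0.773891
    return 0;
}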

(lldb) p (double)sin(70)
(double) $0 = -0.912706376367676 // initial value
(lldb) p (double)sin(1.0)
(double) $1 = 0.841470984807897 // reset the value sin(int) will return
(lldb) p (double)sin(70)
(double) $2 = 0.841470984807905 // returned same as sin(1.0)
(lldb) p (double)sin(70.0)
(double) $3 = 0.773890681557889 // reset the value sin(int) will return
(lldb) p (double)sin(70)
(double) $4 = 0.773890681558519
(lldb) p (double)sin((float)60)
(double) $5 = -0.304810621102217 // casting works the same as appending a ".0"
(lldb) p (double)sin(70)
(double) $6 = -0.30481062110269
(lldb) p (double)sin(1) 
(double) $7 = -0.304810621102223 // every sin(int) behaves the same way

Observations:

  • The first value for sin(int) in a debug session is always -0.912706376367676.
  • sin(int) will always return the same value that was returned from the last executed sin(float).
  • If I replace p with po, or use expr (e.g. expr (double)sin(70)), I get exactly the same results.

Why is the debugger behaving like this?

Does this mean I should cast every single parameter each time I call a function from the debugger?

Some more interesting behavior with NSLog:

(lldb) expr (void)NSLog(@"%f", (float)sin(70))
0.000000 // new initial value
(lldb) expr (void)NSLog(@"%f", (float)sin(70.0))
0.773891
(lldb) expr (void)NSLog(@"%f", (float)sin(70))
0.000000 // does not return the previous sin(float) value
(lldb) p (double)sin(70)
(double) $0 = 1.48539705402154e-312 // sin(int) affected by sin(float) differently
(lldb) p (double)sin(70.0)
(double) $1 = 0.773890681557889
(lldb) expr (void)NSLog(@"%f", (float)sin(70))
0.000000 // not affected by sin(float)

Answer 1:

You're walking into the wonderful world of default argument promotions in C. Remember, lldb doesn't know the argument types or the return type of sin(); the correct prototype is double sin(double). When you write

(lldb) p (float) sin(70)

there are two problems with this. First, you're providing an integer argument, and the C default promotion rules are going to pass it as an int, a 4-byte value on the architectures in question. A double, besides being 8 bytes, has an entirely different encoding, so sin() is getting garbage input. Second, sin() returns a double, an 8-byte value on these architectures, but you're telling lldb to grab 4 bytes of it and do something meaningful with them. If you'd called p (float)sin((double)70) (so only the return type was incorrect), lldb would print a nonsensical value like 9.40965e+21 instead of 0.773891.
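
(A small sketch of my own, not from the original answer, to reproduce the same mismatch in compiled code: calling sin() through a deliberately wrong function-pointer type is undefined behavior, but it mimics what lldb does when it has no prototype, passing the int in an integer register while sin() reads its argument from a floating-point register.)

#include <math.h>
#include <stdio.h>

int main(void) {
    // Correctly prototyped call: the compiler converts 70 to a double.
    printf("%f\n", sin(70));              // 0.773891

    // Mimic an unprototyped call: pass 70 as a raw int. This is undefined
    // behavior by design; on arm64/x86-64 the int lands in an integer
    // register while sin() reads a floating-point register, so it sees
    // whatever value happens to be left there (often the last double used).
    double (*unprototyped_sin)(int) = (double (*)(int))sin;
    printf("%f\n", unprototyped_sin(70)); // garbage
    return 0;
}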

When you wrote

(lldb) p (double) sin(70.0)

you fixed these mistakes. The default C promotion for a floating-point type is to pass it as a double. If you were calling sinf(), you'd have problems because that function expects only a float.
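
(Again a sketch of mine, nothing beyond the standard C library assumed: with the <math.h> prototypes in scope the compiler does the right conversion for each function, keeping the argument a float for sinf() and widening it to a double for sin().)

#include <math.h>
#include <stdio.h>

int main(void) {
    float x = 70.0f;

    // The prototypes tell the compiler what each function expects:
    // x stays a float for sinf() and is widened to a double for sin().
    printf("%f\n", sinf(x));   // 0.773891
    printf("%f\n", sin(x));    // 0.773891
    return 0;
}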

If you want to provide lldb with a proper prototype for sin() and not worry about these issues, it is easy. Add this to your ~/.lldbinit file,

settings set target.expr-prefix ~/lldb/prefix.h

(I have a ~/lldb directory where I store useful Python files and things like this) and ~/lldb/prefix.h will read:

extern "C" {
int strcmp (const char *, const char *);
void printf (const char *, ...);
double sin(double);
}

(You can see that I also have prototypes for strcmp() and printf() in my prefix file so I don't need to cast those.) You don't want to put too many things in here: this file is prepended to every expression you evaluate in lldb, and putting every prototype from /usr/include in it will slow your expression evaluation down.

With that prototype added to my target.expr-prefix setting:

(lldb) p sin(70)
(double) $0 = 0.773890681557889