I have a unit test failing on Math.Tan(-PI/2)
returning the wrong value in .NET.
The 'expected' value is taken from Wolfram online (using the spelled-out constant for -Pi/2). See for yourselves here.
As correctly observed in the comments, tan(-pi/2) is mathematically infinite (strictly speaking, undefined at the pole). However, the constant Math.PI
does not represent pi exactly, so this is a 'near the limit' input.
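To see why that matters: near the pole, tan(pi/2 - delta) ~ 1/delta, so an input that is off by about 1e-16 produces a result of magnitude about 1e16, and tiny differences in the computation get magnified enormously. Here's a quick Java sketch of that estimate (the delta value is the gap between the true pi/2 and the double Math.PI/2, which I worked out by hand from the decimal expansions - treat it as an assumption, not something the program measures):

public class TanSensitivity {
    public static void main(String[] args) {
        // Gap between the true pi/2 and the double Math.PI / 2,
        // hand-derived from the decimal expansions (assumption).
        double delta = 6.123233995736766e-17;

        // Near the pole, tan(-(pi/2 - delta)) = -cot(delta) ~ -1/delta.
        System.out.printf("-1/delta %.20E%n", -1.0 / delta);
        System.out.printf("tan      %.20E%n", Math.tan(-Math.PI / 2));
    }
}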
Here's the C# code.
double MINUS_HALF_PI = -1.570796326794896557998981734272d;
Console.WriteLine(MINUS_HALF_PI == -Math.PI/2); //just checking...

double tan = Math.Tan(MINUS_HALF_PI);
Console.WriteLine("DotNET {0:E20}", tan);

// reference value from Wolfram, for the spelled-out constant
double expected = -1.633123935319534506380133589474e16;
Console.WriteLine("Wolfram {0:E20}", expected);

// absolute difference between the two results
double off = Math.Abs(tan - expected);
Console.WriteLine(" {0:E20}", off);
This is what gets printed:
True
DotNET -1.63317787283838440000E+016
Wolfram -1.63312393531953460000E+016
5.39375188498000000000E+011
I thought it was an issue of floating-point representation.
Strangely though, the same computation in Java DOES match the Wolfram value, to 14 significant digits - see it evaluated in Eclipse. (The expressions are cropped - you'll have to believe me that they use the same constant as MINUS_HALF_PI above.)
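Since the screenshot is cropped, here's roughly what the Java side looks like - a reconstruction, not the literal Eclipse expressions:

public class TanCheck {
    public static void main(String[] args) {
        double MINUS_HALF_PI = -1.570796326794896557998981734272d;
        System.out.println(MINUS_HALF_PI == -Math.PI / 2); // just checking...

        double tan = Math.tan(MINUS_HALF_PI);
        System.out.printf("Java    %.20E%n", tan);

        double expected = -1.633123935319534506380133589474e16;
        System.out.printf("Wolfram %.20E%n", expected);
    }
}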
True
DotNET -1.63317787283838440000E+016
Wolfram -1.63312393531953460000E+016
Java -1.63312393531953700000E+016
As you can see, the difference is:
- between Wolfram and .NET: ~5.39 * 10^11
- between Wolfram and Java: ~2.40 * 10^1
That's ten orders of magnitude!
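To put the same comparison in ulps (units in the last place) - a quick Java check using just the printed values from above:

public class UlpDistance {
    public static void main(String[] args) {
        double wolframTan = -1.633123935319534506380133589474e16;
        double dotnetTan  = -1.6331778728383844e16;
        double javaTan    = -1.633123935319537e16;

        // At this magnitude (between 2^53 and 2^54) one ulp is 2.0.
        double ulp = Math.ulp(wolframTan);
        System.out.printf(".NET off by ~%.3G ulps%n", Math.abs(dotnetTan - wolframTan) / ulp);
        System.out.printf("Java off by ~%.3G ulps%n", Math.abs(javaTan - wolframTan) / ulp);
    }
}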
So, any ideas why the .NET and Java implementations differ so much? I would expect them both to simply defer the actual computation to the processor. Is that assumption unrealistic for x86?
Update
As requested, I tried running the Java version with strictfp. No change: