I have the following Python code:
In [1]: import decimal
In [2]: decimal.getcontext().prec = 80
In [3]: (1-decimal.Decimal('0.002'))**5
Out[3]: Decimal('0.990039920079968')
Shouldn't it match 0.99003992007996799440405766290496103465557098388671875,
according to this Wolfram Alpha query: http://www.wolframalpha.com/input/?i=SetPrecision%5B%281+-+0.002%29%5E5%2C+80%5D ?
Wolfram Alpha is wrong here; try it to the power of one and you get

0.9979999999999999982236431605997495353221893310546875

instead of 0.998. They are likely using floating-point numbers. Following Andrew's answer, this is a result of the precision of the entered literal being taken to be machine precision before the SetPrecision directive gets to it.
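You can check this claim from Python itself: converting a float to Decimal prints its exact binary value, and 1 - 0.002 rounds to precisely the double quoted above (a quick sketch, assuming standard IEEE-754 doubles, which CPython uses on essentially all platforms):

import decimal

# Decimal(float) converts the float exactly, exposing its true binary value.
# 1 - 0.002 rounds to the IEEE double nearest 0.998:
print(decimal.Decimal(1 - 0.002))
# 0.9979999999999999982236431605997495353221893310546875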
Another fix, which is nice in that it retains your basic input notation, is to directly specify the precision of the literal with the backtick notation, e.g. SetPrecision[(1 - 0.002`80)^5, 80], where 0.002`80 marks the literal as an arbitrary-precision number carrying 80 digits. This produces the desired result.
For anyone who still doesn't follow: you could also key in all the zeros, i.e. write the literal out to more digits than machine precision, which makes Mathematica treat it as an arbitrary-precision number. These work in both Wolfram Alpha and Mathematica.
Wolfram Alpha is actually wrong here. (1 - 0.002)^5 is exactly

0.990039920079968

You can verify that by observing that there are 15 digits after the decimal point, which matches 5 * 3, 3 being the number of digits after the decimal point in the expression (1 - 0.002). There cannot be any digit after the 15th, by definition.
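A quick way to verify this in Python with exact integer arithmetic (a sketch): 0.998**5 is just 998**5 shifted right by 15 decimal places, and 998**5 is a 15-digit integer.

import decimal
decimal.getcontext().prec = 80

# Mathematically, (1 - 0.002)**5 == (998/1000)**5 == 998**5 / 10**15;
# compute it exactly:
print(998 ** 5)                              # 990039920079968
print(decimal.Decimal(998 ** 5) / 10 ** 15)  # 0.990039920079968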
Edit: A little more digging got me something interesting:
The notation Decimal('0.002') creates a decimal with this exact value, whereas with Decimal(0.002) the decimal is made from a float rather than a string, introducing an imprecision. Using the float form in the original formula, (1 - decimal.Decimal(0.002))**5, returns

Decimal('0.99003992007996799979349352807411754897106595345737537649055432859002826694496107')

which is indeed 80 digits long after the decimal point, but different from the Wolfram Alpha value. The likely explanation is that Python starts from the inexact float 0.002 but then performs the subtraction and exponentiation exactly in 80-digit decimal arithmetic, whereas Wolfram Alpha appears to carry the whole computation out in machine floats: its result is exactly the decimal expansion of an IEEE double. This is a further indication that Wolfram Alpha is using floats when SetPrecision is used.
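Both points can be seen directly from Python (a quick sketch; it assumes standard IEEE-754 doubles, on which the last line should reproduce Wolfram Alpha's value exactly):

import decimal

# The string constructor is exact; the float constructor inherits the
# binary representation error of 0.002.
print(decimal.Decimal('0.002'))  # 0.002
print(decimal.Decimal(0.002))    # 0.00200000000000000004163336342344337…

# Doing the whole computation in machine floats and then printing the
# exact value of the resulting double reproduces Wolfram Alpha's number:
print(decimal.Decimal((1 - 0.002) ** 5))
# 0.99003992007996799440405766290496103465557098388671875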
Note: directly asking Wolfram Alpha for the result returns the correct value (see http://www.wolframalpha.com/input/?i=%281+-+0.002%29%5E5).
Here's what's happening: because it looks like syntax from the Mathematica programming language, Wolfram Alpha interprets the input

SetPrecision[(1 - 0.002)^5, 80]

as Mathematica source code, which it proceeds to evaluate. In Mathematica, as others have surmised in other answers, 0.002 is a machine-precision floating-point literal. Roundoff error ensues. Finally, the resulting machine-precision value is cast by SetPrecision to the nearest 80-precision value.

To get around this, you have a couple of options: enter the literal as an exact rational, e.g. (1 - 2*^-3)^5 (2*^-3 is Mathematica's input form for 2×10^-3), or give the literal explicit precision with the backtick notation shown in another answer, e.g. 0.002`80.
Finally, I want to point out that in Mathematica, and by extension in a Wolfram Alpha query consisting of Mathematica code, you usually want N rather than SetPrecision. They are often similar (identical in this case), but there is a subtle difference:
N works slightly harder but gets you the right number of correct digits (assuming the input is sufficiently precise).
So my final suggestion for using Wolfram Alpha to do this calculation via Mathematica code is N[(1 - 2*^-3)^5, 80].
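And if you want to double-check on the Python side, exact rational arithmetic from the standard library agrees with the Decimal result (a quick sketch):

from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 80

# Exact rational arithmetic: (1 - 2/1000)**5.
rational = Fraction(998, 1000) ** 5

# Decimal with a string literal is also exact here.
dec = (1 - Decimal('0.002')) ** 5

assert Fraction(dec) == rational  # both equal 990039920079968 / 10**15
print(dec)  # 0.990039920079968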