I have been working with Hive and found something peculiar. When using DOUBLE as the datatype for a column, we do not need to specify any precision (Hive handles the precision dynamically depending on the value). But here is the issue: whenever the digit after the decimal point is 2 or 7, I see that the returned value changes the number of digits in its precision.
Let me make that clear with a simple example.
hive> select cast('34.2234' + '32.6554' -3.1 as double);
OK
63.7788
Time taken: 0.077 seconds, Fetched: 1 row(s)
When I use 1 after the decimal point (the 1 in 3.1 while subtracting), the result looks fine. But when 3.2 or 3.7 is used in the subtraction, I see the change below.
While using 3.2
hive> select cast('34.2234' + '32.6554' -3.2 as double);
OK
63.678799999999995
Time taken: 0.077 seconds, Fetched: 1 row(s)
While using 3.7
hive> select cast('34.2234' + '32.6554' -3.7 as double);
OK
63.178799999999995
Time taken: 0.077 seconds, Fetched: 1 row(s)
Though the result seems to be correct, why does the precision change in this case? It is the same when I use any value before the decimal point with only 2 or 7 after it (like 4.2, 4.7, 3.2, 2.7, etc.). What is so peculiar about 2 and 7 that the precision changes to 15 decimal places, and why does it not happen for other values?
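For what it's worth, the same thing seems reproducible outside Hive. Below is a minimal Java sketch, assuming Hive's DOUBLE maps to a standard IEEE 754 / Java double and that the quoted strings are implicitly converted to double the same way Hive does it; the printed values should line up with the Hive outputs shown above.

    public class HiveDoubleRepro {
        public static void main(String[] args) {
            // Mimic Hive's implicit string-to-double conversion for the two operands.
            double a = Double.parseDouble("34.2234");
            double b = Double.parseDouble("32.6554");

            // Plain double arithmetic, same as the Hive queries above.
            System.out.println(a + b - 3.1); // should match the Hive output: 63.7788
            System.out.println(a + b - 3.2); // should match: 63.678799999999995
            System.out.println(a + b - 3.7); // should match: 63.178799999999995
        }
    }

So it does not look Hive-specific, but I still don't understand why only the .2 and .7 cases produce the long trailing digits.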