I'm in the process of normalising a database, and part of this involves converting a column in one table from a FLOAT to a DECIMAL(28,18). When I then try to join this converted column back to the source column, it returns no results in some cases.

It seems to be something to do with the way it's converted. For example, the FLOAT converted to a DECIMAL(28,18) produces:

51.051643260000006000

The original FLOAT is:

51.05164326
I have tried various ways of modifying the FLOAT, and none of these work either:

CAST(51.05164326 AS DECIMAL(28,18)) = 51.051643260000000000
STR(51.05164326, 28, 18) = 51.0516432599999990
The reason for the conversion is to improve the accuracy of these fields. Does anyone have a consistent strategy for converting these numbers that ensures subsequent joins still work?
Thanks in advance
CM
For your application, you need to consider how many decimal places you actually need. It looks like in reality you require about 8-14 decimal places, not 18.
One way to do the conversion is cast(cast(floatColumn as decimal(28,14)) as decimal(28,18)).
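A minimal sketch of the double-cast in T-SQL, assuming SQL Server, using the value from the question:

```sql
-- Sketch: the inner CAST to DECIMAL(28,14) is intended to round away the
-- binary-representation noise of the FLOAT before the outer CAST widens
-- the value to the target DECIMAL(28,18) scale.
DECLARE @f FLOAT = 51.05164326;

SELECT
    CAST(@f AS DECIMAL(28,18))                         AS direct_cast,  -- carries float noise
    CAST(CAST(@f AS DECIMAL(28,14)) AS DECIMAL(28,18)) AS double_cast;  -- rounded first
```

The inner scale (14 here) should match the number of decimal places your data genuinely carries.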
To do a join between a decimal and a float column, you can do something like this:

ON cast(cast(floatColumn as decimal(28,14)) as decimal(28,18)) = decimalColumn

Provided the double-cast is the same double-cast used to create decimalColumn, this will allow you to make use of an index on decimalColumn.
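Concretely, such a join might look like this (the table names and the id columns are hypothetical, for illustration only):

```sql
-- Hypothetical schema: sourceTable.floatColumn holds the original FLOAT,
-- targetTable.decimalColumn is the DECIMAL(28,18) produced by the same
-- double-cast. The equality then compares like-for-like values.
SELECT s.id, t.id
FROM sourceTable AS s
JOIN targetTable AS t
    ON CAST(CAST(s.floatColumn AS DECIMAL(28,14)) AS DECIMAL(28,18)) = t.decimalColumn;
```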
Alternatively you can use a range join:

ON floatColumn > decimalColumn - @epsilon AND floatColumn < decimalColumn + @epsilon

This should still make use of the index on decimalColumn.
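A sketch of the range join, again with hypothetical table names; the @epsilon value shown is arbitrary and illustrative:

```sql
-- @epsilon is a tolerance you choose: it must be larger than the float
-- representation noise but smaller than the real difference between
-- distinct values in your data.
DECLARE @epsilon DECIMAL(28,18) = 0.00000001;

SELECT s.id, t.id
FROM sourceTable AS s
JOIN targetTable AS t
    ON s.floatColumn > t.decimalColumn - @epsilon
   AND s.floatColumn < t.decimalColumn + @epsilon;
```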
However, it is unusual to join on decimals. Unless you actually need to join on them or to do a direct equality comparison (as opposed to a range comparison), it may be better to simply do the conversion as you are, and document the fact that there is a small loss of accuracy due to the initial choice of an inappropriate data type.
For more information see: