SQL Server Float data type calculation vs decimal

Posted 2019-01-26 23:40

Question:

In the following T-SQL batch

declare @a float(23)
declare @b float(23)
declare @c float(53)
set @a = 123456789012.1234
set @b = 1234567.12345678
set @c = @a * @b
select @c

select LTRIM(STR((@c),32,12))

declare @x decimal(16,4)
declare @y decimal(16,8)
declare @z decimal(32,12)

set @x = 123456789012.1234
set @y = 1234567.12345678
set @z = @x * @y
select @z

I get these results:

1.52415693411713E+17
152415693411713020.000000000000
152415692881907790.143935926652

Of the above results, the third one is correct. Is this why the float data type is called an approximate numeric data type?

Or am I doing something fundamentally wrong?

By the way, this arises from a problem with a legacy system in which I have to use float as the storage data type, while at the same time there must be no loss of precision during calculations.

Please suggest alternatives, or an explanation.

Answer 1:

Float is accurate to 15 significant figures only (in SQL Server).

This is demonstrated by 1.52415693411713E+17, where 1.52415693411713 (15 significant digits) is as accurate as you'll get. The trailing 020... that STR prints after 152415693411713 is made up; those digits lie beyond the resolution of floating point.
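
Here is a minimal standalone sketch of that limit (the 17-digit literal below is my own example, not taken from the question): assign it to a float and the value that comes back is already altered, before STR prints anything.

declare @f float(53)
set @f = 12345678901234567   -- 17 significant digits
-- The nearest representable double is 12345678901234568,
-- so the literal is changed on assignment:
select LTRIM(STR(@f, 25, 0)) -- returns 12345678901234568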

To keep precision, don't use float; it is that simple. You can CAST to decimal for the calculation, but if you CAST the result back to float you are again limited to 15 significant digits.
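
As a sketch of that advice, reusing the question's values (but declared as float(53) so the literals survive assignment), casting the inputs to decimal before multiplying reproduces the exact product:

declare @a float(53)
declare @b float(53)
set @a = 123456789012.1234
set @b = 1234567.12345678

-- Do the arithmetic in decimal, not float; the float inputs still
-- round-trip to the intended values here because both literals fit
-- within double precision:
select CAST(@a as decimal(16,4)) * CAST(@b as decimal(16,8))
-- returns 152415692881907790.143935926652, matching the decimal-only run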

See "What Every Computer Scientist Should Know About Floating-Point Arithmetic"



Answer 2:

The last result

152415692881907790.143935926652

shows a scale of 12 decimal places because you declared @z accordingly:

declare @z decimal(32,12)

The second parameter in this declaration is the scale, which is set to 12.

More on this can be found at http://msdn.microsoft.com/en-us/library/ms187746.aspx
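
For the multiplication itself, that page also gives the rule for the result type: precision = p1 + p2 + 1 and scale = s1 + s2, so decimal(16,4) * decimal(16,8) yields decimal(33,12). A quick way to confirm this, using the built-in SQL_VARIANT_PROPERTY function (the column aliases are just for illustration):

declare @x decimal(16,4)
declare @y decimal(16,8)
set @x = 123456789012.1234
set @y = 1234567.12345678

-- Inspect the inferred type of the product:
select SQL_VARIANT_PROPERTY(@x * @y, 'BaseType')  as base_type,
       SQL_VARIANT_PROPERTY(@x * @y, 'Precision') as result_precision,
       SQL_VARIANT_PROPERTY(@x * @y, 'Scale')     as result_scale
-- returns decimal, 33, 12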