
Difference between numeric, float and decimal in SQL Server

Published 2018-12-31 14:50

Question:

I searched on Google and also visited decimal and numeric and SQL Server Helper to glean the difference between the numeric, float and decimal data types, and to find out which one should be used in which situation.

For any kind of financial transaction (e.g. for a salary field), which one is preferred and why?

Answer 1:

Use the float or real data types only if the precision provided by decimal (up to 38 digits) is insufficient.

  • Approximate numeric data types do not store the exact values specified for many numbers; they store an extremely close approximation of the value. (Technet)

  • Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators (Technet)

So, in general: if your numbers fit within the precision that decimal provides (up to 38 digits, roughly 10^38), the smaller storage space (and possibly better speed) of float is not important to you, and dealing with the abnormal behaviors and issues of approximate numeric types is not acceptable, then use decimal.
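
As a rough illustration of the "abnormal behaviors" mentioned above, here is a T-SQL sketch (the variable names are purely illustrative): it adds 0.1 ten times to a float and to a decimal, and only the decimal sum compares equal to 1.

DECLARE @f float = 0, @d decimal(10, 2) = 0, @i int = 0;

WHILE @i < 10
BEGIN
    SET @f = @f + 0.1;  -- binary rounding error accumulates
    SET @d = @d + 0.1;  -- exact decimal arithmetic
    SET @i = @i + 1;
END;

SELECT @f AS float_sum,    -- close to, but not exactly, 1
       @d AS decimal_sum,  -- exactly 1.00
       CASE WHEN @f = 1 THEN 'equal' ELSE 'not equal' END AS float_check,    -- 'not equal'
       CASE WHEN @d = 1 THEN 'equal' ELSE 'not equal' END AS decimal_check;  -- 'equal'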

More useful information:

  • numeric = decimal (5 to 17 bytes) (Exact Numeric Data Type)
    • will map to Decimal in .NET
    • both have (18, 0) as the default (precision, scale) in SQL Server
    • scale = maximum number of decimal digits that can be stored to the right of the decimal point (see the sketch after this list)
    • kindly note that money (8 bytes) and smallmoney (4 bytes) are also exact, map to Decimal in .NET, and have 4 decimal places (MSDN)
    • decimal and numeric (Transact-SQL) - MSDN
  • real (4 byte) (Approximate Numeric Data Type)
    • will map to Single in .NET
    • The ISO synonym for real is float(24)
    • float and real (Transact-SQL) - MSDN
  • float (8 byte) (Approximate Numeric Data Type)
    • will map to Double in .NET
  • All exact numeric types always produce the same result, regardless of which kind of processor architecture is being used or the magnitude of the numbers
  • The parameter supplied to the float data type defines the number of bits that are used to store the mantissa of the floating point number.
  • Approximate numeric data types usually use less storage and have better speed (up to 20x); you should also consider how they get converted in .NET:
    • What is the difference between Decimal, Float and Double in C#
    • Decimal vs Double Speed
    • SQL Server - .NET Data Type Mappings (From MSDN)
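
A small sketch tying several of these points together (the temporary table name and the sample values are made up for illustration): a bare decimal behaves as decimal(18, 0), numeric is interchangeable with decimal, and money always carries exactly four decimal places.

CREATE TABLE #TypeDemo
(
    d_default decimal,         -- same as decimal(18, 0): fractional digits get rounded away
    d_salary  decimal(10, 2),  -- exact, two decimal places
    n_salary  numeric(10, 2),  -- numeric is functionally identical to decimal
    m_salary  money            -- exact, fixed four decimal places, maps to Decimal in .NET
);

INSERT INTO #TypeDemo VALUES (1234.56, 1234.56, 1234.56, 1234.56);

SELECT d_default,  -- 1235 (rounded to scale 0)
       d_salary,   -- 1234.56
       n_salary,   -- 1234.56
       m_salary    -- 1234.5600
FROM #TypeDemo;

DROP TABLE #TypeDemo;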

\"Exact \"Approximate

Main source: MCTS Self-Paced Training Kit (Exam 70-433): Microsoft® SQL Server® 2008 Database Development - Chapter 3 - Tables, Data Types, and Declarative Data Integrity - Lesson 1 - Choosing Data Types (Guidelines) - Page 93



Answer 2:

Guidelines from MSDN: Using decimal, float, and real Data

The default maximum precision of numeric and decimal data types is 38. In Transact-SQL, numeric is functionally equivalent to the decimal data type. Use the decimal data type to store numbers with decimals when the data values must be stored exactly as specified.

The behavior of float and real follows the IEEE 754 specification on approximate numeric data types. Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types. Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators. It is best to limit float and real columns to > or < comparisons.
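
To make the WHERE-clause guidance concrete, here is a hedged sketch (the temporary table and column names are invented): a value computed in float does not reliably match an = search, while the exact decimal column and a range comparison both behave predictably.

CREATE TABLE #Readings (id int, reading_f float, reading_d decimal(10, 2));

-- store a computed value: 0.1 added three times
INSERT INTO #Readings
VALUES (1,
        CAST(0.1 AS float) + CAST(0.1 AS float) + CAST(0.1 AS float),  -- approximately 0.30000000000000004
        0.30);

SELECT id FROM #Readings WHERE reading_d = 0.3;                        -- returns the row (exact type)
SELECT id FROM #Readings WHERE reading_f = 0.3;                        -- returns nothing
SELECT id FROM #Readings WHERE reading_f > 0.29 AND reading_f < 0.31;  -- range comparison: returns the row

DROP TABLE #Readings;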



Answer 3:

Not a complete answer, but a useful link:

\"I frequently do calculations against decimal values. In some cases casting decimal values to float ASAP, prior to any calculations, yields better accuracy. \"

http://sqlblog.com/blogs/alexander_kuznetsov/archive/2008/12/20/for-better-precision-cast-decimals-before-calculations.aspx



Answer 4:

They Differ in Data Type Precedence

Decimal and numeric are functionally the same, but data type precedence still applies, and it can be crucial in some cases.

SELECT SQL_VARIANT_PROPERTY(CAST(1 AS NUMERIC) + CAST(1 AS DECIMAL), 'basetype')

The resulting data type is numeric because it takes data type precedence.
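
As a follow-up sketch, the same function shows what happens when an exact decimal meets an approximate float in one expression: float sits higher in the precedence list, so the decimal operand is implicitly converted to float (the cast values here are arbitrary).

SELECT SQL_VARIANT_PROPERTY(CAST(1.50 AS decimal(10, 2)) + CAST(1 AS float), 'basetype');  -- float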

Exhaustive list of data types by precedence:

Reference link



Answer 5:

Decimal has a fixed precision while float has variable precision.

EDIT (failed to read the entire question): float(53) is a double-precision (64-bit) floating point number in SQL Server, while real (the synonym for float(24)) is single precision (32-bit). Double precision is a good combination of precision and simplicity for a lot of calculations. You can create a very high precision number with decimal (up to 136 bits of storage), but you also have to be careful that you define your precision and scale correctly so that it can contain all your intermediate calculations to the necessary number of digits.
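
The caution about intermediate calculations can be seen in how SQL Server derives the result scale of decimal arithmetic. In this sketch (arbitrary values), oversized operand types actually leave fewer fractional digits in the quotient, because the derived result type overflows precision 38 and the scale gets cut back:

DECLARE @a decimal(38, 20) = 1, @b decimal(38, 20) = 3;

SELECT @a / @b AS wide_operands;  -- 0.333333 (scale reduced to 6)

SELECT CAST(1 AS decimal(18, 10)) / CAST(3 AS decimal(18, 10)) AS narrower_operands;  -- 0.33333333333333333333 (scale 20)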



Answer 6:

Float is an approximate-number data type, which means that not all values in the data type range can be represented exactly.

Decimal/numeric is a fixed-precision data type, which means that all values in the data type range can be represented exactly with their precision and scale. You can use decimal for storing money.

Converting from decimal or numeric to float can cause some loss of precision. For the decimal and numeric data types, SQL Server considers each specific combination of precision and scale to be a different data type: DECIMAL(4,2) and DECIMAL(6,4), for example, are different data types. This means that 11.22 and 11.2222 end up in different types, though this is not the case for float: for FLOAT(6), 11.22 and 11.2222 are the same data type.
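
A quick sketch of that precision loss (the value is arbitrary): a decimal holding more significant digits than float can represent does not survive a round trip through float.

DECLARE @d decimal(28, 8) = 12345678901234567890.12345678;

SELECT @d                                        AS original_decimal,
       CAST(@d AS float)                         AS as_float,    -- keeps roughly 15-17 significant digits
       CAST(CAST(@d AS float) AS decimal(28, 8)) AS round_trip;  -- no longer equal to the original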

You can also use the money data type for storing money values. It is a native data type with four decimal places of precision for money. Most experts prefer this data type for storing money.

Reference 1 2 3



Answer 7:

The case for Decimal

What is the underlying need?

It arises from the fact that, ultimately, computers internally represent numbers in binary format. That leads, inevitably, to rounding errors.

Consider this:

0.1 (decimal, or "base 10") = 0.00011001100110011... (binary, or "base 2")

The above ellipsis [...] means 'infinite'. If you look at it carefully, there is an infinitely repeating pattern ('0011').

So, at some point the computer has to round that value. This leads to accumulation errors deriving from the repeated use of numbers that are inexactly stored.

Say that you want to store financial amounts (which are numbers that may have a fractional part). First of all, you obviously cannot use integers (integers don't have a fractional part). From a purely mathematical point of view, the natural tendency would be to use a float. But, in a computer, a float stores only a limited number of significant binary digits (the "mantissa"), which leads to rounding errors.

To overcome this, computers offer specific data types that limit the binary rounding error for decimal numbers. These are the data types that should absolutely be used to represent financial amounts. They typically go by the name of Decimal. That's the case in C#, for example. Or DECIMAL in most databases.
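
A one-line T-SQL illustration of the rounding story above: because 0.1 has no exact binary representation, float arithmetic drifts where decimal arithmetic does not.

SELECT CASE WHEN CAST(0.1 AS float) * 3 = CAST(0.3 AS float)
            THEN 'equal' ELSE 'not equal' END AS float_check,    -- 'not equal'
       CASE WHEN CAST(0.1 AS decimal(10, 2)) * 3 = CAST(0.3 AS decimal(10, 2))
            THEN 'equal' ELSE 'not equal' END AS decimal_check;  -- 'equal'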



Answer 8:

Although the question didn't include the MONEY data type, some people coming across this thread might be tempted to use the MONEY data type for financial calculations.

Be wary of the MONEY data type: it's of limited precision.
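
A short sketch of that limitation, essentially the example discussed in the question linked below: the intermediate quotient of a money division keeps only four decimal places, so the final result drifts, while decimal(19, 4) stays accurate.

DECLARE @mon1 money = 100, @mon2 money = 339, @mon3 money = 10000, @mon4 money;
DECLARE @num1 decimal(19, 4) = 100, @num2 decimal(19, 4) = 339, @num3 decimal(19, 4) = 10000, @num4 decimal(19, 4);

SET @mon4 = @mon1 / @mon2 * @mon3;  -- intermediate quotient is kept to only 4 decimal places
SET @num4 = @num1 / @num2 * @num3;

SELECT @mon4 AS money_result,    -- 2949.0000
       @num4 AS decimal_result;  -- 2949.8525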

There is a lot of good information about it in the answers to this Stackoverflow question:

Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?