I searched on Google and also visited decimal and numeric and SQL Server Helper to glean the difference between the numeric, float and decimal datatypes and to find out which one should be used in which situation.
For any kind of financial transaction (e.g. for a salary field), which one is preferred and why?
Although the question didn't mention the MONEY data type, some people coming across this thread might be tempted to use MONEY for financial calculations.
Be wary of the MONEY data type: it is of limited precision.
There is a lot of good information about it in the answers to this Stack Overflow question:
Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?
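For concreteness, here is a condensed sketch of the kind of intermediate rounding that the linked question discusses (my own reduction of an example from that thread, so treat it as illustrative): MONEY keeps only four decimal places in intermediate results, while DECIMAL retains many more digits during the division step.

```sql
DECLARE @monResult MONEY,
        @decResult DECIMAL(19,4);

SET @monResult = CAST(100 AS MONEY)         / 339 * 10000;  -- intermediate quotient truncated to 4 decimal places
SET @decResult = CAST(100 AS DECIMAL(19,4)) / 339 * 10000;  -- intermediate quotient keeps many more digits

SELECT @monResult AS money_result,    -- 2949.0000
       @decResult AS decimal_result;  -- 2949.8525
```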
Decimal has a fixed precision while float has variable precision.
EDIT (failed to read the entire question): float(53) is a double-precision (64-bit) floating point number in SQL Server, while float(24) (aka real) is a single-precision (32-bit) floating point number. Double precision is a good combination of precision and simplicity for a lot of calculations. You can create a very high precision number with decimal (up to 38 digits of precision, stored in as many as 17 bytes, i.e. 136 bits), but you also have to be careful that you define your precision and scale correctly so that it can contain all your intermediate calculations to the necessary number of digits.
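To make that precision difference concrete, here is a small sketch (my own example, with values chosen purely for illustration): float(24) keeps about 7 significant digits, float(53) about 15-17, and decimal keeps exactly the digits you declare.

```sql
DECLARE @single FLOAT(24)     = 1234567.891,  -- single precision: ~7 significant digits survive
        @double FLOAT(53)     = 1234567.891,  -- double precision: ~15-17 significant digits
        @exact  DECIMAL(10,3) = 1234567.891;  -- exact: precision and scale declared to fit the value

SELECT @single AS float24_value,  -- fractional digits are lost (roughly 1234568)
       @double AS float53_value,  -- 1234567.891
       @exact  AS decimal_value;  -- 1234567.891
```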
Approximate numeric data types do not store the exact values specified for many numbers; they store an extremely close approximation of the value. (TechNet)
Avoid using float or real columns in WHERE clause search conditions, especially with the = and <> operators. (TechNet)
So, in general: if your numbers fit within the precision that decimal provides [10E38 ~ 38 digits], the smaller storage space (and possibly speed) of float is not important to you, and the abnormal behaviors and issues of approximate numeric types are not acceptable, use decimal. A short demonstration of the comparison pitfall follows the source note below.
More useful information:
Main source: MCTS Self-Paced Training Kit (Exam 70-433): Microsoft® SQL Server® 2008 Database Development - Chapter 3 - Tables, Data Types, and Declarative Data Integrity, Lesson 1 - Choosing Data Types (Guidelines), Page 93
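As a quick illustration of the TechNet warning quoted above about = and <> comparisons (a minimal sketch of my own, not taken from the training kit):

```sql
DECLARE @approx FLOAT         = 0.1,
        @exact  DECIMAL(10,4) = 0.1;

SELECT CASE WHEN @approx * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS float_comparison,   -- 'not equal'
       CASE WHEN @exact  * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS decimal_comparison; -- 'equal'
```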
Not a complete answer, but a useful link:
"I frequently do calculations against decimal values. In some cases casting decimal values to float ASAP, prior to any calculations, yields better accuracy. "
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2008/12/20/for-better-precision-cast-decimals-before-calculations.aspx
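As a rough sketch of the effect the linked post describes (my own example, not taken from the article): SQL Server caps DECIMAL results at 38 digits of precision and may shrink the scale of intermediate products and quotients to as few as 6 decimal places, while float keeps roughly 15-17 significant digits throughout.

```sql
SELECT CAST(0.0001 AS DECIMAL(38,18)) * CAST(0.0001 AS DECIMAL(38,18)) AS decimal_product, -- 0.000000 (scale collapsed to 6)
       CAST(0.0001 AS FLOAT)          * CAST(0.0001 AS FLOAT)          AS float_product;   -- 1E-08
```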
The case for Decimal
What is the underlying need?
It arises from the fact that, ultimately, computers represent numbers internally in binary format. That leads, inevitably, to rounding errors.
Consider this:
0.1 (base 10) = 0.000110011001100110011... (base 2)
The above ellipsis [...] means 'infinite'. If you look at it carefully, you can see an infinitely repeating pattern (= '0011').
So, at some point the computer has to round that value. This leads to accumulation errors deriving from the repeated use of numbers that are inexactly stored.
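Here is a small sketch of that accumulation effect (my own example, assuming a simple running total): adding 0.1 ten times with float drifts away from 1, while decimal lands on it exactly.

```sql
DECLARE @floatSum   FLOAT         = 0,
        @decimalSum DECIMAL(18,2) = 0,
        @i          INT           = 0;

WHILE @i < 10
BEGIN
    SET @floatSum   = @floatSum   + 0.1;  -- 0.1 has no exact binary representation; the error accumulates
    SET @decimalSum = @decimalSum + 0.1;  -- DECIMAL stores 0.1 exactly
    SET @i = @i + 1;
END;

SELECT CASE WHEN @floatSum   = 1 THEN 'exactly 1' ELSE 'not exactly 1' END AS float_sum,   -- 'not exactly 1'
       CASE WHEN @decimalSum = 1 THEN 'exactly 1' ELSE 'not exactly 1' END AS decimal_sum; -- 'exactly 1'
```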
Say that you want to store financial amounts (which are numbers that may have a fractional part). First of all, you obviously cannot use integers (integers don't have a fractional part). From a purely mathematical point of view, the natural tendency would be to use a float. But, in a computer, floats store only a limited number of significant digits (the "mantissa", or significand), and that limitation, combined with the binary representation described above, leads to rounding errors. To overcome this, computers offer specific data types that limit the binary rounding error for decimal numbers. These are the data types that should absolutely be used to represent financial amounts. They typically go by the name of Decimal. That's the case in C#, for example. Or DECIMAL in most databases.
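Applied to the salary field from the original question, a minimal sketch might look like this (hypothetical table and column names; adjust the precision and scale to the range of amounts you need to store):

```sql
CREATE TABLE dbo.Employee
(
    EmployeeId INT IDENTITY(1,1) PRIMARY KEY,
    Salary     DECIMAL(19,4) NOT NULL  -- exact storage for a monetary amount
);

INSERT INTO dbo.Employee (Salary) VALUES (123456.78);

SELECT EmployeeId, Salary FROM dbo.Employee;  -- Salary comes back as 123456.7800, stored exactly
```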
Guidelines from MSDN: Using decimal, float, and real Data