How come dividing two 32-bit int numbers as (int / int) returns 0, but if I use Decimal.Divide() I get the correct answer? I'm by no means a C# guy.
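The behaviour being described can be reproduced with something like the following sketch; the literal values are assumed for illustration and are not from the original post.

```csharp
// Minimal repro (values are illustrative, not the asker's).
using System;

int a = 7;
int b = 12;

Console.WriteLine(a / b);                // 0: both operands are int, so this is integer division
Console.WriteLine(Decimal.Divide(a, b)); // 0.58333...: the int arguments are converted to decimal
```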
I reckon Decimal.Divide(decimal, decimal) implicitly converts its two int arguments to decimals before returning a decimal value (precise), whereas 4 / 5 is treated as integer division and returns 0.

If you are looking for a 0 < a < 1 answer, int / int will not suffice. int / int does integer division. Try casting one of the ints to a double inside the operation, as in the sketch below.
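A sketch of the cast-to-double suggestion; the variable names and values are illustrative, not from the original answer.

```csharp
// Casting one operand to double makes the whole division floating-point.
using System;

int numerator = 4;
int denominator = 5;

double ratio = (double)numerator / denominator; // 0.8: one operand is double, so the division keeps the fraction
int truncated = numerator / denominator;        // 0: both operands are int, so the result is truncated

Console.WriteLine(ratio);
Console.WriteLine(truncated);
```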
In the first case, you're doing integer division, so the result is truncated (the decimal part is chopped off) and an integer is returned.
In the second case, the ints are converted to decimals first, and the result is a decimal. Hence it is not truncated and you get the correct result.
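For instance, with values chosen only to make the truncation visible (they are not from the answer):

```csharp
using System;

Console.WriteLine(7 / 2);                // 3: integer division chops off the .5
Console.WriteLine(Decimal.Divide(7, 2)); // 3.5: the ints are converted to decimal before dividing
```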
When both operands of / are of type int, the division will be performed using integer arithmetic: the result is an int, and any fractional part is discarded.

Decimal.Divide, on the other hand, takes two parameters of the type Decimal, so the int arguments are implicitly converted and the division will be performed on decimal values rather than integer values. It is equivalent to casting both operands to decimal before dividing.

To examine this, you can print the result and its runtime type after each version. In the first case the output will be 0 and System.Int32; in the second case it will be the fractional result and System.Decimal. A sketch follows below.
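A minimal sketch of that comparison; the operand values (1 and 3) are assumed for illustration.

```csharp
using System;

int a = 1;
int b = 3;

// Integer arithmetic: both operands are int, so the fractional part is discarded.
var intResult = a / b;

// Decimal.Divide takes decimal parameters, so the int arguments are implicitly
// converted and the fractional part is kept.
var decResult = Decimal.Divide(a, b);

Console.WriteLine($"{intResult} ({intResult.GetType()})"); // 0 (System.Int32)
Console.WriteLine($"{decResult} ({decResult.GetType()})"); // 0.333... (System.Decimal)
```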
In my case nothing above worked.

What I want to do is divide 278 by 575 and multiply by 100 to find a percentage. With plain int division both cases come out as 0:

%: 48,3478260869565 --> 278 / 575 ---> 0
%: 51,6521739130435 --> 297 / 575 ---> 0

If I multiply the PeopleCount by 1.0, the expression becomes a double rather than an int, and the division gives 48.34...; also multiply by 100.0, not 100 (see the sketch below).
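A sketch using the figures from the post; the name totalCount is assumed, and PeopleCount is kept in lowerCamelCase.

```csharp
using System;

int peopleCount = 278;
int totalCount = 575;   // assumed name for the divisor

// Multiplying by 1.0 promotes the expression to double, so the division keeps its fraction.
double percentage = peopleCount * 1.0 / totalCount * 100.0;
Console.WriteLine(percentage); // ~48.3478260869565

// Plain int division yields 0, and 0 * 100 is still 0.
Console.WriteLine(peopleCount / totalCount * 100); // 0
```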
The accepted answer is very nearly there, but I think it is worth adding that there is a difference between using double and decimal.

I would not do a better job explaining the concepts than Wikipedia, so I will just provide the pointers:

- floating-point arithmetic
- decimal data type
In financial systems, it is often a requirement that we can guarantee a certain number of (base-10) decimal places of accuracy. This is generally impossible if the input/source data is in base-10 but we perform the arithmetic in base-2, because the number of digits required to expand a fraction depends on the base: one third takes infinitely many digits to express in base-10 as 0.333333..., but only one digit in base-3, where it is 0.1.
Floating-point numbers are faster to work with (in terms of CPU time; programming-wise they are equally simple) and are preferred whenever a small relative rounding error is acceptable and speed matters, as in scientific applications.
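A quick illustration of the base-2 versus base-10 point; the values are my own, not from the answer.

```csharp
using System;

// 0.1 and 0.2 have no exact binary representation, so their double sum is not exactly 0.3.
double dSum = 0.1 + 0.2;
Console.WriteLine(dSum == 0.3); // False
Console.WriteLine(dSum);        // 0.30000000000000004 on .NET Core / .NET 5+

// decimal stores base-10 digits, so the same sum is exact.
decimal mSum = 0.1m + 0.2m;
Console.WriteLine(mSum == 0.3m); // True
Console.WriteLine(mSum);        // 0.3
```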