Why does Decimal.Divide(int, int) work, but not (int / int)?

Published 2019-01-22 16:56

Question:

How come dividing two 32-bit int numbers as (int / int) returns 0, but if I use Decimal.Divide() I get the correct answer? I'm by no means a C# guy.

Answer 1:

int is an integer type; dividing two ints performs an integer division, i.e. the fractional part is truncated since it can't be stored in the result type (also int!). Decimal, by contrast, has got a fractional part. By invoking Decimal.Divide, your int arguments get implicitly converted to Decimals.

You can enforce non-integer division on int arguments by explicitly casting at least one of the arguments to a floating-point type, e.g.:

int a = 42;
int b = 23;
double result = (double)a / b;
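For comparison, here is a minimal sketch of the Decimal.Divide route, reusing the same illustrative values; the int arguments are implicitly converted to decimal:

int a = 42;
int b = 23;
decimal result = Decimal.Divide(a, b); // fractional part is preserved (≈ 1.826...)
Console.WriteLine(result);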


Answer 2:

In the first case, you're doing integer division, so the result is truncated (the decimal part is chopped off) and an integer is returned.

In the second case, the ints are converted to decimals first, and the result is a decimal. Hence the result is not truncated and you get the correct answer.
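A minimal illustration of both cases, using 7 and 2 as example values:

Console.WriteLine(7 / 2);                // 3   (integer division, fraction truncated)
Console.WriteLine(Decimal.Divide(7, 2)); // 3.5 (arguments converted to decimal)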



Answer 3:

The following code:

int a = 1, b = 2;
object result = a / b;

...will be performed using integer arithmetic. Decimal.Divide, on the other hand, takes two parameters of the type Decimal, so the division will be performed on decimal values rather than integer values. That is equivalent to this:

int a = 1, b = 2;
object result = (Decimal)a / (Decimal)b;

To examine this, you can add the following code lines after each of the above examples:

Console.WriteLine(result.ToString());
Console.WriteLine(result.GetType().ToString());

The output in the first case will be

0
System.Int32

...and in the second case:

0,5
System.Decimal
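(The comma in 0,5 is simply the decimal separator of the answerer's culture. If you want a culture-independent dot, something along these lines should work, using the standard System.Globalization.CultureInfo API:)

Console.WriteLine(Convert.ToString(result, System.Globalization.CultureInfo.InvariantCulture)); // 0.5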


Answer 4:

I reckon Decimal.Divide(decimal, decimal) implicitly converts its two int arguments to decimals before returning a decimal value (precise), whereas 4 / 5 is treated as integer division and returns 0.
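In code, using the 4 and 5 from this answer:

Console.WriteLine(4 / 5);                // 0   (integer division)
Console.WriteLine(Decimal.Divide(4, 5)); // 0.8 (decimal division)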



Answer 5:

You want to cast the numbers:

double c = (double)a/(double)b;

Note: if either of the operands in C# is a double, double division is used, which results in a double. So the following would work too:

double c = (double)a/b;

Here is a small program:

static void Main(string[] args)
{
    int a = 0, b = 0, c = 0;
    int n = Convert.ToInt32(Console.ReadLine());         // number of elements
    string[] arr_temp = Console.ReadLine().Split(' ');
    int[] arr = Array.ConvertAll(arr_temp, Int32.Parse); // parse the elements
    foreach (int i in arr)
    {
        if (i > 0) a++;      // count positives
        else if (i < 0) b++; // count negatives
        else c++;            // count zeros
    }
    // Casting each count to double forces floating-point division.
    Console.WriteLine("{0}", (double)a / n);
    Console.WriteLine("{0}", (double)b / n);
    Console.WriteLine("{0}", (double)c / n);
    Console.ReadKey();
}
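For example, with the assumed sample input below (six values: three positive, two negative, one zero), the program prints the fraction of each category:

6
-4 3 -9 0 4 1

0.5
0.333333333333333
0.166666666666667

(The exact number of digits printed depends on the runtime's default double formatting.)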


Answer 6:

If you are looking for an answer with 0 < a < 1, int / int will not suffice: int / int performs integer division. Try casting one of the ints to a double inside the operation.
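For example (numerator and denominator are placeholder names):

int numerator = 1, denominator = 3;
double a = (double)numerator / denominator; // 0.333..., satisfies 0 < a < 1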



Answer 7:

In my case, none of the above worked.

What I wanted to do was divide 278 by 575 and multiply by 100 to find a percentage.

double p = (double)((PeopleCount * 1.0 / AllPeopleCount * 1.0) * 100.0);

%: 48,3478260869565 --> 278 / 575 ---> 0
%: 51,6521739130435 --> 297 / 575 ---> 0

If I multiply PeopleCount by 1.0, it is promoted to a double, and the division yields 48.34.... Also multiply by 100.0, not 100.
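A slightly simpler equivalent (a sketch; a single double operand, here the literal 100.0, is enough to force the whole expression into double arithmetic):

double p = 100.0 * PeopleCount / AllPeopleCount; // evaluated left to right entirely as double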



Answer 8:

The accepted answer is very nearly there, but I think it is worth adding that there is a difference between using double and decimal.

I would not do a better job explaining the concepts than Wikipedia, so I will just provide the pointers:

floating-point arithmetic

decimal data type

In financial systems, it is often a requirement that we can guarantee a certain number of (base-10) decimal places of accuracy. This is generally impossible if the input/source data is in base-10 but we perform the arithmetic in base-2, because the number of digits required to expand a number depends on the base: one third takes infinitely many digits to express in base-10 (0.333333...), but only one digit in base-3 (0.1).

Binary floating-point numbers are faster to work with (in terms of CPU time; programming-wise they are equally simple) and are preferred when raw speed matters more than exact base-10 representation (as in scientific applications).
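A minimal sketch of the difference, using the classic 0.1 + 0.2 example:

Console.WriteLine(0.1 + 0.2 == 0.3);    // False: binary doubles cannot represent 0.1 exactly
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal stores these base-10 fractions exactly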



Tags: c#, math, int, divide