Can anyone explain the best logic/method for incrementing floating-point decimal numbers?
For example, if the starting decimal number is 2.0, the next items should automatically be numbered accordingly: 2.01, 2.02, 2.03, ..., 2.09, 2.10, 2.11, etc.
But what if the starting number is 2.1? What would the next sequence be?
Would it be 2.11, 2.12, 2.13, 2.14, ..., 2.19, 2.20, etc.?
Is that logically correct? I am confused. Please help.
It depends entirely on what the application needs. Do you need to count by hundredths? If so, then those sequences make sense.
To avoid round-off error, you may want to store the initial value, the number of increments, and the increment size, as in:
#include <stdio.h>

int main(void)
{
    float start = 2.0f;      /* base value */
    float increment = 0.01f; /* step size */
    /* compute each value from the base so round-off does not accumulate */
    for (int i = 0; i < 10; i++)
        printf("%f ", start + increment * i);
    return 0;
}
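If the values really are exact hundredths, another option (a sketch under that assumption, not part of the answer above) is to count with an integer and divide only when printing, which sidesteps binary round-off entirely:

#include <stdio.h>

int main(void)
{
    /* hundredths are counted exactly as an integer; divide only for display */
    for (int hundredths = 200; hundredths <= 211; hundredths++)
        printf("%.2f ", hundredths / 100.0); /* 2.00 2.01 ... 2.11 */
    printf("\n");
    return 0;
}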
I think it is arbitrarily defined by whatever you are trying to represent. In software version numbers, for example, it is fine to have 7.9, 7.10, 7.11, and so on, and you still know 7.10 is greater than 7.9. Of course, you would give them special treatment; the default floating-point arithmetic would not apply.
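For instance, a comparison routine would treat the parts as separate integers rather than one float; a minimal sketch, where compare_versions is a hypothetical helper invented for illustration:

#include <stdio.h>

/* hypothetical helper: compares "major.minor" versions component by
   component, so 7.10 ranks above 7.9; returns <0, 0, or >0 */
static int compare_versions(int major_a, int minor_a, int major_b, int minor_b)
{
    if (major_a != major_b)
        return major_a - major_b;
    return minor_a - minor_b;
}

int main(void)
{
    /* as floats 7.10 < 7.9, but as versions 7.10 is the newer one */
    printf("%d\n", compare_versions(7, 10, 7, 9) > 0); /* prints 1 */
    return 0;
}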
"The increment of float value should be done by an increment operator".
But this thing is not possible, Because increment operator works only on the number before decimal digits. If increment operator is applied on float, the value sequence is 0.1, 1.1, 2.1, 3.1 etc.
double i = 5.012; // i holds a double-precision floating-point value
                  // (float, double, or long double would all work)
i += 0.1;         // increment i by 0.1
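To tie this back to the question, here is a minimal sketch (assuming typical IEEE 754 doubles) contrasting ++ with +=; note that repeated += can accumulate a tiny round-off error, which is why the answer above computes each value from the base instead:

#include <stdio.h>

int main(void)
{
    double a = 0.1;
    a++;               /* ++ adds exactly 1.0: 0.1 -> 1.1 */
    printf("%f\n", a); /* prints 1.100000 */

    double b = 2.0;
    for (int n = 0; n < 12; n++) {
        printf("%.2f ", b); /* 2.00 2.01 ... 2.11 */
        b += 0.01;          /* += steps by the chosen fraction */
    }
    printf("\n");
    return 0;
}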