Here's the code:
static void Main(string[] args)
{
    int xd2 = 5;
    for (double xd = (double)xd2; xd <= 6; xd += 0.01)
    {
        Console.WriteLine(xd);
    }
}
and here's the output:
I want it to keep adding exactly 0.01, but as you can see in the output, sometimes the value ends up with a long ...99999 tail instead of a clean two-decimal number.
Thanks
Use decimal if you want to keep this kind of accuracy.
Floating point types cannot accurately represent certain values. I suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic for a comprehensive explanation.
decimal xd2 = 5m;
for (decimal xd = xd2; xd <= 6m; xd += 0.01m)
{
    Console.WriteLine(xd);
}
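As a quick illustration of why this matters (a sketch of my own, not part of the answer above): 0.01 has no exact binary representation, so repeatedly adding it as a double lets tiny errors accumulate, while decimal stores it exactly:
double dsum = 0;
for (int i = 0; i < 100; i++)
{
    dsum += 0.01;                        // each addition carries a tiny representation error
}
Console.WriteLine(dsum == 1.0);          // False: the errors have accumulated
Console.WriteLine(dsum.ToString("G17")); // prints the stored value with full precision; close to, but not exactly, 1

decimal msum = 0m;
for (int i = 0; i < 100; i++)
{
    msum += 0.01m;                       // decimal represents 0.01 exactly
}
Console.WriteLine(msum == 1.00m);        // True
Console.WriteLine(msum);                 // 1.00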
No, that is how doubles work. Try using decimal instead:
int xd2 = 5;
for (decimal xd = (decimal)xd2; xd <= 6; xd += 0.01M)
{
    Console.WriteLine(xd);
}
If you want to stick with doubles but only care about two decimal places, use:
int xd2 = 5;
for (double xd = (double)xd2; xd <= 6; xd += 0.01)
{
    Console.WriteLine(Math.Round(xd, 2));
}
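Note that Math.Round here only affects what is printed; the accumulated error is still in xd, so the loop may or may not include 6.00. If the goal is just a two-decimal display, formatting the value is an equivalent alternative (my own variation, not from the answer above):
int xd2 = 5;
for (double xd = (double)xd2; xd <= 6; xd += 0.01)
{
    Console.WriteLine(xd.ToString("F2")); // "F2" formats to two decimal places; xd itself still drifts
}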
This is because double is a floating-point type, and floating-point arithmetic is not exact.
You can use decimal instead, like this:
static void Main(string[] args)
{
    int xd2 = 5;
    for (decimal xd = (decimal)xd2; xd <= 6; xd += 0.01M)
    {
        Console.WriteLine(xd);
    }
    Console.ReadLine();
}
See this article too: Double precision problems on .NET
If possible, use absolute calculations (each value computed directly from the loop index) instead of iterative ones to get rid of these kinds of rounding errors:
public static void Main(string[] args)
{
    int xd2 = 5;
    // Each value is computed directly from i, so the rounding error does not accumulate.
    // i <= 100 gives 101 values, 5.00 up to and including 6.00, matching the original xd <= 6 bound.
    for (int i = 0; i <= 100; ++i)
    {
        Console.WriteLine(xd2 + i * 0.01);
    }
}
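If the bounds and step are not compile-time constants, the same idea still works by computing the step count up front (a sketch of my own; the variable names upper, step, and steps are just for illustration):
int xd2 = 5;
double upper = 6.0;
double step = 0.01;
int steps = (int)Math.Round((upper - xd2) / step); // number of increments, rounded to avoid truncation error
for (int i = 0; i <= steps; ++i)
{
    Console.WriteLine(xd2 + i * step);
}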