This feels like the kind of code that only fails in situ, but I will attempt to adapt it into a code snippet that represents what I'm seeing.
float f = myFloat * myConstInt; /* Where myFloat==13.45, and myConstInt==20 */
int i = (int)f;
int i2 = (int)(myFloat * myConstInt);
After stepping through the code, i==269, and i2==268. What's going on here to account for the difference?
Because floating point variables are not infinitely accurate. Use a decimal if you need that kind of accuracy.
Different rounding modes may also play into this issue, but the accuracy problem is the one you're running into here, AFAIK.
Replace the float with a decimal, for example, and see if you get the same answer.
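A minimal sketch of that change (using the decimal suggestion above; the declarations are assumed):

decimal myDecimal = 13.45m;               // 13.45 is exact as a decimal
const int myConstInt = 20;
int i = (int)(myDecimal * myConstInt);    // 13.45m * 20 == 269.00m, so i == 269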
I'd like to offer a different explanation.
Here's the code, which I've annotated (I looked into memory to dissect the floats):
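(The listing below is a reconstruction from the bit patterns discussed next; the original declarations are assumed.)

float myFloat = 13.45f;        // stored as 1101.01110011001100110011 (binary) ≈ 13.4499998
const int myConstInt = 20;     // 10100 in binary

float f = myFloat * myConstInt;           // product rounded to 24 bits: 100001101.0 = 269.0
int i = (int)f;                           // i == 269
int i2 = (int)(myFloat * myConstInt);     // product kept at double (or extended) precision ≈ 268.9999962, truncated to 268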
Let's look closer at the calculations:
f = 1101.01110011001100110011 * 10100 = 100001100.111111111111111 111
The part after the space is bits 25-27, which cause bit 24 to be rounded up, and hence the whole value to be rounded up to 269.
int i2 = (int)(myFloat * myConstInt)
myFloat is extended to double precision for the calculation (0s are appended): 1101.0111001100110011001100000000000000000000000000000
myFloat * 20 = 100001100.11111111111111111100000000000000000000000000
Bits 54 and beyond are 0s, so no rounding is done: the cast results in the integer 268.
(A similar explanation would work if extended precision is used.)
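If you want to inspect the bits yourself, something along these lines works (using System.BitConverter and System.Convert; the double-precision product is forced explicitly here to mirror what the runtime did implicitly):

float myFloat = 13.45f;

int floatBits = BitConverter.ToInt32(BitConverter.GetBytes(myFloat), 0);
long productBits = BitConverter.DoubleToInt64Bits((double)myFloat * 20);

Console.WriteLine(Convert.ToString(floatBits, 2).PadLeft(32, '0'));    // IEEE-754 bits of 13.45f
Console.WriteLine(Convert.ToString(productBits, 2).PadLeft(64, '0'));  // bits of the product held at double precision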
UPDATE: I refined my answer and wrote a full-blown article called "When Floats Don’t Behave Like Floats".
Floating point has limited accuracy and is based on binary rather than decimal. The decimal number 13.45 cannot be represented exactly in binary floating point, so it rounds down slightly. Multiplying by 20 magnifies that error: at this point you have 268.999..., not 269, so the conversion to integer truncates to 268.
To get rounding to the nearest integer, you could try adding 0.5 before converting back to integer.
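For example, with the values from the question (Math.Round is the more explicit way to do the same thing):

int rounded1 = (int)(myFloat * myConstInt + 0.5f);        // add 0.5, then truncate: 269
int rounded2 = (int)Math.Round(myFloat * myConstInt);     // or round explicitly: 269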
For "perfect" arithmetic, you could try using a Decimal or Rational numeric type - I believe C# has libraries for both, but am not certain. These will be slower, however.
EDIT - I have found a "decimal" type so far, but not a rational - I may be wrong about that being available. Decimal floating point is inaccurate, just like binary, but it's the kind of inaccuracy we're used to, so it gives less surprising results.
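To illustrate that last point, decimal arithmetic rounds too, just in base 10 instead of base 2 (a quick sketch):

decimal third = 1m / 3m;        // 0.3333333333333333333333333333 (rounded in base 10)
decimal backToOne = third * 3m; // 0.9999999999999999999999999999, not exactly 1
decimal product = 13.45m * 20m; // 269.00 exactly, because 13.45 has an exact base-10 representation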
Float math can be performed at higher precision than the declared type. But as soon as you store the result into float f, that extra precision is lost. In the second method you don't lose it until the result is cast down to int.
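If that's what's happening, explicitly casting the product back to float should force the narrowing and reproduce the first result (a sketch, using the question's variables):

int i2 = (int)(myFloat * myConstInt);          // 268 here: the intermediate stays at higher precision
int i3 = (int)(float)(myFloat * myConstInt);   // 269: the explicit (float) cast forces rounding to float precision first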
Edit: See the question "Why differs floating-point precision in C# when separated by parantheses and when separated by statements?" for a better explanation than I probably provided.