Recently I came across a bug/feature in several languages. I have only a very basic understanding of how it's caused (and I'd like a more detailed explanation), but when I think of all the bugs I must have made over the years, the question is: how can I tell "hey, this might cause a ridiculous bug, I'd better use arbitrary precision functions" (see the sketch after the examples below)? Which other languages have this bug, and which don't, and why? Also, why does 0.1+0.7 do this while, e.g., 0.1+0.3 doesn't? Are there any other well-known examples?
PHP:
//the first line actually doesn't make any sense to me:
//why does the cast yield 7 if the value is internally 8?
debug_zval_dump((0.1+0.7)*10); //double(8) refcount(1)
debug_zval_dump((int)((0.1+0.7)*10)); //long(7) refcount(1)
debug_zval_dump((float)((0.1+0.7)*10)); //double(8) refcount(1)
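For reference, the apparent contradiction is just display rounding: PHP rounds floats when converting them to strings for output, so the stored value shows as 8, while (int) truncates the actual value down to 7. Printing with more digits makes this visible:

printf("%.17f", (0.1+0.7)*10); //7.99999999999999911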
Python:
>>> ((0.1+0.7)*10)
7.9999999999999991
>>> int((0.1+0.7)*10)
7
JavaScript:
alert((0.1+0.7)*10); //7.999999999999999
alert(parseInt((0.7+0.1)*10)); //7
Ruby:
>> ((0.1+0.7)*10).to_i
=> 7
>> ((0.1+0.7)*10)
=> 7.999999999999999
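As for the arbitrary-precision route mentioned at the top, a minimal sketch, assuming PHP's bundled BC Math extension: it works on decimal strings, so the values 0.1 and 0.7 are exact and binary floating point never enters the picture.

//BC Math operates on decimal strings, so no binary rounding occurs
echo bcmul(bcadd('0.1', '0.7', 1), '10', 1); //8.0
echo (int) bcmul(bcadd('0.1', '0.7', 1), '10', 1); //8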
PHP uses floating-point numbers by default; you need to cast to an integer manually.
You should be aware of how floating-point arithmetic works; the other posts here provide plenty of links about that.
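To illustrate why 0.1+0.7 misbehaves while 0.1+0.3 appears fine: none of 0.1, 0.3 or 0.7 is exactly representable in binary, but the representation errors in 0.1+0.3 happen to cancel, so the sum rounds to the same double as 0.4, whereas 0.1+0.7 rounds to the double just below 0.8:

printf("%.17f\n", 0.1 + 0.3); //0.40000000000000002, the same double as 0.4
printf("%.17f\n", 0.1 + 0.7); //0.79999999999999993, just below 0.8
printf("%.17f\n", 0.8);       //0.80000000000000004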
Personally, I use round()/ceil()/floor() depending on what I expect, rather than a plain int cast.
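For example (note that round(), ceil() and floor() all return floats in PHP, so add an (int) cast if you need an actual integer):

echo round((0.1+0.7)*10); //8
echo floor((0.1+0.7)*10); //7
echo ceil((0.1+0.7)*10);  //8
echo (int) round((0.1+0.7)*10); //8, now an integer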