Basically, that's because floats have a fixed precision and the decimal point in them is "floating"; integer division and floating-point division are completely different operations. I recommend reading about the IEEE 754 floating-point standard and how loss of precision is handled in the cases where it matters. The most important thing for a beginner to remember is that you should never test two floats for exact equality; instead, check whether they differ by less than some small tolerance (epsilon). Hope this helps ;)
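
Here is a minimal sketch of both points in C (the question doesn't name a language, so C is just an assumption; the `1e-9` epsilon is an arbitrary choice for illustration, not a universal constant):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Integer division truncates; floating-point division does not. */
    int a = 7, b = 2;
    printf("7 / 2   (int)    = %d\n", a / b);    /* prints 3 */
    printf("7 / 2.0 (double) = %f\n", a / 2.0);  /* prints 3.500000 */

    /* 0.1 + 0.2 is not exactly 0.3 in binary floating point. */
    double x = 0.1 + 0.2;
    double y = 0.3;
    printf("x == y         -> %d\n", x == y);    /* usually 0 (false) */

    /* Compare with a tolerance (epsilon) instead of exact equality.
       Pick the epsilon to match the scale of your values. */
    double eps = 1e-9;
    printf("|x - y| < eps  -> %d\n", fabs(x - y) < eps);  /* 1 (true) */

    return 0;
}
```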