    A float stops being precise the moment it's defined; that's just how floats work. They are stored in binary, and most decimal fractions (like 2.65) have no exact binary representation, so the nearest representable binary value gets stored instead. When you write float(2.65) it isn't 2.65 any more, it's something like 2.6499999999999999; Python is just clever enough to print the shortest decimal string that maps back to the same float, so you see 2.65. You can try it out yourself:

    from decimal import Decimal
    print(Decimal(2.65))  # shows the exact binary value stored for 2.65
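
    If you need the value to stay exactly 2.65, one common workaround (a minimal sketch, not the only option) is to construct the Decimal from a string instead of a float, so the binary rounding never happens in the first place:

    from decimal import Decimal

    # Built from a float: inherits the binary rounding error
    print(Decimal(2.65))    # 2.6499999999999999...
    # Built from a string: parsed as an exact decimal, no float involved
    print(Decimal('2.65'))  # 2.65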