Python: Division
A float stops being precise the moment it is defined; that's just how binary floating point works: most decimal fractions can't be stored exactly in binary, so there are always extra digits far behind the decimal point. When you declare float(2.65), it isn't exactly 2.65 any more, it's something like 2.64999999999999991118..., and Python is just smart enough to know that this is close enough to print as 2.65. You can try it out yourself:
from decimal import Decimal

# Decimal built from a float shows the exact binary value that is really stored,
# roughly 2.6499999999999999111... instead of 2.65
print(Decimal(2.65))
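For contrast, here is a small follow-up sketch (standard library only, nothing specific to the problem is assumed): passing Decimal a string keeps exactly the decimal you typed, and the classic 0.1 + 0.2 sum shows the same rounding in action.

from decimal import Decimal

# Python prints the float back as 2.65 because it picks the shortest
# decimal string that maps to the same stored binary value.
print(2.65)                 # 2.65

# Building the Decimal from a string keeps exactly what you typed,
# since no binary float is ever involved.
print(Decimal("2.65"))      # 2.65

# The same rounding is behind this classic result:
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False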