Why is the precision accurate when Decimal() takes a string instead of a float in Python?

Why are these two results different, and how do the two calls to Decimal() differ from each other?

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3')
Decimal('0.0')

>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
Decimal('2.775557561565156540423631668E-17')

There are 2 answers below

Mahmoud Elshahat (BEST ANSWER)

This is quoted from the decimal module source code, which explains it well: if the input is a float, the module internally calls the class method Decimal.from_float():

Note that Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is 0x1.999999999999ap-4. The exact equivalent of the value in decimal is 0.1000000000000000055511151231257827021181583404541015625.
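
You can see both halves of that explanation directly in the interpreter. This is just a quick illustration using only the standard decimal module and float.hex(); it adds nothing beyond what the quote already describes:

>>> from decimal import Decimal
>>> Decimal(0.1) == Decimal.from_float(0.1)   # a float argument goes through from_float
True
>>> (0.1).hex()                               # the nearest representable binary value
'0x1.999999999999ap-4'
>>> Decimal(0.1)                              # its exact decimal expansion
Decimal('0.1000000000000000055511151231257827021181583404541015625')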

Devesh Kumar Singh

When you pass '0.1' as a string, it is parsed directly into a Decimal with no loss of precision. When you pass the float 0.1, precision has already been lost before Decimal ever sees the value, as you can see below:

>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal('0.1')
Decimal('0.1')

This then leads to all sorts of weird results:

>>> Decimal(0.3) - Decimal(0.1) + Decimal(0.1) + Decimal(0.1)
Decimal('0.3999999999999999944488848768')
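
If you do start from a float and want the value its literal was meant to represent, one common workaround (a sketch, not part of the answer above) is to round-trip through str(), since Python's repr of a float is the shortest string that converts back to the same float:

>>> from decimal import Decimal
>>> x = 0.1
>>> Decimal(str(x))          # str(0.1) is '0.1', so no binary noise gets in
Decimal('0.1')
>>> Decimal(str(x)) + Decimal(str(x)) + Decimal(str(x)) - Decimal('0.3')
Decimal('0.0')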