When I type this into the Visual Studio 2008 immediate window:
? .9 - .8999999999999995
It gives me this as the answer:
0.00000000000000055511151231257827
The documentation says that a double has 15-16 digits of precision, but it's giving me a result with 32 digits of precision. Where is all that extra precision coming from?
There are only 17 significant digits in the answer; all those leading zeroes don't count. The number is actually more like 5.5511151231257827 × 10^-16. The mantissa portion has 17 digits in it, which fits: a double guarantees 15-16 digits of precision, and up to 17 digits can be needed to display the exact stored value. The exponent (-16) serves to shift the decimal point over by 16 places, but doesn't change the number of digits in the overall number.
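You can see this without counting zeroes by printing the result in scientific notation. Here's a minimal C# sketch; the "R" round-trip format specifier is standard .NET, though the exact default output can vary by runtime version:

    using System;

    class Program
    {
        static void Main()
        {
            double d = 0.9 - 0.8999999999999995;

            // Default formatting rounds to 15 significant digits:
            Console.WriteLine(d);               // 5.55111512312578E-16

            // "R" (round-trip) prints every digit needed to reconstruct
            // the exact double, up to 17 significant digits:
            Console.WriteLine(d.ToString("R")); // 5.5511151231257827E-16
        }
    }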
Edit
After getting some comments, I'm curious now about what's really going on. I plugged the number in question into this IEEE-754 Converter. It took the liberty of rounding the last "27" to "30", but I don't think that changes the conclusions.
The converter breaks down the number into its three binary parts:
Sign: 0 (positive)
Exponent: -51
Significand: 1.0100000000000000000000000000000000000000000000000000 (binary for 1.25₁₀)
So this number is 1.01₂ × 2^-51, or 1.25₁₀ × 2^-51. Since only three significant binary digits are being stored, that suggests Lars may be onto something. The extra decimal digits can't be "random noise", since they come out the same every time the number is converted.
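You can verify the converter's breakdown in code. This is a sketch assuming the standard IEEE-754 layout of a .NET double (1 sign bit, 11 biased-exponent bits, 52 fraction bits):

    using System;

    class Program
    {
        static void Main()
        {
            long bits = BitConverter.DoubleToInt64Bits(0.9 - 0.8999999999999995);

            int sign      = (int)((bits >> 63) & 1);            // 1 sign bit
            int exponent  = (int)((bits >> 52) & 0x7FF) - 1023; // 11 bits, minus the bias
            long fraction = bits & 0xFFFFFFFFFFFFFL;            // 52 fraction bits

            Console.WriteLine("Sign:     " + sign);             // 0
            Console.WriteLine("Exponent: " + exponent);         // -51
            Console.WriteLine("Fraction: " + Convert.ToString(fraction, 2).PadLeft(52, '0'));
            // 0100000000000000000000000000000000000000000000000000
            // With the implicit leading 1, the significand is 1.01 in binary = 1.25 decimal.
        }
    }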
The data suggests that the only meaningful stored decimal digit is the "5". The leading zeroes come from the exponent, and the rest of the seemingly-random digits come from expanding 1.25 × 2^-51 (equivalently, 5 × 2^-53) in decimal.
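Those digits are fully deterministic. Since 1.25 × 2^-51 = 5 × 2^-53, and 2^-53 = 5^53 / 10^53, the exact stored value is the terminating decimal 5^54 / 10^53. Here's a sketch using BigInteger to print it exactly (System.Numerics shipped after the .NET 3.5 that came with VS 2008, so treat the environment as an assumption):

    using System;
    using System.Numerics;

    class Program
    {
        static void Main()
        {
            // 1.25 * 2^-51 == 5 * 2^-53, and 2^-53 == 5^53 / 10^53,
            // so the exact stored value is the terminating decimal 5^54 / 10^53.
            BigInteger numerator = BigInteger.Pow(5, 54);

            Console.WriteLine("0." + numerator.ToString().PadLeft(53, '0'));
            // 0.00000000000000055511151231257827021181583404541015625
        }
    }

The value the immediate window printed is just this exact expansion rounded to 17 significant digits.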