This is my first time posting.
Here is my problem: I don't understand the following example.
Binary representation: 01000000011000000000000000000000
= +(1.11) base 2 × 2^(128 - 127)   <- all questions refer to this line
• = +(1.11) base 2 × 2^1
• = +(11.1) base 2
• = +(1 × 2^1 + 1 × 2^0 + 1 × 2^(-1)) = (3.5) base 10
Questions:
Where does the 128-127 come from?
Why is it 1.11?
First of all, you have to separate the fields (given IEEE 754 32-bit floating-point encoding):

    0 | 10000000 | 11000000000000000000000
    sign (1 bit) | exponent (8 bits) | mantissa/fraction (23 bits)
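If you want to double-check that split yourself, here is a minimal C sketch (my own illustration, not part of the original example) that masks the three fields out of the 32-bit pattern. 0x40600000 is the hex form of the bits in the question:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t bits = 0x40600000u;              /* 01000000 01100000 00000000 00000000 */

        uint32_t sign     = bits >> 31;           /* top bit                   */
        uint32_t exponent = (bits >> 23) & 0xFF;  /* next 8 bits               */
        uint32_t fraction = bits & 0x7FFFFF;      /* low 23 bits               */

        printf("sign     = %u\n", (unsigned)sign);        /* 0                 */
        printf("exponent = %u\n", (unsigned)exponent);    /* 128               */
        printf("fraction = 0x%06X\n", (unsigned)fraction);/* 0x600000 = 1100...*/
        return 0;
    }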
The (128 - 127) is calculating the actual exponent by subtracting the exponent bias from the stored exponent field.
When converting from floating point to decimal, you subtract the exponent bias. When converting the other way, you add it. The exponent bias is calculated as:
2^(k-1) - 1, where k is the number of bits in the exponent field. For single precision, k = 8, so the bias is 2^7 - 1 = 127, and the stored exponent 10000000 (128) decodes to 128 - 127 = 1.
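As a worked check of that formula, a small C sketch (again just an illustration) computes the bias from the exponent width and recovers the unbiased exponent:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        const int k = 8;                      /* exponent field width in bits */
        const int bias = (1 << (k - 1)) - 1;  /* 2^(k-1) - 1 = 127            */

        uint32_t stored_exponent = 128;       /* the 10000000 field from above */
        int actual_exponent = (int)stored_exponent - bias;

        printf("bias = %d, actual exponent = %d\n", bias, actual_exponent); /* 127, 1 */
        return 0;
    }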
The mantissa is 1.11 in base 2 (binary). The mantissa field stores only the fraction part and has an implied leading 1. Hence, with 11000... in the mantissa bits, the implied leading one gives you 1.11.
Had the mantissa bits been 0110000..., the fraction would have been 1.011.
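Putting the three fields back together, a sketch along these lines (same hypothetical decode as the snippets above) rebuilds the value as (-1)^sign × (1 + fraction/2^23) × 2^(exponent - 127), and cross-checks it by reinterpreting the same bits as a float; both print 3.5:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        uint32_t bits = 0x40600000u;                 /* the bit pattern from the question */

        uint32_t sign     = bits >> 31;
        uint32_t exponent = (bits >> 23) & 0xFF;
        uint32_t fraction = bits & 0x7FFFFF;

        /* value = (-1)^sign * (1 + fraction / 2^23) * 2^(exponent - 127) */
        double value = (sign ? -1.0 : 1.0)
                     * (1.0 + (double)fraction / (double)(1u << 23))
                     * pow(2.0, (int)exponent - 127);

        /* Cross-check by reinterpreting the same bits as a float directly. */
        float as_float;
        memcpy(&as_float, &bits, sizeof as_float);

        printf("decoded by hand: %g\n", value);      /* 3.5 */
        printf("reinterpreted:   %g\n", as_float);   /* 3.5 */
        return 0;
    }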