I have a small C++ 32-bit application with this variable assignment:
double unknown = -4.656612875e-10;
I want to understand how this is represented in assembly. Apparently it is loaded into xmm0 as 0xbe000000001c5f68.
To verify, I now add this line:
double unknown1 = 0xbe000000001c5f68;
I was expecting it to result in the exact same assembly, but it is different.
double unknown = -4.656612875e-10;
004F2265 movsd xmm0,mmword ptr [__real@be000000001c5f68 (04F9BE8h)]
004F226D movsd mmword ptr [unknown],xmm0
double unknown1 = 0xbe000000001c5f68;
004F2272 movsd xmm0,mmword ptr [__real@43e7c0000000038c (04F9B40h)]
004F227A movsd mmword ptr [unknown1],xmm0
What hexadecimal value must I assign to unknown1 so that the generated assembly is identical to that for unknown?
0xbe000000001c5f68 is the same as the integer literal 13690942867208167272, so your unknown1 is initialized to that value (which is nowhere near -4e-10; it is on the order of 1e19).

What you see in the assembly is that -4.656612875e-10 has an object representation of the same eight bytes as 0xbe000000001c5f68 (BE 00 00 00 00 1C 5F 68). You can convert the type while keeping the object representation with std::bit_cast to reinterpret the bits, as in the sketch below.

In general, this is called "type punning": keeping the object representation unchanged. Normal conversions and casts keep the represented value the same, or as close as possible (e.g. truncating a double to an integer, or rounding a huge integer to the nearest representable double).
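A minimal C++20 sketch of that (the variable names and printed-output comments are mine; the constant is the bit pattern from the question):

#include <bit>
#include <cstdint>
#include <cstdio>

int main() {
    // Same eight bytes, reinterpreted as a double rather than converted.
    double unknown1 = std::bit_cast<double>(std::uint64_t{0xbe000000001c5f68});
    std::printf("%.10g\n", unknown1); // -4.656612875e-10

    // By contrast, plain initialization converts the value: the huge integer
    // is rounded to the nearest double, whose bits are 0x43e7c0000000038c,
    // which is exactly the second constant in your disassembly.
    double converted = 0xbe000000001c5f68;
    std::printf("%.10g\n", converted); // 1.369094287e+19
}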
Without C++20, the portable way to type-pun is with memcpy:

memcpy(&foo_double, &foo_u64, sizeof(foo_double));
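For completeness, a runnable sketch of that memcpy approach (foo_u64 and foo_double are the illustrative names from the one-liner above):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    std::uint64_t foo_u64 = 0xbe000000001c5f68;
    double foo_double;
    // Copy the raw bytes; compilers optimize this to a plain load.
    std::memcpy(&foo_double, &foo_u64, sizeof(foo_double));
    std::printf("%.10g\n", foo_double); // -4.656612875e-10
}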
There are also hexadecimal floating-point literals, which for this value would look like:

double unknown1 = -0x1.00000001c5f68p-31;
But it doesn't match one-to-one with the object representation. You can see the mantissa field is there literally in hex, but the first 12 bits (sign and exponent fields) of the binary64 are (e + 1023) | (negative ? 0x800 : 0). For this number, (-31 + 1023) | 0x800 == 0xbe0, the first 3 hexadecimal digits.
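A small sketch to check that mapping, assuming the C++17 hex literal above and C++20's std::bit_cast for reading the bits back out:

#include <bit>
#include <cstdint>
#include <cstdio>

int main() {
    double unknown1 = -0x1.00000001c5f68p-31;
    // Read the object representation back out of the double.
    std::uint64_t bits = std::bit_cast<std::uint64_t>(unknown1);
    std::printf("%016llx\n", static_cast<unsigned long long>(bits)); // be000000001c5f68

    // First 12 bits: (e + 1023) | (negative ? 0x800 : 0), with e == -31.
    unsigned top12 = static_cast<unsigned>((-31 + 1023) | 0x800);
    std::printf("%x %s\n", top12,
                (bits >> 52) == top12 ? "(matches)" : "(mismatch)"); // be0 (matches)
}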