How do I know which endianness to use in struct.unpack() when converting a hexadecimal string to a float?


I have data in the form of a hexadecimal string, and I convert it to a float like this:

import struct, binascii
a = '0X437A1AF6'
# strip the '0X' prefix, decode the hex digits to 4 raw bytes,
# then interpret them as a big-endian 32-bit float
x = struct.unpack('>f', binascii.unhexlify(a[2:]))
print(x[0])

I get the right result, but how do I prove that big-endian '>f' is the right choice, or how do I determine which endianness to use in general? Trial and error is one option, but what are the others?

1 Answer

Endianness is how the bytes of a value are ordered in memory. I know that you used floats in your code, but I'm using integers here for simplicity.

Big endian means that the bytes are ordered from most significant to least significant: 437a1af6 appears in memory as 43 7a 1a f6, which read as a 32-bit integer is 1132075766.

Little endian means that the bytes are ordered from least significant to most significant: the value 437a1af6 would sit in memory as f6 1a 7a 43. Conversely, reading the bytes 43 7a 1a f6 as a little-endian integer gives -166036925 when signed, or 4128930371 when unsigned.
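A quick way to see this is to unpack the same four bytes with both byte orders; a minimal sketch using the bytes from your question:

import struct
data = bytes.fromhex('437a1af6')
print(struct.unpack('>i', data)[0])  # big-endian signed: 1132075766
print(struct.unpack('<i', data)[0])  # little-endian signed: -166036925
print(struct.unpack('<I', data)[0])  # little-endian unsigned: 4128930371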

Floating point has a specific byte ordering as well: struct's 'f' format is an IEEE 754 single-precision value, and the '>' or '<' prefix controls the order of its four bytes. Reading with the wrong endianness can drastically change the value that is returned.
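To make that byte order visible, you can pack a known value both ways and inspect the raw bytes; a minimal sketch using the big-endian result from your data:

import struct
value = 250.10531616210938
print(struct.pack('>f', value).hex())  # '437a1af6' - most significant byte first
print(struct.pack('<f', value).hex())  # 'f61a7a43' - same four bytes, reversed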


Which endianness you use doesn't really matter as long as you stay consistent: what counts is reading the bytes in the same order they were written. x86 processors are little-endian, so little endian is the more common choice there, but there is no universally right or wrong choice.
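As a sketch of what staying consistent means: as long as the same format string is used on both sides, the value survives the round trip, and sys.byteorder reports your machine's native order:

import struct, sys
value = 250.10531616210938
for fmt in ('>f', '<f'):
    # pack and unpack with the same endianness: the value comes back unchanged
    assert struct.unpack(fmt, struct.pack(fmt, value))[0] == value
print(sys.byteorder)  # 'little' on x86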

In your case, little endian ('<f') unpacks to -7.832944125711889e+32 and big endian ('>f') unpacks to 250.10531616210938.
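Both results can be reproduced directly from the hex string in your question:

import struct, binascii
raw = binascii.unhexlify('437A1AF6')
print(struct.unpack('>f', raw)[0])  # 250.10531616210938
print(struct.unpack('<f', raw)[0])  # -7.832944125711889e+32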