I'm currently writing something in C#/.NET that involves sending unsigned 16-bit integers in a network packet. The ordering of the bytes needs to be big endian.
At the bit level, my understanding of 'big endian' is that the most significant bit goes at the end, and the reverse for little endian.
And at the byte level, my understanding is the same -- if I'm converting a 16-bit integer to the two 8-bit integers that comprise it, and the architecture is little endian, then I would expect the most significant byte to go at the beginning.
However, BitConverter appears to put the byte with the smallest value at the end of the array rather than the least significant byte, e.g.
ushort number = 4;
var bytes = BitConverter.GetBytes(number);
Debug.Assert(bytes[BitConverter.IsLittleEndian ? 0 : 1] == 0); // fails on my little-endian machine: bytes[0] is 0x04, not 0x00
For clarity, if my understanding is correct, then on a little endian machine I would expect the above to return 0x00, 0x04, and on a big endian machine 0x04, 0x00. However, on my little endian Windows x86 workstation running .NET 5, it returns 0x04, 0x00.
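To see it concretely, this prints the array from the snippet above (BitConverter.ToString just hex-formats the bytes in array order):

Console.WriteLine(BitConverter.ToString(bytes)); // prints "04-00" on this machine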
It's even documented that they've considered the endianness. From: https://learn.microsoft.com/en-us/dotnet/api/system.bitconverter.getbytes?view=net-5.0
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
Am I being daft or does this seem like the wrong behaviour?
I am indeed being daft. As @mjwills pointed out, and as Microsoft's documentation explains (https://learn.microsoft.com/en-us/dotnet/api/system.bitconverter.islittleendian?view=net-5.0#remarks), endianness describes the order in which the bytes are stored in memory, not which byte 'looks' smallest: on a little-endian architecture the least significant byte is stored first, at the lowest address.
Wikipedia has a slightly better explanation: a big-endian system stores the most significant byte of a word at the smallest memory address, while a little-endian system stores the least significant byte at the smallest address.
So, if you imagine the memory addresses, converting a 16-bit integer with a value of 4 (0x0004) becomes:
Little endian: address 0 holds 0x04, address 1 holds 0x00 -- exactly what BitConverter.GetBytes returned above.
Big endian: address 0 holds 0x00, address 1 holds 0x04.
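And for the original problem of writing the value big-endian into a network packet regardless of the machine's architecture, here's a rough sketch of the sort of thing that should work on .NET 5 (BinaryPrimitives lives in System.Buffers.Binary; the variable names are just for illustration):

using System;
using System.Buffers.Binary;

ushort number = 4;

// Writes the value big-endian no matter what the machine's native byte order is.
var wire = new byte[2];
BinaryPrimitives.WriteUInt16BigEndian(wire, number);
Console.WriteLine(BitConverter.ToString(wire)); // 00-04

// Equivalent using BitConverter: reverse the array on little-endian machines only.
var bytes = BitConverter.GetBytes(number);
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);
Console.WriteLine(BitConverter.ToString(bytes)); // 00-04

I'd lean towards BinaryPrimitives, since the byte order is explicit at the call site rather than depending on a runtime check.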
Hopefully this'll help anyone equally daft in future!