While reading page 119 of DDIA, I came across how Thrift CompactProtocol encodes an int64 with variable length.
Example taken from DDIA:
Number in base 10 to be encoded: 1337. It is encoded as 1|111001|0 0|0010100.
1st byte: 1|111001|0 --->
the first 1 indicates that more data follows after this byte
then 111001 is the actual encoding of 37 (I assume they use little-endian)
the last 0 represents the sign (+)
2nd byte: 0|0010100
My question is: how are 37 and 13 encoded to 111001 and 0010100? Shouldn't they be 100101 and 1101? Thank you
I searched online, but it seems this is so straightforward that nobody has ever asked about it :(
Decimal 1337 is hexadecimal 0539, or binary 0000 0101 0011 1001. The number is split into groups of bits of this binary representation, not into the decimal digits 13 and 37: 111001 is simply the lowest six bits of 1337, and 10100 (decimal 20) is the remaining bits. There is a clear and comprehensive explanation and comparison of BinaryProtocol vs Thrift CompactProtocol in Designing Data-Intensive Applications by Martin Kleppmann (Chapter 4, Encoding and Evolution), found e.g. via the first three results of googling DDIA "Thrift CompactProtocol".
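If it helps, here is a minimal Python sketch (my own illustration, not code from the book or the Thrift library) of the zigzag-plus-varint scheme CompactProtocol uses for i64 field values: 1337 is first zigzag-encoded to 2674 so that the sign lands in the lowest bit, and the result is then emitted in 7-bit groups of its binary representation, least significant group first, with the top bit of each byte acting as the continuation flag.

```python
def zigzag_encode(n: int) -> int:
    # Map a signed 64-bit value to an unsigned one so that the sign
    # ends up in the least significant bit: 0->0, -1->1, 1->2, -2->3, ...
    return (n << 1) ^ (n >> 63)

def varint_encode(u: int) -> bytes:
    # Emit the value in 7-bit groups, least significant group first;
    # the top bit of each byte is 1 while more bytes follow.
    out = bytearray()
    while u >= 0x80:
        out.append((u & 0x7F) | 0x80)
        u >>= 7
    out.append(u)
    return bytes(out)

encoded = varint_encode(zigzag_encode(1337))
print([f"{b:08b}" for b in encoded])  # ['11110010', '00010100']
```

The first byte 11110010 is the book's 1|111001|0 (continuation bit, the low six bits of 1337, then the zigzag sign bit), and the second byte 00010100 is 0|0010100, i.e. the remaining bits of 1337 (10100 = 20), so the decimal values 37 and 13 never appear anywhere in the encoding.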