strange 0xF0B8 constant in CRC verification


I'm studying a working program that, among other things, verifies the validity of an incoming message. The CRC used is CCITT-16, the initial value of the CRC is 0xFFFF, and the message is sent in bytes that are received LSb first, with the bytes MSB first. The message forms a long bitstream, up to 1024 bits, including the 16 CRC bits at the end.

I fully understand the calculation of the CRC value. What puzzles me, and what I can find no reference justifying, is the validity check performed after the whole message has been received. Namely, the CRC value is checked against 0xF0B8 instead of 0x0000. Am I missing something?

// crcValue is the 16 bit CRC variable under construction
// dataBit is a byte containing one bit of the data bitstream at LSb

...

// CRC calculation bit by bit

if(dataBit ^ (crcValue & 0x0001)) crcValue = (crcValue >> 1) ^ 0x8408;
else                              crcValue >>=1;
          
//After running over all the bitstream the crcValue is checked 

if(crcValue != 0xF0B8) ... // reject data  
else ...  // accept data

The program works. As said, I cannot understand where the 0xF0B8 value comes from.

Thank you in advance

1 Answer

You left out the fact that you are also inverting the CRC before appending it to the message. Checking against 0x0000 would work if there were no final invert or other final exclusive-or. 0xf0b8 is that CRC (initialized with 0xffff, but without inverting the result) of two zero bytes, which is your appended CRC in the case of an empty message.