I have the following problem: I'm trying to calculate the Adler-32 checksum of a data block using Crypto++, but I get the wrong checksum after converting the byte[4] array output to a uint32_t.
This code using CRC32 works just fine:
CryptoPP::CRC32 crc;
byte digest[CryptoPP::CRC32::DIGESTSIZE];
crc.CalculateDigest(digest, (const byte*)pData.data(), pData.size());
uint32_t checksum = *(uint32_t*)digest; //this works fine
but the code calculating the Adler-32 checksum returns an invalid value:
CryptoPP::Adler32 adler;
byte digest[CryptoPP::Adler32::DIGESTSIZE];
adler.CalculateDigest(digest, (const byte*)pData.data(), pData.size());
uint32_t checksum = *(uint32_t*)digest; //this returns an invalid value
I hope someone can give me a hint.
greetz Fabian
The problem is that this code doesn't do what you want:
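uint32_t checksum = *(uint32_t*)digest;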
This code says to read the digest as if it held a 32-bit integer in the form this CPU natively stores 32-bit integers. But it doesn't contain that. It contains an array of 4 bytes that represent the hash, but not as an x86-style native integer.
Try this:
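A sketch of one way to do that (assuming memcpy from <cstring> and ntohl, which comes from <arpa/inet.h> on POSIX systems or <winsock2.h> on Windows):

uint32_t checksum;
memcpy(&checksum, digest, sizeof(checksum)); //copy the raw digest bytes into the integer as-is
checksum = ntohl(checksum);                  //the Adler-32 digest bytes are big-endian, so convert to host byte order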
This says to read the raw bytes into an integer, and then convert it from the digest's big-endian byte order into the host's (x86) byte order.
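Since the digest stores the most significant byte first, another option is to assemble the value by hand, which works on any host without byte-order conversion functions:

uint32_t checksum = (uint32_t(digest[0]) << 24) | (uint32_t(digest[1]) << 16)
                  | (uint32_t(digest[2]) << 8)  |  uint32_t(digest[3]);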