I was reading about error detection and stumbled upon a statement I didn't quite understand. The statement said "for a k-bit string we need lg k parity bits to detect 2-bit errors", where lg is log to the base 2.
I couldn't quite see why this is true. Is there a formal derivation that confirms it?
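For context, here is the rough counting argument I pieced together myself (I may be misapplying it, which is partly why I'm asking). Detecting every 2-bit error requires a code with minimum distance 3, which is the same distance needed to correct any single-bit error, so the Hamming bound for single-error correction would seem to apply:

```latex
% r parity bits give a 2^r-valued syndrome, which must name either
% "no error" or any one of the k + r positions that could hold the
% single flipped bit, hence:
\[
  2^r \;\ge\; k + r + 1
  \quad\Longrightarrow\quad
  r \approx \lg k \ \text{for large } k .
\]
```

Is this the right way to look at it, or is the book's claim derived differently?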
The name of the book is Data Networks by Gallager.
I'm not doubting what the book says; I'm just curious to see a derivation.
Thanks, Chander
Have a look at the Wikipedia page for Cyclic Redundancy Check. Strictly speaking, a parity bit is a one-bit check, so talking about more than one parity bit is probably shorthand for the equivalent CRC. The article "Mathematics of CRC" gives more on the derivation.
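If it helps to see the mechanics, here is a minimal Python sketch of the CRC computation as long division over GF(2), using bit lists (MSB first). The message and generator below are just the worked example from the Wikipedia CRC article, not a standard CRC polynomial:

```python
def crc_remainder(bits, poly):
    """Remainder of long division over GF(2).

    bits: message as a list of 0/1 values, most significant bit first.
    poly: generator polynomial as a bit list with its leading 1 included
          (e.g. [1, 0, 1, 1] is x^3 + x + 1).
    Returns the len(poly) - 1 check bits to append to the message.
    """
    r = len(poly) - 1
    work = bits + [0] * r          # append r zero bits, as in CRC framing
    for i in range(len(bits)):     # scan left to right
        if work[i]:                # cancel each leading 1 by XORing in poly
            for j, p in enumerate(poly):
                work[i + j] ^= p
    return work[-r:]               # what's left is the remainder

# Worked example from the Wikipedia CRC article:
# message 11010011101100 with generator x^3 + x + 1 leaves remainder 100.
msg = [int(b) for b in "11010011101100"]
print(crc_remainder(msg, [1, 0, 1, 1]))  # → [1, 0, 0]
```

The receiver runs the same division over the message plus its check bits and accepts if the remainder is all zeros; which 2-bit errors are guaranteed to be caught depends on the generator polynomial and the block length.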