I'm reading the C++ Primer 5th Edition, and I don't understand the following part:
In an unsigned type, all the bits represent the value. For example, an 8-bit unsigned char can hold the values from 0 through 255 inclusive.
What does it mean by "all the bits represent the value"?
This is mostly a theoretical thing. On real hardware, the same holds for `signed` integers as well: in two's complement, every bit participates in the value, the top bit just carries a negative weight. Obviously, with signed integers, some of those values are negative.

Back to `unsigned` — what the text says is basically that the value of an unsigned number is simply b0·2^0 + b1·2^1 + b2·2^2 + … up to the total number of bits, where each b is 0 or 1. In other words, each set bit `i` contributes exactly `1 << i` to the value. Importantly, not only do all bits contribute, but all combinations of bits form a valid number. This is NOT guaranteed for `signed` integers: before C++20, the standard also permitted sign-and-magnitude and ones' complement representations, which have two bit patterns for zero and can have trap representations. Therefore, if you need a bitmask, it has to be an `unsigned` type of sufficient width, or you could run into invalid bit patterns.