Why does C++ break down into nibbles?


Why is information stored in sequences of four bits (nibbles)? Is there any particular reason that four bits were selected, over perhaps three bits, or five bits? I've just been wondering about this question, and I haven't found a definitive answer (if there is one) as to why we group bits in this manner.


There are 3 best solutions below

BEST ANSWER

The closest nibbles come to being relevant is that a number in hex notation has one digit per nibble... the reason hex is seen quite a bit in code is simply that it represents the common 8-bit byte with exactly two hex digits, which is reasonably concise and not too hard for humans to get used to. It's easy enough to convert mentally back to binary, without losing track of which digits you're looking at the way you can with a 32-bit or 64-bit value written in binary.

  • Example: seeing 0x30000 it's clear the 4*4=16 less-significant bits on the right are 0s, so it's the 17th and 18th bits that are set. That's easier and less error-prone than interpreting 0b110000000000000000 or (decimal) 196608; the sketch below spells out the equivalence.
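As a small sketch, the same value written as hex, binary, and decimal literals (binary literals and digit separators need C++14), checked at compile time:

```cpp
#include <cstdint>

// The same 18-bit value written three ways; each hex digit covers one nibble,
// so 0x30000 makes the 16 trailing zero bits obvious at a glance.
constexpr std::uint32_t as_hex     = 0x30000;
constexpr std::uint32_t as_binary  = 0b11'0000'0000'0000'0000; // C++14 binary literal
constexpr std::uint32_t as_decimal = 196608;

static_assert(as_hex == as_binary && as_hex == as_decimal,
              "all three literals name the same value");

int main() { return 0; } // nothing to run; the checks happen at compile time
```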

C++ bit fields let structs pack fields of arbitrary widths and positions, so you can create "nibbles" if you like. But in the likely case that the CPU has no special support for nibbles, or the C++ optimiser considers such instructions too rarely beneficial to bother using them, the compiled code will simply bit-shift and bitwise-AND/OR into and out of the CPU-addressable units of memory (bytes or words) that hold them, just as it has to do for fields of other unusual widths.
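A minimal sketch of such a bit-field struct, assuming a typical implementation packs the two 4-bit fields into a single byte (the exact layout is implementation-defined):

```cpp
#include <cstdint>
#include <iostream>

// Two 4-bit fields packed into (typically) one byte. The compiler emits
// ordinary shift/mask instructions to read and write them; ordering and
// padding are implementation-defined.
struct Nibbles {
    std::uint8_t low  : 4;
    std::uint8_t high : 4;
};

int main() {
    Nibbles n{};
    n.low  = 0xA;   // values above 15 would be truncated to 4 bits
    n.high = 0x3;

    std::cout << "sizeof(Nibbles) = " << sizeof(Nibbles) << '\n'; // usually 1
    std::cout << "low = " << +n.low << ", high = " << +n.high << '\n';
}
```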

A few CPUs have supported Binary Coded Decimal number representations where each decimal digit occupied a nibble, but that's not supported by the C++ Standard.
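You can still do packed BCD by hand in standard C++ with ordinary shifts and masks; pack_bcd and unpack_bcd below are hypothetical helper names, purely for illustration:

```cpp
#include <cstdint>
#include <iostream>

// Pack two decimal digits (0-9) into one byte, one digit per nibble.
std::uint8_t pack_bcd(unsigned tens, unsigned ones) {
    return static_cast<std::uint8_t>((tens << 4) | (ones & 0x0F));
}

// Recover the decimal value from a packed BCD byte.
unsigned unpack_bcd(std::uint8_t bcd) {
    return ((bcd >> 4) & 0x0F) * 10u + (bcd & 0x0F);
}

int main() {
    std::uint8_t packed = pack_bcd(4, 2);              // 0x42
    std::cout << std::hex << "packed = 0x" << +packed  // prints 42
              << std::dec << ", value = " << unpack_bcd(packed) << '\n'; // 42
}
```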

ANSWER

There's no mechanism in C++ that breaks information down into nibbles for storage.

What you mostly see in implementations is the octet, not the nibble. On virtually every implementation a byte, the smallest unit of memory that can be addressed in C++, is an octet (8 bits).

How many bits make up a byte (i.e. an unsigned char) in the C++ language is actually implementation-defined.
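A quick way to see what your implementation uses is CHAR_BIT from <climits>; a sketch, noting that the printed value is almost always 8:

```cpp
#include <climits>
#include <iostream>

int main() {
    // CHAR_BIT is the number of bits in a byte (an unsigned char) on this
    // implementation; the standard only guarantees it is at least 8.
    static_assert(CHAR_BIT >= 8, "required by the standard");
    std::cout << "bits per byte: " << CHAR_BIT << '\n'; // almost always 8
}
```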

ANSWER

There is no guarantee that information is stored in sequences of four bits. It is more likely stored in sequences of 8 bits (a byte), but this is entirely dependent on your architecture and the value of CHAR_BIT. sizeof can only return the size of your data types in bytes, and sizeof(char) is guaranteed to return 1. The standard does not dictate that a byte is exactly 8 bits, only that it is at least 8.
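A brief sketch combining sizeof (which counts bytes) with CHAR_BIT to get bit widths; everything except sizeof(char) may vary by implementation:

```cpp
#include <climits>
#include <iostream>

int main() {
    // sizeof reports sizes in bytes; sizeof(char) is 1 by definition.
    std::cout << "sizeof(char) = " << sizeof(char)
              << " byte (" << sizeof(char) * CHAR_BIT << " bits)\n";
    std::cout << "sizeof(int)  = " << sizeof(int)
              << " bytes (" << sizeof(int) * CHAR_BIT << " bits)\n"; // commonly 4 bytes / 32 bits
}
```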