Why is C#'s decimal type 128 bits?

Is there a reason why C#'s decimal type was chosen to be 128 bits? Analogy with double would have suggested 64 bits (only decimal floating-point instead of binary), whereas analogy with Java, or a philosophy of 'make sure it has whatever it takes', would have suggested arbitrary precision.

This is not a rhetorical question. I personally would've gone with arbitrary precision, but 128 bits probably works fine. I'm just asking whether the actual reasons that went into the choice have been documented anywhere, or can plausibly be guessed; whether it was aimed at a specific use case, or decided by intuition; whether there are known use cases for which 64 bits is not enough but which don't require arbitrary precision?
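For concreteness, here is a small sketch of the contrast being drawn (a plain .NET console snippet; the digit count and maximum value in the comments are the documented ones for decimal, and BigDecimal refers to Java's arbitrary-precision type mentioned above):

```csharp
using System;

class Contrast
{
    static void Main()
    {
        // Binary floating point (double, 64 bits) cannot represent 0.1 exactly,
        // while decimal floating point can.
        double d = 0.1 + 0.2;
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(d == 0.3);   // False (binary rounding error)
        Console.WriteLine(m == 0.3m);  // True

        // But decimal is still a fixed-width 128-bit type with 28-29 significant
        // digits, unlike an arbitrary-precision type such as Java's BigDecimal.
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335
    }
}
```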

BEST ANSWER

Decimal gives you a higher capacity for storing a real number than double (8 bytes = 64 bits), but less than BigInteger, which in any case is not a type for manipulating reals (it only holds integers).
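That capacity is visible directly: the standard decimal.GetBits method exposes the 128-bit layout as four 32-bit integers, a 96-bit mantissa plus one word holding the scale and sign (shown here purely as an illustration):

```csharp
using System;

class DecimalLayout
{
    static void Main()
    {
        // Four 32-bit ints = 128 bits: parts[0..2] form a 96-bit integer mantissa,
        // parts[3] packs the decimal scale (here 3, for 10^-3) and the sign bit.
        int[] parts = decimal.GetBits(123.456m);
        Console.WriteLine(string.Join(", ", parts)); // 123456, 0, 0, 196608
    }
}
```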

Our computing is built on microprocessor architectures, and today's CPUs mostly work with 64-bit registers for internal data and memory access.

Historically, CPU registers have been 8, 16, 32 or 64 bits wide, and even 128 bits on particular chipsets.

So in practice, the operand size a 64-bit CPU is built and optimized around is 64 bits, i.e. 8 bytes.

x64 CPUs run at their best capacity and speed when using 8-byte registers to read and write memory, do arithmetic, perform conditional tests, call and return from procedures, push and pop the stack, and so on.

So to allow the fastest calculation on numbers wider than 8 bytes, the first and best choice is to use two 64-bit registers, hence 128 bits, which equals 8 + 8 bytes.
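As a rough sketch of what "two 64-bit registers" means in practice, here is a hypothetical 128-bit pair type (illustration only, not how the runtime implements decimal): the value lives in a high and a low 64-bit word, and addition has to propagate a carry between them.

```csharp
// Hypothetical illustration: a 128-bit unsigned value kept in two 64-bit words,
// added with manual carry propagation, the way it would map onto two x64 registers.
struct UInt128Pair
{
    public ulong Hi, Lo;

    public static UInt128Pair Add(UInt128Pair a, UInt128Pair b)
    {
        ulong lo = unchecked(a.Lo + b.Lo);     // low halves (may wrap around)
        ulong carry = lo < a.Lo ? 1UL : 0UL;   // wrap-around means a carry out
        ulong hi = unchecked(a.Hi + b.Hi + carry);
        return new UInt128Pair { Hi = hi, Lo = lo };
    }
}
```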

Think of it as a road: 32 bits is a small road, 64 bits the standard road and 128 bits a big road. Using 32 bits on an x64 system means narrowing the road, while a 128-bit access means taking up two roads at once.

But for best optimization, and because of how the electronics are manufactured, everything in our CPU architectures is sized in power-of-two multiples of a byte: 1/2/4/8/16/32/64/128/256... bytes. That follows from the hardware, from how manufacturers build things, and from binary arithmetic.
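You can check those sizes straight from C# (sizeof works on the built-in numeric types in safe code):

```csharp
using System;

class Sizes
{
    static void Main()
    {
        // Built-in numeric types come in power-of-two byte sizes.
        Console.WriteLine(sizeof(byte));    // 1
        Console.WriteLine(sizeof(short));   // 2
        Console.WriteLine(sizeof(int));     // 4
        Console.WriteLine(sizeof(long));    // 8
        Console.WriteLine(sizeof(decimal)); // 16, i.e. 128 bits
    }
}
```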

Thus the first practical way to hold a big real number wider than 64 bits is to use 128 bits, that is to say two 64-bit CPU registers (on an x64 system).

Intel® 64 and IA-32 Architectures Software Developer Manuals