According to IEEE 754-2008, the standard defines both a binary32 and a decimal32 format:
| Name      | Common name      | Base | Digits | E min | E max | Decimal digits | Decimal E max |
|-----------|------------------|------|--------|-------|-------|----------------|---------------|
| binary32  | Single precision | 2    | 23+1   | −126  | +127  | 7.22           | 38.23         |
| decimal32 |                  | 10   | 7      | −95   | +96   | 7              | 96            |
So both use 32 bits, but decimal32 has 7 decimal digits with an E max of 96, while float32 has about 7.22 decimal digits and an E max of roughly 38.
Does this mean decimal32 has similar precision but a far better range? If so, what prevents using decimal32 over float32? Is it their performance (i.e. speed)?
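To make the range comparison concrete, here is a rough sketch using Python's `decimal` module as a stand-in for decimal32 (a context with 7 significant digits and the Emin/Emax from the table; this is only an emulation, not a true decimal32 type):

```python
from decimal import Decimal, Context

# Rough stand-in for decimal32: 7 significant digits, Emin/Emax from the table.
# (Python's decimal module is arbitrary-precision; this only emulates the format.)
d32 = Context(prec=7, Emin=-95, Emax=96)

largest_decimal32 = d32.plus(Decimal('9.999999E+96'))  # largest finite decimal32
largest_binary32 = 3.4028235e38                        # largest finite binary32

print(largest_decimal32)  # 9.999999E+96
print(largest_binary32)   # 3.4028235e+38
```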
Your reasoning when you say “decimal32 has similar precision …” is flawed: between 1 and 1e7, binary32 can represent far more numbers than decimal32. Comparing precision through the “equivalent” number of decimal digits of a binary format gives the wrong impression, because over these ranges of decimal digits the binary format can, in some areas, represent numbers with additional precision.
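For instance, just above 1.0 the binary32 spacing is 2^-23 ≈ 1.2e-7, finer than the 1e-6 spacing of a 7-digit decimal format in that decade. A minimal Python sketch of this (the helper `next_up_binary32` is mine, built on the standard `struct` module):

```python
import struct

def next_up_binary32(x):
    # Next representable binary32 value above a positive x, obtained by
    # incrementing its bit pattern interpreted as an unsigned integer.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1))[0]

print(next_up_binary32(1.0) - 1.0)  # 2**-23 ~= 1.19e-07: binary32 step at 1.0
print(1e-6)                         # step of a 7-digit decimal format at 1.0
```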
The number of binary32 numbers between 1 and 1e7 can be computed by subtracting their bit patterns as if they were integers: about 1.9e8 (see the sketch below). The number of decimal32 numbers in the same range is 7 decades(*) of 9e6 values each, or 6.3e7 (9e6 numbers between 1 and 9.999999, another 9e6 between 10 and 99.99999, …).
(*) like a binade but for powers of ten.
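A minimal Python sketch of that count (the `binary32_bits` helper is just for illustration, using the standard `struct` module; the decimal32 count is plain arithmetic):

```python
import struct

def binary32_bits(x):
    # Encode x as binary32 and reinterpret the 4 bytes as an unsigned integer.
    return struct.unpack('<I', struct.pack('<f', x))[0]

# Positive binary32 values are ordered the same way as their encodings,
# so subtracting the bit patterns counts the representable values in between.
binary32_count = binary32_bits(1e7) - binary32_bits(1.0)

# decimal32: 9e6 values per decade (1.000000 ... 9.999999 in steps of 1e-6),
# over the 7 decades between 1 and 1e7.
decimal32_count = 7 * 9_000_000

print(binary32_count)    # 194549376, roughly 1.9e8
print(decimal32_count)   # 63000000, i.e. 6.3e7
```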