I know the float type is an IEEE floating-point type, and it isn't exact in calculations. For example, if I sum the two floats 8.4 and 2.4, I get 10.7999999 rather than 10.8. I also know BigDecimal can solve this problem, but BigDecimal is much slower than float.
In most real production code we want the exact value, 10.8, not 10.7999..., so my question is: should I avoid float as much as I can in programming? If not, are there any use cases for it in real production code?
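For what it's worth, here is a minimal Java snippet reproducing what I'm seeing (values as above):

    public class FloatSum {
        public static void main(String[] args) {
            // 8.4 and 2.4 have no exact binary representation in float
            System.out.println(8.4f + 2.4f); // prints 10.799999

            // BigDecimal built from Strings keeps the decimal values exact
            java.math.BigDecimal sum =
                    new java.math.BigDecimal("8.4").add(new java.math.BigDecimal("2.4"));
            System.out.println(sum); // prints 10.8
        }
    }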
If you're handling monetary amounts, then numbers like 8.4 and 2.4 are exact values, and you'll want to use BigDecimal for those. However, if you're doing a physics calculation where you're dealing with measurements, the values 8.4 and 2.4 aren't going to be exact anyway, since measurements aren't exact. That's a use case where using double is better. Also, a scientific calculation can involve things like square roots, trigonometric functions, and logarithms, and those are really only practical with IEEE floating-point types. Calculations involving money don't normally involve those kinds of functions.
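As a sketch of the monetary case (the 19% tax rate and two-decimal scale here are just illustrative assumptions):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class MoneyExample {
        public static void main(String[] args) {
            // Construct from Strings, not doubles: new BigDecimal(8.4)
            // would carry the binary representation error into the value.
            BigDecimal price = new BigDecimal("8.40");
            BigDecimal shipping = new BigDecimal("2.40");

            BigDecimal total = price.add(shipping); // exactly 10.80
            // Hypothetical 19% tax, rounded to cents
            BigDecimal withTax = total.multiply(new BigDecimal("1.19"))
                                      .setScale(2, RoundingMode.HALF_UP);

            System.out.println(total);   // 10.80
            System.out.println(withTax); // 12.85
        }
    }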
By the way, there's very little reason to ever use the float type; stick with double.
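To show the precision gap between the two (a minimal sketch, not tied to the question's values):

    public class FloatVsDouble {
        public static void main(String[] args) {
            // float carries roughly 7 decimal digits; double roughly 15-16
            System.out.println(1.0f / 3.0f); // 0.33333334
            System.out.println(1.0 / 3.0);   // 0.3333333333333333
        }
    }

On modern hardware, scalar double arithmetic is generally no slower than float, so the extra precision is essentially free unless memory use for large arrays is the constraint.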