I'm developing a time-critical algorithm in Java and therefore am not using BigDecimal. To handle rounding errors, I instead set an upper error bound, below which different floating-point numbers are considered to be exactly the same. Now the problem is: what should that bound be? Or in other words, what's the biggest possible rounding error that can occur when performing computational operations with floating-point numbers (addition, subtraction, multiplication and division)?
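Roughly, the comparison I have in mind looks like this (nearlyEqual and the exact value of EPSILON are only illustrative; EPSILON is the bound I'm asking about):

    // Two doubles are treated as exactly the same value if they differ
    // by less than the chosen error bound.
    static final double EPSILON = 1e-11; // tentative value, see the experiment below

    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) < EPSILON;
    }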
From an experiment I've done, it seems that a bound of 1e-11 is enough.
PS: This problem is language-independent.
EDIT: I'm using the double data type. The numbers are generated with Random's nextDouble() method.
EDIT 2: It seems I need to calculate the error based on how the floating-point numbers I'm using are generated. The nextDouble() method looks like this:
    public double nextDouble() {
        // next(26) supplies the 26 high bits and next(27) the 27 low bits of a
        // random 53-bit integer, which is scaled into [0.0, 1.0) by dividing by 2^53.
        return (((long)(next(26)) << 27) + next(27))
                / (double)(1L << 53);
    }
Based on the constants in this method, I should be able to calculate the biggest possible error that can occur for a floating-point number generated with this method specifically (its machine epsilon?). I would be glad if someone could post the calculation.
The worst-case rounding error on a single simple operation is half the gap between the pair of doubles that bracket the real-number result of the operation. Results from Random's nextDouble method are "from the range 0.0d (inclusive) to 1.0d (exclusive)". For those numbers, the largest gap (between consecutive doubles just below 1.0) is 2^-53, about 1.1e-16, so the worst-case rounding error is about 5.5e-17.
A small program can print the gap for some sample numbers, including the largest result of Random's nextDouble.
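The sketch below uses Math.ulp, which returns the gap between its argument and the next larger double, and Math.nextDown(1.0) for the largest possible nextDouble result; the particular sample values are illustrative:

    public class GapDemo {
        public static void main(String[] args) {
            // Largest value nextDouble() can return: (2^53 - 1) / 2^53 == Math.nextDown(1.0).
            double largestNextDouble = Math.nextDown(1.0);
            for (double x : new double[] {0.0, 0.5, largestNextDouble}) {
                // Math.ulp(x) is the distance from x to the next larger double in magnitude.
                System.out.println("gap at " + x + " = " + Math.ulp(x));
            }
        }
    }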
Output:
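    gap at 0.0 = 4.9E-324
    gap at 0.5 = 1.1102230246251565E-16
    gap at 0.9999999999999999 = 1.1102230246251565E-16

Both nonzero samples sit in [0.5, 1.0), where the gap between consecutive doubles is exactly 2^-53.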
Depending on the calculation you are doing, errors can accumulate across multiple operations, giving a bigger total rounding error than you would predict from this simplistic single-operation approach. As Mark Dickinson said in a comment, "Numerical analysis is a bit more complicated than that."
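For example, summing a value that is not exactly representable in binary, such as 0.1, many times shows how the total error can grow far beyond the single-operation bound (a quick illustrative sketch):

    public class AccumulationDemo {
        public static void main(String[] args) {
            // 0.1 cannot be represented exactly as a double, so every addition
            // introduces a tiny rounding error; over many additions they pile up.
            double sum = 0.0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += 0.1;
            }
            // The mathematically exact result is 100000, but the accumulated error
            // is many orders of magnitude larger than the ~5e-17 single-operation worst case.
            System.out.println("sum   = " + sum);
            System.out.println("error = " + Math.abs(sum - 100_000.0));
        }
    }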