Why is rounding not a solution to the floating point determinism problem?


I'm looking to make a deterministic networked game, and everywhere online there are stories about how floating-point math isn't deterministic and will often give different results on different computers, architectures, processors, browsers, etc.

One reason for this is that some processors may do math with 80-bit precision (x87, I think?) and then truncate down to 64 bits for the final result.

Then, if you read these posts, people always mention either doing all math with integers, using very slow fixed-point math libraries, or rewriting entire math libraries to be deterministic.
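If I understand the fixed-point suggestion correctly, it means something like the sketch below (the scale factor and variable names are just my guess at how such a library would work; I haven't used one):

const SCALE = 1000000;            // store everything as integer millionths
const position = 1500000;         // represents 1.5
const velocity = 250000;          // represents 0.25 per tick
const next = position + velocity; // plain integer add, exact on every machine
const display = next / SCALE;     // convert back only for rendering: 1.75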

But I'm confused. For all the math in my game, I don't really care whether my floating-point numbers have 5 decimals of precision vs. 15. So why isn't a solution to this problem just rounding?

That is, whenever you do an operation on floating-point numbers, apply this function to the result:

// Truncate to 5 decimal places (the built-in is Math.trunc, not Math.truncate)
function round(n) {
  return Math.trunc(n * 100000) / 100000;
}
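To be concrete about how I'd use it (the player/velocity names are just placeholders for my game code), I imagine wrapping the result of every floating-point operation:

// hypothetical game update, rounding after each operation
player.x = round(player.x + round(velocity.x * dt));
player.y = round(player.y + round(velocity.y * dt));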

It doesn't really matter to me that this makes my floats less precise. I don't really need insane levels of precision anyway. But doesn't this fix the problem of floating-point determinism? Because while different processors might produce slightly different results for math functions, none of them will be that far off, no? So if we simply round the result to 5 decimal places with the above function, shouldn't all architectures give the same results?
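For example (the exact values here are made up purely to illustrate the point), if two machines disagree only in the last couple of bits of some computation, the truncation maps both results to the same number:

const onMachineA = 0.8414709848078965; // hypothetical result on one machine
const onMachineB = 0.8414709848078967; // hypothetical, slightly different elsewhere
round(onMachineA) === round(onMachineB); // true: both become 0.84147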

And then you don't have to rewrite your entire math library or use extremely slow fixed-point operators everywhere. I'm marking this as JavaScript because I personally only care about it as it relates to JavaScript, although the question is a bit language-agnostic.
