This question is very simple. It is related to but definitely not a dupe of:
Most unpatched Tomcat webservers are vulnerable, who's at fault?
Given the amazing number of things that can go wrong with floating-point numbers (including, but not limited to, different results on different architectures, wrong results when used incorrectly, and two denial-of-service crashes affecting two different languages), I'm wondering about a very simple question:
Are floating-point numbers used without an epsilon always a code smell or a spec smell?
(That is: should floating-point numbers really only ever be used for scientific computation, with everything else done using a fixed number of bits of precision?)
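For context, here is a minimal sketch of what "with an epsilon" means in practice; the `nearly_equal` helper is my own illustration, not a standard library function:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical helper: compare two doubles with a tolerance scaled to the
// magnitude of the operands instead of testing == directly.
bool nearly_equal(double a, double b, double rel_eps = 1e-9) {
    return std::fabs(a - b) <= rel_eps * std::max(std::fabs(a), std::fabs(b));
}

int main() {
    double sum = 0.1 + 0.2;  // neither 0.1 nor 0.2 is exactly representable in binary
    std::printf("== 0.3? %d\n", sum == 0.3);             // 0: exact comparison fails
    std::printf("~= 0.3? %d\n", nearly_equal(sum, 0.3)); // 1: epsilon comparison passes
}
```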
Sometimes there's really no need for precision. Game engines use floating point for rendering all the time. Sure, you wouldn't do monetary computation with floats, but it's not a problem to write a graphing system with floats.
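To make the money point concrete, here is a small sketch contrasting float accumulation with plain integer cents; the exact drift shown in the comment is illustrative:

```cpp
#include <cstdio>

int main() {
    // Summing 1000 items at $0.10 each in single precision: 0.10 has no
    // exact binary representation, and the error compounds per addition.
    float total_f = 0.0f;
    for (int i = 0; i < 1000; ++i) total_f += 0.10f;
    std::printf("float total: %.6f\n", total_f);  // e.g. 99.999046, not 100.000000

    // Fixed-precision alternative: count whole cents in an integer.
    long long cents = 0;
    for (int i = 0; i < 1000; ++i) cents += 10;
    std::printf("cents total: %lld.%02lld\n", cents / 100, cents % 100);  // exactly 100.00
}
```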
The problem comes when you conflate two different sets of data, one imprecise and one precise (or more precise). For example, game developers often wind up using the same coordinate system for their world coordinates as they do for rendering, and when the world gets huge, their single-precision floats start showing major rounding errors. The tolerance acceptable for a full global-location coordinate isn't the same as the tolerance in a 2D-local-area-to-3D-screen-space coordinate system.
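To see the scale of the problem, here is a minimal sketch; the camera-relative rebasing at the end is a generic technique under my own naming, not any particular engine's API:

```cpp
#include <cstdio>

int main() {
    // Around 10,000 km from the origin (in metres), adjacent single-precision
    // floats are a full metre apart, so a 25 cm step rounds away entirely.
    float world_x = 10000000.0f;
    float moved   = world_x + 0.25f;
    std::printf("step lost? %d\n", moved == world_x);  // 1: position did not change

    // One common mitigation: keep authoritative positions in double and hand
    // the renderer camera-relative floats, which stay small and precise.
    double world_pos = 10000000.25;
    double camera_x  = 10000000.00;
    float  local_x   = static_cast<float>(world_pos - camera_x);
    std::printf("local x:   %.2f\n", local_x);  // 0.25, exact near the camera
}
```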