I'm thinking of adding a value close to 0, the so-called "epsilon", to the denominator to prevent a division-by-zero error, such as:
#include <cfloat>  // for DBL_MIN and DBL_EPSILON
double EPS = DBL_MIN;
double no_zerodivision_error = 0.0 / (0.0 + EPS);
When setting this epsilon value, are there any general best practices or considerations to prevent future problems?
Also, if I have to choose between DBL_MIN and DBL_EPSILON, is one preferred over the other?
I thought any small number would be fine, but I'm afraid I might run into a silent problem in the future that is difficult to spot.
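For reference, here is a minimal demo (assuming IEEE 754 doubles; the numerator 10.0 is just an arbitrary example) of what each candidate epsilon produces when the denominator is actually zero:

#include <cfloat>
#include <cstdio>

int main() {
    double x = 10.0;  // arbitrary non-zero numerator
    // DBL_MIN is about 2.2e-308, so the quotient overflows to +inf even for modest x.
    std::printf("x / (0.0 + DBL_MIN)     = %g\n", x / (0.0 + DBL_MIN));
    // DBL_EPSILON is about 2.2e-16, so the quotient stays finite but huge (~4.5e16).
    std::printf("x / (0.0 + DBL_EPSILON) = %g\n", x / (0.0 + DBL_EPSILON));
}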
Edit1) In my application, there are many normal cases where the denominator can be zero. That's why I'm not considering throwing an exception.
Edit2) There are cases where such an "epsilon" is added to the denominator, for example in some deep learning calculations:
# eps: term added to the denominator to improve numerical stability (default: 1e-8)
torch.optim.Adam(..., eps=1e-08, ...)
There are also SO questions and answers such as this.
Now you can simply check whether or not you need to do such tricks:
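For instance, a minimal sketch of such a check (assuming C++11 or later and the standard <limits> header):

#include <limits>

// Fail the build on platforms where double is not IEC 559 / IEEE 754,
// i.e. where floating-point division by zero would not be well defined.
static_assert(std::numeric_limits<double>::is_iec559,
              "double is not IEC 559; division by zero is not well defined");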
If the platform supports IEC 559 (IEEE 754), floating-point division by zero is defined to return a properly signed std::numeric_limits<double>::infinity(). Otherwise, std::numeric_limits<double>::denorm_min() is the smallest positive value representable by double, and dividing by it results in values much higher than dividing by std::numeric_limits<double>::epsilon(), which is a much larger constant (it is the smallest positive value whose addition to 1 results in a value greater than 1).
By the way, using std::numeric_limits<double>::denorm_min() is not exactly the same as a non-trapping 0.0: the division may result in non-NaN numbers. So the code will be platform-dependent, which is inevitable if floating-point division by zero is a valid part of the code. A better approach would be to guard the division itself, but I can't give a specific resulting constant, since the availability of any standard constant relies on IEC 559 compliance (in which case floating-point division by zero automatically results in infinity).
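As an illustration only, one way such a guard could look (the helper name safe_div and the choice of fallback value are my own, not anything standard):

#include <cmath>
#include <limits>

// Hypothetical helper: divide, but apply an explicit policy when the
// denominator is exactly zero instead of relying on platform behaviour.
double safe_div(double num, double den,
                double fallback = std::numeric_limits<double>::infinity()) {
    if (den == 0.0) {
        // 0/0 is treated as 0 here to match the questioner's intent;
        // otherwise return the fallback with the sign of the numerator.
        return (num == 0.0) ? 0.0 : std::copysign(fallback, num);
    }
    return num / den;
}

Whether returning 0.0 for the 0/0 case is appropriate depends on the application; the point is that the policy is stated explicitly rather than hidden inside an epsilon.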