In a related question, How to trap floating-point exceptions on M1 Macs?, someone wanted to understand how to make the following code work natively on macOS on an Apple M1 machine:
#include <cmath>       // for std::sqrt()
#include <csignal>     // for std::signal()
#include <cstdlib>     // for std::exit()
#include <iostream>
#include <xmmintrin.h> // for _mm_setcsr()

void fpe_signal_handler(int /*signal*/) {
    std::cerr << "Floating point exception!\n";
    std::exit(1);
}

void enable_floating_point_exceptions() {
    // Unmask the "invalid operation" exception in the MXCSR register.
    _mm_setcsr(_MM_MASK_MASK & ~_MM_MASK_INVALID);
    std::signal(SIGFPE, fpe_signal_handler);
}

int main() {
    const double x{-1.0};
    std::cout << std::sqrt(x) << "\n"; // exception masked: prints nan
    enable_floating_point_exceptions();
    std::cout << std::sqrt(x) << "\n"; // should trap and raise SIGFPE
}
I am looking at this from another angle and want to understand why it doesn't work under Rosetta 2. I compiled it using the following command:
clang++ -g -std=c++17 -arch x86_64 -o fpe fpe.cpp
When I run it, I see the following output:
nan
nan
Mind you, when I do the same thing on an Intel-based Mac, I see the following output:
nan
Floating point exception!
Does anyone know if it is possible to trap floating-point exceptions on Rosetta 2?
Considering the difference between trapping on Intel, which unmasks the invalid-operation exception in the MXCSR register using:

_mm_setcsr(_MM_MASK_MASK & ~_MM_MASK_INVALID);

and trapping on Apple Silicon, which enables the corresponding trap bits in the AArch64 FPCR register (via fenv, as in the linked question), it seems more likely that Rosetta 2 simply does not translate the MXCSR trap-enable bits, i.e. that this is a bug or limitation in the Rosetta 2 implementation.