Difference between INT_MIN, INT8_MIN, INT16_MIN (and the MAX counterparts)


In VS Code, I was using INT_MIN/INT_MAX, but today I got an error saying "Unidentified Classifier", and the editor suggested I use INT8_MIN instead.

After using it, the code worked perfectly.

But what is the core difference between them?


1 Answer

Answered by wohlstad:

Your code might compile now, but it will not work as you expect. The INT8_* macros have totally different values than the INT_* ones.

INT_MIN/INT_MAX are the min/max values for an int (and are defined in the <limits.h> header).
An int is typically 32 bits, so the values you were using before were probably -2147483648 and 2147483647 respectively (they could be larger on a platform with a 64-bit int, or smaller on one with a 16-bit int).
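
For example, a minimal check (assuming the common case of a 32-bit int) prints those limits:

    #include <climits>   // C++ header for <limits.h>; defines INT_MIN and INT_MAX
    #include <cstdio>

    int main() {
        // On a platform with a 32-bit int this prints -2147483648 and 2147483647.
        std::printf("INT_MIN = %d\n", INT_MIN);
        std::printf("INT_MAX = %d\n", INT_MAX);
    }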

On the other hand, INT8_MIN/INT8_MAX are the min/max values for an 8-bit signed integer (AKA int8_t), which are -128 and 127 respectively. BTW, they are also defined in a different header (<stdint.h>), which might explain why switching to them resolved your compilation error.
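
To make the mismatch concrete, here is a minimal sketch printing both pairs of macros side by side:

    #include <climits>   // INT_MIN / INT_MAX
    #include <cstdint>   // INT8_MIN / INT8_MAX (and int8_t)
    #include <cstdio>

    int main() {
        // The two pairs of macros are not interchangeable:
        std::printf("INT_MIN  = %d, INT_MAX  = %d\n", INT_MIN, INT_MAX);   // e.g. -2147483648, 2147483647
        std::printf("INT8_MIN = %d, INT8_MAX = %d\n", INT8_MIN, INT8_MAX); // always -128, 127
    }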

The bottom line:
In order to get the behavior you had before, you should use std::numeric_limits<int>::min() and std::numeric_limits<int>::max(), from the <limits> header.
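
A minimal sketch of that replacement:

    #include <iostream>
    #include <limits>

    int main() {
        // The C++ equivalent of INT_MIN / INT_MAX, no macros involved.
        std::cout << std::numeric_limits<int>::min() << '\n';  // e.g. -2147483648
        std::cout << std::numeric_limits<int>::max() << '\n';  // e.g. 2147483647
    }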

Note that INT_MIN and similar constants are actually macros "inherited" from C. In C++ we prefer the std::numeric_limits template (mentioned above), which accepts the type as a template argument. This makes mistakes less likely. You can even use decltype(variable) as the template argument, as in the sketch below.
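
For instance (counter is just an illustrative name), the limit tracks the variable's type automatically:

    #include <iostream>
    #include <limits>

    int main() {
        long long counter = 0;  // change this variable's type and the limit below follows
        // decltype(counter) is long long here, so this prints the max of long long.
        std::cout << std::numeric_limits<decltype(counter)>::max() << '\n';
    }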

Finally - you also mentioned INT16_MIN/INT16_MAX in your title: these are the corresponding min/max values for a 16-bit signed integer (int16_t), i.e. -32768 and 32767 respectively. The same principle applies to the other similar constants, and again they all have equivalents in std::numeric_limits, which is recommended in C++.
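
The same correspondence for 16 bits, as a quick sanity check:

    #include <cstdint>   // INT16_MIN / INT16_MAX and std::int16_t
    #include <iostream>
    #include <limits>

    int main() {
        // The C macros and the C++ traits agree: -32768 and 32767.
        std::cout << INT16_MIN << " == " << std::numeric_limits<std::int16_t>::min() << '\n';
        std::cout << INT16_MAX << " == " << std::numeric_limits<std::int16_t>::max() << '\n';
    }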