Consider this simple code (t0.c):
#include <stdio.h>
#include <float.h>
#if DBL_HAS_SUBNORM == 1
double d = 0x23d804860c09bp-1119;
int main(void)
{
    printf("%a\n", d);
    return 0;
}
#endif
Invocation and output:
# host: CPU: Intel, OS: Windows 10
$ gcc t0.c -std=c11 && ./a.exe
0x1.2p-1070
# host: CPU: Intel, OS: Windows 10
$ clang t0.c -std=c11 && ./a.exe
0x1.2p-1070
# host: CPU: Intel, OS: Linux
$ gcc t0.c -std=c11 && ./a.out
0x0.0000000000012p-1022
# host: CPU: Intel, OS: Linux
$ clang t0.c -std=c11 && ./a.out
0x0.0000000000012p-1022
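Aside: all four runs print the same value. 0x1.2 x 2^-1070 = 1.125 x 2^-1070, and 0x0.0000000000012 x 2^-1022 = 18 x 2^-52 x 2^-1022 = 18 x 2^-1074 = 1.125 x 2^-1070; only the spelling differs. A minimal sketch to check this, using only standard C (strtod accepts hexadecimal floating constants since C99; DBL_TRUE_MIN is C11):

#include <stdio.h>
#include <stdlib.h>
#include <float.h>

int main(void)
{
    double d = 0x23d804860c09bp-1119;                   /* rounds to the nearest subnormal */
    double a = strtod("0x1.2p-1070", NULL);             /* spelling from the Windows runs */
    double b = strtod("0x0.0000000000012p-1022", NULL); /* spelling from the Linux runs */

    printf("d == a: %d\n", d == a);  /* expected: 1 */
    printf("d == b: %d\n", d == b);  /* expected: 1 */
    printf("d == 18 * DBL_TRUE_MIN: %d\n", d == 18 * DBL_TRUE_MIN);  /* expected: 1 */
    return 0;
}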
Question: For the conversion specifier %a, how:
- is the exact format of the hexadecimal floating-point constant selected, and
- are the parameters of this hexadecimal floating-point constant selected?
For example, why 0x1.2p-1070 and not 0x0.0000000000012p-1022 (or some other variation), and vice versa?
Answer: C allows some latitude in the details. A notable variation I have seen is whether the first hex digit may be '0'-'F' or is limited to '0'-'1'. The leading digit is specified as non-zero for normal values; yet as the OP's value looks like a sub-normal one, either would have been acceptable. It is unspecified.
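To make that latitude concrete, here is a small sketch. The outputs named in the comments are what glibc and the Windows runs above happen to produce; treat them as one implementation's choice, not as requirements:

#include <stdio.h>

int main(void)
{
    /* Normalized value: C11 7.21.6.1 requires a nonzero first hex digit,
       so 0x1.8p+0, 0x3p-1, and 0xcp-3 are all conforming spellings of 1.5. */
    printf("%a\n", 1.5);

    /* Subnormal value: the first hex digit is unspecified, so both
       0x1.2p-1070 (as in the Windows runs above) and
       0x0.0000000000012p-1022 (as with glibc) are conforming. */
    printf("%a\n", 0x1.2p-1070);
    return 0;
}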