Why is int to float conversion failing in printf?


When relying on an implicit int-to-float conversion, it fails with printf():

#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %f\n", s, 0, s, 0); 
    return 0;
}

When compiled with gcc -g scale.c -o scale, it outputs garbage:

./scale
10.000000 10.000000 10.000000 -5486124068793688683255936251187209270074392635932332070112001988456197381759672947165175699536362793613284725337872111744958183862744647903224103718245670299614498700710006264535590197791934024641512541262359795191593953928908168990292758500391456212260452596575509589842140073806143686060649302051520512.000000

If I explicitly cast the integer to float, or use 0.0 (which is a double), it works as expected.

#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %f\n", s, 0.0, s, 0.0); 
    return 0;
}

When compiled the same way, it produces the expected output:

./scale
10.000000 0.000000 10.000000 0.000000

What is happening?

I'm using gcc (Debian 10.2.1-6) 10.2.1 20210110 if that's important.


2 Answers

Accepted answer by Vlad from Moscow:

The conversion specifier f expects an argument of type double. Usually sizeof( double ) is equal to 8 while sizeof( int ) is equal to 4, and moreover integers and doubles have different internal representations.

Using an incorrect conversion specifier results in undefined behavior.
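
If the zeros are meant to stay ints, the other fix is to match the specifier to the argument. A minimal sketch of that alternative (my example, not the asker's code):

#include <stdio.h>

int main(void) {
    float s = 10.0f;
    /* %f consumes a double (s is promoted to double by the default
       argument promotions); %d consumes an int */
    printf("%f %d %f %d\n", s, 0, s, 0);
    return 0;
}

This prints 10.000000 0 10.000000 0.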

From the C Standard (7.21.6.1 The fprintf function)

9 If a conversion specification is invalid, the behavior is undefined.275) If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.

As for objects of type float, they are converted to type double due to the default argument promotions.

From the C Standard (6.5.2.2 Function calls)

6 If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions.

So these calls of printf

printf("%f %f %f %f\n", s, 0.0, s, 0.0); 

and

printf("%f %f %f %f\n", s, 0.0f, s, 0.0f); 

produce the same result.

Note that some programmers use the length modifier l in the conversion specification, as in %lf, to output doubles. However, the length modifier has no effect with f and should be removed.
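
To illustrate (a small sketch of my own): with f, the l length modifier is simply ignored, so both calls below print the same thing.

#include <stdio.h>

int main(void) {
    double d = 3.5;
    printf("%f\n", d);  /* prints 3.500000 */
    printf("%lf\n", d); /* the l is ignored with f; also prints 3.500000 */
    return 0;
}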

Answer by DevSolar:

A variadic function needs some kind of information about the number and types of the arguments it was passed, because it has to actively retrieve them using va_arg (which takes the type of the argument to retrieve as a parameter). In the case of printf, this information is in the first parameter, the format string.
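
To make that concrete, here is a minimal hand-rolled variadic function (a hypothetical sketch, not from the answer) that, like printf, trusts a format string to tell it which types to pull out with va_arg:

#include <stdarg.h>
#include <stdio.h>

/* Prints one value per format character: 'd' for int, 'f' for double. */
static void my_print(const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt != '\0'; ++fmt) {
        if (*fmt == 'd')
            printf("%d ", va_arg(ap, int));    /* caller must have passed an int */
        else if (*fmt == 'f')
            printf("%f ", va_arg(ap, double)); /* float args arrive promoted to double */
    }
    va_end(ap);
    putchar('\n');
}

int main(void) {
    my_print("dff", 1, 2.0, 3.0f); /* prints: 1 2.000000 3.000000 */
    return 0;
}

If the caller lies to my_print about the types, it misreads the argument list exactly the way printf does in the question.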

If there is a %f in the format string, the variadic function will expect a double argument¹, and will retrieve it using va_arg.

But the compiler has no idea about this². It does not take the format string into account -- all it sees is the actual type of the argument, in this case an int.

So an int was put in, and a double¹ was taken out. This is undefined behavior.

Your observed behavior occurs because your platform passes floating-point values differently from integer values -- the integer 0 was put in one place, but va_arg read from another: in fact, it read the second actual float¹ argument the first time, and then whatever was in the place a third float¹ value would have been put in had you passed one.


¹ float arguments to variadic functions are automatically promoted to double.

² It could, and modern compilers can generate a warning if the format string and the argument types do not match. But the compiler is not allowed to change the generated code depending on the format string, as that would make printf behave differently from, e.g., a custom variadic function.
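
As a practical aside (my addition): gcc's format checking is enabled by -Wall (specifically -Wformat), and for the code in the question it produces a diagnostic along these lines -- the exact wording and location vary by version:

gcc -Wall -g scale.c -o scale
scale.c:5:14: warning: format '%f' expects argument of type 'double', but argument 3 has type 'int' [-Wformat=]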