While writing a program, n = 0; looks more efficient and clean to a high-level programmer. But is n = 0; really more efficient than if (n != 0) n = 0;?
Consider three cases:

- when n is more likely to be 0.
- when n is less likely to be 0.
- when n is absolutely uncertain.
Language: C (C90)
Compiler: Borland's Turbo C++
Minimal reproducible code
#include <stdio.h>   /* proper declaration of scanf */

int main(void)
{
    int n;              /* 2 bytes on Turbo C++ */
    n = 0;              /* Expression 1 */
    scanf("%d", &n);    /* value now absolutely uncertain */
    if (n != 0) n = 0;  /* Expression 2 */
    return 0;
}
Note: the above code is only for reference; please don't get hung up on its details.
If you're not comfortable with the above language/standard/compiler, then please feel free to explain the above 3 cases in your preferred language/standard/compiler.
If n is a two's complement integral type or an unsigned integral type, then writing n = 0 directly will certainly be no slower than the version with the condition check, and a good optimising compiler will generate the same code for both. Some compilers compile assignment to zero as XOR'ing a register with itself, which is a single instruction.

If n is a floating-point type, a ones' complement integral type, or a sign-magnitude integral type, then the two code snippets differ in behaviour, e.g. when n is a signed negative zero. (Acknowledge @chqrlie.) Also, if n is a pointer on a system that has multiple null pointer representations, then if (n != 0) n = 0; will not assign n when n is one of the various null pointers. n = 0; imparts a different functionality.

"Will always be more efficient" is not true. If reading n has a low cost, writing n has a high cost (think of re-writing non-volatile memory that needs to re-write a page), and n == 0 is likely, then n = 0; is slower, less efficient, than if (n != 0) n = 0;.