Why is the output of the code below -127? Shouldn't it be -1?

#include <stdio.h>
int main() {
    int a = 129;
    char *ptr;
    ptr = (char *)&a;
    printf("%d ", *ptr);
    return 0;
}
If you output the variable a in hexadecimal, you will see that it is represented as 0x81.
Here is a demonstrative program.
#include <stdio.h>
int main(void)
{
    int a = 129;
    printf("%#x\n", a);
    return 0;
}
Its output is
0x81
0x80 is the minimum value of an object of type char if char behaves like signed char. This bit pattern is equal to the decimal value -128.
Here is another demonstrative program.
#include <stdio.h>
int main(void)
{
    char c = -128;
    printf("%#hhx\n", c);
    return 0;
}
Its output is
0x80
If you add 1 to that, you get the value -127.
The two's complement representation of the character value -1 looks like 0xff in hexadecimal. If you add 1 to that, you get 0.
When you convert a signed char to int, what fills the upper 24 bits? The answer is not always 0, but copies of the sign bit (the highest bit). This is called sign extension.
The following demo code may be of interest:
#include <stdio.h>
int main() {
    int a = 129;
    char *ptr;
    ptr = (char *)&a;
    // prints "0x81"; yes, 129 == 0x81
    printf("0x%hhx\n", *ptr);
    // prints 0xffffff81
    int b = *ptr; // convert a 'signed char' to 'int'
    // since the sign bit of the char is set (the value is negative),
    // the upper bits are padded with 1s
    printf("0x%x\n", b);
    unsigned char *ptru = (unsigned char *)&a;
    b = *ptru;
    printf("0x%x\n", b); // still prints "0x81": an unsigned char is zero-extended
    return 0;
}
It can be understood as follows:
We all know that the range of (signed) char is [-128, 127]. Now let's write this range out in binary. The bits required are 8 (1 byte):

   0 : 0000 0000
   1 : 0000 0001
   2 : 0000 0010
   3 : 0000 0011
 ...
 126 : 0111 1110
 127 : 0111 1111  <-- max positive number, as one more would overflow into the sign bit
-128 : 1000 0000  <-- weird number, as it is its own two's complement
-127 : 1000 0001
-126 : 1000 0010
 ...
  -3 : 1111 1101
  -2 : 1111 1110
  -1 : 1111 1111

So, coming back to the question: we had int a = 129;. Clearly 129, when stored in the char data type, is going to overflow, as the maximum permissible positive value is 127. But why did we get -127 and not something else? Simple: the binary equivalent of 129 is 1000 0001, and for the char data type that falls at

 127 : 0111 1111
-128 : 1000 0000
-127 : 1000 0001  <-- here!
-126 : 1000 0010
 ...

So we get -127 when 129 is stored in it.