int a = 128;
byte b;
b = (byte) a;
System.out.println(b);
This prints -128.
But in the Java book the same code outputs 0.
What's the difference between them?
128 represented as a 32-bit integer (an int) is 00000000 00000000 00000000 10000000 in binary. Since a byte is only 8 bits, casting to byte keeps only the low 8 bits, so the value becomes 10000000. Because all integers in Java are signed and use two's complement, the first bit (1) is the sign bit, and therefore the value is -128.
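The narrowing cast described above can be checked directly. A minimal sketch (the `& 0xFF` trick re-reads the same 8 bits as an unsigned value):

```java
public class CastCheck {
    public static void main(String[] args) {
        int a = 128;
        byte b = (byte) a;           // keeps only the low 8 bits: 10000000

        System.out.println(b);       // prints -128 (sign bit is set)
        System.out.println(b & 0xFF); // prints 128 (same bits, read as unsigned)

        // Show the surviving bit pattern
        System.out.println(Integer.toBinaryString(b & 0xFF)); // prints 10000000
    }
}
```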
Not sure why the book said the output should be 0. Are you sure the example is exactly the same in the book?
There is more information on the Java primitive types here, and Wikipedia has a fairly comprehensive article on two's complement.
The right answer was posted already, but I'd like to expand on it a little.
To understand better how this works, try to read about it from other sources; @DanielGibbs provided a few you could use.
I suggest you also try to run code like the one below.
The output of such code should clearly show the meaning of the less significant bits (and the one most significant bit, which determines the sign) in an int, and how they fit into the byte type.
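The original snippet did not survive here, but a sketch along these lines prints each int's full 32-bit pattern next to the 8 bits its byte keeps (the test values are illustrative, not from the original):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        int[] values = {128, 127, -256, 1000};
        for (int a : values) {
            byte b = (byte) a;
            // Zero-pad the int to 32 bits and the byte's low 8 bits to 8 bits
            String intBits = String.format("%32s", Integer.toBinaryString(a)).replace(' ', '0');
            String byteBits = String.format("%8s", Integer.toBinaryString(b & 0xFF)).replace(' ', '0');
            System.out.printf("int %6d = %s -> byte %4d = %s%n", a, intBits, b, byteBits);
        }
    }
}
```

Running it for 128 shows the int's single set bit landing in the byte's sign position, which is exactly why the result is -128.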
PS: -256 is not 10000000 00000000 00000001 00000000, but 11111111 11111111 11111111 00000000.