I apologize if the question I’m raising is way too basic, but I really don’t understand this, and my theoretical background is not in software engineering.
Let’s say we are dealing with a Java int, which is 32 bits. So:
2 = 00000000 00000000 00000000 00000010
I understand that tilde of 2 is the bitwise complement of 2. So:
~2 = 11111111 11111111 11111111 11111101
To read this, I find the positive number whose twos complement this is, which is 3, so ~2 = -3.
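
For reference, this is easy to check in Java itself (Integer.toBinaryString prints the raw bit pattern, with leading zeros dropped):

```java
public class TildeCheck {
    public static void main(String[] args) {
        int x = 2;
        // toBinaryString shows the raw 32-bit pattern (leading zeros are dropped)
        System.out.println(Integer.toBinaryString(x));  // 10
        System.out.println(Integer.toBinaryString(~x)); // 11111111111111111111111111111101
        System.out.println(~x);                         // -3
    }
}
```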
But what I don’t get about all this is…
How does the machine differentiate when 11111111 11111111 11111111 11111101 means -3 and when it means about -2 billion?
I can’t understand this because it seems to violate the injective mapping between bit patterns and numbers.
What am I getting wrong here?
As an unsigned 32-bit number, the value would be 4,294,967,293.
As mentioned, Java doesn't have a type that is a 32-bit unsigned integer, so for Java, the question is moot.
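
(That said, since Java 8 the Integer class can reinterpret an int's bits as an unsigned value, which illustrates the point nicely: the bits don't change, only the reading does. A minimal sketch:)

```java
public class UnsignedReading {
    public static void main(String[] args) {
        int bits = ~2; // 11111111 11111111 11111111 11111101

        System.out.println(bits);                         // -3 (signed reading)
        System.out.println(Integer.toUnsignedLong(bits)); // 4294967293 (unsigned reading)
    }
}
```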
For other languages, and other machines, the concept does exist. You ask 'how does the machine differentiate' and in general, the answer is that it doesn't. It's up to the low-level programmer to decide whether 11111111 11111111 11111111 11111101 means -3 or 4,294,967,293. And thus you get to decide whether the result of adding 1 to 11111111 11111111 11111111 11111101 means 4,294,967,294 or -2. It's the same twos-complement addition.
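
To make that concrete, here is a small sketch showing that one and the same addition yields both answers, depending only on how you choose to print the result:

```java
public class SameAddition {
    public static void main(String[] args) {
        int bits = ~2;       // ...11111101
        int sum  = bits + 1; // ...11111110 -- one twos-complement add, no signed/unsigned flavour

        System.out.println(sum);                         // -2 (signed reading)
        System.out.println(Integer.toUnsignedLong(sum)); // 4294967294 (unsigned reading)
    }
}
```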
There are some cases, say for a 'compare' instruction, where it matters whether the comparands are regarded as signed or unsigned. But (again, generally) this is handled by the programmer deciding to code 'signed compare' or 'unsigned compare'. For higher-level languages, this decision is implicit in the data type used: signed integer or unsigned integer.
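
In Java that split shows up in the library rather than in the type system: Integer.compare treats the bits as signed, while Integer.compareUnsigned treats them as unsigned. A minimal sketch:

```java
public class CompareFlavours {
    public static void main(String[] args) {
        int a = ~2; // -3 as signed, 4294967293 as unsigned
        int b = 1;

        System.out.println(Integer.compare(a, b));         // negative: -3 < 1 as signed
        System.out.println(Integer.compareUnsigned(a, b)); // positive: 4294967293 > 1 as unsigned
    }
}
```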