Negative char value in Java

Why does the following happen:
char p = 0;
p--;
System.out.println(p);
The result is 65535.
Why does this not give a compilation error or a runtime exception?
I expected one, since chars cannot be negative. Instead the value wraps around and starts counting down from the top.
Thanks in advance.

Why does this not give a compilation error or a runtime exception?
Because the language specification mandates that arithmetic on primitive types is modulo 2^width, so -1 becomes 2^16-1 as a char.
In the section on integer operations, it is stated that
The built-in integer operators do not indicate overflow or underflow in any way.
so that forbids throwing an exception.
For the postfix decrement operator used here, specifically, the behaviour is specified in JLS §15.14.3:
Otherwise, the value 1 is subtracted from the value of the variable and the difference is stored back into the variable. Before the subtraction, binary numeric promotion (§5.6.2) is performed on the value 1 and the value of the variable. If necessary, the difference is narrowed by a narrowing primitive conversion (§5.1.3) and/or subjected to boxing conversion (§5.1.7) to the type of the variable before it is stored. The value of the postfix decrement expression is the value of the variable before the new value is stored.
Binary numeric promotion converts both the value of the variable and 1 to int (since the type here is char), so you have the intermediate result -1 as an int; then the narrowing primitive conversion is performed:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
resulting in a char value of 0xFFFF (since Java specifies two's complement representation for its signed integer types, explicitly stated in the specification of unary minus):
For integer values, negation is the same as subtraction from zero. The Java programming language uses two's-complement representation for integers, and the range of two's-complement values is not symmetric, so negation of the maximum negative int or long results in that same maximum negative number. Overflow occurs in this case, but no exception is thrown. For all integer values x, -x equals (~x)+1.
For the general wrap-around behaviour for out-of-range results, as an example in the specification of the multiplication operator:
If an integer multiplication overflows, then the result is the low-order bits of the mathematical product as represented in some sufficiently large two's-complement format. As a result, if overflow occurs, then the sign of the result may not be the same as the sign of the mathematical product of the two operand values.
Similar phrases occur in the specification of integer addition, and subtraction is required to fulfill a - b == a + (-b), so the overflow behaviour follows.
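The whole chain described above (promotion to int, subtraction, narrowing back to char) can be reproduced explicitly; a minimal sketch:

```java
public class CharUnderflow {
    public static void main(String[] args) {
        char p = 0;
        p--;                                 // promoted to int: 0 - 1 == -1, then narrowed back to char
        System.out.println((int) p);         // 65535
        System.out.println(p == '\uFFFF');   // true: only the low 16 bits of -1 survive

        // The same narrowing, written out step by step:
        int intermediate = 0 - 1;            // -1, i.e. 0xFFFFFFFF as an int
        char narrowed = (char) intermediate; // keeps the low 16 bits: 0xFFFF
        System.out.println((int) narrowed);  // 65535
    }
}
```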

Because that's how the Java language is defined. The runtime doesn't check bounds at each operation (probably because it would be extremely expensive). It just overflows or underflows.

Related

How does type extension work for positive and negative numbers in Java?

Is there any difference in how type conversion happens for positive and negative numbers?
For example, if we have
short a = 100;
and assign it to
int b = a;
If we change 100 to -100, does it make any difference?
I tried compiling both versions in IDEA and didn't see a difference, but my mentor asked me this question.
Disclaimer: Since this is a homework question, what I say here might not be the "expected" answer.
There are two conversions involved here. The first one is a narrowing primitive conversion from int (the literal 100 evaluates to a value of type int) to short. The second one is a widening primitive conversion from short to int.
The second conversion will never lose information, as per the JLS §5.1.2:
A widening primitive conversion does not lose information about the overall magnitude of a numeric value in the following cases, where the numeric value is preserved exactly:
from an integral type to another integral type
from byte, short, or char to a floating point type
from int to double
from float to double in a strictfp expression (§15.4)
The first conversion is done like this, according to the JLS §5.1.3
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
Both -100 and 100 are representable as a short, whose range is -32768...32767, so no information is lost here either.
In short, it doesn't matter whether you use 100 or -100, the result will be that b will store the value of 100 or -100 respectively.
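A quick sketch confirming that the widening step preserves both signs (the hex output makes the sign extension visible):

```java
public class WideningDemo {
    public static void main(String[] args) {
        short a = 100;
        int b = a;            // widening: value preserved exactly
        short c = -100;
        int d = c;            // widening sign-extends, so -100 stays -100
        System.out.println(b + " " + d);            // 100 -100
        // At the bit level: (short) -100 is 0xFF9C; sign extension fills
        // the upper 16 bits with 1s, giving 0xFFFFFF9C, which is -100 as an int.
        System.out.println(Integer.toHexString(d)); // ffffff9c
    }
}
```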

How to perform an unsigned sum with short variables without converting to int in Java

My requirement is to perform an unsigned sum with short variables in Java without converting to int.
Please tell me whether such an unsigned sum is possible or not.
Java doesn't have an addition operator for short values. If you look at JLS 15.18.2 (Additive Operators (+ and -) for Numeric Types) you'll see that one of the first things that happens is:
Binary numeric promotion is performed on the operands (§5.6.2).
That will always convert short values into int, float, long or double values, depending on the other operand.
Basically, to perform 16-bit addition, you let the compiler convert both operands into 32 bits, do the addition, and then cast back:
(short) (a + b)
... being aware that this may lose information due to overflow when casting.
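If "unsigned" means interpreting the 16 stored bits as a value in 0..65535, the bits left after the cast back are already the unsigned sum modulo 65536; one way to sketch it (Short.toUnsignedInt requires Java 8+):

```java
public class UnsignedShortSum {
    public static void main(String[] args) {
        short a = (short) 0xFFFF;      // as unsigned: 65535
        short b = 1;
        short sum = (short) (a + b);   // promoted to int, added, narrowed back
        // The 16 stored bits are the unsigned sum modulo 65536:
        System.out.println(Short.toUnsignedInt(sum));   // 0 (65535 + 1 wraps)

        short c = (short) 40000;       // as unsigned: 40000; as signed: -25536
        short d = 30000;
        short sum2 = (short) (c + d);
        System.out.println(Short.toUnsignedInt(sum2));  // 4464 (70000 mod 65536)
    }
}
```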

Rules governing narrowing of double to int

Please note I am NOT looking for code to cast or narrow a double to int.
As per JLS §5.1.3, Narrowing Primitive Conversion:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T.
So, when I narrow 260 (binary 100000100) to a byte, the result is 4, because the lowest 8 bits are 00000100, which is decimal 4. Similarly, narrowing the long value 4294967296L (binary 1 followed by 32 zeros) to a byte gives 0.
The reason I want to know the rule for narrowing from double to int, byte, etc. is that when I narrow the double value 4294967296.0 the result is 2147483647, but when I narrow the long value 4294967296L the result is 0.
I understand long narrowing to int, byte, etc. (discard all but the n lowest order bits), but I want to know what is going on under the hood in the case of double narrowing.
I understand long narrowing to int, byte, etc. (discard all but the n lowest order bits), but I want to know what is going on under the hood in the case of double narrowing.
... I want to understand the why and how part.
The JLS (JLS 5.1.3) specifies what the result is. A simplified version (for int) is:
a NaN becomes zero
an Inf becomes "max-int" or "min-int"
otherwise:
round towards zero to get a mathematical integer
if the rounded number is too big for an int, the result becomes "min-int" or "max-int"
"How" is implementation specific. For examples of how it could be implemented, look at the Hotspot source code (OpenJDK version) or get the JIT compiler to dump some native code for you to look at. (I imagine that the native code maps uses a single instruction to do the actual conversion .... but I haven't checked.)
"Why" is unknowable ... unless you can ask one of the original Java designers / spec authors. A plausible explanation is a combination of:
it is easy to understand
it is consistent with C / C++,
it can be implemented efficiently on common hardware platforms, and
it is better than (hypothetical) alternatives that the designers considered.
(For example, throwing an exception for NaN, Inf, out-of-range would be inconsistent with other primitive conversions, and could be more expensive to implement.)
The result is Integer.MAX_VALUE when converting a double to an int and the value exceeds the int range. Integer.MAX_VALUE is 2^31 - 1.
When you start with the double value 4294967296.0, it is greater than the greatest int value, which is 2147483647, so the following rule (from the page you cited) applies: "The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long", and you get 0x7FFFFFFF = 2147483647.
But when you convert 4294967296L = 0x100000000, you start from an integral type, so the rule is: "A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits". Since int has n = 32 bits (4 bytes) and all 32 low-order bits of 0x100000000 are zero, you just get 0.
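The difference between the two narrowing paths is easy to observe directly; a minimal sketch:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        double d = 4294967296.0;   // 2^32
        long l = 4294967296L;      // the same mathematical value
        // double -> int: round toward zero, then clamp to the int range:
        System.out.println((int) d);          // 2147483647 (Integer.MAX_VALUE)
        // long -> int: keep only the 32 low-order bits, no clamping:
        System.out.println((int) l);          // 0 (bit 32 is discarded)
        // NaN has its own rule in JLS 5.1.3: it becomes zero.
        System.out.println((int) Double.NaN); // 0
    }
}
```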

Why does a loop with a char as its index loop infinitely?

This loop will continue indefinitely:
char a = 100;
for(a=100; a>=0;--a)
System.out.println(a);
Does it happen because a gets promoted to an int value for arithmetic operations and gets widened to 32 bits from 16 bit char value and hence will always stay positive?
It will indeed loop indefinitely -- and the reason you stated is close. It happens because a can't represent any number that doesn't satisfy a >= 0 -- char is unsigned. Arithmetic underflow is well-defined in Java and unindicated. See the below relevant parts of the specification.
§4.2.2
The integer operators do not indicate overflow or underflow in any way.
This means there is no indication of overflow/underflow other than just comparing the values... e.g. if a <= --a, then that means an underflow occurred.
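That comparison trick can be demonstrated directly; note that --a itself performs the decrement as a side effect, and the left operand is read before it:

```java
public class UnderflowCheck {
    public static void main(String[] args) {
        char a = 1;
        // Evaluation is left to right: the old value of a is read first,
        // then --a decrements a and yields the new value.
        boolean wrapped = a <= --a;   // 1 <= 0: false, no underflow yet
        System.out.println(wrapped);  // false (a is now 0)
        wrapped = a <= --a;           // 0 <= 65535: true, underflow detected
        System.out.println(wrapped);  // true (a wrapped to 65535)
    }
}
```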
§15.15.2
Before the subtraction, binary numeric promotion (§5.6.2) is performed on the value 1 and the value of the variable. If necessary, the difference is narrowed by a narrowing primitive conversion (§5.1.3) and/or subjected to boxing conversion (§5.1.7) to the type of the variable before it is stored. The value of the prefix decrement expression is the value of the variable after the new value is stored.
So, we can see that there are two major steps here: binary numeric promotion, followed by a narrowing primitive conversion.
§5.6.2
Widening primitive conversion (§5.1.2) is applied to convert either or both operands as specified by the following rules:
If either operand is of type double, the other is converted to double.
Otherwise, if either operand is of type float, the other is converted to float.
Otherwise, if either operand is of type long, the other is converted to long.
Otherwise, both operands are converted to type int.
We can see that the decrement expression works with a treated as int, thus performing a widening conversion. This allows it to represent the value -1.
§5.1.3
A narrowing primitive conversion may lose information about the overall magnitude of a numeric value and may also lose precision and range.
...
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
Keeping only the n lowest order bits means that only the lowest 16 bits of the int expression a - 1 are kept. Since -1 is 0b11111111 11111111 11111111 11111111 here, only the lower 0b11111111 11111111 is saved. Since char is unsigned, all of these bits contribute to the result, giving 65535.
Noticing something here? Essentially, this all means that Java integer arithmetic is modular; in this case, the modulus is 2^16, or 65536, because char is a 16-bit datatype. -1 (mod 65536) ≡ 65535, so the decrement will wrap back around.
Nope. char values are unsigned - when they go below 0, they come back around to 65535.
Swap char with byte - then it'll work.
As others have said, the char type in Java is unsigned, so a >= 0 will always be true. When a hits 0 and then is decremented once more, it becomes 65535. If you just want to be perverse, you can write your loop to terminate after 101 iterations this way:
char a = 100;
for(a=100; a<=100;--a)
System.out.println(a);
Code reviewers can then have a field day tearing you apart for writing such a horrible thing. :)
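For completeness, a sketch of two more conventional ways to make such a countdown terminate (the loop bodies just print, as in the question):

```java
public class CountdownLoop {
    public static void main(String[] args) {
        // Option 1: loop over an int, which can represent -1, so i >= 0 eventually fails:
        for (int i = 100; i >= 0; --i) {
            System.out.println((char) i);  // cast to char only where needed
        }
        // Option 2: keep the char index but test for 0 before decrementing,
        // so the wrap-around never happens:
        for (char a = 100; ; --a) {
            System.out.println(a);
            if (a == 0) break;
        }
    }
}
```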
