Rules governing narrowing of double to int - java

Please note I am NOT looking for code to cast or narrow a double to int.
As per JLS §5.1.3, Narrowing Primitive Conversion:
A narrowing conversion of a signed integer to an integral type T
simply discards all but the n lowest order bits, where n is the number
of bits used to represent type T.
So, when I narrow 260 (binary 100000100) to a byte, the result is 4, because the lowest 8 bits are 00000100, which is decimal 4. Likewise, when I narrow the long value 4294967296L (binary: a 1 followed by 32 zeros) to a byte, the result is 0.
Now, the reason I want to know the rule for narrowing from double to int, byte, etc. is this: when I narrow the double value 4294967296.0 to an int, the result is 2147483647, but when I narrow the long value 4294967296L, the result is 0.
I have understood narrowing of long to int, byte, etc. (discard all but the n lowest-order bits), but I want to know what is going on under the hood in the case of double narrowing.

I have understood narrowing of long to int, byte, etc. (discard all but the n lowest-order bits), but I want to know what is going on under the hood in the case of double narrowing.
... I want to understand the why and how part.
The JLS (§5.1.3) specifies what the result is. A simplified version (for int) is:
a NaN becomes zero
an Inf becomes "max-int" or "min-int"
otherwise:
round towards zero to get a mathematical integer
if the rounded number is too big for an int, the result becomes "min-int" or "max-int"
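For instance, a quick sketch of those rules in action:
System.out.println((int) Double.NaN); // 0
System.out.println((int) Double.POSITIVE_INFINITY); // 2147483647 (Integer.MAX_VALUE)
System.out.println((int) Double.NEGATIVE_INFINITY); // -2147483648 (Integer.MIN_VALUE)
System.out.println((int) -3.9); // -3: rounded towards zero
System.out.println((int) 1e10); // 2147483647: too big for int, clamped to Integer.MAX_VALUE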
"How" is implementation specific. For examples of how it could be implemented, look at the Hotspot source code (OpenJDK version) or get the JIT compiler to dump some native code for you to look at. (I imagine that the native code maps uses a single instruction to do the actual conversion .... but I haven't checked.)
"Why" is unknowable ... unless you can ask one of the original Java designers / spec authors. A plausible explanation is a combination of:
it is easy to understand
it is consistent with C / C++,
it can be implemented efficiently on common hardware platforms, and
it is better than (hypothetical) alternatives that the designers considered.
(For example, throwing an exception for NaN, Inf, out-of-range would be inconsistent with other primitive conversions, and could be more expensive to implement.)

The result is Integer.MAX_VALUE when converting a double to an int and the value exceeds the range of int. Integer.MAX_VALUE is 2^31 - 1.

When you start with the double value 4294967296.0, it is greater than the greatest int value, which is 2147483647, so the following rule is applied (from the page you cited): "The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long", and you get 0x7FFFFFFF = 2147483647.
But when you convert 4294967296L = 0x100000000, you start from an integral type, so the rule is: "A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits". The lowest 32 bits of 0x100000000 are all zero, so for any target type with n ≤ 32 bits (int, short, char or byte) you just get 0.
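A minimal sketch contrasting the two paths:
long l = 4294967296L; // 0x100000000: bit 33 set, lowest 32 bits all zero
System.out.println((int) l); // 0: integral narrowing keeps only the lowest 32 bits
System.out.println((byte) l); // 0: the lowest 8 bits are zero too
System.out.println((int) 4294967296.0); // 2147483647: floating-point narrowing clamps to Integer.MAX_VALUE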

Related

How does type extension occur for positives and negatives in Java?

Is there any difference in how type conversion happens for positive and negative numbers?
For example, if we have
short a = 100;
and put it to
int b = a;
if we change '100' to '-100', does it make any difference?
I tried compiling it in IDEA but didn't find any difference; I have this question from my mentor.
Disclaimer: Since this is a homework question, what I say here might not be the "expected" answer.
There are two conversions involved here. The first one is a narrowing primitive conversion from int (the literal 100 evaluates to a value of type int) to short. The second one is a widening primitive conversion from short to int.
The second conversion will never lose information, as per the JLS §5.1.2:
A widening primitive conversion does not lose information about the
overall magnitude of a numeric value in the following cases, where the
numeric value is preserved exactly:
from an integral type to another integral type
from byte, short, or char to a floating point type
from int to double
from float to double in a strictfp expression (§15.4)
The first conversion is done like this, according to the JLS §5.1.3
A narrowing conversion of a signed integer to an integral type T
simply discards all but the n lowest order bits, where n is the number
of bits used to represent type T. In addition to a possible loss of
information about the magnitude of the numeric value, this may cause
the sign of the resulting value to differ from the sign of the input
value.
Both -100 and 100 are representable as a short, whose range is -32768...32767, so no information is lost here either.
In short, it doesn't matter whether you use 100 or -100; the result is that b will store the value 100 or -100 respectively.
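A quick sketch showing that the sign survives the round trip (widening a negative short sign-extends):
short a = -100; // narrowing from the int literal: -100 fits in short, nothing lost
int b = a; // widening from short to int: value preserved exactly
System.out.println(b); // -100
System.out.println(Integer.toBinaryString(b)); // 11111111111111111111111110011100 (sign-extended)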

Mathematical operations on Java primitives? [duplicate]

Consider this code:
public class ShortDivision {
    public static void main(String[] args) {
        short i = 2;
        short j = 1;
        short k = i/j;
    }
}
Compiling this produces the error
ShortDivision.java:5: possible loss of precision
found : int
required: short
short k = i/j;
because the type of the expression i/j is apparently int, and hence must be cast to short.
Why is the type of i/j not short?
From the Java spec:
5.6.2 Binary Numeric Promotion
When an operator applies binary numeric promotion to a pair of operands, each of which must denote a value of a numeric type, the following rules apply, in order, using widening conversion (§5.1.2) to convert operands as necessary:
If either operand is of type double, the other is converted to double.
Otherwise, if either operand is of type float, the other is converted to float.
Otherwise, if either operand is of type long, the other is converted to long.
Otherwise, both operands are converted to type int.
For binary operations, small integer types are promoted to int and the result of the operation is int.
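So to make the snippet from the question compile, the int result has to be cast back down explicitly; a minimal sketch:
short i = 2;
short j = 1;
int k = i / j; // fine: the division itself is carried out in int
short m = (short) (i / j); // explicit narrowing cast required to store it back in a short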
EDIT: Why is it like that? The short answer is that Java copied this behavior from C. A longer answer might have to do with the fact that all modern machines do at least 32-bit native computations, and it might actually be harder for some machines to do 8-bit and 16-bit operations.
See also: OR-ing bytes in C# gives int
Regarding the motivation: let's imagine the alternatives to this behaviour and see why they don't work:
Alternative 1: the result should always be the same as the inputs.
What should the result be for adding an int and a short?
What should the result be for multiplying two shorts? The result in general will fit into an int, but because we truncate to short, most multiplications will fail silently. Casting to an int afterwards won't help.
Alternative 2: the result should always be the smallest type that can represent all possible outputs.
If the return type were a short, the answer would not always be representable as a short.
A short can hold values -32,768 to 32,767. Then this result will cause overflow:
short result = -32768 / -1; // 32768: not a short
So your question becomes: why does adding two ints not return a long? What should multiplication of two ints be? A long? A BigNumber to cover the case of squaring integer min value?
Alternative 3: Choose the thing most people probably want most of the time
So the result should be:
int for multiplying two shorts, or any int operations.
short if adding or subtracting shorts, dividing a short by any integer type, multiplying two bytes, ...
byte if bitshifting a byte to the right, int if bitshifting to the left.
etc...
Remembering all the special cases would be difficult if there is no fundamental logic to them. It's simpler to just say: the result of integer operations is always an int.
It is just a design choice to be consistent with C/C++, which were the dominant languages when Java was designed.
For example, i * j could be implemented so that the type is promoted: byte => short, short => int, and int => long. This would avoid overflows, but Java doesn't do it (some languages do). Casting could then be used when the current wrapping behaviour was desired, and the loss of some bits would be explicit.
Similarly, i / j could be promoted from byte/short => float or from int/long => double.
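Java doesn't do this promotion for you, but you can request it explicitly by widening one operand before the operation, for example:
int i = Integer.MAX_VALUE;
int j = 2;
System.out.println(i * j); // -2: the multiplication overflows in 32 bits
System.out.println((long) i * j); // 4294967294: widening one operand avoids the overflow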

Casting of primitive types

I am a beginner in Java. I cannot understand this line even after a long try.
byte num = (byte) 135;
This line gives the result -121. Why is it a signed (negative) number?
Can anyone elaborate?
In Java, bytes are always signed, and they are in the range -128 to 127. When the int literal 135 is downcasted to a byte, the result is a negative number because the 8th bit is set.
1000 0111
Specifically, the JLS, Section 5.1.3, states:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
When you cast an int literal such as 135 to a byte, that is a narrowing primitive conversion.
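A small sketch making the bits visible:
int i = 135;
System.out.println(Integer.toBinaryString(i)); // 10000111: the 8th bit is set
byte b = (byte) i; // narrowing keeps only the lowest 8 bits
System.out.println(b); // -121: read as a signed byte, 10000111 is negative
System.out.println(b & 0xFF); // 135: the same 8 bits read back as unsigned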

Negative char value in Java

Why does the following happen:
char p = 0;
p--;
System.out.println(p);
The result is 65535.
Why does it not give a compilation error or a runtime exception?
I expected one, since chars cannot be negative; instead it wraps around and counts down from the top.
Thanks in advance.
Why does it not give a compilation error or a runtime exception?
Because the language specification mandates that arithmetic on primitive types is modulo 2^width, so -1 becomes 2^16-1 as a char.
In the section on integer operations, it is stated that
The built-in integer operators do not indicate overflow or underflow in any way.
so that forbids throwing an exception.
For the postfix decrement operator used here specifically, the behaviour is specified in §15.14.3:
Otherwise, the value 1 is subtracted from the value of the variable and the difference is stored back into the variable. Before the subtraction, binary numeric promotion (§5.6.2) is performed on the value 1 and the value of the variable. If necessary, the difference is narrowed by a narrowing primitive conversion (§5.1.3) and/or subjected to boxing conversion (§5.1.7) to the type of the variable before it is stored. The value of the postfix decrement expression is the value of the variable before the new value is stored.
The binary numeric promotion converts both the value and 1 to int (since the type here is char), so you have the intermediate result -1 as an int; then the narrowing primitive conversion is performed:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
resulting in a char value of 0xFFFF (since Java specifies two's complement representation for its signed integer types, explicitly stated in the specification of unary minus):
For integer values, negation is the same as subtraction from zero. The Java programming language uses two's-complement representation for integers, and the range of two's-complement values is not symmetric, so negation of the maximum negative int or long results in that same maximum negative number. Overflow occurs in this case, but no exception is thrown. For all integer values x, -x equals (~x)+1.
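Both facts from that paragraph are easy to verify:
int x = 42;
System.out.println(-x == (~x) + 1); // true: the two's-complement negation identity
System.out.println(-Integer.MIN_VALUE); // -2147483648: the negation overflows, no exception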
The general wrap-around behaviour for out-of-range results is spelled out, for example, in the specification of the multiplication operator:
If an integer multiplication overflows, then the result is the low-order bits of the mathematical product as represented in some sufficiently large two's-complement format. As a result, if overflow occurs, then the sign of the result may not be the same as the sign of the mathematical product of the two operand values.
Similar phrases occur in the specification of integer addition, and subtraction is required to fulfill a - b == a + (-b), so the overflow behaviour follows.
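Putting the chain together for the original snippet:
char p = 0;
p--; // p and 1 are promoted to int: 0 - 1 == -1, then -1 is narrowed back to char, keeping the low 16 bits (0xFFFF)
System.out.println((int) p); // 65535
System.out.println(p == '\uffff'); // true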
Because that's how the Java language is defined. The runtime doesn't check bounds at each operation (probably because it would be extremely expensive). It just overflows or underflows.

Why Integer.MAX_VALUE + 1 == Integer.MIN_VALUE?

System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE);
is true.
I understand that int in Java is 32 bits and can't go above 2^31 - 1, but I can't understand why adding 1 to its MAX_VALUE results in MIN_VALUE and not in some kind of exception, not to mention something like a transparent conversion to a bigger type, like Ruby does.
Is this behavior specified somewhere? Can I rely on it?
Because the integer overflows. When it overflows, the next value is Integer.MIN_VALUE. Relevant JLS
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
The integer storage gets overflowed, and that is not indicated in any way, as stated in the JLS, 3rd ed.:
The built-in integer operators do not indicate overflow or underflow in any way. Integer operators can throw a NullPointerException if unboxing conversion (§5.1.8) of a null reference is required. Other than that, the only integer operators that can throw an exception (§11) are the integer divide operator / (§15.17.2) and the integer remainder operator % (§15.17.3), which throw an ArithmeticException if the right-hand operand is zero, and the increment and decrement operators ++(§15.15.1, §15.15.2) and --(§15.14.3, §15.14.2), which can throw an OutOfMemoryError if boxing conversion (§5.1.7) is required and there is not sufficient memory available to perform the conversion.
Example in 4-bit storage:
MAX_INT: 0111 (7)
MIN_INT: 1000 (-8)
MAX_INT + 1:
0111+
0001
----
1000
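The same carry happens in Java's 32 bits; Integer.toBinaryString makes it visible:
System.out.println(Integer.toBinaryString(Integer.MAX_VALUE)); // 1111111111111111111111111111111 (31 ones)
System.out.println(Integer.toBinaryString(Integer.MAX_VALUE + 1)); // 10000000000000000000000000000000 (only the sign bit set)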
You must understand how integer values are represented in binary form, and how binary addition works. Java uses a representation called two's complement, in which the first bit of the number represents its sign. Whenever you add 1 to the largest java Integer, which has a bit sign of 0, then its bit sign becomes 1 and the number becomes negative.
This link explains in more detail: http://www.cs.grinnell.edu/~rebelsky/Espresso/Readings/binary.html#integers-in-java
The Java Language Specification treats this behavior here: http://docs.oracle.com/javase/specs/jls/se6/html/expressions.html#15.18.2
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
Which means that you can rely on this behavior.
On most processors, the arithmetic instructions have no mode to fault on overflow. They set a flag that must be checked, which takes an extra instruction, so it is probably slower. In order for language implementations to be as fast as possible, languages are frequently specified to ignore the error and continue. For Java, the behaviour is specified in the JLS. For C, the language does not specify the behaviour, but modern processors behave like Java does.
I believe there are proposals for (awkward) Java SE 8 libraries to throw on overflow, as well as unsigned operations. A behaviour I believe is popular in the DSP world is to clamp the values at the maximums, so Integer.MAX_VALUE + 1 == Integer.MAX_VALUE [not Java].
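For reference, those exact-arithmetic methods did land in Java SE 8: java.lang.Math gained methods such as Math.addExact, which throw ArithmeticException on overflow instead of wrapping:
System.out.println(Integer.MAX_VALUE + 1); // -2147483648: plain + wraps silently
System.out.println(Math.addExact(Integer.MAX_VALUE, 1)); // throws ArithmeticException: integer overflow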
I'm sure future languages will use arbitrary-precision ints, but not for a while yet; it requires more expensive compiler design to run quickly.
The same reason why the date changes when you cross the international date line: there's a discontinuity there. It's built into the nature of binary addition.
This is a well known issue related to the fact that integers are represented in two's complement down at the binary layer. When you add 1 to the max value of a two's-complement number, you get the min value. Honestly, all integers behaved this way before Java existed, and changing this behavior for the Java language would have added more overhead to integer math and confused programmers coming from other languages.
When you add 1 to 3 (binary 11), you must flip to 0 every 1 bit starting from the right until you reach a 0, which you flip to 1 (giving binary 100). Integer.MAX_VALUE has all its value bits set to 1, so after adding 1 only 0s remain there, and the carry lands in the sign bit.
Easy to understand with a byte example:
byte a = 127; // max value for byte
byte b = 1;
byte c = (byte) (a + b); // assigns -128
System.out.println(c); // prints -128
Here we force the addition and cast it so the result is treated as a byte.
What happens is that when we reach 127 (the largest possible value for a byte) and add 1, the value flips from 127 to -128.
The value wraps around within the type.
The same holds for int.
Also, int + int stays int (unlike byte + byte, which gets converted to int unless forcefully cast as above).
int int1 = Integer.MAX_VALUE + 1;
System.out.println(int1); // prints -2147483648
System.out.println(Integer.MIN_VALUE); // prints -2147483648
// below prints 128, as the sum is an int because no cast forces it to byte
System.out.println(Byte.MAX_VALUE + 1);
Overflow plus the two's-complement representation makes the count go around in a loop: we were at the far-right position, 2147483647, and after adding 1 we appear at the far-left position, -2147483648; the next increments give -2147483647, -2147483646, -2147483645, and so forth, back towards the far right again, on and on. That is the nature of a fixed-width adder.
Some examples:
int a = 2147483647;
System.out.println(a);
gives: 2147483647
System.out.println(a + 1);
gives: -2147483648 (overflow: the count wraps from the far-right position to the far-left one, as described above)
System.out.println(2 - a);
gives: -2147483645 (no overflow here: -2147483647 + 2 is mathematically consistent)
System.out.println(-2 - a);
gives: 2147483647 (-2147483647 - 1 -> -2147483648, then -2147483648 - 1 -> 2147483647, the same loop described in previous answers)
System.out.println(2 * a);
gives: -2 (2147483647 + 2147483647 -> -2147483648 + 2147483646, again mathematically consistent)
System.out.println(4 * a);
gives: -4 (twice the previous result: -2 + -2 -> -4)
