doubt with range of int variable - java

I have a doubt about the range of int values:
int x = 2147483647;     /* no error -- this is the maximum int value */
int y = 2147483648;     /* error -- one more than the maximum int value */
int z = 2147483647 + 1; /* no error, even though it is one more than the maximum int value */
Why?

Here is an explanation in terms of the Java Language Specification.
The section on integer literals (JLS 3.10.1) says this:
The largest decimal literal of type int is 2147483648 (2^31). All decimal literals from 0 to 2147483647 may appear anywhere an int literal may appear, but the literal 2147483648 may appear only as the operand of the unary negation operator -.
So ...
The first statement is an assignment of a legal integer literal value. No compilation error.
The second statement is a compilation error because 2147483648 is not preceded by the unary negation operator.
The third statement does not contain an integer literal that is out-of-range, so it is not a compilation error from that perspective.
Instead, the third statement is a binary addition expression as described in JLS 15.18.2. This states the following about the integer case:
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
Thus, 2147483647 + 1 overflows and wraps around to -2147483648.
@Peter Lawrey suggests (flippantly?) that the third statement could be "rewritten by the compiler" as +2147483648, resulting in a compilation error.
This is not correct.
There is nothing in the JLS that says that a constant expression can have a different meaning from a non-constant expression. On the contrary, in cases like 1 / 0 the JLS flips things around and says that the expression is NOT a constant expression BECAUSE it terminates abnormally. (See JLS 15.28.)
The JLS tries very hard to avoid cases where some Java construct means different things depending on the compiler. For instance, it is very particular about the "definite assignment" rules, to avoid the case where only a smart compiler can deduce that a variable is always initialized before it is used. This is a GOOD THING from the perspective of code portability.
The only significant area where there is "wiggle room" for compiler implementers to do platform specific things is in the areas of concurrency and the Java memory model. And there is a sound pragmatic reason for that - to allow multi-threaded Java applications to run fast on multi-core / multi-processor hardware.
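Putting the three cases together in a small sketch (the literal that cannot compile is left commented out):
int x = 2147483647;     // legal literal: Integer.MAX_VALUE (JLS 3.10.1)
// int y = 2147483648;  // does not compile: int literal out of range
int y = -2147483648;    // legal: 2147483648 may appear only as the operand of unary minus
int z = 2147483647 + 1; // compiles: constant int addition that overflows (JLS 15.18.2)
System.out.println(z);  // prints -2147483648 (wrapped around)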

int ranges from Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647).
However, only int literals are checked against the range.
Java does not check that any given constant value expression fits within the range.
Calculations are "allowed" to pass those boundaries, but that will result in an overflow (i.e. only the lower bits of the resulting value will be stored). Therefore, the calculation 2147483647 + 1 is well-defined within int calculations, and it's -2147483648.

Because the third one is integer overflow: you are doing a computation and it overflows. The other two are just literals.

The first two cases seem obvious. The third case will silently overflow, so in such cases you should always guard against overflow in your calling code.
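A sketch of such a guard (addOrThrow is just an illustrative name, not a standard library method):
// Hypothetical helper: rejects additions that would overflow an int.
static int addOrThrow(int a, int b) {
    if (b > 0 && a > Integer.MAX_VALUE - b) throw new ArithmeticException("int overflow");
    if (b < 0 && a < Integer.MIN_VALUE - b) throw new ArithmeticException("int underflow");
    return a + b; // safe: the sum is within the int range
}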

Because
int z = 2147483647 + 1;
overflows, so the result isn't 2147483648.

The third expression is an int addition, so the result wraps around to a value within int's range.

The range for int is Integer.MIN_VALUE to Integer.MAX_VALUE. Java silently overflows, so a calculation that overflows is not detected by the compiler (but it might be detected by your IDE).
One of the most surprising overflow operations is -Integer.MIN_VALUE
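A small sketch of that surprise: negating (or taking the absolute value of) Integer.MIN_VALUE overflows and gives back Integer.MIN_VALUE itself.
System.out.println(-Integer.MIN_VALUE);          // prints -2147483648, not 2147483648
System.out.println(Math.abs(Integer.MIN_VALUE)); // also prints -2147483648
// 2147483648 is not representable as an int, so the negation wraps around.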

Related

Rules governing narrowing of double to int

Please note I am NOT looking for code to cast or narrow a double to int.
As per JLS §5.1.3, Narrowing Primitive Conversion:
A narrowing conversion of a signed integer to an integral type T
simply discards all but the n lowest order bits, where n is the number
of bits used to represent type T.
So, when I narrow 260 (binary representation 100000100) to a byte, the result is 4, because the lowest 8 bits are 00000100, which is decimal 4. Likewise, narrowing the long value 4294967296L (binary representation 100000000000000000000000000000000) to a byte gives 0.
Now, the reason I want to know the rule for narrowing from double to int, byte, etc. is that when I narrow the double value 4294967296.0 the result is 2147483647, but when I narrow the long value 4294967296L the result is 0.
I have understood the long narrowing to int, byte, etc. (discards all but the n lowest order bits), but I want to know what is going on under the hood in the case of double narrowing.
... I want to understand the why and how part.
The JLS (JLS 5.1.3) specifies what the result is. A simplified version (for int) is:
a NaN becomes zero
an Inf becomes "max-int" or "min-int"
otherwise:
round towards zero to get a mathematical integer
if the rounded number is too big for an int, the result becomes "min-int" or "max-int"
"How" is implementation specific. For examples of how it could be implemented, look at the Hotspot source code (OpenJDK version) or get the JIT compiler to dump some native code for you to look at. (I imagine that the native code maps uses a single instruction to do the actual conversion .... but I haven't checked.)
"Why" is unknowable ... unless you can ask one of the original Java designers / spec authors. A plausible explanation is a combination of:
it is easy to understand
it is consistent with C / C++,
it can be implemented efficiently on common hardware platforms, and
it is better than (hypothetical) alternatives that the designers considered.
(For example, throwing an exception for NaN, Inf, out-of-range would be inconsistent with other primitive conversions, and could be more expensive to implement.)
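A quick sketch of those rules in action, using plain (int) casts:
System.out.println((int) Double.NaN);               // 0
System.out.println((int) Double.POSITIVE_INFINITY); // 2147483647 (Integer.MAX_VALUE)
System.out.println((int) Double.NEGATIVE_INFINITY); // -2147483648 (Integer.MIN_VALUE)
System.out.println((int) 2.9);                      // 2  (rounds towards zero)
System.out.println((int) -2.9);                     // -2 (rounds towards zero)
System.out.println((int) 1.0e10);                   // 2147483647 (too large, clamps to max-int)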
The result is Integer.MAX_VALUE when converting a double to an int and the value exceeds the int range. Integer.MAX_VALUE is 2^31 - 1.
When you start with the double value 4294967296.0, it is greater than the greatest int value, which is 2147483647, so the following rule (from the page you cited) applies: "The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long", and you get 0x7FFFFFFF = 2147483647.
But when you convert 4294967296L = 0x100000000, you start from an integral type, so the rule is: "A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits". For n of 32 or fewer bits, the low-order bits of 0x100000000 are all zero, so you just get 0.
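The two conversions from the question side by side (a small sketch; both start from the same mathematical value, 2^32):
System.out.println((int) 4294967296.0); // double -> int: value too large, clamps to 2147483647
System.out.println((int) 4294967296L);  // long -> int: keeps the low 32 bits, which are all zero -> 0
System.out.println((byte) 260);         // int -> byte: keeps the low 8 bits (00000100) -> 4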

Arithmetic operations in Java

Can someone explain why arithmetic operations on integral types in Java always result in "int" or "long" results?
I think it's worth pointing out that this (arithmetic operations on integers producing integers) is a feature of many many programming languages, not only Java.
Many of those programming languages were invented before Java, many after Java, so I think that arguments that it is a hang-over from the days when hardware was less capable are wide of the mark. This feature of language design is about making languages type-safe. There are very good reasons for separating integers and floating-point numbers in programming languages, and for making the programmer responsible for identifying when and how conversions from type to type take place.
Check this out: http://www.particle.kth.se/~lindsey/JavaCourse/Book/Part1/Java/Chapter02/operators.html#ArithOps
It explains how the type of the return value is determined by the types of the operands. Essentially:
the arithmetic operators require a numeric type
if the type of either operand is an integral type, the return value will be the widest type included (so int + long = long)
if the type of either operand is a floating-point number then a floating-point number will be returned
if both operands are floating-point, then a double will be returned if either operand is a double
If you need to control the types, then you'll need to cast the operands to the appropriate types. For example, int * int could be too large for an int, so you may need to do:
long result = myInt * (long) anotherInt;
Likewise for really large or really tiny floats resulting from arithmetic operations.
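A short sketch of how the result type follows the operands:
byte b1 = 10, b2 = 20;
int sum = b1 + b2;                     // byte + byte is promoted to int
long bigger = 5 + 3000000000L;         // int + long -> long
double d = 5 + 0.5;                    // int + double -> double
long product = 50000 * (long) 50000;   // cast one operand so the multiply is done in long
System.out.println(product);           // 2500000000, not the overflowed int value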
Because the basic integer arithmetic operators are only defined either between int and int or between long and long. In all other cases types are automatically widened to suit. There are no doubt some abstruse paragraphs in the Java Language Specification explaining exactly what happens.
Dummy answer: because this is how the Java Language Specification defines them:
4.2.2. Integer Operations
The Java programming language provides a number of operators that act on integral values:
[...]
The numerical operators, which result in a value of type int or long:
Do you mean why you don't get a double or BigInteger result? Historical accident and efficiency reasons, mostly. Detecting overflow from + or * and handling the result (from Integer.MAX_VALUE * Integer.MAX_VALUE, say) means generating lots of exception detection code that will almost never get triggered, but always needs to get executed. Much easier to define addition or multiplication modulo 2^32 (or 2^64) and not worry about it. Same for division with a fractional remainder.
This was certainly the case long ago with C. It is less of an issue today with superscalar processors and lots of bits to play with. But people got used to it, so it remains in Java today. Use Python 3 if you want your arithmetic autoconverted to a type that can hold the result.
The reason is kind of the same as why we have primitive types in Java at all -- it allows writing efficient code. You may argue that it also makes less efficient but correct code much uglier; you'd be about right. Keep in mind that the design choice was made around 1995.

why Integer.MAX_VALUE + 1 == Integer.MIN_VALUE?

System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE);
is true.
I understand that int in Java is 32 bit and can't go above 2^31 - 1, but I can't understand why adding 1 to its MAX_VALUE results in MIN_VALUE and not in some kind of exception. Not to mention something like transparent conversion to a bigger type, like Ruby does.
Is this behavior specified somewhere? Can I rely on it?
Because the integer overflows. When it overflows, the next value is Integer.MIN_VALUE. Relevant JLS:
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
The integer storage overflows and that is not indicated in any way, as stated in the JLS, 3rd Ed.:
The built-in integer operators do not indicate overflow or underflow in any way. Integer operators can throw a NullPointerException if unboxing conversion (§5.1.8) of a null reference is required. Other than that, the only integer operators that can throw an exception (§11) are the integer divide operator / (§15.17.2) and the integer remainder operator % (§15.17.3), which throw an ArithmeticException if the right-hand operand is zero, and the increment and decrement operators ++ (§15.14.2, §15.15.1) and -- (§15.14.3, §15.15.2), which can throw an OutOfMemoryError if boxing conversion (§5.1.7) is required and there is not sufficient memory available to perform the conversion.
Example in a 4-bit storage:
MAX_INT: 0111 (7)
MIN_INT: 1000 (-8)
MAX_INT + 1:
 0111
+0001
-----
 1000
You must understand how integer values are represented in binary form, and how binary addition works. Java uses a representation called two's complement, in which the first bit of the number represents its sign. Whenever you add 1 to the largest Java int, which has a sign bit of 0, its sign bit becomes 1 and the number becomes negative.
This link explains it in more detail: http://www.cs.grinnell.edu/~rebelsky/Espresso/Readings/binary.html#integers-in-java
--
The Java Language Specification treats this behavior here: http://docs.oracle.com/javase/specs/jls/se6/html/expressions.html#15.18.2
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
Which means that you can rely on this behavior.
On most processors, the arithmetic instructions have no mode to fault on an overflow. They set a flag that must be checked. That's an extra instruction, so probably slower. In order for language implementations to be as fast as possible, the languages are frequently specified to ignore the error and continue. For Java the behaviour is specified in the JLS. For C, the language does not specify the behaviour, but modern processors behave the same way Java does.
I believe there are proposals for (awkward) Java SE 8 libraries to throw on overflow, as well as unsigned operations. A behaviour that I believe is popular in the DSP world is to clamp the values at the maximums, so Integer.MAX_VALUE + 1 == Integer.MAX_VALUE [not Java].
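Those exact-arithmetic methods did end up in java.lang.Math in Java 8 (addExact, subtractExact, multiplyExact, ...); a small sketch:
// Java 8+: these throw ArithmeticException instead of silently wrapping.
int ok = Math.addExact(2000000000, 100000000);   // fine: 2100000000
int boom = Math.addExact(Integer.MAX_VALUE, 1);  // throws ArithmeticException ("integer overflow")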
I'm sure future languages will use arbitrary precision ints, but not for a while yet. Requires more expensive compiler design to run quickly.
The same reason why the date changes when you cross the international date line: there's a discontinuity there. It's built into the nature of binary addition.
This is a well-known issue related to the fact that integers are represented in two's complement at the binary level. When you add 1 to the max value of a two's complement number you get the min value. Honestly, all integers behaved this way before Java existed, and changing this behavior for the Java language would have added more overhead to integer math and confused programmers coming from other languages.
When you add 1 to 3 (binary 11), you must change to 0 every binary 1 starting from the right until you reach a 0, which you change to 1. Integer.MAX_VALUE has every bit except the sign bit set to 1, so adding 1 clears them all and sets only the sign bit.
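You can watch the carry ripple through the bits with Integer.toBinaryString; a small sketch:
System.out.println(Integer.toBinaryString(Integer.MAX_VALUE));     // 1111111111111111111111111111111 (31 ones)
System.out.println(Integer.toBinaryString(Integer.MAX_VALUE + 1)); // 10000000000000000000000000000000 (only the sign bit set)
System.out.println(Integer.MAX_VALUE + 1);                         // -2147483648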
Easy to understand with a byte example:
byte a = 127;            // max value for byte
byte b = 1;
byte c = (byte) (a + b); // assigns -128
System.out.println(c);   // prints -128
Here we force the addition and cast the result so it is treated as a byte. When we reach 127 (the largest possible value for a byte) and add 1, the value flips from 127 to -128.
The value starts circling around within the type's range.
The same goes for int. Also, int + int stays int (unlike byte + byte, which gets promoted to int unless cast explicitly as above).
int int1 = Integer.MAX_VALUE + 1;
System.out.println(int1);              // prints -2147483648
System.out.println(Integer.MIN_VALUE); // prints -2147483648
// below prints 128, as the result is promoted to int and not forced into a byte by a cast
System.out.println(Byte.MAX_VALUE + 1);
Because of overflow and the two's-complement representation, the count goes on a "second loop": we were at the far right position, 2147483647, and after adding 1 we appear at the far left position, -2147483648; the next increments give -2147483647, -2147483646, -2147483645, ... and so on to the far right again, over and over. That is the nature of the adding machine at this bit depth.
Some examples:
int a = 2147483647;
System.out.println(a);
gives: 2147483647
System.out.println(a + 1);
gives: -2147483648 (overflow: from the far right position we wrap to the far left and keep counting)
System.out.println(2 - a);
gives: -2147483645 (no overflow: -2147483647 + 2, mathematically as expected)
System.out.println(-2 - a);
gives: 2147483647 (-2147483647 - 1 -> -2147483648, and -2147483648 - 1 wraps to 2147483647, the same loop described above)
System.out.println(2 * a);
gives: -2 (2147483647 + 2147483647 -> -2147483648 + 2147483646 -> -2, again consistent with wrap-around arithmetic)
System.out.println(4 * a);
gives: -4 (2 * a + 2 * a -> -2 + -2 -> -4, following the previous result)

double d=1/0.0 vs double d=1/0

double d = 1/0.0;
System.out.println(d);
It prints Infinity, but if we write double d = 1/0; and print it, we get this exception:
Exception in thread "main" java.lang.ArithmeticException: / by zero
    at D.main(D.java:3)
Why does Java know in one case that dividing by zero is infinity, but for the int 0 it is not defined? In both cases d is a double, and mathematically the result is infinity in both.
Floating point data types have a special value reserved to represent infinity, integer values do not.
In your code 1/0 is an integer division that, of course, fails. However, 1/0.0 is a floating point division and so results in Infinity.
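A small sketch of the difference (only the last line fails, and only at run time):
System.out.println(1 / 0.0);   // Infinity  (double division)
System.out.println(-1 / 0.0);  // -Infinity
System.out.println(0.0 / 0.0); // NaN       (undefined even for doubles)
System.out.println(1 / 0);     // throws java.lang.ArithmeticException: / by zero (int division)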
Strictly speaking, 1.0/0.0 isn't infinity at all; it's undefined.
As David says in his answer, floats have a way of expressing a number that is outside the range between the lowest and the highest number they can represent. These values are collectively known as "Not a Number", or just NaNs. NaNs can also occur from calculations that really are infinite (such as lim(x→0) ln² x), from values that are finite but overflow the range floats can represent (like 10^100^100), as well as from undefined values like 1/0.
Floating point numbers don't quite clearly distinguish among undefined values, overflow and infinity; what combination of bits results from such a calculation depends on the case. Since just printing "NaN" or "Not a Number" is a bit harder to understand for folks who don't know how floating point values are represented, that formatter just prints "Infinity" or sometimes "-Infinity", since it provides the same level of information when you do know what FP NaNs are all about, and has some meaning when you don't.
Integers don't have anything comparable to floating point NaN's. Since there's no sensible value for an integer to take when you do 1/0, the only option left is to raise an exception.
The same code written in machine language can either invoke an interrupt, which is comparable to a Java exception, or set a condition register, which would be a global value to indicate that the last calculation was a divide by zero. Which of those is available varies a bit by platform.

Basic question on Java's int

Why does the code below print 2147483647, when the actual value is 2147483648?
i = (int)Math.pow(2,31) ;
System.out.println(i);
I understand that the max positive value an int can hold is 2147483647. Then why does code like this auto-wrap to the negative side and print -2147483648?
i = (int)Math.pow(2,31) +1 ;
System.out.println(i);
i is of type int. If the second code sample (addition of two ints) can wrap to the negative side when the result goes out of the positive range, why can't the first sample wrap?
Also,
i = 2147483648 +1 ;
System.out.println(i);
which is very similar to the second code sample, gives a compile error saying the first literal is out of the int range?
My question is: given the second code sample, why can't the first and third samples auto-wrap to the other side?
For the first code sample, the result is narrowed from a double to an int. JLS 5.1.3 describes how narrowing conversions from double to int are performed.
The relevant part is:
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
This is why 2^31 (2147483648) is reduced to Integer.MAX_VALUE (2147483647). The same is true for
i = (int) (Math.pow(2,31) + 100.0); // addition inside the cast; note the parentheses
and
i = (int)10000000000.0d; // == 2147483647
When the addition is done without parentheses, as in your second example, we are then dealing with integer addition. Integral types use 2's complement to represent values. Under this scheme adding 1 to
0x7FFFFFFF (2147483647)
gives
0x80000000
Which is 2's complement for -2147483648. Some languages perform overflow checking for arithmetic operations (e.g. Ada will throw an exception). Java, with its C heritage, does not check for overflow. CPUs typically set an overflow flag when an arithmetic operation overflows or underflows. Language runtimes can check this flag, although this introduces additional overhead, which some feel is unnecessary.
The third example doesn't compile since the compiler checks literal values against the range of their type, and gives a compiler error for values out of range. See JLS 3.10.1 - Integer Literals.
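Putting the two additions from the question side by side (a sketch) makes the difference visible:
int a = (int) Math.pow(2, 31) + 1;   // narrow first (clamps to 2147483647), then int addition overflows
System.out.println(a);               // -2147483648
int b = (int) (Math.pow(2, 31) + 1); // add as doubles (2.147483649E9), then narrowing clamps
System.out.println(b);               // 2147483647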
Then why does code like this auto-wrap to the negative side and print -2147483648?
This is called overflow. Java does it because C does it. C does it because most processors do it. In some languages this does not happen. For example some languages will throw an exception, in others the type will change to something that can hold the result.
My question is: given the second code sample, why can't the first and third samples auto-wrap to the other side?
Regarding the first program: Math.pow returns a double and does not overflow. When the double is converted to an integer it is truncated.
Regarding your third program: overflow is rarely a desirable property and is often a sign that your program is no longer working. If the compiler can see, just from evaluating a constant expression, that it overflows, that is almost certainly an error in the code. If you wanted a large negative number, why would you write a large positive one?
