Why does the code below print 2147483647 when the actual value is 2147483648?
i = (int)Math.pow(2,31) ;
System.out.println(i);
I understand that the maximum positive value an int can hold is 2147483647. Then why does code like this auto-wrap to the negative side and print -2147483648?
i = (int)Math.pow(2,31) +1 ;
System.out.println(i);
i is of type int. If the second code sample (addition of two integers) can wrap to the negative side when the result goes out of the positive range, why can't the first sample wrap?
Also,
i = 2147483648 +1 ;
System.out.println(i);
which is very similar to the second code sample, gives a compile error saying the first literal is out of int range?
My question is: as per the second code sample, why can't the first and third samples auto-wrap to the other side?
For the first code sample, the result is narrowed from a double to an int. JLS 5.1.3 describes how narrowing conversions from double to int are performed.
The relevant part is:
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
This is why 2^31 (2147483648) is reduced to Integer.MAX_VALUE (2147483647). The same is true for
i = (int)(Math.pow(2,31)+100.0) ; // addition note the parentheses
and
i = (int)10000000000.0d; // == 2147483647
When the addition is done without parentheses, as in your second example, we are then dealing with integer addition. Integral types use 2's complement to represent values. Under this scheme adding 1 to
0x7FFFFFFF (2147483647)
gives
0x80000000
Which is the two's complement representation of -2147483648. Some languages perform overflow checking for arithmetic operations (e.g. Ada throws an exception). Java, with its C heritage, does not check for overflow. CPUs typically set an overflow flag when an arithmetic operation overflows or underflows. Language runtimes can check this flag, although doing so introduces additional overhead, which some feel is unnecessary.
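The silent wrap-around described above can be seen directly (a small demo I've added, not from the original post):

```java
public class WrapDemo {
    public static void main(String[] args) {
        int max = 0x7FFFFFFF;                 // 2147483647 == Integer.MAX_VALUE
        int wrapped = max + 1;                // int addition overflows silently
        System.out.println(wrapped);          // prints -2147483648
        System.out.println(Integer.toHexString(wrapped)); // prints 80000000
    }
}
```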
The third example doesn't compile since the compiler checks literal values against the range of their type, and gives a compiler error for values out of range. See JLS 3.10.1 - Integer Literals.
Then why does code like this auto-wrap to the negative side and print -2147483648?
This is called overflow. Java does it because C does it. C does it because most processors do it. In some languages this does not happen: some languages throw an exception, and in others the type changes to something that can hold the result.
My question is: as per the second code sample, why can't the first and third samples auto-wrap to the other side?
Regarding the first program: Math.pow returns a double, and double arithmetic does not overflow. When the double is converted to an int, in-range values are truncated toward zero, and out-of-range values are clamped to the int range.
Regarding your third program: overflow is rarely a desirable property and is often a sign that your program is no longer working. If the compiler can see that it gets an overflow just from evaluating a constant, that is almost certainly an error in the code. If you wanted a large negative number, why would you write a large positive one?
Related
This is for Java. I understand these terms and how, when a value goes over its storage limit, it wraps around and becomes positive if the number was negative, and vice versa.
I am having trouble getting these exceptions to be thrown.
This is the method:
// computes a + b and saves the result in answer
public void add (int a, int b)
I've tried adding 2,147,483,647 + 1 and -2,147,483,648 - 1,
and even dividing, but it doesn't give me an exception.
Does anyone know what I am doing wrong?
I understand these terms and how, when a value goes over its storage limit, it wraps around and becomes positive if the number was negative, and vice versa.
Yes. That is what happens. When you add 1 to the largest int value (2^31 - 1) you get the smallest int value (-2^31). You get analogous behavior for byte, short, long and even char.
In the case of float and double, overflow in either direction results in an "infinity" value.
I am having trouble getting these exceptions to be thrown.
Firstly, "those" exceptions don't exist.
Secondly, no exceptions are thrown when integer or floating point arithmetic overflows or underflows.
The only cases where arithmetic gives an exception are integer division by zero and remainder zero ... both of which throw an ArithmeticException.
but my lab question asks me to change it "so that it causes an Overflow exception to be thrown when the sum of two positive integers is negative. Write the corresponding exception class Overflow"
It is asking you to declare a custom OverflowException class, and code your addition method to throw that exception when the result overflows. You have to code the logic for overflow detection yourself.
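A minimal sketch of what the lab is asking for (the class and method names here are assumptions, since the assignment's exact signatures aren't shown):

```java
// Hypothetical exception class; the lab's exact name and constructor may differ.
class Overflow extends Exception {
    Overflow(String message) { super(message); }
}

public class Calculator {
    private int answer;

    // computes a + b and saves the result in answer;
    // throws Overflow if the true sum does not fit in an int
    public void add(int a, int b) throws Overflow {
        long sum = (long) a + b;  // widen to long so the real sum is exact
        if (sum > Integer.MAX_VALUE || sum < Integer.MIN_VALUE) {
            throw new Overflow("int overflow: " + a + " + " + b);
        }
        answer = (int) sum;
    }

    public static void main(String[] args) {
        Calculator calc = new Calculator();
        try {
            calc.add(Integer.MAX_VALUE, 1);   // 2147483647 + 1 overflows
        } catch (Overflow e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The key point is that Java never throws this for you: the detection logic (here, computing the sum in a wider type and range-checking it) has to be written by hand.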
This unusual result comes about due to the way the data type int is stored in Java and how Java handles calculations that go beyond the storage capacity.
Recall that the range of type int is -2,147,483,648 to +2,147,483,647 inclusive. You might assume that the addition 2,147,483,647 + 1 causes a runtime (integer overflow) error. This is not so, because Java uses a technique called two's complement to represent integers; numbers larger than 2,147,483,647 "wrap around" to negative values, while numbers smaller than -2,147,483,648 "wrap around" the other way to positive values.
That is, 2,147,483,647 + 1 = -2,147,483,648 and -2,147,483,648 - 1 = 2,147,483,647.
Consequently, integer addition never throws a runtime exception. In some situations, it might be preferable if integer addition did throw overflow and underflow exceptions. Otherwise, a logical bug might go undetected.
Please note I am NOT looking for code to cast or narrow a double to int.
As per JLS §5.1.3, Narrowing Primitive Conversion:
A narrowing conversion of a signed integer to an integral type T
simply discards all but the n lowest order bits, where n is the number
of bits used to represent type T.
So, when I narrow 260 (binary 100000100) to a byte, the result is 4, because the lowest 8 bits are 00000100, which is decimal 4. Similarly, narrowing the long value 4294967296L (binary 1 followed by 32 zeros) to a byte gives 0.
Now, the reason I want to know the rule for narrowing from double to int, byte, etc. is that when I narrow the double value 4294967296.0 the result is 2147483647, but when I narrow the long value 4294967296L the result is 0.
I have understood long narrowing to int, byte, etc. (discard all but the n lowest-order bits), but I want to know what is going on under the hood in the case of double narrowing.
I have understood long narrowing to int, byte, etc. (discard all but the n lowest-order bits), but I want to know what is going on under the hood in the case of double narrowing.
... I want to understand the why and how part.
The JLS (JLS 5.1.3) specifies what the result is. A simplified version (for int) is:
a NaN becomes zero
an Inf becomes "max-int" or "min-int"
otherwise:
round towards zero to get a mathematical integer
if the rounded number is too big for an int, the result becomes "min-int" or "max-int"
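The rules above can be checked directly (a small demo I've added, not part of the original answer):

```java
public class NarrowingRules {
    public static void main(String[] args) {
        System.out.println((int) Double.NaN);               // 0
        System.out.println((int) Double.POSITIVE_INFINITY); // 2147483647 ("max-int")
        System.out.println((int) Double.NEGATIVE_INFINITY); // -2147483648 ("min-int")
        System.out.println((int) 3.99);                     // 3 (rounds toward zero)
        System.out.println((int) 1e18);                     // 2147483647 (too big, clamped)
    }
}
```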
"How" is implementation specific. For examples of how it could be implemented, look at the HotSpot source code (OpenJDK version) or get the JIT compiler to dump some native code for you to look at. (I imagine that the native code uses a single instruction to do the actual conversion ... but I haven't checked.)
"Why" is unknowable ... unless you can ask one of the original Java designers / spec authors. A plausible explanation is a combination of:
it is easy to understand
it is consistent with C / C++,
it can be implemented efficiently on common hardware platforms, and
it is better than (hypothetical) alternatives that the designers considered.
(For example, throwing an exception for NaN, Inf, out-of-range would be inconsistent with other primitive conversions, and could be more expensive to implement.)
The result is Integer.MAX_VALUE when you convert a double to an int and the value exceeds the int range. Integer.MAX_VALUE is 2^31 - 1.
When you start with the double value 4294967296.0, it is greater than the greatest int value, which is 2147483647, so the following rule (from the page you cited) applies: "The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long", and you get 0x7FFFFFFF = 2147483647.
But when you convert 4294967296L = 0x100000000, you start from an integral type, so the rule is: "A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits". The lowest 32 bits of 0x100000000 are all zero, so for any n ≤ 32 (n = 8 for byte, n = 32 for int) you just get 0.
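The two conversion paths can be compared side by side (my own demo, using the values from the question):

```java
public class NarrowPaths {
    public static void main(String[] args) {
        long lv = 4294967296L;    // 0x100000000
        double dv = 4294967296.0; // the same mathematical value

        System.out.println((int) lv);   // 0 (only the low 32 bits are kept)
        System.out.println((int) dv);   // 2147483647 (clamped to the int range)
        System.out.println((byte) lv);  // 0 (only the low 8 bits are kept)
    }
}
```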
System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE);
is true.
I understand that an int in Java is 32 bits and can't go above 2^31 - 1, but I can't understand why adding 1 to its MAX_VALUE results in MIN_VALUE and not in some kind of exception. Not to mention something like transparent conversion to a bigger type, as Ruby does.
Is this behavior specified somewhere? Can I rely on it?
Because the integer overflows. When it overflows, the next value is Integer.MIN_VALUE. Relevant JLS
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
The integer storage gets overflowed, and that is not indicated in any way, as stated in the JLS, 3rd Ed.:
The built-in integer operators do not indicate overflow or underflow in any way. Integer operators can throw a NullPointerException if unboxing conversion (§5.1.8) of a null reference is required. Other than that, the only integer operators that can throw an exception (§11) are the integer divide operator / (§15.17.2) and the integer remainder operator % (§15.17.3), which throw an ArithmeticException if the right-hand operand is zero, and the increment and decrement operators ++(§15.15.1, §15.15.2) and --(§15.14.3, §15.14.2), which can throw an OutOfMemoryError if boxing conversion (§5.1.7) is required and there is not sufficient memory available to perform the conversion.
Example in a 4-bits storage:
MAX_INT: 0111 (7)
MIN_INT: 1000 (-8)
MAX_INT + 1:
0111+
0001
----
1000
You must understand how integer values are represented in binary form, and how binary addition works. Java uses a representation called two's complement, in which the first bit of the number represents its sign. When you add 1 to the largest Java int, which has a sign bit of 0, its sign bit becomes 1 and the number becomes negative.
This links explains with more details: http://www.cs.grinnell.edu/~rebelsky/Espresso/Readings/binary.html#integers-in-java
The Java Language Specification treats this behavior here: http://docs.oracle.com/javase/specs/jls/se6/html/expressions.html#15.18.2
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
Which means that you can rely on this behavior.
On most processors, the arithmetic instructions have no mode to fault on overflow. They set a flag that must be checked. That's an extra instruction, so probably slower. In order for language implementations to be as fast as possible, the languages are frequently specified to ignore the error and continue. For Java the behaviour is specified in the JLS. For C, the language does not specify the behaviour, but modern processors behave like Java.
I believe there are proposals for (awkward) Java SE 8 library methods that throw on overflow, as well as unsigned operations. A behaviour that I believe is popular in the DSP world is to clamp values at the maximum, so that Integer.MAX_VALUE + 1 == Integer.MAX_VALUE [not in Java].
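Those proposals did make it into Java SE 8 as Math.addExact, Math.subtractExact, Math.multiplyExact and friends, which throw ArithmeticException on overflow. A quick demonstration (my addition, assuming Java 8+):

```java
public class ExactMath {
    public static void main(String[] args) {
        System.out.println(Math.addExact(1, 2));  // 3, no overflow
        try {
            Math.addExact(Integer.MAX_VALUE, 1);  // the true sum does not fit in an int
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```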
I'm sure future languages will use arbitrary precision ints, but not for a while yet. Requires more expensive compiler design to run quickly.
The same reason why the date changes when you cross the international date line: there's a discontinuity there. It's built into the nature of binary addition.
This is a well-known issue related to the fact that integers are represented in two's complement down at the binary layer. When you add 1 to the max value of a two's complement number you get the min value. Honestly, all integers behaved this way before Java existed, and changing this behavior for the Java language would have added more overhead to integer math and confused programmers coming from other languages.
When you add 1 (binary 1) to 3 (binary 11), you must change every 1 to 0, starting from the right, until you reach a 0, which you change to 1 (giving 100). Integer.MAX_VALUE has every place after the sign bit filled with 1s, so adding 1 flips them all to 0 and carries into the sign bit, leaving only 0s below it.
This is easy to understand with a byte example:
byte a = 127;             // max value for byte
byte b = 1;
byte c = (byte) (a + b);  // assigns -128
System.out.println(c);    // prints -128
Here we force the addition and cast the result so that it is treated as a byte.
So what happens is that when we reach 127 (the largest possible value for a byte) and add 1, the value flips from 127 to -128.
The value wraps around within the type.
The same goes for int.
Also, int + int stays an int (unlike byte + byte, which gets promoted to int unless cast explicitly, as above).
int int1 = Integer.MAX_VALUE + 1;
System.out.println(int1);                 // prints -2147483648
System.out.println(Integer.MIN_VALUE);    // prints -2147483648
// below prints 128, as the result is promoted to int when not cast
System.out.println(Byte.MAX_VALUE + 1);
Because of overflow and the two's complement representation, counting goes around in a loop. We were at the far right position, 2147483647, and after adding 1 we appear at the far left position, -2147483648; the next increments give -2147483647, -2147483646, -2147483645, ... and so forth back to the far right again, on and on. That is the nature of an adder at this bit depth.
Some examples:
int a = 2147483647;
System.out.println(a);
gives: 2147483647
System.out.println(a+1);
gives: -2147483648 (overflow: the count wraps from the far right position, 2147483647, to the far left, -2147483648; subsequent increments give -2147483647, -2147483646, -2147483645, ... and so forth back to the far right again, as described above)
System.out.println(2-a);
gives: -2147483645 (2 - 2147483647 = -2147483645; plain arithmetic, no overflow here)
System.out.println(-2-a);
gives: 2147483647 (-2 - 2147483647 = -2147483649, one below Integer.MIN_VALUE, so it wraps around to 2147483647, the loop described in previous answers)
System.out.println(2*a);
gives: -2 (2 × 2147483647 = 4294967294; keeping only the low 32 bits and interpreting them as a signed value gives -2)
System.out.println(4*a);
gives: -4 (4 × 2147483647 = 8589934588; keeping only the low 32 bits and interpreting them as a signed value gives -4)
I have a doubt about the range of int values:
int x = 2147483647;     /* no error -- this number is the maximum
                           int value */
int y = 2147483648;     /* error -- one more than the
                           maximum int value */
int z = 2147483647 + 1; /* no error, even though the result is one more
                           than the maximum int value */
Why?
Here is an explanation in terms of the Java Language Specification.
The section on integer literals (JLS 3.10.1) says this:
The largest decimal literal of type int is 2147483648 (2^31). All decimal literals from 0 to 2147483647 may appear anywhere an int literal may appear, but the literal 2147483648 may appear only as the operand of the unary negation operator -.
So ...
The first statement is an assignment of a legal integer literal value. No compilation error.
The second statement is a compilation error because 2147483648 is not preceded by the unary negation operator.
The third statement does not contain an integer literal that is out-of-range, so it is not a compilation error from that perspective.
Instead, the third statement is a binary addition expression as described in JLS 15.18.2. This states the following about the integer case:
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
Thus, 2147483647 + 1 overflows and wraps around to -2147483648.
@Peter Lawrey suggests (flippantly?) that the third statement could be "rewritten by the compiler" as +2147483648, resulting in a compilation error.
This is not correct.
There is nothing in the JLS that says a constant expression can have a different meaning from a non-constant expression. On the contrary, in cases like 1 / 0 the JLS flips things around and says that the expression is NOT a constant expression BECAUSE it terminates abnormally. (See JLS 15.28.)
The JLS tries very hard to avoid cases where some Java construct means different things, depending on the compiler. For instance, it is very particular about the "definite assignment" rules, to avoid the case where only a smart compiler can deduce that variable is always initialized before it is used. This is a GOOD THING from the perspective of code portability.
The only significant area where there is "wiggle room" for compiler implementers to do platform specific things is in the areas of concurrency and the Java memory model. And there is a sound pragmatic reason for that - to allow multi-threaded Java applications to run fast on multi-core / multi-processor hardware.
int ranges from Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647).
However, only int literals are checked against the range.
Java does not check that any given constant value expression fits within the range.
Calculations are "allowed" to pass those boundaries, but that results in an overflow (i.e. only the lower bits of the resulting value are kept). Therefore, the calculation 2147483647 + 1 is well-defined within int arithmetic, and its result is -2147483648.
Because the third one is called integer overflow. You are doing computations and you overflow. The other ones are just constants.
The first two cases seem obvious. The third case will silently overflow. So in such cases you should always handle that in your calling code.
Because
int z=2147483647+1;
overflows, so the result isn't equal to 2147483648.
The third expression is int addition, hence the result wraps around to a value within int's range.
The range for int is Integer.MIN_VALUE to Integer.MAX_VALUE. Java silently overflows, so the result of a calculation is not detected by the compiler (but might be flagged by your IDE).
One of the most surprising overflow operations is -Integer.MIN_VALUE.
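To see why negating Integer.MIN_VALUE is surprising (a short demo I've added): +2147483648 is not representable as an int, so the negation overflows back to the value itself.

```java
public class NegateSurprise {
    public static void main(String[] args) {
        System.out.println(-Integer.MIN_VALUE);          // -2147483648 (unchanged!)
        System.out.println(Math.abs(Integer.MIN_VALUE)); // also -2147483648
    }
}
```

The same trap applies to Math.abs: it is the one input for which the "absolute value" is negative.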
I have to do an operation with integers, very simple:
a=b/c*d
where all the variables are integers, but the result is zero whatever the values of the parameters are. I guess it's a problem with operations on this type of data (int).
I solved the problem by converting first to float and then back to int, but I was wondering if there is a better method.
The / operator, when used with integers, does integer division which I suspect is not what you want here. In particular, 2/5 is zero.
The way to work around this, as you say, is to cast one or more of your operands to e.g. a float, and then turn the resulting floating point value back into an integer using Math.floor, Math.round or Math.ceil. This isn't really a bad solution; you have a bunch of integers but you really do want a floating-point calculation. The output might not be an integer, so it's up to you to specify how you want to convert it back.
More importantly, I'm not aware of any syntax to do this that would be more concise and readable than (for example):
a = Math.round((float)b / c * d)
In this case, you can reorder the expression so division is performed last:
a = (b*d)/c
Be careful that b*d won't ever be large enough to overflow an int. If it might be, you could cast one of them to long:
a = (int)(((long)b*d)/c)