I am a beginner in Java. I cannot understand this line even after trying for a long time.
byte num=(byte)135;
This line gives the result -121. Why is it a negative (signed) number?
Can anyone elaborate?
In Java, bytes are always signed, and they are in the range -128 to 127. When the int literal 135 is cast down to a byte, the result is a negative number because the 8th bit (the sign bit) is set:
1000 0111
Specifically, the JLS, Section 5.1.3, states:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
When you cast an int literal such as 135 to a byte, that is a narrowing primitive conversion.
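To see where -121 comes from: 135 and -121 share the same low 8 bits (1000 0111), and interpreting that pattern as a signed two's-complement byte gives 135 - 256 = -121. A minimal sketch to check this:
byte num = (byte) 135;           // keeps only the low 8 bits: 1000 0111
System.out.println(num);         // -121, because 135 - 256 == -121
System.out.println(num & 0xFF);  // 135: masking recovers the unsigned bit pattern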
Example
byte x;
x=(byte)2355;
System.out.println(x);
So, how can I calculate the value that will end up in x?
The 2355 literal value is interpreted as an int, which in Java is represented by the following 32 bits:
00000000000000000000100100110011
A byte has only 8 bits, so you lose the leading 24 bits:
00110011
Converted back to decimal, this leaves you with a value of 51.
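If you want to double-check, a quick sketch: masking off the low 8 bits gives the same number as the cast here (they agree because the resulting byte's sign bit is 0):
int value = 2355;                  // ...0000 1001 0011 0011
byte x = (byte) value;             // keeps only the low 8 bits: 0011 0011
System.out.println(x);             // 51
System.out.println(value & 0xFF);  // 51 as well: same low 8 bits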
You can find the bit sizes of the various primitive data types here. Also keep in mind that you need to take two's complement into account when dealing with signed primitives.
The range of the byte data type is -128 to 127 (inclusive). So if you want to deal with numbers outside that range, use a wider type such as short, int, or long.
I am storing an integer value in a long variable, but if I give a value greater than the int range, it says "literal of type int is greater than range".
The range of int is -2147483648 to 2147483647.
So when I am storing
long l=2147483647;
then it runs fine.
But when I am storing
long l=2147483648;
then it gives a compile-time error: "literal of type int is greater than range".
So I want to know: if I am storing long l=2147483647;
i.e. a value within the int range in a long variable, does it use 32 bits or 64 bits to store it?
Also, if it uses 64 bits, then why does it give an error for long l=2147483648;?
You seem to think that when a long stores a value that is within the range of int, it will use 32 bits to store it. This is not true.
Java Language Specification Section 4.2 Primitive Types and Values
The integral types are byte, short, int, and long, whose values are 8-bit, 16-bit, 32-bit and 64-bit signed two's-complement integers, respectively.
You got the compiler error because the integer literal 2147483648 cannot be used in that context. The error has nothing to do with the size of long.
Section 3.10.1, Integer Literals
All decimal literals from 0 to 2147483647 may appear anywhere an int literal may appear. The decimal literal 2147483648 may appear only as the operand of the unary minus operator - (§15.15.4).
It is a compile-time error if the decimal literal 2147483648 appears anywhere other than as the operand of the unary minus operator; or if a decimal literal of type int is larger than 2147483648.
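For completeness: if the goal is simply to store that value in a long, write it as a long literal with the L suffix, so the int literal rule above never applies. A quick sketch:
long a = 2147483647;      // int literal, widened to long
long b = 2147483648L;     // long literal: fine, the int range check does not apply
// long c = 2147483648;   // compile-time error: int literal out of range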
Is there any difference between how type conversion happens in the case of positive and negative numbers?
For example, if we have
short a = 100;
and assign it to
int b = a;
if we change '100' to '-100', does it make any difference?
I tried compiling it in IDEA, but didn't find any difference; I have this question from my mentor.
Disclaimer: Since this is a homework question, what I say here might not be the "expected" answer.
There are two conversions involved here. The first one is a narrowing primitive conversion from int (the literal 100 evaluates to a value of type int) to short. The second one is a widening primitive conversion from short to int.
The second conversion will never lose information, as per the JLS §5.1.2:
A widening primitive conversion does not lose information about the overall magnitude of a numeric value in the following cases, where the numeric value is preserved exactly:
from an integral type to another integral type
from byte, short, or char to a floating point type
from int to double
from float to double in a strictfp expression (§15.4)
The first conversion is done like this, according to the JLS §5.1.3
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
Both -100 and 100 are representable as a short, whose range is -32768...32767, so no information is lost here either.
In short, it doesn't matter whether you use 100 or -100; the result is that b will store the value 100 or -100 respectively.
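To make the sign handling visible, here is a small sketch: widening short to int sign-extends, so negative values stay negative and positive values stay positive:
short p = (short) 100;     // 0000 0000 0110 0100
short n = (short) -100;    // 1111 1111 1001 1100 (two's complement)
int bp = p;                // widening: high bits filled with the sign bit (0), still 100
int bn = n;                // widening: high bits filled with the sign bit (1), still -100
System.out.println(bp);    // 100
System.out.println(bn);    // -100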
Please note I am NOT looking for code to cast or narrow a double to int.
As per JLS §5.1.3, Narrowing Primitive Conversion:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T.
So, when I narrow 260 (binary representation 100000100) to a byte, the result is 4 because the lowest 8 bits are 00000100, which is decimal 4; and when I narrow the long value 4294967296L (binary representation 100000000000000000000000000000000) to a byte, the result is 0.
Now, the reason I want to know the rule for narrowing from double to int, byte, etc. is that when I narrow the double value 4294967296.0, the result is 2147483647, but when I narrow the long value 4294967296L, the result is 0.
I have understood long narrowing to int, byte, etc. (discards all but the n lowest order bits), but I want to know what is going on under the hood in the case of double narrowing.
I have understood long narrowing to int, byte, etc. (discards all but the n lowest order bits), but I want to know what is going on under the hood in the case of double narrowing.
... I want to understand the why and how part.
The JLS (JLS 5.1.3) specifies what the result is. A simplified version (for int) is:
a NaN becomes zero
an Inf becomes "max-int" or "min-int"
otherwise:
round towards zero to get a mathematical integer
if the rounded number is too big for an int, the result becomes "min-int" or "max-int"
"How" is implementation specific. For examples of how it could be implemented, look at the Hotspot source code (OpenJDK version) or get the JIT compiler to dump some native code for you to look at. (I imagine that the native code maps uses a single instruction to do the actual conversion .... but I haven't checked.)
"Why" is unknowable ... unless you can ask one of the original Java designers / spec authors. A plausible explanation is a combination of:
it is easy to understand
it is consistent with C / C++,
it can be implemented efficiently on common hardware platforms, and
it is better than (hypothetical) alternatives that the designers considered.
(For example, throwing an exception for NaN, Inf, out-of-range would be inconsistent with other primitive conversions, and could be more expensive to implement.)
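A small sketch of those rules in action (the comments show the JLS-specified results):
System.out.println((int) Double.NaN);                // 0
System.out.println((int) Double.POSITIVE_INFINITY);  // 2147483647 (Integer.MAX_VALUE)
System.out.println((int) Double.NEGATIVE_INFINITY);  // -2147483648 (Integer.MIN_VALUE)
System.out.println((int) -1.9);                      // -1: rounded towards zero
System.out.println((int) 4294967296.0);              // 2147483647: too large, clamped to max-int
System.out.println((int) 4294967296L);               // 0: long-to-int keeps only the low 32 bits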
The result is Integer.MAX_VALUE when converting a double to an int and the value exceeds the range of int. Integer.MAX_VALUE is 2^31 - 1.
When you start with the double value 4294967296.0, it is greater than the greatest int value, which is 2147483647, so the following rule is applied (from the page you cited): "The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long", and you get 0x7FFFFFFF = 2147483647.
But when you convert 4294967296L = 0x100000000, you start from an integral type, so the rule is: "A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits". Since the low 32 bits are all zero, narrowing to int (n = 32) or any smaller type just gives 0.
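Put differently, for the integral case the cast is equivalent to keeping only the low 32 bits, which you can mimic with a mask; a quick sketch:
long big = 4294967296L;                         // 0x100000000: bit 32 set, low 32 bits all zero
System.out.println((int) big);                  // 0
System.out.println((int) (big & 0xFFFFFFFFL));  // 0 as well: masking changes nothing here
System.out.println((int) 4294967296.0);         // 2147483647: the double path clamps instead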
This loop will continue indefinitely:
char a = 100;
for(a=100; a>=0;--a)
System.out.println(a);
Does it happen because a gets promoted to an int value for the arithmetic operation, widened to 32 bits from the 16-bit char value, and hence always stays positive?
It will indeed loop indefinitely, and the reason you stated is close. It happens because a can't represent any number that doesn't satisfy a >= 0: char is unsigned. Arithmetic underflow is well-defined in Java and gives no indication that it happened. See the relevant parts of the specification below.
§4.2.2
The integer operators do not indicate overflow or underflow in any way.
This means there is no indication of overflow/underflow other than comparing the values yourself, e.g. if a <= --a evaluates to true, an underflow occurred.
§15.15.2
Before the subtraction, binary numeric promotion (§5.6.2) is performed on the value 1 and the value of the variable. If necessary, the difference is narrowed by a narrowing primitive conversion (§5.1.3) and/or subjected to boxing conversion (§5.1.7) to the type of the variable before it is stored. The value of the prefix decrement expression is the value of the variable after the new value is stored.
So, we can see that there are two major steps here: binary numeric promotion, followed by a narrowing primitive conversion.
§5.6.2
Widening primitive conversion (§5.1.2) is applied to convert either or both operands as specified by the following rules:
If either operand is of type double, the other is converted to double.
Otherwise, if either operand is of type float, the other is converted to float.
Otherwise, if either operand is of type long, the other is converted to long.
Otherwise, both operands are converted to type int.
We can see that the decrement expression works with a treated as int, thus performing a widening conversion. This allows it to represent the value -1.
§5.1.3
A narrowing primitive conversion may lose information about the overall magnitude of a numeric value and may also lose precision and range.
...
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
Keeping only the n lowest order bits means that only the lowest 16 bits of the int expression a - 1 are kept. Since -1 is 0b11111111 11111111 11111111 11111111 here, only the lower 0b11111111 11111111 is saved. Since char is unsigned, all of these bits contribute to the result, giving 65535.
Notice something here? Essentially, this all means that Java integer arithmetic is modular; in this case the modulus is 2^16, or 65536, because char is a 16-bit datatype. -1 ≡ 65535 (mod 65536), so the decrement wraps back around.
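You can see the wraparound in isolation; a short sketch, decrementing a char that already holds 0:
char a = 0;
--a;                           // int result -1 is narrowed back to char: low 16 bits = 0xFFFF
System.out.println((int) a);   // 65535
System.out.println(a >= 0);    // true: a char can never hold a negative value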
Nope. char values are unsigned - when they go below 0, they come back around to 65535.
Swap char with byte - then it'll work.
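A sketch of the byte version, which does terminate because byte is signed, so the decrement past 0 produces -1 and the condition fails:
for (byte a = 100; a >= 0; --a)
    System.out.println(a);     // prints 100 down to 0; then a becomes -1 and the loop ends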
As others have said, the char type in Java is unsigned, so a >= 0 will always be true. When a hits 0 and then is decremented once more, it becomes 65535. If you just want to be perverse, you can write your loop to terminate after 101 iterations this way:
char a = 100;
for(a=100; a<=100;--a)
System.out.println(a);
Code reviewers can then have a field day tearing you apart for writing such a horrible thing. :)