Understanding Java data types - java

1) Why is the following assignment not allowed:
byte b = 0b11111111; // 8 bits or 1 byte
but this assignment is allowed:
int i = 0b11111111111111111111111111111111; //32 bits or 4 bytes
Both types are signed, and I would expect both b and i to be -1.
2) Why doesn't the Integer MIN_VALUE have a sign?
public static final int MIN_VALUE = 0x80000000;
but the Byte MIN_VALUE does have a sign?
public static final byte MIN_VALUE = -128;

Question 1)
This is because 0b11111111 is an int literal, whose value is 255. This value doesn't fit into a byte. See http://docs.oracle.com/javase/7/docs/technotes/guides/language/binary-literals.html for more details on this.
Question 2)
When we write binary or hexadecimal literals, we never put a sign. The literal 0x80000000 is actually a negative value, even though we don't write it as such.
There's no particularly good reason why the makers of the JDK chose a decimal literal for -128 but a hexadecimal literal for 0x80000000, except that in each case it's probably a lot clearer that way what is intended.

All integer literals have type int (unless suffixed by an L or l). Thus, in the first case, you're storing an int into a byte. A narrowing conversion like this is not allowed without a cast, except that if the right side is a constant, it's allowed if the value is in range, which is -128 to 127. 0b11111111 is 255, though, which is not in range.
As for why int i = 0b11111111111111111111111111111111 is allowed: it's pretty much "because the JLS says so". In fact, that specific example appears in JLS 3.10.1. There's a rule that decimal literals of type int cannot exceed 2147483647 (except in the specific case -2147483648), but there's no rule about binary literals except that they have to fit into 32 bits.
As I mentioned in a comment, the second question is really a question about the style preference of the programmers who wrote the code, and it's impossible to answer.
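A small sketch tying the two answers together (the class name is just for illustration); it should compile on any JDK that supports binary literals (Java 7+) and print the values described above:

public class BinaryLiteralDemo {
    public static void main(String[] args) {
        // 0b11111111 is an int literal with value 255, which is out of byte range,
        // so an explicit cast is needed; the cast keeps the low 8 bits, giving -1.
        byte b = (byte) 0b11111111;
        System.out.println(b); // -1

        // A 32-bit binary literal is allowed for int; the top bit is the sign bit.
        int i = 0b11111111111111111111111111111111;
        System.out.println(i); // -1

        // 0x80000000 is a negative int literal even though no sign is written.
        System.out.println(0x80000000 == Integer.MIN_VALUE); // true
    }
}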

Related

I have to cast the first two octets from int to byte but not the other octets. Why?

byte[] ipAddr = new byte[] {(byte) 142, (byte) 250, 68, 46};
I am getting to know the various Java networking functions and I have to cast the first two octets to a byte in order for it to compile.
Otherwise I get this error:
java: incompatible types: possible lossy conversion from int to byte
Any idea why I have to cast specifically the first two octets and not all of them? Why does Java take it as an int instead of a byte?
In Java, bytes are 8-bit signed data types, so the value ranges from -128 to +127. Your first two values are greater than the maximum, so you need to manually allow the conversion (by casting, in your case).
Those two octets just happen to be larger than the maximum value allowed in a byte, which is 127 (2^7 - 1). Any value greater than 127 has to be cast (or dealt with more carefully), and you can lose data in a straight cast due to the size difference. See here for more: https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html#:~:text=Primitive%20values%20do%20not%20share,value%20of%20127%20(inclusive).
@Dennis is on the right track, but the explanation is a bit more complicated than that.
Normally, an int valued expression cannot be assigned to a byte variable without a cast.
When the int valued expression is a constant expression, AND when the value of the expression is within the range of byte, then the assignment to a byte variable is allowed without a cast.
However, this only applies in assignment contexts, and only for constant expressions that satisfy the JLS definition.
In your example, the integer literals are all constant expressions, but the first two are not in the required range for lossless assignment to a byte; i.e. -128 to +127.
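A small sketch of the point above (variable names are illustrative); the casts keep the low 8 bits, so the octet values survive bit-for-bit:

public class OctetCastDemo {
    public static void main(String[] args) {
        byte b1 = 68;           // constant in the byte range -128..127, no cast needed
        byte b2 = 46;
        byte b3 = (byte) 142;   // out of range, cast required; stored as -114
        byte b4 = (byte) 250;   // stored as -6

        // The unsigned octet values can be recovered when needed (Java 8+):
        System.out.println(Byte.toUnsignedInt(b3)); // 142
        System.out.println(Byte.toUnsignedInt(b4)); // 250
    }
}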

byte byteVar = 100 works but int intVar = 100L results in compilation error. Why?

Integer literals can be assigned to byte or short variables as long as the value of the literal is within the range of byte/short.
But when a long literal is assigned to an int variable, a compilation error is reported even when the value of the long literal is within the range of int.
What is the logic explaining this?
Example,
The line below compiles successfully:
byte byteVar = 100; // works; here 100 is an integer literal.
but
int intVar = 100L; // fails; here 100L is a long literal
results in a compile-time error.
Can someone please explain the underlying logic that drives this?
The actual reason is a bit more complicated than some of the other answers suggest.
JLS 5.2 states the following about the conversions allowed in an assignment context:
In addition, if the expression is a constant expression (§15.28) of type byte, short, char, or int:
A narrowing primitive conversion may be used if the variable is of type byte, short, or char, and the value of the constant expression is representable in the type of the variable.
A narrowing primitive conversion followed by a boxing conversion may be used if the variable is of type Byte, Short, or Character, and the value of the constant expression is representable in the type byte, short, or char respectively.
The declaration / initialization
byte byteVar = 100; // OK
works because all of the prerequisites are satisfied:
100 is a constant expression
its type is int
its value is in the range of byte; i.e. it is representable as a byte
it is being assigned to a byte variable.
The declaration / initialization
byte byteVar = 100L; // FAIL
fails because the type of 100L is long rather than int.
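A minimal sketch of those JLS 5.2 cases (the class and constant names are hypothetical):

public class NarrowingDemo {
    static final int SMALL = 100;  // compile-time constant of type int

    public static void main(String[] args) {
        byte b = 100;              // OK: int constant expression in byte range
        short s = SMALL + 20;      // OK: constant expression, 120 fits in short
        Byte boxed = 100;          // OK: narrowing then boxing, per the second rule quoted above
        // byte bad = 128;         // error: not representable as a byte
        // byte alsoBad = 100L;    // error: the constant has type long, not int
        System.out.println(b + " " + s + " " + boxed);
    }
}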
The logic for
int intVar = 100L;
not compiling is simply "why would you say L explicitly if you want it to be int? Probably a mistake somewhere, but we don't know if it's the type or the value which is wrong".
The more interesting part is why
byte byteVar = 100;
compiles instead of requiring you to write something like 100b. And I believe there are at least two reasons:
the right part may be a constant expression, not just a literal: in
byte byteVar = SOME_CONST + 3;
you couldn't use a suffix, and the right-hand side is int even if SOME_CONST is byte (a sketch of this follows below).
simply that C++ didn't have it and Java inherited a lot from C++.
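A small sketch of that first point, with SOME_CONST as a hypothetical constant:

public class ConstExprDemo {
    static final byte SOME_CONST = 5;  // hypothetical constant, as in the answer

    public static void main(String[] args) {
        // SOME_CONST + 3 has type int (bytes are promoted for arithmetic), but it
        // is a constant expression whose value (8) fits in a byte, so the
        // narrowing assignment is allowed without a cast or any suffix.
        byte byteVar = SOME_CONST + 3;
        System.out.println(byteVar); // 8
    }
}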
The L suffix means a 64-bit long integer primitive.
int intVar = 100L;
The line above fails to compile because you are trying to place a 64-bit long primitive literal into a 32-bit int variable. The compiler reports: incompatible types: possible lossy conversion from long to int.
The byte format is an 8-bit integer format that can accept signed integer values in the range -128 to 127. Because the integer value 100 fits within this range,
byte byteVar = 100;
compiles; the integer literal is within a valid range for the type you are trying to store it in. https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html describes the primitive data types and their default values; byte, short, and int are all plain integer types, and an integer literal with no suffix is simply an int.
The reason int intVar = 100L does not work is that the L suffix creates a long (Int64) literal, whereas int is Int32, the 32-bit two's complement signed integer format. Because a long value will not always fit into 32 bits, the compiler treats the assignment as potentially lossy and rejects it without an explicit cast, even though this particular value would fit.
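A small sketch contrasting the two directions (assuming a standard javac):

public class LongLiteralDemo {
    public static void main(String[] args) {
        byte byteVar = 100;       // OK: int constant in byte range
        // int broken = 100L;     // error: possible lossy conversion from long to int
        int intVar = (int) 100L;  // OK with an explicit cast
        long longVar = 100;       // OK: int widens to long implicitly
        System.out.println(byteVar + " " + intVar + " " + longVar);
    }
}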

Equivalent of BigInteger value of C++ long long

I am migrating some code base from C++ to Java, where I encountered a long long value in the C++ code which I need to migrate.
After some research I found out I should be using BigInteger to represent the long long of C++.
I looked at a couple of examples and found the syntax to be:
static BigInteger flag1 = BigInteger.valueOf(0x00000001);
Here I noticed the value used in the argument for BigInteger.valueOf is not the same as the original long long value, which was 0x0000000000000001LL
The original value had 16 digits and this one has 8 digits and does not include the LL suffix at the end. Can someone explain what is going on?
Also, can someone suggest the value of 0x0000000000000200LL in similar terms?
Please note: all those zeros ... don't matter. There is no difference between 0x1 and 0x001, and so on, as long as we are talking about numbers.
It would be a different thing if those were represented as strings; then of course "0x1" is not the same string as "0x01". But they aren't.
All of your values are number literals, and they are all in a range that would even fit into ordinary long values in Java.
In other words: leading zero digits do not matter for numbers (except for a case like 010, which is something other than 10, since a leading 0 indicates an octal number).
The more interesting question would actually be: what literal value does the compiler put into the Java bytecode for that?
0x0000000000000001LL == 0x00000001 == 0x1 == 1 (dec)
0x0000000000000200LL == 0x00000200 == 0x200 == 512 (dec)
Those are small values and can be represented as a regular int.
You can also use BigInteger if you need to.
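A quick check of those equalities in Java (note that Java uses a single L suffix, not LL); the class name is just for illustration:

public class LeadingZerosDemo {
    public static void main(String[] args) {
        System.out.println(0x0000000000000001L == 0x1);  // true: leading zeros don't matter
        System.out.println(0x0000000000000200L == 512);  // true: 0x200 is 512 decimal
        System.out.println(010);                          // 8, not 10: a leading 0 means octal
    }
}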
There are a number of things to learn here:
You probably don't need to use BigInteger here at all. The C++ long long type is a signed 64 bit integer on most systems (see http://en.cppreference.com/w/cpp/language/types). But Java has a 64 bit signed integer type - long. So unless you are porting C++ code that was designed for an architecture where long long is greater than 64 bits (!), a Java long is what you need to use.
Leading zeros don't matter in hexadecimal literals (i.e. 0x...) in Java.
(They matter for decimal literals though, because a leading zero turns a "decimal" literal into an octal literal ... which alters its value. For instance, the literal 010 represents the number eight!)
If you actually do need a 64-bit integer literal in Java, then put an L on the right-hand end. Integer literals are assumed to be 32 bits otherwise.
In a context like this where you are calling BigInteger.valueOf(long), a 32-bit integer literal would be widened to 64 bits anyway.
So in your case:
static BigInteger flag1 = BigInteger.valueOf(0x00000001);
static BigInteger flag1 = BigInteger.valueOf(0x0000000000000001);
static BigInteger flag1 = BigInteger.valueOf(0x1);
static BigInteger flag1 = BigInteger.valueOf(1);
static BigInteger flag1 = BigInteger.valueOf(1L);
are all saying the same thing. This is saying the same thing too ...
static BigInteger flag1 = BigInteger.valueOf(01);
... but it is a bad idea. It only works because "1" octal and "1" decimal are the same number.
Someone asked:
The more interesting question would actually be: what literal value does the compiler put into the Java bytecode for that?
I don't think that the JLS specifies this, but it would use a long literal because that is what the JVM spec requires.
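A minimal sketch of the first point, assuming the values really are simple bit flags; the FLAG names are made up here to mirror flag1 from the question:

import java.math.BigInteger;

public class FlagDemo {
    // A Java long is a signed 64-bit integer, the same size as a typical C++
    // long long, so plain long constants are usually all that is needed:
    static final long FLAG1 = 0x0000000000000001L;
    static final long FLAG2 = 0x0000000000000200L;

    // If BigInteger really is required, the equivalent would be:
    static final BigInteger BIG_FLAG2 = BigInteger.valueOf(0x200L);

    public static void main(String[] args) {
        System.out.println(FLAG2);     // 512
        System.out.println(BIG_FLAG2); // 512
    }
}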

why explicit type casting required from double to float but not from int to byte?

Consider following statement:
byte by = 5; //works fine
The literal 5 is of type int and small enough to fit into a variable of type byte. The compiler does the implicit conversion here (from int to byte).
Now consider following scenario:
float fl = 5.5; //compilation error
The literal 5.5 is of type double, and also small enough to fit into a variable of type float. Why do we need to explicitly cast like this:
float fl = (float) 5.5; //works fine
Why is the compiler not doing the cast for us in the case of floating-point literals?
In the integer version, the compiler knows that all the data in the number 5 can be stored in a byte. No information is lost. That's not always true for floating point values. For example, 0.1f isn't equal to 0.1d.
Now, for the example you've given, the decimal value 5.5 is exactly representable in both float and double, so you could argue that in that case no information is lost - but it would be pretty odd for the language specification to have to make this valid:
float f = 5.5;
but this invalid:
float f = 5.6;
The language specification is happy to talk about whether a number fits within the range of float/double (although even that isn't as simple as you might expect) but when it comes to whether a literal can be exactly represented, I don't think it ever goes into detail.
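A short sketch of the difference (the printed values assume standard IEEE 754 float/double and the default toString behaviour):

public class FloatLiteralDemo {
    public static void main(String[] args) {
        // float f1 = 5.5;         // error: 5.5 is a double literal
        float f2 = 5.5f;           // OK: float literal
        float f3 = (float) 5.5;    // OK: explicit cast
        System.out.println(f2 == f3);      // true: 5.5 is exact in both types

        // Converting double to float can genuinely lose information:
        System.out.println(0.1f == 0.1d);  // false
        System.out.println((double) 0.1f); // 0.10000000149011612
    }
}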
The easy answer is, because the specification says so (compile-time constants of type integer can be assigned to smaller types as long as they fit).
With floating-point, however, it is not so much a question of whether the constant fits, but rather of the loss of precision that comes along with it. E.g. assigning 1.23456789123 to a double is fine, but to a float is not. It's not so obvious why in this case, at least to some programmers. I'd definitely count it as a surprise when some floating-point constants work while others won't, and the reason isn't as clear as with integral types (where the limits are often second nature to most).
Note that even with doubles there sometimes is lost information. You can make your constants as precise as you want, but you won't always get the exact value you stated in the variable.
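A quick illustration of that last point (the printed digits assume standard IEEE 754 doubles):

public class DoublePrecisionDemo {
    public static void main(String[] args) {
        // Even a double cannot store most decimal fractions exactly:
        System.out.println(new java.math.BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        float f = 1.23456789123f;  // compiles, but only about 7 significant digits survive
        System.out.println(f);     // prints something close to 1.2345679
    }
}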
Agreed with Jon, However, I would like to add that
byte by = 5; // works fine as long as the number is in the range -128 to 127
This is because one byte can only hold values from -128 to 127. Once you try to assign a number above 127, you will get the same kind of error as when storing a double value into a float.
byte by = 128; //compilation error
So, to accept the possible loss of data in the conversion, you need to perform an explicit cast.
byte by = (byte) 128; // works fine (the value wraps to -128)
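A small sketch of the wrap-around behaviour of that cast (class name is just for illustration):

public class ByteOverflowDemo {
    public static void main(String[] args) {
        byte ok = 127;             // the largest value a byte can hold
        // byte bad = 128;         // error: 128 is outside -128..127
        byte wrapped = (byte) 128; // the cast keeps only the low 8 bits
        System.out.println(ok);      // 127
        System.out.println(wrapped); // -128
    }
}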
Perhaps the most significant reason that Java makes allowance for implicit narrowing conversions of literals of type int to short and byte, but does not do so for conversions of literal double values to float is that Java includes float literals, but does not allow literals of types byte and short.
Personally, I really dislike Java's numerical conversion rules, but the allowance for storing integer constants to short and byte makes those types at least somewhat bearable.

Byte comparison in Java

What actually happens here when a byte - byte operation occurs?
suppose,
byteResult = byte1 - byte2;
where,
byte1 = 0xff;
byte2 = 0x01;
then,
Is byte1 turned into an integer with value 255 and byte2 into 1, byteResult assigned 254 and then converted into a byte as 0xFE? And then the if condition is checked? A detailed explanation would be very helpful. Sorry if my question is ambiguous!
Here, I found something but not what exactly I want.
Java Byte comparison
No, the byte will not be converted into an int.
From the JLS 5.2 Assignment Conversion:
In addition, if the expression is a constant expression (§15.28) of type byte, short, char, or int:
A narrowing primitive conversion may be used if the type of the variable is byte, short, or char, and the value of the constant expression is representable in the type of the variable.
Also check subtracting 2 bytes makes int?
This is a basic premise of Java programming. All integers are of type int unless specifically cast to another type. Therefore, any arithmetic done with integers automatically 'promotes' all the operands to int type from the narrower type (byte, short), and the result of arithmetic with int operands is always int. (I think I've beaten that to death now).
If you want the short result of arithmetic with two bytes, do this:
short result = (short) (byte1 - byte2);
This explicit cast makes it the programmer's responsibility to throw away the extra bits if they aren't needed. Otherwise, integer arithmetic is done in 32 bits.
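A small sketch walking through the promotion described above, using the values from the question (variable names taken from the question):

public class ByteSubtractionDemo {
    public static void main(String[] args) {
        byte byte1 = (byte) 0xff;  // bit pattern 0xFF, i.e. -1 as a signed byte
        byte byte2 = 0x01;

        // Both operands are promoted to int, so the subtraction is done in 32 bits:
        // (-1) - 1 == -2. The int result must be cast back to byte, which keeps
        // the low 8 bits, 0xFE.
        byte byteResult = (byte) (byte1 - byte2);

        System.out.println(byteResult);                              // -2
        System.out.println(Integer.toHexString(byteResult & 0xFF));  // fe
    }
}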
