Byte arithmetic: How to subtract to a byte variable? [duplicate] - java

This question already has answers here:
Promotion in Java?
(5 answers)
Closed 9 years ago.
I'm getting an error when I try to do something like this:
byte a = 23;
a = a - 1;
The compiler gives this error:
Test.java:8: possible loss of precision found : int required: byte
a = a - 1;
^
1 error
Casting doesn't solve the error either.
Why doesn't the compiler let me do this?
Do I need to change the variable 'a' to an int?

Do it like this:
a = (byte)(a - 1);
When you subtract 1 from a, the result is an int. To assign the result back to a byte you need to do an explicit type cast.

In Java math, everything is promoted to at least an int before the computation. This is called Binary Numeric Promotion (JLS 5.6.2). So that's why the compiler found an int. To resolve this, cast the result of the entire expression back to byte:
a = (byte) (a - 1);

a = a - 1; // before the subtraction, a is promoted to int, so the result of 'a - 1' is an int, which can't be stored in a byte (a byte is 8 bits, an int is 32 bits)
That's why you'll have to cast it to a byte, as follows:
a = (byte) (a - 1);

Do this:
a -= 1;
You don't even need an explicit cast; the compound assignment operator applies the narrowing cast for you (JLS 15.26.2).
Whether you should change the variable's type to int, nobody can say with only the information you provided.
A variable's type is defined by the task you are planning to perform with it.
If your variable a counts fingers on someone's hands, why would you use int? Type byte is more than enough for that.
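The equivalence between `a -= 1` and the explicit cast can be checked directly. A minimal sketch (the class and method names are just for illustration); note that the hidden cast also means compound assignment wraps around silently:

```java
public class CompoundAssignDemo {
    // a -= 1 behaves like a = (byte) (a - 1): the narrowing cast is
    // implicit (JLS 15.26.2).
    static byte decrement(byte a) {
        a -= 1;
        return a;
    }

    public static void main(String[] args) {
        System.out.println(decrement((byte) 23));      // 22
        // The hidden cast wraps around silently at the low end:
        System.out.println(decrement(Byte.MIN_VALUE)); // 127
    }
}
```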


Lossy conversion int to double method call [duplicate]

New Java programmers are often confused by compilation error messages like:
"incompatible types: possible lossy conversion from double to int"
for this line of code:
int squareRoot = Math.sqrt(i);
In general, what does the "possible lossy conversion" error message mean, and how do you fix it?
First of all, this is a compilation error. If you ever see it in an exception message at runtime, it is because you have run a program with compilation errors1.
The general form of the message is this:
"incompatible types: possible lossy conversion from <type1> to <type2>"
where <type1> and <type2> are both primitive numeric types; i.e. one of byte, char, short, int, long, float or double.
This error happens when your code attempts to do an implicit conversion from <type1> to <type2> but the conversion could be lossy.
In the example in the question:
int squareRoot = Math.sqrt(i);
the sqrt method produces a double, but a conversion from double to int is potentially lossy.
What does "potentially lossy" mean?
Well, let's look at a couple of examples.
A conversion of a long to an int is a potentially lossy conversion because there are long values that do not have a corresponding int value. For example, any long value that is greater than 2^31 - 1 is too large to be represented as an int. Similarly, any number less than -2^31 is too small.
A conversion of an int to a long is NOT lossy conversion because every int value has a corresponding long value.
A conversion of a float to a long is a potentially lossy conversion because there are float values that are outside of the range that can be represented as long values. Values that are too large or too small (including the infinities) are (lossily) converted to Long.MAX_VALUE or Long.MIN_VALUE, and NaN is converted to 0.
A conversion of a long to a float is NOT a lossy conversion in this sense, because every long value has a corresponding float value. (The converted value may be less precise, but "lossiness" doesn't mean that ... in this context.)
These are all the conversions that are potentially lossy:
short to byte or char
char to byte or short
int to byte, short or char
long to byte, short, char or int
float to byte, short, char, int or long
double to byte, short, char, int, long or float.
How do you fix the error?
The way to make the compilation error go away is to add a typecast. For example:
int i = 47;
int squareRoot = Math.sqrt(i); // compilation error!
becomes
int i = 47;
int squareRoot = (int) Math.sqrt(i); // no compilation error
But is that really a fix? Consider that the square root of 47 is 6.8556546004 ... but squareRoot will get the value 6. (The conversion will truncate, not round.)
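If rounding rather than truncation is what the application needs, the standard Math.round is an alternative; a small sketch contrasting the two:

```java
public class SqrtRoundingDemo {
    public static void main(String[] args) {
        double r = Math.sqrt(47);          // 6.855654600401044
        System.out.println((int) r);       // 6: the cast truncates toward zero
        System.out.println(Math.round(r)); // 7: rounds to the nearest long
    }
}
```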
And what about this?
byte b = (byte) 512;
That results in b getting the value 0. Converting from a larger int type to a smaller int type is done by masking out the high order bits, and the low-order 8 bits of 512 are all zero.
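A few quick checks of that bit-masking behavior (a sketch; the values are chosen to show the low-order 8 bits and the sign bit):

```java
public class ByteMaskDemo {
    public static void main(String[] args) {
        // A cast to byte keeps only the low-order 8 bits.
        System.out.println((byte) 512); // 0   (0b10_0000_0000 -> 0000_0000)
        System.out.println((byte) 513); // 1   (low 8 bits are 0000_0001)
        System.out.println((byte) 200); // -56 (the high bit becomes the sign bit)
    }
}
```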
In short, you should not simply add a typecast, because it might not do the correct thing for your application.
Instead, you need to understand why your code needs to do a conversion:
Is this happening because you have made some other mistake in your code?
Should the <type1> be a different type, so that a lossy conversion isn't needed here?
If a conversion is necessary, is the silent lossy conversion that the typecast will do the correct behavior?
Or should your code be doing some range checks and dealing with incorrect / unexpected values by throwing an exception?
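For the range-check option, the standard library already covers the common long-to-int case: Math.toIntExact throws an ArithmeticException instead of silently truncating. A minimal sketch:

```java
public class SafeNarrowDemo {
    public static void main(String[] args) {
        // Fits in an int: converted without loss.
        System.out.println(Math.toIntExact(123L)); // 123

        try {
            Math.toIntExact(1L << 40); // out of int range
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```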
"Possible lossy conversion" when subscripting.
First example:
for (double d = 0; d < 10.0; d += 1.0) {
    System.out.println(array[d]); // <<-- possible lossy conversion
}
The problem here is that an array index must be an int. So d has to be converted from double to int. In general, using a floating-point value as an index doesn't make sense. Either someone is under the impression that Java arrays work like (say) Python dictionaries, or they have overlooked the fact that floating-point arithmetic is often inexact.
The solution is to rewrite the code to avoid using a floating point value as an array index. (Adding a type cast is probably an incorrect solution.)
Second example:
for (long l = 0; l < 10; l++) {
    System.out.println(array[l]); // <<-- possible lossy conversion
}
This is a variation of the previous problem, and the solution is the same. The difference is that the root cause is that Java arrays are limited to 32-bit indexes. If you want an "array like" data structure with more than 2^31 - 1 elements, you need to define or find a class to do it.
"Possible lossy conversion" in method or constructor calls
Consider this:
public class User {
    String name;
    short age;
    int height;

    public User(String name, short age, int height) {
        this.name = name;
        this.age = age;
        this.height = height;
    }

    public static void main(String[] args) {
        User user1 = new User("Dan", 20, 190);
    }
}
Compiling the above with Java 11 gives the following:
$ javac -Xdiags:verbose User.java
User.java:20: error: constructor User in class User cannot be applied to given types;
User user1 = new User("Dan", 20, 190);
^
required: String,short,int
found: String,int,int
reason: argument mismatch; possible lossy conversion from int to short
1 error
The problem is that the literal 20 is an int, and the corresponding parameter in the constructor is declared as a short. Converting an int to a short is lossy.
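One fix is to cast the argument to short at the call site (another is to widen the parameter type to int). A sketch reusing the User class from above with the cast applied:

```java
public class User {
    String name;
    short age;
    int height;

    public User(String name, short age, int height) {
        this.name = name;
        this.age = age;
        this.height = height;
    }

    public static void main(String[] args) {
        // Casting the constant argument makes the types match exactly.
        User user1 = new User("Dan", (short) 20, 190);
        System.out.println(user1.age + " " + user1.height); // 20 190
    }
}
```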
"Possible lossy conversion" in a return statement.
Example:
public int compute() {
    long result = 42L;
    return result; // <<-- possible lossy conversion
}
A return (with a value / expression) could be thought of as an "assignment to the return value". But no matter how you think about it, it is necessary to convert the value supplied to the actual return type of the method. Possible solutions are adding a typecast (which says "I acknowledge the lossy-ness") or changing the method's return type.
"Possible lossy conversion" due to promotion in expressions
Consider this:
byte b1 = 0x01;
byte mask = 0x0f;
byte result = b1 & mask; // <<-- possible lossy conversion
This will tell you that there is a "possible lossy conversion from int to byte". This is actually a variation of the first example. The potentially confusing thing is understanding where the int comes from.
The answer to that is it comes from the & operator. In fact all of the arithmetic and bitwise operators for integer types will produce an int or long, depending on the operands. So in the above example, b1 & mask is actually producing an int, but we are trying to assign that to a byte.
To fix this example we must type-cast the expression result back to a byte before assigning it.
byte result = (byte) (b1 & mask);
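The same promotion applies to arithmetic, not just bitwise operators; a small sketch showing that even byte + byte is computed as int:

```java
public class PromotionDemo {
    public static void main(String[] args) {
        byte b1 = 0x01, mask = 0x0f;
        byte result = (byte) (b1 & mask); // & produces int, so cast back
        System.out.println(result); // 1

        // Even byte + byte is computed as int, so it needs a cast too:
        byte sum = (byte) (b1 + b1);
        System.out.println(sum);    // 2
    }
}
```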
"Possible lossy conversion" when assigning literals
Consider this:
int a = 21;
byte b1 = a; // <<-- possible lossy conversion
byte b2 = 21; // OK
What is going on? Why is one version allowed but the other one isn't? (After all they "do" the same thing!)
First of all, the JLS states that 21 is a numeric literal whose type is int. (There are no byte or short literals.) So in both cases we are assigning an int to a byte.
In the first case, the reason for the error is that not all int values will fit into a byte.
In the second case, the compiler knows that 21 is a value that will always fit into a byte.
The technical explanation is that in an assignment context, it is permissible to perform a primitive narrowing conversion to a byte, char or short if the following are all true:
The value is the result of a compile time constant expression (which includes literals).
The type of the expression is byte, short, char or int.
The constant value being assigned is representable (without loss) in the domain of the "target" type.
Note that this only applies with assignment statements, or more technically in assignment contexts. A constructor or method argument is a method invocation context, where this implicit narrowing is not performed. Thus:
Byte b4 = new Byte(21); // incorrect
gives a compilation error.
1 - For instance, the Eclipse IDE has an option which allows you to ignore compilation errors and run the code anyway. If you select this, the IDE's compiler will create a .class file where the method with the error will throw an unchecked exception if it is called. The exception message will mention the compilation error message.

Assigning int to byte in java [duplicate]

This question already has an answer here:
Implicit narrowing when summing constants vs explicit narrowing when summing variables
(1 answer)
Closed 1 year ago.
In Java, it is fine to have:
byte b = (int) 2;
where Java automatically converts the int to a byte. On the other hand, if we do:
int a = 2;
byte b = a;
this will give an error saying that the required type is byte but int is provided.
May I ask how to understand why the automatic conversion works when a literal number of type int is assigned to a variable of type byte, while it doesn't work when the literal number is replaced by a variable of type int?
Thanks in advance!
It is fine to have
byte b = (int) 2;
because (int) 2 is still a compile-time constant whose value fits in a byte, so it is the same as
byte b = 2;
The following also works
int a = 2;
byte b = (byte) a;
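Both accepted forms can be seen side by side in a minimal sketch (class name is just for illustration):

```java
public class NarrowAssignDemo {
    public static void main(String[] args) {
        byte b1 = (int) 2;  // still a constant expression: implicit narrowing applies
        int a = 2;
        byte b2 = (byte) a; // a variable is not a constant: explicit cast required
        System.out.println(b1 + " " + b2); // 2 2
    }
}
```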

Inconsistent "Required type: byte provided: int" in Java

I stored integers in byte arrays, but suddenly I got a "Required type: byte provided: int" error on some lines and not on others. So I tried to find out what was different; tests below:
byte b;
int integer = 12;
final int finalInteger = 12;
final int finalIntegerLO = 128; // 1000 0000
b = integer; //Required type byte provided int
b = finalInteger; //OK
b = finalIntegerLO; //Required type byte provided int
I guess having a final int with no '1' in the 2^7 place is OK? That gave me an idea: what happens if you combine it with bitwise operators? Now it makes much less sense to me...
b = finalIntegerLO & 0xf; //OK
is now OK...
but
b = integer & 0xf; //Required type byte provided int
is not??
Can someone explain to me why it behaves so differently?
Let's break down every line.
case 1:
b = integer
Here we are trying to convert an int into a byte, so the compiler asks us to explicitly typecast, like so (the value of integer may exceed the byte range):
b = (byte) integer;
case 2:
b = finalInteger;
This case is slightly different, since finalInteger is a constant whose value is fixed, and the compiler can tell beforehand whether it lies within the range of byte. If it does, the compiler is smart enough to convert the int to a byte without us having to explicitly typecast it.
case 3:
b = finalIntegerLO;
The range of byte is -128 to 127, so clearly we cannot convert this int to a byte; the compiler sees that finalIntegerLO is a constant whose value 128 is out of that range, so it reports an error.
To remove this error we can explicitly typecast (don't do it though, unless you intend the wrap-around), which will give us b = -128:
b = (byte) finalIntegerLO;
case 4:
b = finalIntegerLO & 0xf;
Here finalIntegerLO and 0xf are both constants, so the compiler can determine the result: it's 0, which is within the range of byte.
case 5:
b = integer & 0xf;
Here the value of integer can change before this line executes, so the compiler is not sure whether the result is within the range of byte, and it asks us to explicitly typecast, like so:
b = (byte) (integer & 0xf);
Again like case 3 you may get an unexpected result.
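The "unexpected results" from cases 3 and 5 can be checked concretely; a sketch showing the forced wrap-around and the masked case:

```java
public class WrapAroundDemo {
    public static void main(String[] args) {
        final int finalIntegerLO = 128;
        byte b = (byte) finalIntegerLO; // forced cast: wraps around to -128
        System.out.println(b);          // -128

        int integer = 200;
        b = (byte) (integer & 0xf);     // cast needed: integer is not a constant
        System.out.println(b);          // 8 (200 is 0b1100_1000; low 4 bits are 1000)
    }
}
```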
The error received for b = integer is "incompatible types: possible lossy conversion from int to byte". This statement requires an implicit type conversion from a wider data type (int) to a narrower data type (byte). That is not allowed, as there might be loss of information/precision with such conversions, and hence Java tries to avoid them. An alternative is to force/coerce the conversion using an explicit type cast as follows:
byte b;
int integer = 12;
b = (byte) integer;
Moving on,
final int finalInteger = 12;
final int finalIntegerLO = 128;
b = finalInteger works because the final keyword ensures that the variable finalInteger does not change its value. Since we have told the Java compiler that finalInteger cannot change, and it stores a value in the range a byte can hold (-128 to 127), the conversion from int to byte is possible.
b = finalIntegerLO does not work because the value of finalIntegerLO exceeds the range of a byte variable, i.e., -128 to 127.

Type mismatch with byte variable using ternary operator

int a = 10, b = 20;
byte x = (a>b) ? 10 : 20;
The preceding code produces a compile-time error saying: Type mismatch: cannot convert from int to byte.
It's weird that when I replace the expression (a>b) with (true) the code compiles successfully!
Also, when I replace the expression with literals (10>20) the code works!
Not only that, but when I explicitly cast 10, 20, or even the whole ternary expression, the code works too:
byte x = (a>b) ? (byte)10 : 20;
byte x = (a>b) ? 10 : (byte)20;
byte x = (byte)((a>b) ? 10 : 20);
What exactly is wrong with the expression (a>b)?
Note that the equivalent code using if-else works fine:
int a = 10, b = 20;
byte x;
if (a > b) {
    x = 10;
} else {
    x = 20;
}
Because when the condition is specified directly with numbers, like (10>20), the expression is evaluated at compile time itself and the resulting value is assigned to the byte variable. If you use an IDE you can see a "dead code" warning on such an expression:
byte x = (20>10) ? 10 : 20; // dead code: the compiler knows 20 is greater than 10 and assigns 10 to x
But when using the variables a and b, the compiler doesn't know their values at compile time, and the expression is evaluated at runtime. Since numeric values in Java are by default represented as int, it asks for an explicit cast:
byte x = (a>b) ? 10 : 20; //Type mismatch: cannot convert from int to byte
I would also suggest reading up on type casting with the ternary operator.
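The three working variants from the question can be collected in one sketch (with a constant-condition case for comparison):

```java
public class TernaryByteDemo {
    public static void main(String[] args) {
        int a = 10, b = 20;

        byte x1 = (a > b) ? (byte) 10 : (byte) 20; // both operands byte: result is byte
        byte x2 = (byte) ((a > b) ? 10 : 20);      // or cast the whole expression
        byte x3 = (10 > 20) ? 10 : 20;             // constant expression: folded at compile time

        System.out.println(x1 + " " + x2 + " " + x3); // 20 20 20
    }
}
```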
For the same reason you can do this:
int a = 10;
byte x = 10;
System.out.println("a: " + x);
but you can't do this:
int a = 10;
byte x = a; // java: incompatible types: possible lossy conversion from int to byte
System.out.println("a: " + x);
Direct assignment of an int literal to a byte is a narrowing conversion. It doesn't violate the type system because Java will happily narrow the constant 10 to a byte at compile time.
But if you tell the compiler that a is an int, it can't be treated as a byte (because an int isn't a byte).
Kind of similar to this, but not exactly.
I tried going through the language specification and a few other resources, but there was no exact answer on how the operand types are chosen by the compiler.
But assembling all the information, it seems that:
byte x = (a>b) ? 10 : 20;
produces the error because there's an implicit conversion of type int to byte.
In newer Java versions the compiler shows:
error: incompatible types: possible lossy conversion from int to byte
This is because the second and third operands in (a>b) ? 10 : 20 are considered to be int. Bare numeric constants are always evaluated as int, and they need to be explicitly cast down to byte.
Casting the result operands (second and third) or the whole expression to byte prevents the error, because it explicitly tells the compiler that the data loss of casting int to byte is acceptable.
Casting the second or third operand to byte explicitly says that every result of that ternary operator should be treated as a byte.
Writing something like:
int aa = 100000000;
byte zz = aa;
//or
double dd = 10.11d;
long xyx = dd;
will result in the same kind of error.
The reason behind all of this is that promoting a smaller primitive to a larger one doesn't affect the deterministic result of a program, but narrowing (e.g. int to short) or dropping the floating point can produce different results.
Note that declarations like these don't produce any error:
byte ooo = 100; // an error is thrown if the value assigned to ooo is higher than 127, because 127 is the max value for the byte type
int iii = ooo;
The line like:
byte x = (20>10) ? 10 : 20;
doesn't produce any compile-time error, because with explicit values provided in the condition, the compiler can simply evaluate it, which results in byte x = 10.
Dead code, a.k.a. an unreachable statement, can be produced in Java with:
try {
    throw new Exception();
} catch (Exception e) {
    throw new Exception(e);
    System.out.println(); // the compiler flags this line as unreachable
}
So, for a short conclusion: number literals/constants are evaluated as the int type if they aren't explicitly assigned to a specific type. byte x = 10; works because it explicitly assigns 10 to the byte type, and 10 is in the range of byte, so it doesn't lead to any data loss (assigning -129 or 10.1 throws an error).
The thing with byte x = (a>b) ? 10 : 20; is that the whole ternary expression isn't evaluated on the fly; the compiler doesn't know whether the values of a and b are changed somewhere else. Stating explicit numbers, or just true/false, in the condition of the ternary operator makes the result of the expression obvious to the compiler (and to the developer's eyes).
After a closer look into the spec of the conditional operator, it says:
The conditional operator ? : uses the boolean value of one expression to decide which of two other expressions should be evaluated.
Having stated this, the chosen result operand is evaluated ONLY after the condition is evaluated (the first result if true, the second if false).
An explicit condition like 20>10 or true is evaluated at compile time, so the exact, explicit value is assigned in the case of byte x = ....
Why is something as small as:
int a = 10, b = 20;
byte x = (a>b) ? 10 : 20;
not evaluated at compile time, instead throwing an error?
As already stated, number literals are evaluated as int, and the assignment to the x variable above isn't explicit (remember, the chosen result operand is evaluated only after the condition is evaluated).
The compiler isn't a full static code analyzer; requesting it to do this could result in overcomplicated byte code.
Imagine a more complex example where the values of a and b are initialized in code, but several if statements could change the values assigned to a or b. The compiler would first have to check whether any of those if statements can be evaluated at compile time, to determine whether there is a compile-time value for each of those variables, and then produce conditions for the ternary operator based on whether one of those values has changed. That would produce far more complex code than it would without such analysis.
This is a very simple example, so to the developer it can look baffling, but for the compiler, guarding against such a case would be too much overhead, because the compiler can't tell whether the provided code is simple, and cannot evaluate the variable values for conditions at compile time.
This may be redundant with all the other answers, but it's how I convinced myself. a.compareTo(b) can clearly not be evaluated at compile time, and integer literals always default to int:
String a = "a", b = "b";
byte x = (a.compareTo(b) > 1) ? 10 : 20;
produces the same compile time error:
error: incompatible types: possible lossy conversion from int to byte
byte x = (a.compareTo(b) > 1) ? 10 : 20;
^
Floating-point literals default to double, so this also produces a similar error:
float x = (a.compareTo(b) > 1) ? 10.0 : 20.0;

Possible lossy conversion from double to int when squaring an integer [duplicate]

