Lossy conversion int to double method call [duplicate] - java

New Java programmers are often confused by compilation error messages like:
"incompatible types: possible lossy conversion from double to int"
for this line of code:
int squareRoot = Math.sqrt(i);
In general, what does the "possible lossy conversion" error message mean, and how do you fix it?

First of all, this is a compilation error. If you ever see it in an exception message at runtime, it is because you have run a program with compilation errors¹.
The general form of the message is this:
"incompatible types: possible lossy conversion from <type1> to <type2>"
where <type1> and <type2> are both primitive numeric types; i.e. one of byte, char, short, int, long, float or double.
This error happens when your code attempts to do an implicit conversion from <type1> to <type2> but the conversion could be lossy.
In the example in the question:
int squareRoot = Math.sqrt(i);
the sqrt method produces a double, but a conversion from double to int is potentially lossy.
What does "potentially lossy" mean?
Well, let's look at a couple of examples.
A conversion of a long to an int is a potentially lossy conversion because there are long values that do not have a corresponding int value. For example, any long value that is greater than 2^31 - 1 is too large to be represented as an int. Similarly, any number less than -2^31 is too small.
A conversion of an int to a long is NOT a lossy conversion because every int value has a corresponding long value.
A conversion of a float to a long is a potentially lossy conversion because there are float values that are outside of the range that can be represented as long values. Such numbers are (lossily) converted to Long.MAX_VALUE or Long.MIN_VALUE, as are the infinities; NaN converts to zero.
A conversion of a long to a float is NOT a lossy conversion because every long value has a corresponding float value. (The converted value may be less precise, but "lossiness" doesn't mean that ... in this context.)
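These behaviors can be checked directly. A minimal sketch (class and method names are my own, for illustration):

```java
// Demonstrates which conversions lose information and which cannot.
class LossyExamples {
    static int longToInt(long l) { return (int) l; }       // potentially lossy: explicit cast required
    static long intToLong(int i) { return i; }             // implicit widening: never lossy
    static long floatToLong(float f) { return (long) f; }  // potentially lossy: clamps / zeroes

    public static void main(String[] args) {
        System.out.println(longToInt(1L << 40));     // high-order bits discarded: prints 0
        System.out.println(floatToLong(Float.NaN));  // NaN converts to 0
        System.out.println(floatToLong(1e30f));      // out of range: clamps to Long.MAX_VALUE
    }
}
```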
These are all the conversions that are potentially lossy:
short to byte or char
char to byte or short
int to byte, short or char
long to byte, short, char or int
float to byte, short, char, int or long
double to byte, short, char, int, long or float.
How do you fix the error?
The way to make the compilation error go away is to add a typecast. For example:
int i = 47;
int squareRoot = Math.sqrt(i); // compilation error!
becomes
int i = 47;
int squareRoot = (int) Math.sqrt(i); // no compilation error
But is that really a fix? Consider that the square root of 47 is 6.8556546004 ... but squareRoot will get the value 6. (The conversion will truncate, not round.)
And what about this?
byte b = (byte) 512;
That results in b getting the value 0. Converting from a larger int type to a smaller int type is done by masking out the high order bits, and the low-order 8 bits of 512 are all zero.
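Both effects are easy to demonstrate (the class name is mine, for illustration):

```java
class TruncationDemo {
    static int isqrt(int i) { return (int) Math.sqrt(i); }  // truncates toward zero
    static byte narrow(int i) { return (byte) i; }          // keeps only the low-order 8 bits

    public static void main(String[] args) {
        System.out.println(isqrt(47));   // 6, not 7: the cast truncates rather than rounds
        System.out.println(narrow(512)); // 0: the low-order 8 bits of 512 are all zero
    }
}
```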
In short, you should not simply add a typecast, because it might not do the correct thing for your application.
Instead, you need to understand why your code needs to do a conversion:
Is this happening because you have made some other mistake in your code?
Should the <type1> be a different type, so that a lossy conversion isn't needed here?
If a conversion is necessary, is the silent lossy conversion that the typecast will do the correct behavior?
Or should your code be doing some range checks and dealing with incorrect / unexpected values by throwing an exception?
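If silent truncation is not acceptable, a checked conversion is a common pattern. A minimal sketch (the class and method names are my own; Math.toIntExact has been in java.lang.Math since Java 8):

```java
// Checked conversions that fail loudly instead of truncating silently.
class SafeConvert {
    static int toIntChecked(long l) {
        return Math.toIntExact(l);  // throws ArithmeticException on overflow
    }

    static byte toByteChecked(int i) {
        if (i < Byte.MIN_VALUE || i > Byte.MAX_VALUE) {
            throw new ArithmeticException("value out of byte range: " + i);
        }
        return (byte) i;  // safe: range was checked above
    }
}
```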
"Possible lossy conversion" when subscripting.
First example:
for (double d = 0; d < 10.0; d += 1.0) {
    System.out.println(array[d]); // <<-- possible lossy conversion
}
The problem here is that an array index value must be an int. So d has to be converted from double to int. In general, using a floating point value as an index doesn't make sense. Either someone is under the impression that Java arrays work like (say) Python dictionaries, or they have overlooked the fact that floating-point arithmetic is often inexact.
The solution is to rewrite the code to avoid using a floating point value as an array index. (Adding a type cast is probably an incorrect solution.)
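For example, a rewritten version using an int index might look like this (class and method names are my own):

```java
class IndexFix {
    static int sumAll(int[] array) {
        int sum = 0;
        for (int i = 0; i < array.length; i++) {  // int index: no conversion needed
            sum += array[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumAll(new int[] {1, 2, 3, 4}));  // prints 10
    }
}
```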
Second example:
for (long l = 0; l < 10; l++) {
    System.out.println(array[l]); // <<-- possible lossy conversion
}
This is a variation of the previous problem, and the solution is the same. The difference is that the root cause is that Java arrays are limited to 32 bit indexes. If you want an "array like" data structure which has more than 2^31 - 1 elements, you need to define or find a class to do it.
"Possible lossy conversion" in method or constructor calls
Consider this:
public class User {
    String name;
    short age;
    int height;
    public User(String name, short age, int height) {
        this.name = name;
        this.age = age;
        this.height = height;
    }
    public static void main(String[] args) {
        User user1 = new User("Dan", 20, 190);
    }
}
Compiling the above with Java 11 gives the following:
$ javac -Xdiags:verbose User.java
User.java:20: error: constructor User in class User cannot be applied to given types;
User user1 = new User("Dan", 20, 190);
^
required: String,short,int
found: String,int,int
reason: argument mismatch; possible lossy conversion from int to short
1 error
The problem is that the literal 20 is an int, and the corresponding parameter in the constructor is declared as a short. Converting an int to a short is potentially lossy.
"Possible lossy conversion" in a return statement.
Example:
public int compute() {
    long result = 42L;
    return result; // <<-- possible lossy conversion
}
A return (with a value / expression) could be thought of as an "assignment to the return value". But no matter how you think about it, it is necessary to convert the value supplied to the actual return type of the method. Possible solutions are adding a typecast (which says "I acknowledge the lossy-ness") or changing the method's return type.
"Possible lossy conversion" due to promotion in expressions
Consider this:
byte b1 = 0x01;
byte mask = 0x0f;
byte result = b1 & mask; // <<-- possible lossy conversion
The compiler will tell you that there is a "possible lossy conversion from int to byte". This is actually a variation of the first example. The potentially confusing thing is understanding where the int comes from.
The answer to that is it comes from the & operator. In fact all of the arithmetic and bitwise operators for integer types will produce an int or long, depending on the operands. So in the above example, b1 & mask is actually producing an int, but we are trying to assign that to a byte.
To fix this example we must type-cast the expression result back to a byte before assigning it.
byte result = (byte) (b1 & mask);
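A sketch of the fix, plus the compound-assignment variant that hides the cast (names are my own; the implicit cast in compound assignment is specified in JLS 15.26.2):

```java
class PromotionDemo {
    static byte maskLowNibble(byte b) {
        byte mask = 0x0f;
        return (byte) (b & mask);  // b & mask produces an int; cast it back to byte
    }

    public static void main(String[] args) {
        byte b1 = 0x3a;
        System.out.println(maskLowNibble(b1));  // low nibble of 0x3a is 0x0a == 10
        b1 &= 0x0f;  // also compiles: compound assignment includes an implicit cast
        System.out.println(b1);
    }
}
```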
"Possible lossy conversion" when assigning literals
Consider this:
int a = 21;
byte b1 = a; // <<-- possible lossy conversion
byte b2 = 21; // OK
What is going on? Why is one version allowed but the other one isn't? (After all they "do" the same thing!)
First of all, the JLS states that 21 is a numeric literal whose type is int. (There are no byte or short literals.) So in both cases we are assigning an int to a byte.
In the first case, the reason for the error is that not all int values will fit into a byte.
In the second case, the compiler knows that 21 is a value that will always fit into a byte.
The technical explanation is that in an assignment context, it is permissible to perform a primitive narrowing conversion to a byte, char or short if the following are all true:
The value is the result of a compile time constant expression (which includes literals).
The type of the expression is byte, short, char or int.
The constant value being assigned is representable (without loss) in the domain of the "target" type.
Note that this only applies with assignment statements, or more technically in assignment contexts. Thus:
Byte b4 = new Byte(21); // incorrect
gives a compilation error.
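One way to see the rule in action: a final local variable initialized with a constant expression is itself a constant expression, so it can be narrowed implicitly (a sketch; the class and method names are my own):

```java
class ConstantNarrowing {
    static int demo() {
        final int c = 21;  // compile-time constant: final, with a constant initializer
        byte b1 = c;       // OK: the constant 21 provably fits in a byte
        byte b2 = 21;      // OK: the same rule applied to the literal directly
        // int a = 21; byte b3 = a;  // would NOT compile: a is not a constant
        return b1 + b2;
    }

    public static void main(String[] args) {
        System.out.println(demo());  // prints 42
    }
}
```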
1 - For instance, the Eclipse IDE has an option which allows you to ignore compilation errors and run the code anyway. If you select this, the IDE's compiler will create a .class file where the method with the error will throw an unchecked exception if it is called. The exception message will mention the compilation error message.

Related

Finding the absolute value of a Byte variable in Kotlin or Java?

Why isn't there any function in the standard library of Kotlin/Java for taking the absolute value of a Byte/byte variable? Am I missing something?
Math.abs() is only defined for int, long, double and float.
For context: in the audio world you can easily run into byte arrays representing the amplitude. I'm interested in calculating the average of the absolute values of a byte array. For example, see this listener related to Visualizer in Android.
I know I can cast it to an integer and take the absolute value of that, but I would still be interested in why this is not predefined.
The operations in java.lang.Math are in line with all other arithmetic operations in Java. Integer operations always work in either 64-bit long or 32-bit int.
As stated in JLS, §4.2.2. Integer Operations
If an integer operator other than a shift operator has at least one operand of type long, then the operation is carried out using 64-bit precision, and the result of the numerical operator is of type long. If the other operand is not long, it is first widened (§5.1.5) to type long by numeric promotion (§5.6).
Otherwise, the operation is carried out using 32-bit precision, and the result of the numerical operator is of type int. If either operand is not an int, it is first widened to type int by numeric promotion.
In other words, not even the following, equivalent to abs, would compile:
byte a = 42, absA = a < 0? -a: a;
as the numeric operation -a will promote a to int before negating.
Note that a cast of the result to byte would not be a lossless operation here. The byte datatype has a value range from -128 to +127, so if the value is -128, its absolute value +128 is outside the byte value range and a cast to byte would wrap around to -128 again.
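That overflow is easy to reproduce (a sketch; the class name is mine):

```java
class ByteAbsOverflow {
    public static void main(String[] args) {
        byte b = Byte.MIN_VALUE;          // -128
        byte wrong = (byte) Math.abs(b);  // abs yields the int 128; the cast wraps it
        System.out.println(wrong);        // prints -128, not +128
    }
}
```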
Therefore, to have a correct and efficient calculation, do as always in Java when it comes to byte, short, or char calculations: calculate everything using int and only cast the final result back to your data type. When you want to calculate the average, you have to calculate the sum using int anyway (or even long if you have more than 16777215 array elements).
byte[] array = { 1, -1, -128, 127 }; // e.g. test case
int sum = 0;
for (byte b : array) sum += Math.abs(b);
int average = sum / array.length;
// if you really need a byte result
byte byteAverage = average == 128 ? 127 : (byte) average;
I don’t know about Kotlin, but in Java, the automatic promotion to int also works if the operand is of type Byte, so you don’t need to “cast it to an integer” to call Math.abs(int). You only have to deal with the fact that the result will be an int, as with all arithmetic operations on byte, short, char, or their wrapper types.
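The averaging idea above can be packaged as a method like this (a sketch; the class and method names are my own):

```java
class ByteAverage {
    static int averageOfAbs(byte[] array) {
        long sum = 0;  // long guards against overflow for very large arrays
        for (byte b : array) {
            sum += Math.abs(b);  // b is promoted to int, so abs(-128) is 128, no overflow
        }
        return (int) (sum / array.length);
    }
}
```

For the test case { 1, -1, -128, 127 } the sum of absolute values is 257, so the integer average is 64.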
In Java, byte is signed, ranging from -128 to 127. The corresponding unsigned int value, 0xFF & b, lies between 128 and 255 for negative bytes, and between 0 and 127 otherwise.
Math.abs may be irrelevant here, as the byte values are probably meant to be unsigned.
int[] bytesToInt(byte[] bs) {
    int[] is = new int[bs.length];
    Arrays.setAll(is, i -> bs[i] & 0xFF); // Arrays.fill takes a value, not a lambda; setAll is needed
    return is;
}
byte byteAbs(byte b) {
    return (byte) (b >= 0 ? b : b == -128 ? 127 : -b); // the cast is needed: -b is an int
}
byteAbs - given for completeness - reduces the range to 7 bits, and has the artefact that -128 maps to 127, as there is no +128.

Type mismatch with byte variable using ternary operator

int a = 10, b = 20;
byte x = (a>b) ? 10 : 20;
The preceding code produces a compile-time error: "Type mismatch: cannot convert from int to byte".
It's weird that when I replace the expression (a>b) with (true) the code compiles successfully!
Also, when I replace the expression with literals (10>20) the code works!
Not only that, but when I explicitly type cast 10, 20, or even the whole ternary operator, the code works too!
byte x = (a>b) ? (byte)10 : 20;
byte x = (a>b) ? 10 : (byte)20;
byte x = (byte)((a>b) ? 10 : 20);
What is exactly wrong with the expression (a>b)?
Note that the equivalent code using if-else works fine.
int a = 10, b = 20;
byte x;
if (a > b) {
    x = 10;
} else {
    x = 20;
}
When the condition is specified directly with numbers, like (10>20), the expression is evaluated at compile time and the resulting constant is assigned to the byte variable. If you use an IDE you can see a "dead code" warning on this expression:
byte x = (20>10) ? 10 : 20; // dead code: the compiler knows 20 is greater than 10 and assigns 10 to x
But when using the variables a and b, the compiler doesn't know their values at compile time, so the expression is evaluated at runtime. Since numeric literals in Java are ints by default, an explicit type cast is required:
byte x = (a>b) ? 10 : 20; //Type mismatch: cannot convert from int to byte
I would also suggest reading about type casting with the ternary operator.
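To illustrate, the cast-one-operand fix can be wrapped in a method (a sketch; the class and method names are my own):

```java
class TernaryFix {
    static byte pick(int a, int b) {
        // Casting one operand to byte gives the whole conditional the type byte,
        // because the other operand is an int constant that fits in a byte (JLS 15.25).
        return (a > b) ? (byte) 10 : 20;
    }
}
```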
For the same reason you can do this:
int a = 10;
byte x = 10;
System.out.println("a: " + x);
but you can't do this:
int a = 10;
byte x = a; // <-- java: incompatible types: possible lossy conversion from int to byte
System.out.println("a: " + x);
Direct assignment of an int literal to a byte is a narrowing conversion of a compile-time constant. It doesn't violate the type system, because the compiler can check that the literal 10 fits in a byte.
But if you tell it a is an int, it can't be treated as a byte (because a byte isn't an int).
This is somewhat similar to this, but not exactly.
I tried going through the language specification and a few other resources, but there was no exact answer on how the operand types are chosen by the compiler.
Assembling all the info, it seems that:
byte x = (a>b) ? 10 : 20;
produces the error because there is an implicit conversion from int to byte.
In newer Java versions the compiler shows:
error: incompatible types: possible lossy conversion from int to byte
This is because the second and third operands in (a>b) ? 10 : 20 are considered ints. Bare numeric constants are always evaluated as int, and they need to be explicitly downcast to byte.
Casting the second or third operand to byte, or casting the whole expression, prevents the error: it tells the compiler explicitly that the possible data loss in converting int to byte is accepted, and that the result of the ternary operator should be treated as a byte.
Writing something like:
int aa = 100000000;
byte zz = aa;
// or
double dd = 10.11d;
long xyx = dd;
will result in the same kind of error.
The reason behind all of this is that promoting a smaller primitive to a larger one doesn't affect the result of a program, but downcasting (e.g. int to short) or dropping the floating point can produce different results.
Note that declarations like these don't produce any error:
byte ooo = 100; // a compile error is thrown if the value is higher than 127,
                // because 127 is the max value for the byte type
int iii = ooo;
A line like:
byte x = (20>10) ? 10 : 20;
doesn't produce any compile or runtime error, because with explicit values in the condition the compiler can simply evaluate it, which results in byte x = 10;.
Dead code, aka an unreachable statement, in Java could be produced with:
try {
    throw new Exception();
} catch (Exception e) {
    throw new Exception(e);
    System.out.println(); // the compiler flags this line as unreachable
}
So, for a short conclusion: number literals/constants are evaluated as int unless they are explicitly assigned to a specific type. byte x = 10; works because it explicitly assigns 10 to a byte, and 10 is within the range of byte, so there is no data loss (assigning -129 or 10.1 throws an error).
The thing with byte x = (a>b) ? 10 : 20; is that the whole ternary expression isn't evaluated on the fly; the compiler doesn't know whether the values of a and b are changed somewhere else. Stating explicit numbers, or just true/false, in the condition makes the result of the expression obvious to the compiler (and to developer eyes).
After a closer look into the spec of the conditional operator, it says:
The conditional operator ? : uses the boolean value of one expression to decide which of two other expressions should be evaluated.
Having this stated, the chosen result operand is evaluated ONLY when the condition is evaluated (the first result if true, the second if false).
An explicit condition like 20>10 or true is evaluated at compile time, so the exact, explicit value is assigned in byte x = ....
Why is something as small as:
int a = 10, b = 20;
byte x = (a>b) ? 10 : 20;
not evaluated at compile time, instead throwing an error?
As already stated, number literals are evaluated as int, and the above assignment to x is not an assignment of a constant (remember that the chosen result operand is only evaluated after the condition is evaluated).
The compiler isn't a full static code analyzer; requiring it to do that kind of analysis could result in overcomplicated byte code.
Imagine a more complex example where the values of a and b are initialized in code, but there are several if statements which could change them. The compiler would first have to check whether any of those if statements can be evaluated at compile time, to determine whether there is a compile-time value for each variable, and then decide about the ternary/conditional operator based on whether one of those values may have changed. That is far more complex analysis than it performs today.
This is a very simple example, so to a developer it can look baffling, but preventing this case would be too much overhead for the compiler: it can't tell whether the provided code is simple or not, and cannot evaluate the variable values for conditions at compile time.
This may be redundant with all the other answers, but it's how I convinced myself. a.compareTo(b) clearly cannot be evaluated at compile time, and integer literals always default to int:
String a = "a", b = "b";
byte x = (a.compareTo(b) > 1) ? 10 : 20;
produces the same compile time error:
error: incompatible types: possible lossy conversion from int to byte
byte x = (a.compareTo(b) > 1) ? 10 : 20;
^
Floating-point literals default to double, so this also produces a similar error:
float x = (a.compareTo(b) > 1) ? 10.0 : 20.0;


Why is long casted to double in Java?

SSCCE:
public class Test {
    public static void main(String[] args) {
        Long a = new Long(1L);
        new A(a);
    }
    static class A {
        A(int i) {
            System.out.println("int");
        }
        A(double d) {
            System.out.println("double");
        }
    }
}
Output:
double
No compilation error is printed; it works fine and calls the double-parameter constructor. But why?
It's down to the rules of type promotion: a long is converted to a double in preference to an int.
A long can always fit into a double, although precision can be lost if the long is larger than 2^53 (the limit of a double's 53-bit significand). So the compiler picks the double constructor as a better fit than the int one.
(The compiler doesn't make a dynamic check: the fact that the particular value 1L would fit into an int doesn't matter.)
Converting long to int is a narrowing primitive conversion because it can lose the overall magnitude of the value. Converting long to double is a widening primitive conversion.
The compiler will automatically generate assignment context conversion for arguments. That includes widening primitive conversion, but not narrowing primitive conversion. Because the method with an int argument would require a narrowing conversion, it is not applicable to the call.
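The same overload-resolution behavior is easy to reproduce with ordinary static methods (a sketch; the class and method names are my own):

```java
class OverloadDemo {
    static String pick(int i)    { return "int"; }
    static String pick(double d) { return "double"; }

    public static void main(String[] args) {
        long a = 1L;
        // pick(int) would need a narrowing long -> int conversion, which is not
        // allowed in a method invocation context; long -> double widening is allowed.
        System.out.println(pick(a));  // prints "double"
    }
}
```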
An int is 4 bytes, whereas long and double are 8 bytes, so there is an obvious chance of losing 4 bytes of data if the value is cast to an int. Primitives are always widened, never silently narrowed. As the comment from @Bathsheba mentioned, there is a chance of precision loss even with double, but the loss is much smaller than with int: a double has 52 explicitly stored significand bits, whereas an int offers only 32 bits in total. Hence the compiler chooses double instead of int.
Source: Wikipedia
Because a long doesn't "fit" in an int.
Check https://docs.oracle.com/javase/specs/jls/se7/html/jls-5.html

Byte arithmetic: How to subtract to a byte variable? [duplicate]

This question already has answers here:
Promotion in Java?
(5 answers)
Closed 9 years ago.
I'm getting an error when I'm trying to do something like this:
byte a = 23;
a = a - 1;
The compiler gives this error:
Test.java:8: possible loss of precision found : int required: byte
a = a - 1;
^
1 error
Casting doesn't solve the error...
Why doesn't the compiler let me do it?
Do I need to change the variable 'a' into an int?
Do it like this:
a = (byte)(a - 1);
When you subtract 1 from a, the result is an int value. So to assign the result back to a byte you need to do an explicit type cast.
In Java math, everything is promoted to at least an int before the computation. This is called binary numeric promotion (JLS 5.6.2), and it is why the compiler "found: int". To resolve this, cast the result of the entire expression back to byte:
a = (byte) (a - 1);
a = a - 1; // before the subtraction, a is promoted to int, and the result of 'a - 1' is an int, which can't be stored in a byte (byte = 8 bits, int = 32 bits)
That's why you'll have to cast it to a byte as follows:
a = (byte) (a - 1);
Do this:
a -= 1;
You don't even need an explicit cast; the compiler inserts it for you.
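To illustrate, the compound-assignment form behaves exactly like the explicitly cast version (a sketch; the class and method names are mine):

```java
class CompoundAssign {
    static byte decrement(byte a) {
        a -= 1;  // compiles: equivalent to a = (byte) (a - 1), the cast is implicit (JLS 15.26.2)
        return a;
    }
}
```

Note the implicit cast still truncates: decrement((byte) -128) wraps around to 127.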
Whether you should change the variable's type to int, nobody can say from only the information you provided.
A variable's type is determined by the task you plan to perform with it.
If your variable a counts the fingers on someone's hands, why would you use int? byte is more than enough for that.
