Why is long casted to double in Java? - java

SSCCE:
public class Test {
    public static void main(String[] args) {
        Long a = new Long(1L);
        new A(a);
    }

    static class A {
        A(int i) {
            System.out.println("int");
        }

        A(double d) {
            System.out.println("double");
        }
    }
}
Output:
double
There is no compilation error; it compiles fine and calls the double-parameter constructor. But why?

It comes down to the rules of method invocation conversion: the Long argument is unboxed to a long, and a long is widened to a double in preference to being narrowed to an int.
A long can always fit into a double, although precision may be lost if its magnitude exceeds 2^53. So the compiler picks the double constructor because the int constructor isn't even applicable.
(The compiler doesn't make a dynamic check in the sense of noticing that 1L would in fact fit into an int.)

Converting long to int is a narrowing primitive conversion because it can lose the overall magnitude of the value. Converting long to double is a widening primitive conversion.
The compiler will automatically generate assignment context conversion for arguments. That includes widening primitive conversion, but not narrowing primitive conversion. Because the method with an int argument would require a narrowing conversion, it is not applicable to the call.
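The rule above can be seen in a minimal sketch (class and method names here are illustrative, not from the question): given only an int overload and a double overload, a long argument makes the int overload inapplicable, so the double one is selected.

```java
public class WideningDemo {
    static String pick(int i)    { return "int"; }
    static String pick(double d) { return "double"; }

    public static void main(String[] args) {
        long v = 1L;
        // pick(int) is not applicable: long -> int would be a narrowing conversion.
        // pick(double) is applicable: long -> double is a widening conversion.
        System.out.println(pick(v)); // prints "double"
    }
}
```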

An int is 4 bytes, whereas long and double are 8 bytes.
So it is quite obvious that there is a chance of losing 4 bytes of data if a long is cast to an int. Data types are always upcast, never automatically narrowed. As the comment from @Bathsheba mentioned, there is a chance of data loss even when using double, but the loss is much smaller than with int.
A double uses 52 bits (plus one implicit bit) for storing significant digits, whereas an int would have only 32 bits available. Hence the compiler chooses double instead of int.
Source: Wikipedia

Because a long doesn't "fit" in an int.
Check https://docs.oracle.com/javase/specs/jls/se7/html/jls-5.html

Related

Lossy conversion int to double method call [duplicate]

New Java programmers are often confused by compilation error messages like:
"incompatible types: possible lossy conversion from double to int"
for this line of code:
int squareRoot = Math.sqrt(i);
In general, what does the "possible lossy conversion" error message mean, and how do you fix it?
First of all, this is a compilation error. If you ever see it in an exception message at runtime, it is because you have run a program with compilation errors1.
The general form of the message is this:
"incompatible types: possible lossy conversion from <type1> to <type2>"
where <type1> and <type2> are both primitive numeric types; i.e. one of byte, char, short, int, long, float or double.
This error happens when your code attempts to do an implicit conversion from <type1> to <type2> but the conversion could be lossy.
In the example in the question:
int squareRoot = Math.sqrt(i);
the sqrt method produces a double, but a conversion from double to int is potentially lossy.
What does "potentially lossy" mean?
Well, let's look at a couple of examples.
A conversion of a long to an int is a potentially lossy conversion because there are long values that do not have a corresponding int value. For example, any long value greater than 2^31 - 1 is too large to be represented as an int. Similarly, any number less than -2^31 is too small.
A conversion of an int to a long is NOT a lossy conversion because every int value has a corresponding long value.
A conversion of a float to a long is a potentially lossy conversion because there are float values outside the range that can be represented as long values. Such out-of-range values (including the infinities) are (lossily) converted to Long.MAX_VALUE or Long.MIN_VALUE, and NaN is converted to zero.
A conversion of a long to a float is NOT a lossy conversion because every long value has a corresponding float value. (The converted value may be less precise, but "lossiness" doesn't mean that ... in this context.)
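That precision point can be seen directly: long to float always compiles without a cast, yet the value can still change. A small sketch:

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        long l = 123456789L;
        float f = l;                      // widening conversion: no cast needed, no error
        System.out.println(f);            // prints 1.23456792E8 -- the nearest representable float
        System.out.println((long) f - l); // prints 3 -- the precision that was lost
    }
}
```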
These are all the conversions that are potentially lossy:
short to byte or char
char to byte or short
int to byte, short or char
long to byte, short, char or int
float to byte, short, char, int or long
double to byte, short, char, int, long or float.
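To make the list concrete, here is a small sketch of what actually happens to out-of-range values under two of these conversions: integer-to-integer narrowing wraps (keeps the low-order bits), while floating-point-to-integer conversion saturates at the target type's extremes.

```java
public class LossyDemo {
    public static void main(String[] args) {
        long big = 4_000_000_000L;       // larger than Integer.MAX_VALUE (2^31 - 1)
        System.out.println((int) big);   // prints -294967296: only the low 32 bits survive

        double huge = 1e19;              // larger than Long.MAX_VALUE
        System.out.println((long) huge); // prints 9223372036854775807 (Long.MAX_VALUE)
    }
}
```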
How do you fix the error?
The way to make the compilation error go away is to add a typecast. For example:
int i = 47;
int squareRoot = Math.sqrt(i); // compilation error!
becomes
int i = 47;
int squareRoot = (int) Math.sqrt(i); // no compilation error
But is that really a fix? Consider that the square root of 47 is 6.8556546004 ... but squareRoot will get the value 6. (The conversion will truncate, not round.)
And what about this?
byte b = (byte) 512;
That results in b getting the value 0. Converting from a larger integer type to a smaller one is done by discarding the high-order bits, and the low-order 8 bits of 512 are all zero.
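A quick sketch of that bit-level behavior: narrowing to byte keeps only the low-order 8 bits, reinterpreted as a signed value.

```java
public class ByteNarrowing {
    public static void main(String[] args) {
        System.out.println((byte) 512); // 512 = 0b10_0000_0000; low 8 bits are all 0 -> prints 0
        System.out.println((byte) 513); // low 8 bits are 0000_0001 -> prints 1
        System.out.println((byte) 200); // low 8 bits 1100_1000 have the sign bit set -> prints -56
    }
}
```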
In short, you should not simply add a typecast, because it might not do the correct thing for your application.
Instead, you need to understand why your code needs to do a conversion:
Is this happening because you have made some other mistake in your code?
Should the <type1> be a different type, so that a lossy conversion isn't needed here?
If a conversion is necessary, is the silent lossy conversion that the typecast will do the correct behavior?
Or should your code be doing some range checks and dealing with incorrect / unexpected values by throwing an exception?
"Possible lossy conversion" when subscripting.
First example:
for (double d = 0; d < 10.0; d += 1.0) {
    System.out.println(array[d]); // <<-- possible lossy conversion
}
The problem here is that an array index value must be an int. So d has to be converted from double to int. In general, using a floating-point value as an index doesn't make sense. Either someone is under the impression that Java arrays work like (say) Python dictionaries, or they have overlooked the fact that floating-point arithmetic is often inexact.
The solution is to rewrite the code to avoid using a floating point value as an array index. (Adding a type cast is probably an incorrect solution.)
Second example:
for (long l = 0; l < 10; l++) {
    System.out.println(array[l]); // <<-- possible lossy conversion
}
This is a variation of the previous problem, and the solution is the same. The difference is that the root cause is that Java arrays are limited to 32-bit indexes. If you want an "array like" data structure with more than 2^31 - 1 elements, you need to define or find a class to do it.
"Possible lossy conversion" in method or constructor calls
Consider this:
public class User {
    String name;
    short age;
    int height;

    public User(String name, short age, int height) {
        this.name = name;
        this.age = age;
        this.height = height;
    }

    public static void main(String[] args) {
        User user1 = new User("Dan", 20, 190);
    }
}
Compiling the above with Java 11 gives the following:
$ javac -Xdiags:verbose User.java
User.java:20: error: constructor User in class User cannot be applied to given types;
User user1 = new User("Dan", 20, 190);
^
required: String,short,int
found: String,int,int
reason: argument mismatch; possible lossy conversion from int to short
1 error
The problem is that the literal 20 is an int, and the corresponding parameter in the constructor is declared as a short. Converting an int to a short is lossy.
"Possible lossy conversion" in a return statement.
Example:
public int compute() {
long result = 42L;
return result; // <<-- possible lossy conversion
}
A return (with a value / expression) could be thought of as an "assignment to the return value". But no matter how you think about it, it is necessary to convert the value supplied to the actual return type of the method. Possible solutions are adding a typecast (which says "I acknowledge the lossiness") or changing the method's return type.
"Possible lossy conversion" due to promotion in expressions
Consider this:
byte b1 = 0x01;
byte mask = 0x0f;
byte result = b1 & mask; // <<-- possible lossy conversion
This will tell you that there is a "possible lossy conversion from int to byte". This is actually a variation of the first example. The potentially confusing thing is understanding where the int comes from.
The answer to that is it comes from the & operator. In fact all of the arithmetic and bitwise operators for integer types will produce an int or long, depending on the operands. So in the above example, b1 & mask is actually producing an int, but we are trying to assign that to a byte.
To fix this example we must type-cast the expression result back to a byte before assigning it.
byte result = (byte) (b1 & mask);
"Possible lossy conversion" when assigning literals
Consider this:
int a = 21;
byte b1 = a; // <<-- possible lossy conversion
byte b2 = 21; // OK
What is going on? Why is one version allowed but the other one isn't? (After all they "do" the same thing!)
First of all, the JLS states that 21 is a numeric literal whose type is int. (There are no byte or short literals.) So in both cases we are assigning an int to a byte.
In the first case, the reason for the error is that not all int values will fit into a byte.
In the second case, the compiler knows that 21 is a value that will always fit into a byte.
The technical explanation is that in an assignment context, it is permissible to perform a primitive narrowing conversion to a byte, char or short if the following are all true:
The value is the result of a compile time constant expression (which includes literals).
The type of the expression is byte, short, char or int.
The constant value being assigned is representable (without loss) in the domain of the "target" type.
Note that this only applies with assignment statements, or more technically in assignment contexts. Thus:
Byte b4 = new Byte(21); // incorrect
gives a compilation error.
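Those three conditions can be checked with a short sketch (the variable names are illustrative):

```java
public class ConstantNarrowing {
    public static void main(String[] args) {
        final int constant = 21; // a compile-time constant expression of type int
        byte b1 = constant;      // OK: constant, int-typed, and 21 fits in a byte
        byte b2 = 21;            // OK: a literal is also a constant expression

        int variable = 21;       // not final, so not a constant expression
        // byte b3 = variable;   // error: possible lossy conversion from int to byte
        // byte b4 = 128;        // error: 128 is not representable as a byte

        System.out.println(b1 + b2); // prints 42 (the operands are promoted to int)
    }
}
```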
1 - For instance, the Eclipse IDE has an option which allows you to ignore compilation errors and run the code anyway. If you select this, the IDE's compiler will create a .class file where the method with the error will throw an unchecked exception if it is called. The exception message will mention the compilation error message.

Java trunc() method equivalent

The java.lang.Math class has ceil(), floor(), and round() methods, but does not have a trunc() one.
At the same time, I see in practice that the .intValue() method (which actually does an (int) cast) does exactly what I expect from trunc() in its standard meaning.
However, I cannot find any concrete documentation confirming that intValue() is a full equivalent of trunc(), and this is strange from many points of view, for example:
The description "Returns the value of this Double as an int (by casting to type int)" from https://docs.oracle.com/javase/7/docs/api/java/lang/Double.html does not say anything about it "returning the integer part of the fractional number" or the like.
The article What is .intValue() in Java? does not say anything about it behaving like trunc().
All my searches for "Java trunc method" and the like turned up nothing, as if I am the only one searching for trunc() and am missing something very common that everyone knows.
Can I somehow get confirmation that I can safely use intValue() in order to get fractional numbers rounded in "trunc" mode?
So the question becomes: is casting a double to an int equal to truncation?
The Java Language Specification may have the answer. I'll quote:
specific conversions on primitive types are called the narrowing primitive conversions:
[...]
float to byte, short, char, int, or long
double to byte, short, char, int, long, or float
A narrowing primitive conversion may lose information about the overall magnitude of a numeric value and may also lose precision and range.
[...]
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to [...] an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Which is described in IEEE 754-1985.
You can use floor and ceil to implement trunc
public static double trunc(double value) {
    return value < 0 ? Math.ceil(value) : Math.floor(value);
}
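A quick check of that helper against a plain cast: both round toward zero, which is why floor alone is not enough for negative inputs.

```java
public class TruncCheck {
    // trunc() as sketched above: round toward zero
    static double trunc(double value) {
        return value < 0 ? Math.ceil(value) : Math.floor(value);
    }

    public static void main(String[] args) {
        System.out.println(trunc(1.9));       // prints 1.0
        System.out.println(trunc(-1.9));      // prints -1.0
        System.out.println((long) -1.9);      // prints -1 -- the cast also rounds toward zero
        System.out.println(Math.floor(-1.9)); // prints -2.0 -- floor rounds toward negative infinity
    }
}
```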
With Google Guava DoubleMath#roundToInt() you can convert that result into an int:
public static int roundToInt(double x, RoundingMode mode) {
    double z = roundIntermediate(x, mode);
    checkInRangeForRoundingInputs(
        z > MIN_INT_AS_DOUBLE - 1.0 & z < MAX_INT_AS_DOUBLE + 1.0, x, mode);
    return (int) z;
}

private static final double MIN_INT_AS_DOUBLE = -0x1p31;
private static final double MAX_INT_AS_DOUBLE = 0x1p31 - 1.0;

Possible lossy conversion from double to int when squaring an integer [duplicate]


Why casting of primitive data types truncate floating-point values?

Casts of primitive types are most often used to convert floating-point values to integers. When we do this, the fractional part of the floating-point value is simply truncated. Why?
public class Chaz {
    public static void main(String[] args) {
        double x = 1234.5678;
        long g = (long) x;
        System.out.println(g);
    }
}
From the docs:
The long data type is a 64-bit two's complement integer. The signed long has a minimum value of -2^63 and a maximum value of 2^63-1.
It does not store decimal precision values. Please see here.
long stands for long integer. I understand that the concern is data loss, but an integer simply cannot store non-integers.
One can decide to store a multiplication of the number, like this:
public class Chaz {
    public static void main(String[] args) {
        double x = 1234.5678;
        long g = (long) (x * 10000);
        System.out.println(g);
    }
}
and then know that g is actually 10000 times the real value. Or one can round it if the nearest integer is needed. We can't tell which is right without additional information, but what we CAN tell is that this is normal and expected behavior.
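The alternatives mentioned above can be sketched side by side. (Math.round is used for the scaled value here to avoid any off-by-one from floating-point representation error in the product.)

```java
public class CastOptions {
    public static void main(String[] args) {
        double x = 1234.5678;
        System.out.println((long) x);              // prints 1234 -- the cast truncates the fraction
        System.out.println(Math.round(x));         // prints 1235 -- rounds to the nearest long
        System.out.println(Math.round(x * 10000)); // prints 12345678 -- fixed-point scaling
    }
}
```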

int to double or double to int

So I basically want to know: does Java automatically turn an int into a double, or a double into an int?
To make this more clear look at this example:
say we are given this method header:
public static void doThis (int n, double x)
and in the main method we make a call like this:
doThis(0.5, 2);
Is this an invalid call?
how about this?
doThis(3L,4);
these last two
doThis(3,4);
doThis(2.0,1.5);
Thanks
Values of type int can be implicitly converted to double, but not vice versa.
Given this declaration:
public static void doThis (int n, double x)
these are legal:
doThis(1, 0.5); // call doThis with an int and a double
doThis(1, (double)2); // second int is converted to a double, then passed
doThis(1, 2); // same as above; the compiler automatically inserts the cast
doThis((int)3.5, 0.5); // first double is converted to an int, then passed
doThis((int)3.5, 42); // the double is converted to an int, and the int is converted to a double
and these are not:
doThis(5.1, 0.5); // the compiler will not automatically convert double to int
doThis(5L, 0.5); // nor will it convert long to int
The intention behind the automatic conversion rules is that the compiler will not automatically convert types if doing so could lose data. For example, (int)4.5 is 4, not 4.5 - the fractional part is ignored.
You can pass an int literal where a double is required, but not vice versa. If you try to compile the following example, you can see for yourself.
public class MWE {
    public static void doThis(int n, double x) {
    }

    public static void main(String[] args) {
        doThis(0.5, 2);   // Compiler error
        doThis(3L, 4);    // Compiler error
        doThis(3, 4);     // No error
        doThis(2.0, 1.5); // Compiler error
    }
}
You can do that: you can pass an int when a double is needed (it gets cast by the compiler).
But you can't pass a double in place of an int.
More info over here.
When passing values of different types, there are a couple of rules of thumb:
Floating points cannot be substituted for integers, but the reverse is possible.
Primitives of the same kind (integer or floating point) with a higher bit depth cannot be substituted for ones with a lower bit depth, but the reverse is possible.
Substitution is just an automatic cast by the Java compiler. You can always cast manually.
To clarify, the floating point types are float and double, while the integer types are byte, short, int, and long.
In this case, double is a floating point and int is an integer. You can substitute int in place of a double, but not vice versa.
In a method invocation, each argument value can have any of these conversions applied to it (JLS 5.3):
5.3. Method Invocation Conversion
Method invocation contexts allow the use of one of the following:
an identity conversion (§5.1.1)
a widening primitive conversion (§5.1.2)
a widening reference conversion (§5.1.5)
a boxing conversion (§5.1.7) optionally followed by widening reference conversion
an unboxing conversion (§5.1.8) optionally followed by a widening primitive conversion.
In this example, we're dealing with a primitive widening conversion:
5.1.2. Widening Primitive Conversion
19 specific conversions on primitive types are called the widening primitive conversions:
byte to short, int, long, float, or double
short to int, long, float, or double
char to int, long, float, or double
int to long, float, or double
long to float or double
float to double
As you can see in the list, going from an int to a double is perfectly fine, but the opposite is not true.
