I was writing my own implementation of the power function and I discovered some weird results that occur at around Integer.MAX_VALUE, which I'm not sure why they occur.
This is my implementation:
public static long power(long x, long y) {
    int result = 1;
    while (y > 0) {
        if ((y & 1) == 0) {
            x *= x;
            y >>>= 1;
        } else {
            result *= x;
            y--;
        }
    }
    return result;
}
When the following code is run,
System.out.println(fastPower(2, 31));
System.out.println(Math.pow(2, 31));
System.out.println((long)Math.pow(2, 31));
System.out.println((int)Math.pow(2, 31));
The results, which I do not understand, are as follows.
-2147483648
2.147483648E9
2147483648
2147483647
This further confuses me when shorts are used:
System.out.println(fastPower(2, 15));
System.out.println(Math.pow(2, 15));
System.out.println((int)Math.pow(2, 15));
System.out.println((short)Math.pow(2,15));
32768
32768.0
32768
-32768
These are the answers that I would expect, but they seem inconsistent with the results from ints.
The first three outputs from both int and short are easy to explain:
-2147483648 // your method's result variable is an int, so it overflows
2.147483648E9 // Math.pow returns a double, so it is formatted like this
2147483648 // double cast to a long; 2147483648 is inside the possible range for long
32768 // your method's int result holds 32768, which is inside the possible range for int
32768.0 // Math.pow returns a double, so it is formatted like this
32768 // double cast to an int; 32768 is inside the possible range for int
The hard to explain bit is the fourth result. Shouldn't System.out.println((int)Math.pow(2, 31)); print -2147483648 as well?
The trick here is how Java does a conversion from double to int. According to the spec, this is known as a narrowing primitive conversion (§5.1.3):
22 specific conversions on primitive types are called the narrowing
primitive conversions:
short to byte or char
char to byte or short
int to byte, short, or char
long to byte, short, char, or int
float to byte, short, char, int, or long
double to byte, short, char, int, long, or float
This is how a double to int conversion is carried out (bolded by me):
1. In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
a. If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
b. Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
a. The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
b. The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
2. In the second step:
If T is int or long, the result of the conversion is the result of the first step.
If T is byte, char, or short, the result of the conversion is the result of a narrowing conversion to type T (§5.1.3) of the result of the first step.
The first step changes the double 2147483648.0 to the largest representable int value, 2147483647. This is why in the int case, 2147483647 is printed. In the short case, the first step produces the int value 32768, and the second step then narrows it to a short, like this:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T.
This is why the short overflowed, but the int did not!
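To see both rules in action, here is a minimal sketch; the values in the comments are simply what the rules quoted above predict:

public class NarrowingDemo {
    public static void main(String[] args) {
        double big = Math.pow(2, 31);   // 2147483648.0, one above Integer.MAX_VALUE
        double small = Math.pow(2, 15); // 32768.0, fits in an int

        // step 1 clamps values that do not fit in an int:
        System.out.println((int) big);      // 2147483647 (Integer.MAX_VALUE)

        // step 1 succeeds, step 2 then drops the high bits:
        System.out.println((short) small);  // -32768

        // both steps at once:
        System.out.println((short) big);    // step 1 clamps to 2147483647, step 2 keeps the low 16 bits (0xFFFF), giving -1
    }
}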
Assuming power() and fastPower() are the same, fastPower(2, 31) returns -2147483648 because the result variable is an int, even though the parameters and return type are all long.
Math.pow() returns a double, so casting the result to an integral type (long, int, short, byte, char) follows the rules of JLS §5.1.3, Narrowing Primitive Conversion, quoted below.
Math.pow(2, 31) is 2147483648.0. When cast to long, it is the same value, i.e. 2147483648. When cast to int, however, the value is too large, so the result is Integer.MAX_VALUE, i.e. 2147483647, as highlighted in the quote below.
Math.pow(2, 15) is 32768.0. When cast to int, it is the same value, i.e. 32768. When cast to short, however, the value is first narrowed to int, then narrowed to short by discarding the higher bits (see the second quote below), resulting in numeric overflow to -32768.
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
In the second step:
If T is int or long, the result of the conversion is the result of the first step.
If T is byte, char, or short, the result of the conversion is the result of a narrowing conversion to type T (§5.1.3) of the result of the first step.
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
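If it helps, here is a small sketch of that second step done by hand; the hex string is only there to make the discarded bits visible:

int v = (int) Math.pow(2, 15);              // 32768, i.e. 0x00008000
short s = (short) v;                        // second step: keep only the low 16 bits (0x8000)
System.out.println(Integer.toHexString(v)); // 8000
System.out.println(s);                      // -32768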
Why isn't there any function in the standard library of Kotlin/Java for taking the absolute value of a Byte/byte variable? Am I missing something?
Math.abs() is only defined for int, long, double and float.
For context: in the audio world you can easily run into byte arrays representing amplitude. I'm interested in calculating the average of the absolute values of a byte array. For example, see this listener related to Visualizer in Android.
I know I can cast it to an integer and take the absolute value of that, but I would still be interested in why this is not predefined.
The operations in java.lang.Math are in line with all other arithmetic operations in Java: integer operations always work in either 64-bit long or 32-bit int.
As stated in JLS, §4.2.2. Integer Operations
If an integer operator other than a shift operator has at least one operand of type long, then the operation is carried out using 64-bit precision, and the result of the numerical operator is of type long. If the other operand is not long, it is first widened (§5.1.5) to type long by numeric promotion (§5.6).
Otherwise, the operation is carried out using 32-bit precision, and the result of the numerical operator is of type int. If either operand is not an int, it is first widened to type int by numeric promotion.
In other words, not even the following, equivalent to abs, would compile:
byte a = 42, absA = a < 0? -a: a;
as the numeric operation -a will promote a to int before negating.
It’s important that a cast of the result to byte would not be a lossless operation here. The byte datatype has a value range from -128 to +127, so if the value is -128, its absolute value +128 is outside the byte value range and a cast to byte would overflow right back to -128.
Therefore, to have a correct and efficient calculation, you should do as always in Java when it comes to byte, short, or char calculations: calculate everything using int and only cast the final result back to your data type. When you want to calculate the average, you have to calculate the sum using int anyway (or even long if you have more than 16777215 array elements).
byte[] array = { 1, -1, -128, 127 }; // e.g. test case
int sum = 0;
for (byte b : array) sum += Math.abs(b);
int average = sum / array.length;
// if you really need a byte result
byte byteAverage = average == 128 ? 127 : (byte) average;
I don’t know about Kotlin, but in Java, the automatic promotion to int also works if the operand is of type Byte, so you don’t need to “cast it to an integer” to call Math.abs(int). You only have to deal with the fact that the result will be an int, as with all arithmetic operations on byte, short, char, or their wrapper types.
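For example, a small sketch of what that looks like in practice (nothing here beyond java.lang.Math):

Byte boxed = -5;                      // a boxed byte value
int abs = Math.abs(boxed);            // unboxed to byte, promoted to int, then Math.abs(int) is called
byte back = (byte) abs;               // only the final cast back to byte is explicit
System.out.println(abs + " " + back); // 5 5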
In Java, byte is signed, with values between -128 and 127. The corresponding unsigned int value, obtained as 0xFF & b, lies between 128 and 255 for negative bytes and between 0 and 127 otherwise.
Math.abs is probably irrelevant here, as unsigned byte values are likely what is intended.
int[] bytesToInt(byte[] bs) {
    int[] is = new int[bs.length];
    Arrays.setAll(is, i -> bs[i] & 0xFF); // java.util.Arrays.setAll; Arrays.fill does not take a lambda
    return is;
}

byte byteAbs(byte b) {
    return (byte) (b >= 0 ? b : b == -128 ? 127 : -b); // -b is promoted to int, so cast back to byte
}
byteAbs, given for completeness, reduces the range to 7 bits and has the artefact that -128 maps to 127, as there is no +128.
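As a usage sketch (assuming the unsigned interpretation is what you want, as suggested above):

byte[] bs = { 1, -1, -128, 127 };
int sum = 0;
for (byte b : bs) sum += b & 0xFF;   // unsigned view: 1, 255, 128, 127
System.out.println(sum / bs.length); // 127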
On executing:
int p = -2147483648;
p -= Math.pow(1, 0);
System.out.println(p);
p -= 1;
System.out.println(p);
Output: -2147483648
2147483647
So why doesn't Math.pow() overflow the number?
We start the discussion by observing that -2147483648 == Integer.MIN_VALUE (= -(2³¹)).
The expression p -= Math.pow(1,0) has an implicit cast from double to int since Math.pow(...) returns a double. The expression with an explicit cast looks like this
p = (int) (p - Math.pow(1,0))
Ideone demo
Even more spread out, we get
double d = p - Math.pow(1,0);
p = (int) d;
Ideone demo
As we can see, d has the value -2.147483649E9 (= -2147483649.0) < Integer.MIN_VALUE.
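A quick sketch that checks this clamping directly (the printed values follow from the rules quoted below):

double d = Integer.MIN_VALUE - Math.pow(1, 0); // -2147483649.0, below Integer.MIN_VALUE
System.out.println(d);                         // -2.147483649E9
System.out.println((int) d);                   // -2147483648, clamped to Integer.MIN_VALUE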
The behaviour of the cast is governed by Java 14 JLS, §5.1.3:
5.1.3. Narrowing Primitive Conversion
...
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
In the second step:
If T is int or long, the result of the conversion is the result of the first step.
...
Please note that Math.pow() takes arguments of type double and returns a double. Casting the result to int will produce the expected output:
public class MyClass {
    public static void main(String args[]) {
        int p = -2147483648;
        p -= (int) Math.pow(1, 0);
        System.out.println(p);
        p -= 1;
        System.out.println(p);
    }
}
The above produces the following output:
2147483647
2147483646
The java.lang.Math class has ceil(), floor(), and round() methods, but does not have a trunc() method.
At the same time, I see in practice that the .intValue() method (which actually performs an (int) cast) does exactly what I expect from trunc() in its standard meaning.
However, I cannot find any concrete documentation confirming that intValue() is a full equivalent of trunc(), which is strange from many points of view. For example:
The description "Returns the value of this Double as an int (by casting to type int)" from https://docs.oracle.com/javase/7/docs/api/java/lang/Double.html does not say that it "returns the integer part of the fractional number" or anything like that.
The article What is .intValue() in Java? does not say anything about it behaving like trunc().
All my searches for "Java trunc method" and the like turned up nothing, as if I am the only one searching for trunc() and am missing something very common that everyone else knows.
Can I somehow get confirmation that I can safely use intValue() in order to get fractional numbers rounded in "trunc" mode?
So the question becomes: is casting a double to an int equivalent to truncation?
The Java Language Specification may have the answer. I'll quote:
specific conversions on primitive types are called the narrowing primitive conversions:
[...]
float to byte, short, char, int, or long
double to byte, short, char, int, long, or float
A narrowing primitive conversion may lose information about the overall magnitude of a numeric value and may also lose precision and range.
[...]
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to [...] an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Which is described in IEEE 754-1985.
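So for values that fit into an int, the cast rounds toward zero, which is exactly what trunc is expected to do. A quick sketch (the last line shows the clamping caveat for out-of-range values):

System.out.println((int) 3.9);   // 3
System.out.println((int) -3.9);  // -3
System.out.println((int) 1e10);  // 2147483647: out-of-range values are clamped, not truncated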
You can use floor and ceil to implement trunc:
public static double trunc(double value) {
    return value < 0 ? Math.ceil(value) : Math.floor(value);
}
With Google Guava DoubleMath#roundToInt() you can convert that result into an int:
public static int roundToInt(double x, RoundingMode mode) {
double z = roundIntermediate(x, mode);
checkInRangeForRoundingInputs(
z > MIN_INT_AS_DOUBLE - 1.0 & z < MAX_INT_AS_DOUBLE + 1.0, x, mode);
return (int) z;
}
private static final double MIN_INT_AS_DOUBLE = -0x1p31;
private static final double MAX_INT_AS_DOUBLE = 0x1p31 - 1.0;
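For example, a truncating conversion to int might look like this (assuming Guava is on the classpath; RoundingMode.DOWN rounds toward zero):

// requires com.google.common.math.DoubleMath (Guava) and java.math.RoundingMode
int a = DoubleMath.roundToInt(3.9, RoundingMode.DOWN);  // 3
int b = DoubleMath.roundToInt(-3.9, RoundingMode.DOWN); // -3
// values outside the int range throw an ArithmeticException instead of being clamped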
I am trying to convert a BigInteger number into binary. I use a while loop to reduce the BigInteger until it is equal to 1, taking the remainder as the loop runs.
The condition for the loop is (decimalNum.intValue() > 1).
But the program only goes through the loop once and then decides that the BigInteger is less than or equal to 1, while in reality it is around 55193474935748.
Why is this happening?
("inBinary" is an ArrayList to hold the remainders from the loop.)
Here is the while loop:
while (decimalNum.intValue() > 1) {
    inBinary.add(0, decimalNum.mod(new BigInteger("2")).intValue()); // Get remainder (0 or 1)
    decimalNum = decimalNum.divide(new BigInteger("2"));             // Reduce decimalNum
}
55,193,474,935,748 doesn't fit into an int: the largest int value is 2³¹ - 1, i.e. 2,147,483,647, which is much smaller. So you get an integer overflow.
This is explained in the javadoc, BTW:
Converts this BigInteger to an int. This conversion is analogous to a narrowing primitive conversion from long to int as defined in section 5.1.3 of The Java™ Language Specification: if this BigInteger is too big to fit in an int, only the low-order 32 bits are returned. Note that this conversion can lose information about the overall magnitude of the BigInteger value as well as return a result with the opposite sign.
If you want to compare a BigInteger to 1, then use
decimalNum.compareTo(BigInteger.ONE) > 0
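For example, with the loop condition rewritten on the BigInteger itself (the rest of your loop unchanged):

BigInteger two = BigInteger.valueOf(2);
while (decimalNum.compareTo(BigInteger.ONE) > 0) {
    inBinary.add(0, decimalNum.mod(two).intValue()); // remainder is 0 or 1, so intValue() is safe here
    decimalNum = decimalNum.divide(two);
}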
To get the binary string value of your BigInteger, you could just do
bigInteger.toString(2);
EDIT: As mentioned in the comments by @VinceEmigh, converting a BigInteger to int might lead to overflow.
Byte byte1=new Byte((byte) 20);
Short short1=new Short((short) 20);
Why am I required to use a cast operator for Byte and Short, but not for the other data types?
Integer integer=new Integer(20);
Long long1=new Long(20);
Double double1=new Double(20);
Float float1=new Float(20);
It's because the second snippet results in widening primitive conversions in accordance with JLS §5.1.2:
19 specific conversions on primitive types are called the widening primitive conversions:
byte to short, int, long, float, or double
short to int, long, float, or double
char to int, long, float, or double
int to long, float, or double
long to float or double
float to double
Whereas the first does not; notice that there is no conversion from int to short or byte.
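A small sketch of which of these constructor calls compile when given the int literal 20:

Integer integer = new Integer(20); // int matches the constructor parameter exactly
Long long1     = new Long(20);     // int widens to long
Float float1   = new Float(20);    // int widens to float
Double double1 = new Double(20);   // int widens to double

// Short short1 = new Short(20);   // does not compile: no implicit int -> short conversion
// Byte byte1   = new Byte(20);    // does not compile: no implicit int -> byte conversion
Short short1 = new Short((short) 20); // explicit narrowing cast required
Byte byte1   = new Byte((byte) 20);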
The literal "20" is handled as an int by the compiler.
Integer, Long, Float, and Double can handle number ranges that are greater than or equal to the range of int, so the compiler can perform an implicit conversion. Short and Byte have smaller ranges, which prevents an implicit conversion. Casting explicitly may lose data if the number is not representable by Byte or Short.
The constructor for Byte requires a byte if you're constructing it like that, and the same is true for the constructor of Short.
Integer literals without a cast or a type suffix are always treated as int.
The constructor of Byte is defined as:
public Byte(byte value) {
    this.value = value;
}
It expects a byte; since you are passing an int, you need to cast explicitly. The same holds true for Short.
Byte byte1=new Byte((byte) 20);
The byte data type is an 8-bit signed two's complement integer. It has
a minimum value of -128 and a maximum value of 127 (inclusive).
But the 20 above is an int.
The int data type is a 32-bit signed two's complement integer. It has
a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647
(inclusive).
So, a byte cannot take any value outside that range. You will lose data (bits) when you try assigning an integer, say 130 rather than 20, because 130 is out of the byte range. By casting, you are telling the compiler that you are aware of the conversion.
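For instance, a sketch of that data loss with the value 130 mentioned above:

int i = 130;           // 0b1000_0010 in the low 8 bits
byte b = (byte) i;     // only those 8 bits are kept and reinterpreted as two's complement
System.out.println(b); // -126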