Java trunc() method equivalent

The java.lang.Math class has ceil(), floor(), and round() methods, but no trunc().
At the same time, I see in practice that the .intValue() method (which actually performs an (int) cast) does exactly what I expect from trunc() in its standard meaning.
However, I cannot find any concrete documentation confirming that intValue() is a full equivalent of trunc(), which is strange for several reasons:
The description "Returns the value of this Double as an int (by casting to type int)" from https://docs.oracle.com/javase/7/docs/api/java/lang/Double.html does not say that it "returns the integer part of the fractional number" or anything similar.
The article "What is .intValue() in Java?" does not say that it behaves like trunc().
All my searches for "Java trunc method" and the like turn up nothing, as if I were the only one searching for trunc(), or as if I were missing something very common that everyone else knows.
Can I somehow get confirmation that I can safely use intValue() to get fractional numbers rounded in "trunc" mode?

So the question becomes: Is casting a double to an int equivalent to truncation?
The Java Language Specification may have the answer. I'll quote:
22 specific conversions on primitive types are called the narrowing primitive conversions:
[...]
float to byte, short, char, int, or long
double to byte, short, char, int, long, or float
A narrowing primitive conversion may lose information about the
overall magnitude of a numeric value and may also lose precision and
range.
[...]
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to [...] an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
The round-toward-zero mode referenced here is defined in IEEE 754-1985.
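To see these rules in action, here is a minimal sketch (the class name is mine) showing that the (int) cast rounds toward zero for in-range values, saturates for out-of-range values, and maps NaN to 0:

```java
public class CastTruncDemo {
    public static void main(String[] args) {
        // In-range values are rounded toward zero, i.e. truncated:
        System.out.println((int) 3.99);       // 3
        System.out.println((int) -3.99);      // -3
        // Out-of-range values saturate instead of truncating:
        System.out.println((int) 1e18);       // 2147483647 (Integer.MAX_VALUE)
        System.out.println((int) -1e18);      // -2147483648 (Integer.MIN_VALUE)
        // NaN converts to 0:
        System.out.println((int) Double.NaN); // 0
    }
}
```

So for values inside the int range, the cast is exactly trunc(); only the out-of-range and NaN cases behave differently from a mathematical truncation.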

You can use floor and ceil to implement trunc:
public static double trunc(double value) {
    return value < 0 ? Math.ceil(value) : Math.floor(value);
}
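For illustration, a self-contained sketch (the trunc() definition is repeated so the example compiles on its own; the demo class name is mine) showing that this version keeps the full double range, unlike an (int) cast:

```java
public class TruncDemo {
    public static double trunc(double value) {
        return value < 0 ? Math.ceil(value) : Math.floor(value);
    }

    public static void main(String[] args) {
        System.out.println(trunc(3.7));        // 3.0
        System.out.println(trunc(-3.7));       // -3.0
        // No saturation at Integer.MAX_VALUE, since the result stays a double:
        System.out.println(trunc(1e18));       // 1.0E18
        // NaN propagates instead of collapsing to 0:
        System.out.println(trunc(Double.NaN)); // NaN
    }
}
```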
With Google Guava DoubleMath#roundToInt() you can convert that result into an int:
public static int roundToInt(double x, RoundingMode mode) {
    double z = roundIntermediate(x, mode);
    checkInRangeForRoundingInputs(
        z > MIN_INT_AS_DOUBLE - 1.0 & z < MAX_INT_AS_DOUBLE + 1.0, x, mode);
    return (int) z;
}

private static final double MIN_INT_AS_DOUBLE = -0x1p31;
private static final double MAX_INT_AS_DOUBLE = 0x1p31 - 1.0;

Related

Why does decrementing Integer.MIN_VALUE by Math.pow() return the same value?

On executing:
int p = -2147483648;
p -= Math.pow(1, 0);
System.out.println(p);
p -= 1;
System.out.println(p);
Output: -2147483648
2147483647
So why doesn't Math.pow() overflow the number?
We start the discussion by observing that -2147483648 == Integer.MIN_VALUE (= -(2³¹)).
The expression p -= Math.pow(1, 0) involves an implicit cast from double to int, since Math.pow(...) returns a double. With the cast made explicit, the expression looks like this:
p = (int) (p - Math.pow(1, 0))
Even more spread out, we get
double d = p - Math.pow(1,0);
p = (int) d;
As we can see, d has the value -2.147483649E9 (= -2147483649.0), which is less than Integer.MIN_VALUE.
The behaviour of the cast is governed by Java 14 JLS, §5.1.3:
5.1.3. Narrowing Primitive Conversion
...
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
In the second step:
If T is int or long, the result of the conversion is the result of the first step.
...
Please note that Math.pow() takes arguments of type double and returns a double. Casting its result to int produces the expected output:
public class MyClass {
    public static void main(String args[]) {
        int p = -2147483648;
        p -= (int) Math.pow(1, 0);
        System.out.println(p);
        p -= 1;
        System.out.println(p);
    }
}
The above produces the following output:
2147483647
2147483646
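A minimal sketch (class name mine) of why the uncast version sticks at Integer.MIN_VALUE: the subtraction happens in double arithmetic, producing a value just below the int range, and the narrowing cast then saturates rather than wraps:

```java
public class SaturationDemo {
    public static void main(String[] args) {
        int p = -2147483648;           // Integer.MIN_VALUE
        double d = p - Math.pow(1, 0); // arithmetic happens in double
        System.out.println(d);         // -2.147483649E9
        // Narrowing double -> int saturates at the smallest int value:
        System.out.println((int) d);   // -2147483648
    }
}
```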

Java Power function

I was writing my own implementation of the power function and I discovered some weird results that occur at around Integer.MAX_VALUE, which I'm not sure why they occur.
This is my implementation:
public static long power(long x, long y) {
    int result = 1;
    while (y > 0) {
        if ((y & 1) == 0) {
            x *= x;
            y >>>= 1;
        } else {
            result *= x;
            y--;
        }
    }
    return result;
}
When the following code is run:
System.out.println(fastPower(2, 31));
System.out.println(Math.pow(2, 31));
System.out.println((long) Math.pow(2, 31));
System.out.println((int) Math.pow(2, 31));
the results are as follows, and I do not understand them:
-2147483648
2.147483648E9
2147483648
2147483647
This further confuses me when shorts are used:
System.out.println(fastPower(2, 15));
System.out.println(Math.pow(2, 15));
System.out.println((int)Math.pow(2, 15));
System.out.println((short)Math.pow(2,15));
32768
32768.0
32768
-32768
These are the answers that I would expect, but they seem inconsistent with the results from ints.
The first three outputs from both the int and the short cases are easy to explain:
-2147483648 // your method accumulates the result in an int, so it overflows
2.147483648E9 // Math.pow returns a double, hence this formatting
2147483648 // double cast to a long; 2147483648 is inside the range of long
32768 // your method accumulates the result in an int; 32768 is inside the range of int
32768.0 // Math.pow returns a double, hence this formatting
32768 // double cast to an int; 32768 is inside the range of int
The hard part to explain is the fourth result. Shouldn't System.out.println((int) Math.pow(2, 31)); print -2147483648 as well?
The trick here is how Java does a conversion from double to int. According to the spec, this is known as a narrowing primitive conversion (§5.1.3):
22 specific conversions on primitive types are called the narrowing
primitive conversions:
short to byte or char
char to byte or short
int to byte, short, or char
long to byte, short, char, or int
float to byte, short, char, int, or long
double to byte, short, char, int, long, or float
This is how a double-to-int conversion is carried out (bolded by me):
1. In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
a. If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
b. Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
a. The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
b. The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
2. In the second step:
If T is int or long, the result of the conversion is the result of the first step.
If T is byte, char, or short, the result of the conversion is the result of a narrowing conversion to type T (§5.1.3) of the result of the first step.
The first step changes the double to the largest representable value of int, 2147483647. This is why, in the int case, 2147483647 is printed. In the short case, the second step then narrows the int value 2147483647 to a short, like this:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T.
This is why the short overflowed, but the int did not!
Assuming power() and fastPower() are the same method, fastPower(2, 31) returns -2147483648 because the result variable is an int, even though the parameters and return type are all long.
Math.pow() returns a double, so casting of result to integral type (long, int, short, byte, char) follows the rules of JLS 5.1.3. Narrowing Primitive Conversion, quoted below.
Math.pow(2, 31) is 2147483648.0. When cast to long, it's the same value, i.e. 2147483648. When cast to int however, the value is too large so result is Integer.MAX_VALUE, i.e. 2147483647, as highlighted in the quote below.
Math.pow(2, 15) is 32768.0. When cast to int, it's the same value, i.e. 32768. When cast to short however, the value is first narrowed to int, then narrowed to short by discarding higher bits (see second quote below), resulting in numeric overflow to -32768.
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
In the second step:
If T is int or long, the result of the conversion is the result of the first step.
If T is byte, char, or short, the result of the conversion is the result of a narrowing conversion to type T (§5.1.3) of the result of the first step.
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
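The two quoted steps can be checked directly; a small sketch (class name mine):

```java
public class TwoStepDemo {
    public static void main(String[] args) {
        // Step 1 alone: double -> int saturates at Integer.MAX_VALUE.
        System.out.println((int) Math.pow(2, 31));   // 2147483647
        // Step 1 is exact here (32768 fits in an int), but step 2
        // (int -> short) discards the high bits and wraps:
        System.out.println((short) Math.pow(2, 15)); // -32768
    }
}
```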

Extract int part of a BigDecimal?

In Java, I'm working with the BigDecimal class, and part of my code requires me to extract the int part from it. BigDecimal does not appear to have any built-in method to get the number before the decimal point of a BigDecimal.
For example:
BigDecimal bd = new BigDecimal("23452.4523434");
I want to extract the 23452 from the number represented above. What's the best way to do it?
Depends on what you mean by "extract". What is the type of the result of the extraction? Another BigDecimal, a BigInteger, an int, a long, a String, or something else?
Here's code for them all:
BigDecimal result1 = bd.setScale(0, RoundingMode.DOWN);
BigInteger result2 = bd.toBigInteger();
int result3 = bd.intValue(); // Overflow may occur
long result4 = bd.longValue(); // Overflow may occur
String result5 = bd.toBigInteger().toString();
String result6 = bd.setScale(0, RoundingMode.DOWN).toString();
NumberFormat fmt = new DecimalFormat("0");
fmt.setRoundingMode(RoundingMode.DOWN);
String result7 = fmt.format(bd);
Explanation of roundings:
RoundingMode.DOWN - Rounding mode to round towards zero. Never increments the digit prior to a discarded fraction (i.e., truncates). Note that this rounding mode never increases the magnitude of the calculated value.
toBigInteger() - Converts this BigDecimal to a BigInteger. This conversion is analogous to the narrowing primitive conversion from double to long as defined in section 5.1.3 of The Java™ Language Specification: any fractional part of this BigDecimal will be discarded. Note that this conversion can lose information about the precision of the BigDecimal value.
intValue() / longValue() - Converts this BigDecimal to an int / long. This conversion is analogous to the narrowing primitive conversion from double to int / long as defined in section 5.1.3 of The Java™ Language Specification: any fractional part of this BigDecimal will be discarded, and if the resulting "BigInteger" is too big to fit in an int, only the low-order 32 / 64 bits are returned.
As can be seen from the descriptions, all four discard the fractional part, i.e. round toward zero, a.k.a. truncate the value.
bd.toBigInteger()
See the docs at https://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#toBigInteger()
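Putting the options above together, a runnable sketch (class name mine) showing that the truncation is toward zero, so negative values keep the sign of their integer part:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class IntPartDemo {
    public static void main(String[] args) {
        BigDecimal bd = new BigDecimal("23452.4523434");
        System.out.println(bd.setScale(0, RoundingMode.DOWN)); // 23452
        System.out.println(bd.toBigInteger());                 // 23452
        System.out.println(bd.intValue());                     // 23452

        // Rounding toward zero: -23452.45... truncates to -23452, not -23453.
        BigDecimal neg = new BigDecimal("-23452.4523434");
        System.out.println(neg.setScale(0, RoundingMode.DOWN)); // -23452
        System.out.println(neg.intValue());                     // -23452
    }
}
```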

Why is long casted to double in Java?

SSCCE:
public class Test {
    public static void main(String[] args) {
        Long a = new Long(1L);
        new A(a);
    }

    static class A {
        A(int i) {
            System.out.println("int");
        }

        A(double d) {
            System.out.println("double");
        }
    }
}
Output:
double
No compilation error is printed; it works fine and calls the double-parameter constructor. But why?
It's down to the rules of type promotion: a long is converted to a double in preference to an int.
A long can always fit into a double, although precision can be lost if the long is larger than 2^53. So the compiler picks the double constructor as a better fit than the int one.
(The compiler does not check dynamically whether the particular value 1L would fit into an int.)
Converting long to int is a narrowing primitive conversion because it can lose the overall magnitude of the value. Converting long to double is a widening primitive conversion.
The compiler will automatically generate assignment context conversion for arguments. That includes widening primitive conversion, but not narrowing primitive conversion. Because the method with an int argument would require a narrowing conversion, it is not applicable to the call.
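The same preference can be shown with overloaded static methods instead of constructors; a small sketch (names mine):

```java
public class OverloadDemo {
    static String f(int i)    { return "int"; }
    static String f(double d) { return "double"; }

    public static void main(String[] args) {
        long a = 1L;
        // long -> double is a widening conversion, so f(double) is applicable;
        // long -> int would be a narrowing conversion, so f(int) is not.
        System.out.println(f(a));       // double
        // An explicit cast makes the int overload apply:
        System.out.println(f((int) a)); // int
    }
}
```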
int is 4 bytes, whereas long and double are 8 bytes.
So it is quite obvious that there is a chance of losing 4 bytes of data if the value is cast to an int. Data types are always upcast. As the comment from @Bathsheba mentioned, there is a chance of data loss even with double, but the loss is much smaller than with int.
As you can see, double uses 52 bits for storing significant digits, whereas an int would give the value only 32 bits. Hence the compiler chooses double instead of int.
Source: Wikipedia
Because a long doesn't "fit" in an int.
Check https://docs.oracle.com/javase/specs/jls/se7/html/jls-5.html

Loss of precision - int -> float or double

I have an exam question I am revising for and the question is for 4 marks.
"In java we can assign a int to a double or a float". Will this ever lose information and why?
I have put that because ints are normally of fixed length or size - the precision for storing data is finite, where storing information in floating point can be infinite, essentially we lose information because of this
Now I am a little sketchy as to whether or not I am hitting the right areas here. I very sure it will lose precision but I can't exactly put my finger on why. Can I get some help, please?
In Java an Integer uses 32 bits to represent its value.
In Java a FLOAT uses a 23 bit mantissa, so integers greater than 2^23 will have their least significant bits truncated. For example 33554435 (or 0x2000003) will be truncated to around 33554432 +/- 4.
In Java a DOUBLE uses a 52 bit mantissa, so it will be able to represent a 32-bit integer without loss of data.
See also "Floating Point" on wikipedia
It's not necessary to know the internal layout of floating-point numbers. All you need is the pigeonhole principle and the knowledge that int and float are the same size.
int is a 32-bit type, for which every bit pattern represents a distinct integer, so there are 2^32 int values.
float is a 32-bit type, so it has at most 2^32 distinct values.
Some floats represent non-integers, so there are fewer than 2^32 float values that represent integers.
Therefore, some distinct int values must be converted to the same float (= loss of precision).
Similar reasoning can be used with long and double.
Here's what JLS has to say about the matter (in a non-technical discussion).
JLS 5.1.2 Widening primitive conversion
The following 19 specific conversions on primitive types are called the widening primitive conversions:
int to long, float, or double
(rest omitted)
Conversion of an int or a long value to float, or of a long value to double, may result in loss of precision -- that is, the result may lose some of the least significant bits of the value. In this case, the resulting floating-point value will be a correctly rounded version of the integer value, using IEEE 754 round-to-nearest mode.
Despite the fact that loss of precision may occur, widening conversions among primitive types never result in a run-time exception.
Here is an example of a widening conversion that loses precision:
class Test {
    public static void main(String[] args) {
        int big = 1234567890;
        float approx = big;
        System.out.println(big - (int) approx);
    }
}
which prints:
-46
thus indicating that information was lost during the conversion from type int to type float because values of type float are not precise to nine significant digits.
No, float and double are fixed-length too; they just use their bits differently. Read more about how exactly they work in the Floating-Point Guide.
Basically, you cannot lose precision when assigning an int to a double, because double has 52 bits of precision, which is enough to hold all int values. But float only has 23 bits of precision, so it cannot exactly represent all int values that are larger than about 2^23.
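A quick sketch (class name mine) contrasting the two widenings with the same value, matching the -46 result from the JLS example above:

```java
public class WideningDemo {
    public static void main(String[] args) {
        int big = 1234567890;
        double d = big; // exact: double has 53 significant bits, enough for any int
        float f = big;  // rounded: float has only 24 significant bits
        System.out.println(big - (int) d); // 0
        System.out.println(big - (int) f); // -46
    }
}
```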
Your intuition is correct: you MAY lose precision when converting int to float. However, it is not as simple as presented in most other answers.
In Java a FLOAT uses a 23 bit mantissa, so integers greater than 2^23 will have their least significant bits truncated. (from a post on this page)
Not true.
Example: here is an integer that is greater than 2^23 that converts to a float with no loss:
int i = 33_554_430 * 64; // greater than 2^23 (and also greater than 2^24); i = 2_147_483_520
float f = i;
System.out.println("result: " + (i - (int) f)); // Prints: result: 0
System.out.println("with i:" + i + ", f:" + f); // Prints: with i:2147483520, f:2.14748352E9
Therefore, it is not true that integers greater than 2^23 will have their least significant bits truncated.
The best explanation I found is here:
A float in Java is 32-bit and is represented by:
sign * mantissa * 2^exponent
sign * (0 to 33_554_431) * 2^(-125 to +127)
Source: http://www.ibm.com/developerworks/java/library/j-math2/index.html
Why is this an issue?
It leaves the impression that you can determine whether there is a loss of precision from int to float just by looking at how large the int is.
I have especially seen Java exam questions where one is asked whether a large int would convert to a float with no loss.
Also, sometimes people tend to think that there will be loss of precision from int to float:
when an int is larger than 1_234_567_890: not true (see the counter-example above)
when an int is larger than 2^23 (= 8_388_608): not true
when an int is larger than 2^24 (= 16_777_216): not true
Conclusion
Conversions from sufficiently large ints to floats MAY lose precision.
It is not possible to determine whether there will be loss just by looking at how large the int is (i.e. without trying to go deeper into the actual float representation).
Possibly the clearest explanation I've seen:
http://www.ibm.com/developerworks/java/library/j-math2/index.html
the ULP or unit of least precision defines the precision available between any two float values. As these values increase the available precision decreases.
For example: between 1.0 and 2.0 inclusive there are 8,388,609 floats, between 1,000,000 and 1,000,001 there are 17. At 10,000,000 the ULP is 1.0, so above this value you soon have multiple integeral values mapping to each available float, hence the loss of precision.
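These spacings can be checked directly with Math.ulp(float); a small sketch (class name mine):

```java
public class UlpDemo {
    public static void main(String[] args) {
        // The gap between adjacent floats grows with magnitude:
        System.out.println(Math.ulp(1.0f));         // 1.1920929E-7 (= 2^-23)
        System.out.println(Math.ulp(1_000_000f));   // 0.0625 (16 steps per unit, 17 floats inclusive)
        System.out.println(Math.ulp(10_000_000f));  // 1.0
        System.out.println(Math.ulp(100_000_000f)); // 8.0
    }
}
```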
There are two reasons that assigning an int to a double or a float might lose precision:
There are certain numbers that just can't be represented as a double/float, so they end up approximated
Large integer numbers may contain too much precision in the least-significant digits
For these examples, I'm using Java.
Use a function like this to check for loss of precision when casting from int to float
static boolean checkPrecisionLossToFloat(int val)
{
    if (val < 0)
    {
        val = -val;
    }
    // 8 is the bit-width of the exponent for single-precision
    return Integer.numberOfLeadingZeros(val) + Integer.numberOfTrailingZeros(val) < 8;
}
Use a function like this to check for loss of precision when casting from long to double
static boolean checkPrecisionLossToDouble(long val)
{
    if (val < 0)
    {
        val = -val;
    }
    // 11 is the bit-width for the exponent in double-precision
    return Long.numberOfLeadingZeros(val) + Long.numberOfTrailingZeros(val) < 11;
}
Use a function like this to check for loss of precision when casting from long to float
static boolean checkPrecisionLossToFloat(long val)
{
    if (val < 0)
    {
        val = -val;
    }
    // 8 + 32
    return Long.numberOfLeadingZeros(val) + Long.numberOfTrailingZeros(val) < 40;
}
For each of these functions, returning true means that casting that integral value to the floating point value will result in a loss of precision.
Casting to float will lose precision if the integral value has more than 24 significant bits.
Casting to double will lose precision if the integral value has more than 53 significant bits.
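Usage sketch for the first helper (the function body is repeated so the example is self-contained; the test values are mine):

```java
public class PrecisionCheckDemo {
    // Same check as above: loss occurs iff the value needs more than 24 significant bits.
    static boolean checkPrecisionLossToFloat(int val)
    {
        if (val < 0)
        {
            val = -val;
        }
        // 8 is the bit-width of the exponent for single-precision
        return Integer.numberOfLeadingZeros(val) + Integer.numberOfTrailingZeros(val) < 8;
    }

    public static void main(String[] args) {
        System.out.println(checkPrecisionLossToFloat(16_777_216));    // false: 2^24 is exactly representable
        System.out.println(checkPrecisionLossToFloat(16_777_217));    // true: 2^24 + 1 needs 25 significant bits
        System.out.println(checkPrecisionLossToFloat(2_147_483_520)); // false: the large counter-example above
    }
}
```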
You can assign an int to a double without losing precision.
