Extract int part of a BigDecimal? - java

In Java, I'm working with the BigDecimal class, and part of my code requires me to extract the integer part from it. BigDecimal does not appear to have any built-in method to get the number before the decimal point.
For example:
BigDecimal bd = new BigDecimal("23452.4523434");
I want to extract the 23452 from the number represented above. What's the best way to do it?

Depends on what you mean by "extract". What is the type of the result of the extraction? Another BigDecimal, a BigInteger, an int, a long, a String, or something else?
Here's code for them all:
BigDecimal result1 = bd.setScale(0, RoundingMode.DOWN);
BigInteger result2 = bd.toBigInteger();
int result3 = bd.intValue(); // Overflow may occur
long result4 = bd.longValue(); // Overflow may occur
String result5 = bd.toBigInteger().toString();
String result6 = bd.setScale(0, RoundingMode.DOWN).toString();
NumberFormat fmt = new DecimalFormat("0");
fmt.setRoundingMode(RoundingMode.DOWN);
String result7 = fmt.format(bd);
Explanation of roundings:
RoundingMode.DOWN - Rounding mode to round towards zero. Never increments the digit prior to a discarded fraction (i.e., truncates). Note that this rounding mode never increases the magnitude of the calculated value.
toBigInteger() - Converts this BigDecimal to a BigInteger. This conversion is analogous to the narrowing primitive conversion from double to long as defined in section 5.1.3 of The Java™ Language Specification: any fractional part of this BigDecimal will be discarded. Note that this conversion can lose information about the precision of the BigDecimal value.
intValue() / longValue() - Converts this BigDecimal to an int / long. This conversion is analogous to the narrowing primitive conversion from double to int / long as defined in section 5.1.3 of The Java™ Language Specification: any fractional part of this BigDecimal will be discarded, and if the resulting "BigInteger" is too big to fit in an int, only the low-order 32 / 64 bits are returned.
As can be seen from the descriptions, all of them discard the fractional part, i.e. they round towards zero, a.k.a. truncate the value.
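For reference, here is a minimal, self-contained sketch tying the snippets above together (the printed values assume the bd from the question; the class name is just illustrative):
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.text.DecimalFormat;
import java.text.NumberFormat;

public class IntPartDemo {
    public static void main(String[] args) {
        BigDecimal bd = new BigDecimal("23452.4523434");

        System.out.println(bd.setScale(0, RoundingMode.DOWN)); // 23452 (BigDecimal)
        System.out.println(bd.toBigInteger());                 // 23452 (BigInteger)
        System.out.println(bd.intValue());                     // 23452 (int, overflow possible for huge values)
        System.out.println(bd.longValue());                    // 23452 (long, overflow possible for huge values)

        NumberFormat fmt = new DecimalFormat("0");              // String route
        fmt.setRoundingMode(RoundingMode.DOWN);
        System.out.println(fmt.format(bd));                     // 23452
    }
}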

bd.toBigInteger()
See the docs at https://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#toBigInteger()

Related

BigDecimal not retaining rounded value when converting to/from float [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 1 year ago.
I have a function that rounds a float to n number of digits using BigDecimal.setScale
private float roundPrice(float price, int numDigits) {
    BigDecimal bd = BigDecimal.valueOf(price);
    bd = bd.setScale(numDigits, RoundingMode.HALF_UP);
    float roundedFloat = bd.floatValue();
    return roundedFloat;
}
public void testRoundPrice() {
    float numberToRound = 0.2658f;
    System.out.println(numberToRound);
    float roundedNumber = roundPrice(numberToRound, 5);
    System.out.println(roundedNumber);
    BigDecimal bd = BigDecimal.valueOf(roundedNumber);
    System.out.println(bd);
}
Output:
0.2658
0.2658
0.26579999923706055
How can I prevent BigDecimal from adding all these extra digits at the end of my rounded value?
NOTE: I can't do the following, because I don't have access to the number of digits in the API call function.
System.out.println(bd.setScale(5, RoundingMode.CEILING));
It’s the other way around: BigDecimal is telling you the truth. 0.26579999923706055 is closer to the value your float has had all along, both before and after rounding. A float, being a binary rather than a decimal number, cannot hold 0.2658 exactly. Actually 0.265799999237060546875 is as close as we can get.
When you print the float, you don’t get the full value. Some rounding occurs, so in spite of the float having the aforementioned value, you only see 0.2658.
When you create a BigDecimal from the float, you are really first converting to a double (because this is what BigDecimal.valueOf() accepts). The double has the same value as the float, but would print as 0.26579999923706055, which is also the value that your BigDecimal gets.
If you want a BigDecimal having the printed value of the float rather than the exact value in it or something close, the following may work:
BigDecimal bd = new BigDecimal(String.valueOf(roundedNumber));
System.out.println(bd);
Output:
0.2658
You may get surprises with other values, though, since a float hasn’t got that great of a precision.
EDIT: you were effectively converting float -> double -> String -> BigDecimal.
These insightful comments by Dawood ibn Kareem got me researching a bit:
Actually 0.265799999237060546875.
Well, 0.26579999923706055 is the value returned by calling toString on the double value. That's not the same as the number actually represented by that double. That's why BigDecimal.valueOf(double) doesn't in general return the same value as new BigDecimal(double). It's really important to understand the difference if you're going to be working with floating point values and with BigDecimal.
So what really happened:
Your float internally had the value of 0.265799999237060546875 both before and after rounding.
When you are passing your float to BigDecimal.valueOf(double), you are effectively converting float -> double -> String -> BigDecimal.
The double has the same value as the float, 0.265799999237060546875.
The conversion to String rounds a little bit to "0.26579999923706055".
So your BigDecimal gets the value of 0.26579999923706055, the value you saw and asked about.
From the documentation of BigDecimal.valueOf(double):
Translates a double into a BigDecimal, using the double's canonical string representation provided by the Double.toString(double) method.
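To see all three construction routes side by side, here is a small sketch (import of java.math.BigDecimal omitted, matching the snippets above; the printed values follow from the discussion):
float f = 0.2658f;
System.out.println(new BigDecimal(f));                 // 0.265799999237060546875 -> the exact binary value of the float (widened to double)
System.out.println(BigDecimal.valueOf(f));             // 0.26579999923706055     -> goes through Double.toString(double)
System.out.println(new BigDecimal(String.valueOf(f))); // 0.2658                  -> goes through Float.toString(float)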
Links
Stack Overflow question: Is floating point math broken?
Documentation: BigDecimal.valueOf(double)
Stack Overflow question: BigDecimal - to use new or valueOf
I've decided to modify my program to use BigDecimal as the base type for my property price in my object instead of type float. Although tricky at first it is definitely the cleaner solution in the long run.
public class Order {
    // float price; // old type
    BigDecimal price; // new type
}
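A hedged sketch of how the price handling might look once everything stays in BigDecimal (the setter and its scale parameter are illustrative, not taken from the original code):
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Order {
    private BigDecimal price;

    // Hypothetical setter: the price never passes through float/double,
    // so no binary rounding artifacts can creep in.
    public void setPrice(BigDecimal price, int numDigits) {
        this.price = price.setScale(numDigits, RoundingMode.HALF_UP);
    }

    public BigDecimal getPrice() {
        return price;
    }
}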

I do not understand why the value of the variable result is nonzero in this Java question

I have this question:
What do you think would be the value of the variable result after executing the following segment of Java code?
int i = 1234567890;
float f = i;
int result = i - (int)f;
The answer is nonzero
Bear in mind that I am a beginner in Java, currently learning the absolute basics. Frankly, I do not understand why the answer is nonzero, or what each line of the code actually means.
tl;dr
If you want accuracy in your fractional numbers, use BigDecimal class rather than the float floating-point type.
Floating-point is inaccurate
The floating-point technology used by float/Float and double/Double trades away accuracy for speed of execution. Never use these types where accuracy is important, such as for money.
So converting an integer to a floating-point number and back again may not result in the same number.
This behavior is not specific to Java. Java implements the IEEE 754 standard that defines floating-point arithmetic behavior. Any programming language supporting standard floating-point will show the very same issue.
int i = 1234567890; // Create an integer number from literal input, and store as a primitive value in variable named `i`.
float f = i ; // Convert the integer `int` primitive to a fractional number represented using floating-point technology as a primitive value in variable named `f`.
int backAgain = (int)f ; // Cast (convert) from a `float` type to a `int` type. Data-loss may be involved, as any fraction is truncated.
int result = i - backAgain ; // Subtract one `int` primitive from the other `int` primitive. Store the integer result in a primitive `int` variable.
boolean isZero = ( result == 0 ) ; // Test if the result of our subtraction is zero.
See this code run live at IdeOne.com.
i: 1234567890
f: 1.23456794E9
backAgain: 1234567936
result: -46
isZero: false
BigDecimal
If you want accuracy rather than speed when working with fractional numbers, use BigDecimal class.
int i = 1234567890;
BigDecimal bd = new BigDecimal( i ) ;
int backAgain = bd.intValueExact() ;
int result = i - backAgain ;
boolean isZero = ( result == 0 ) ;
See this code run live at IdeOne.com.
i: 1234567890
bd: 1234567890
backAgain: 1234567890
result: 0
isZero: true

Java trunc() method equivalent

The java.lang.Math class has ceil(), floor(), and round() methods, but it does not have a trunc() method.
At the same time, I see in practice that the .intValue() method (which actually does an (int) cast) does exactly what I would expect from trunc() in its standard meaning.
However, I cannot find any concrete documentation confirming that intValue() is a full equivalent of trunc(), which is strange from many points of view, for example:
The description "Returns the value of this Double as an int (by
casting to type int)" from
https://docs.oracle.com/javase/7/docs/api/java/lang/Double.html does
not say anything that it "returns the integer part of the fractional
number" or like that.
The article "What is .intValue() in Java?" does not say that it behaves like trunc().
All my searches for "Java trunc method" or the like turned up nothing, as if I were the only one searching for trunc(), or as if I were missing something very common that everyone else knows.
Can I somehow get confirmation that I can safely use intValue() to round fractional numbers in "trunc" mode?
So the question becomes: is casting a double to an int equivalent to truncation?
The Java Language Specification may have the answer. I'll quote:
specific conversions on primitive types are called the narrowing primitive conversions:
[...]
float to byte, short, char, int, or long
double to byte, short, char, int, long, or float
A narrowing primitive conversion may lose information about the overall magnitude of a numeric value and may also lose precision and range.
[...]
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to [...] an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Which is described in IEEE 754-1985.
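In other words, the (int) and (long) casts behave like a trunc(): they round toward zero, map NaN to zero, and saturate on overflow. A few spot checks (plain statements, not part of the quoted answer):
System.out.println((int) 2.9);         // 2  -> rounded toward zero
System.out.println((int) -2.9);        // -2 -> rounded toward zero, not floored
System.out.println((long) Double.NaN); // 0  -> NaN becomes zero
System.out.println((int) 1e20);        // 2147483647 -> too-large values saturate at Integer.MAX_VALUE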
You can use floor and ceil to implement trunc
public static double trunc(double value) {
    return value < 0 ? Math.ceil(value) : Math.floor(value);
}
With Google Guava's DoubleMath#roundToInt() you can convert that result into an int; its implementation looks like this:
public static int roundToInt(double x, RoundingMode mode) {
    double z = roundIntermediate(x, mode);
    checkInRangeForRoundingInputs(
        z > MIN_INT_AS_DOUBLE - 1.0 & z < MAX_INT_AS_DOUBLE + 1.0, x, mode);
    return (int) z;
}

private static final double MIN_INT_AS_DOUBLE = -0x1p31;
private static final double MAX_INT_AS_DOUBLE = 0x1p31 - 1.0;
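For completeness, a usage sketch (assuming Guava is on the classpath; import com.google.common.math.DoubleMath and java.math.RoundingMode). RoundingMode.DOWN gives trunc-like behaviour and, unlike the plain cast, throws instead of silently saturating when the value does not fit in an int:
int a = DoubleMath.roundToInt(2.9, RoundingMode.DOWN);  // 2
int b = DoubleMath.roundToInt(-2.9, RoundingMode.DOWN); // -2
// DoubleMath.roundToInt(1e20, RoundingMode.DOWN);      // throws ArithmeticException: out of int range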

Java BigDecimal data converting to opposite sign long

According to the Java 7 documentation, the method longValue from class java.math.BigDecimal can return a result with the opposite sign.
Converts this BigDecimal to a long. This conversion is analogous to the narrowing primitive conversion from double to short as defined in section 5.1.3 of The Java™ Language Specification: any fractional part of this BigDecimal will be discarded, and if the resulting "BigInteger" is too big to fit in a long, only the low-order 64 bits are returned. Note that this conversion can lose information about the overall magnitude and precision of this BigDecimal value as well as return a result with the opposite sign.
In what case is it possible?
It is possible whenever the value of the BigDecimal is larger than what a long can hold.
Example:
BigDecimal num = new BigDecimal(Long.MAX_VALUE);
System.out.println(num); // prints: 9223372036854775807
System.out.println(num.longValue()); // prints: 9223372036854775807
num = num.add(BigDecimal.TEN); // num is now too large for long
System.out.println(num); // prints: 9223372036854775817
System.out.println(num.longValue()); // prints: -9223372036854775799
System.out.println(num.longValueExact()); // throws: ArithmeticException: Overflow
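As an illustration of the "low-order 64 bits" rule: reinterpreting those 64 bits as a signed two's-complement long reproduces the flipped sign. BigInteger.longValue() keeps only the low-order 64 bits as well, so it shows the same effect (a small sketch, assuming java.math.BigInteger is imported):
BigInteger tooBig = BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.TEN); // 9223372036854775817
System.out.println(tooBig.longValue()); // -9223372036854775799, i.e. 9223372036854775817 - 2^64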
It will happen if the value is greater than the max value of long:
BigDecimal dec = BigDecimal.valueOf(Long.MAX_VALUE).add(BigDecimal.ONE); // writing Long.MAX_VALUE + 1 as a long expression would overflow before the BigDecimal is even created
System.out.println(dec.longValue()); // prints: -9223372036854775808

Loss of precision - int -> float or double

I have an exam question I am revising for and the question is for 4 marks.
"In java we can assign a int to a double or a float". Will this ever lose information and why?
I have put that because ints are normally of fixed length or size - the precision for storing data is finite, where storing information in floating point can be infinite, essentially we lose information because of this
Now I am a little sketchy as to whether or not I am hitting the right areas here. I very sure it will lose precision but I can't exactly put my finger on why. Can I get some help, please?
In Java an int uses 32 bits to represent its value.
In Java a FLOAT uses a 23 bit mantissa, so integers greater than 2^23 will have their least significant bits truncated. For example 33554435 (or 0x2000003) will be truncated to around 33554432 +/- 4.
In Java a DOUBLE uses a 52 bit mantissa, so it is able to represent a 32-bit integer without loss of data.
See also "Floating point" on Wikipedia.
It's not necessary to know the internal layout of floating-point numbers. All you need is the pigeonhole principle and the knowledge that int and float are the same size.
int is a 32-bit type, for which every bit pattern represents a distinct integer, so there are 2^32 int values.
float is a 32-bit type, so it has at most 2^32 distinct values.
Some floats represent non-integers, so there are fewer than 2^32 float values that represent integers.
Therefore, some distinct int values must be converted to the same float value (= loss of precision).
Similar reasoning can be used with long and double.
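A concrete pigeonhole witness, as a small sketch: 16_777_216 (2^24) and 16_777_217 are distinct int values that convert to the very same float.
System.out.println((float) 16_777_216 == (float) 16_777_217); // true: both become 1.6777216E7
System.out.println(16_777_216 == 16_777_217);                 // false: the ints themselves differ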
Here's what JLS has to say about the matter (in a non-technical discussion).
JLS 5.1.2 Widening primitive conversion
The following 19 specific conversions on primitive types are called the widening primitive conversions:
int to long, float, or double
(rest omitted)
Conversion of an int or a long value to float, or of a long value to double, may result in loss of precision -- that is, the result may lose some of the least significant bits of the value. In this case, the resulting floating-point value will be a correctly rounded version of the integer value, using IEEE 754 round-to-nearest mode.
Despite the fact that loss of precision may occur, widening conversions among primitive types never result in a run-time exception.
Here is an example of a widening conversion that loses precision:
class Test {
    public static void main(String[] args) {
        int big = 1234567890;
        float approx = big;
        System.out.println(big - (int) approx);
    }
}
which prints:
-46
thus indicating that information was lost during the conversion from type int to type float because values of type float are not precise to nine significant digits.
No, float and double are fixed-length too - they just use their bits differently. Read more about how exactly they work in the Floating-Point Guide.
Basically, you cannot lose precision when assigning an int to a double, because double has 52 bits of precision, which is enough to hold all int values. But float only has 23 bits of precision, so it cannot exactly represent all int values that are larger than about 2^23.
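A small sketch of both halves of that claim (the -46 matches the JLS example quoted above):
int big = Integer.MAX_VALUE;         // needs 31 significant bits
double d = big;                      // exact: double's 52-bit mantissa easily covers 31 bits
System.out.println(big - (int) d);   // 0 -> nothing lost going int -> double

int other = 1_234_567_890;
float f = other;                     // float's 24-bit significand cannot hold all of these bits
System.out.println(other - (int) f); // -46 -> precision lost going int -> float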
Your intuition is correct: you MAY lose precision when converting int to float. However, it is not as simple as presented in most other answers.
In Java a FLOAT uses a 23 bit mantissa, so integers greater than 2^23 will have their least significant bits truncated. (from a post on this page)
Not true.
Example: here is an integer that is greater than 2^23 that converts to a float with no loss:
int i = 33_554_430 * 64; // is greater than 2^23 (and also greater than 2^24); i = 2_147_483_520
float f = i;
System.out.println("result: " + (i - (int) f)); // Prints: result: 0
System.out.println("with i:" + i + ", f:" + f);//Prints: with i:2_147_483_520, f:2.14748352E9
Therefore, it is not true that integers greater than 2^23 will have their least significant bits truncated.
The best explanation I found is here:
A float in Java is 32-bit and is represented by:
sign * mantissa * 2^exponent
sign * (0 to 33_554_431) * 2^(-125 to +127)
Source: http://www.ibm.com/developerworks/java/library/j-math2/index.html
Why is this an issue?
It leaves the impression that you can determine whether there is a loss of precision from int to float just by looking at how large the int is.
I have especially seen Java exam questions where one is asked whether a large int would convert to a float with no loss.
Also, sometimes people tend to think that there will be loss of precision from int to float:
when an int is larger than 1_234_567_890 - not true (see the counter-example above)
when an int is larger than 2^23 (= 8_388_608) - not true
when an int is larger than 2^24 (= 16_777_216) - not true
Conclusion
Conversions from sufficiently large ints to floats MAY lose precision.
It is not possible to determine whether there will be loss just by looking at how large the int is (i.e. without trying to go deeper into the actual float representation).
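One way to see that size alone does not decide it: compare a large int that fits exactly with a smaller one that does not (a small sketch using the values discussed above).
int large = 2_147_483_520;                        // bigger than 2^30, yet exactly representable as a float
int small = 1_234_567_890;                        // smaller, yet NOT exactly representable
System.out.println((int) (float) large == large); // true  -> no loss
System.out.println((int) (float) small == small); // false -> loss
// Caveat: this round-trip test mis-reports Integer.MAX_VALUE, because the float rounds up
// to 2^31 and the cast back saturates to Integer.MAX_VALUE again.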
Possibly the clearest explanation I've seen:
http://www.ibm.com/developerworks/java/library/j-math2/index.html
The ULP, or unit of least precision, is the gap between two adjacent float values. As the values increase, the available precision decreases.
For example: between 1.0 and 2.0 inclusive there are 8,388,609 floats; between 1,000,000 and 1,000,001 there are 17. At 10,000,000 the ULP is 1.0, so above this value you soon have multiple integral values mapping to each available float, hence the loss of precision.
There are two reasons that assigning an int to a double or a float might lose precision:
There are certain numbers that just can't be represented as a double/float, so they end up approximated
Large integer numbers may contain too much precision in the least-significant digits
For these examples, I'm using Java.
Use a function like this to check for loss of precision when casting from int to float
static boolean checkPrecisionLossToFloat(int val)
{
    if (val < 0)
    {
        val = -val;
    }
    // 8 is the bit-width of the exponent for single-precision
    return Integer.numberOfLeadingZeros(val) + Integer.numberOfTrailingZeros(val) < 8;
}
Use a function like this to check for loss of precision when casting from long to double
static boolean checkPrecisionLossToDouble(long val)
{
    if (val < 0)
    {
        val = -val;
    }
    // 11 is the bit-width for the exponent in double-precision
    return Long.numberOfLeadingZeros(val) + Long.numberOfTrailingZeros(val) < 11;
}
Use a function like this to check for loss of precision when casting from long to float
static boolean checkPrecisionLossToFloat(long val)
{
    if (val < 0)
    {
        val = -val;
    }
    // 8 + 32
    return Long.numberOfLeadingZeros(val) + Long.numberOfTrailingZeros(val) < 40;
}
For each of these functions, returning true means that casting that integral value to the floating point value will result in a loss of precision.
Casting to float will lose precision if the integral value has more than 24 significant bits.
Casting to double will lose precision if the integral value has more than 53 significant bits.
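A few spot checks of those helper functions (the expected results follow from the 24/53 significant-bit rules just stated):
System.out.println(checkPrecisionLossToFloat(2_147_483_520));   // false: spans exactly 24 significant bits
System.out.println(checkPrecisionLossToFloat(1_234_567_890));   // true:  needs more than 24 significant bits
System.out.println(checkPrecisionLossToDouble(1L << 54));       // false: a single set bit always fits
System.out.println(checkPrecisionLossToDouble((1L << 54) + 1)); // true:  55 significant bits exceed double's 53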
You can assign an int to a double without losing precision.
