I have to do an operation with integers, very simple:
a=b/c*d
where all the variables are integers, but the result is zero whatever the values of the parameters are. I guess it's a problem with how operations on this data type (int) work.
I solved the problem by converting to float first and then back to integer, but I was wondering if there is a better method.
The / operator, when used with integers, does integer division which I suspect is not what you want here. In particular, 2/5 is zero.
The way to work around this, as you say, is to cast one or more of your operands to e.g. a float, and then turn the resulting floating point value back into an integer using Math.floor, Math.round or Math.ceil. This isn't really a bad solution; you have a bunch of integers but you really do want a floating-point calculation. The output might not be an integer, so it's up to you to specify how you want to convert it back.
More importantly, I'm not aware of any syntax to do this that would be more concise and readable than (for example):
a = Math.round((float)b / c * d)
In this case, you can reorder the expression so division is performed last:
a = (b*d)/c
Be careful that b*d won't ever be large enough to overflow an int. If it might be, you could cast one of them to long:
a = (int)(((long)b*d)/c)
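For example, here is a minimal sketch putting the approaches side by side (the variable names and values are just placeholders):

int b = 2, c = 5, d = 10;

int plain = b / c * d;                        // integer division happens first: 2/5 == 0, so the result is 0
int viaFloat = Math.round((float) b / c * d); // cast, compute in float, then round: 4
int reordered = (int) (((long) b * d) / c);   // multiply first, widening to long to avoid overflow: 4

Note that the two non-zero variants can differ when the true value is not a whole number: Math.round rounds to the nearest integer, while integer division truncates toward zero.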
Is there a checkstyle rule that will catch something like this:
double result = someInt / someOtherInt;
result is a double (so clearly a fractional result is desired), yet the right-hand side would do integer division (truncating toward zero).
Does something like this exist?
No, but findbugs can:
ICAST: Integral division result cast to double or float (ICAST_IDIV_CAST_TO_DOUBLE)
This code casts the result of an integral division (e.g., int or long division) operation to double or float. Doing division on integers truncates the result to the integer value closest to zero. The fact that the result was cast to double suggests that this precision should have been retained. What was probably meant was to cast one or both of the operands to double before performing the division.
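For illustration, this is the kind of pattern the detector is meant to flag, next to a corrected version (the variable names are made up):

int someInt = 1;
int someOtherInt = 3;

double flagged = someInt / someOtherInt;        // integer division, then cast to double: 0.0
double fixed = (double) someInt / someOtherInt; // floating-point division: 0.3333...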
There is nothing like this currently in Checkstyle.
You can always create your own check, but tracking variables may not be easy. See https://checkstyle.org/writingchecks.html
Also, Checkstyle is not a type-aware tool. Knowing the actual type of variables/fields may be impossible for it in certain situations. See https://checkstyle.org/writingchecks.html#Limitations
I am an experienced php developer just starting to learn Java. I am following some Lynda courses at the moment and I'm still really early stages. I'm writing sample programs that ask for user input and do simple calculation and stuff.
Yesterday I came across this situation:
double result = 1 / 2;
With my caveman brain I would think result == 0.5, but no, not in Java. Apparently 1 / 2 == 0.0. Yes, I know that if I change one of the operands to a double the result would also be a double.
This actually scares me. I can't help but think that this is very broken. To me it seems very naive to assume that an integer division results in an integer; I think that is rarely even the case you want.
But, as Java is very widely used and searching for 'why is java's division broken?' doesn't yield any results, I am probably wrong.
My questions are:
Why does division behave like this?
Where else can I expect to find such magic/voodoo/unexpected behaviour?
Java is a strongly typed language so you should be aware of the types of the values in expressions. If not...
1 is an int (and so is 2), so 1/2 is the integer division of 1 by 2, and the result is 0 as an int. That result is then converted to the corresponding double value, 0.0.
Integer division is different from floating-point division, just as in math (division of natural numbers is different from division of real numbers).
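To make the two steps visible, here is a small snippet (the literals are just examples):

int asInt = 1 / 2;         // integer division of two ints: 0
double widened = 1 / 2;    // still integer division; the int 0 is then widened to the double 0.0
double fraction = 1.0 / 2; // 1.0 is a double literal, so this is floating-point division: 0.5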
You are thinking like a PHP developer; PHP is a dynamically typed language. This means that types are deduced at run time, so a division that produces a fraction cannot logically yield a whole number, and a double (or float) result is implied by the division operation.
Java, C, C++, C# and many other languages are statically typed languages, so when an integer is divided by an integer you get an integer back: 100/50 gives 2, and 100/45 also gives 2, because 100/45 is actually 2.2222...; truncate the decimal part to get a whole number (integer division) and you get 2.
In a statically typed language, if you want the result to be what you expect, you need to be explicit about the types involved, which is why making one of the operands in your division a double or float results in floating-point division (which gives back fractions).
So in Java, you could do one of the following to get a fractional number:
double result = 1.0 / 2;
double result = 1f / 2;
double result = (float)1 / 2;
Going from a loosely typed, dynamic language to a strongly typed, static language can be jarring, but there's no need to be scared. Just understand that, beyond validating input, you also have to pay attention to types.
Going from PHP to Java, you should know you cannot do something like this:
$result = "2.0";
$result = "1.0" / $result;
echo $result * 3;
In PHP, this would produce the output 1.5 (since (1/2)*3 == 1.5), but in Java,
String result = "2.0";
result = "1.0" / result;
System.out.println(result * 1.5);
This will result in a compile-time error, because you cannot divide a string (it's not a number).
Hope that can help.
I'm by no means a professional on this, but I think it's because of how the operators are defined to do integer arithmetic. Java uses integer division to compute the result because it sees that both operands are ints: the division operator is effectively overloaded, and the int/int overload performs integer division. If this were not the case, Java would have to cast to double inside that overload every time, which is essentially useless if you can perform the cast yourself beforehand.
If you try it in C++, you will see the same result.
The reason is that the value is calculated before it is assigned to the variable. The numbers you typed (1 and 2) are integers, so they are treated as integers and the division is done as integer division. Only after that is the result converted to a double, which gives 0.0.
Why does division behave like this?
Because the language specification defines it that way.
Where else can I expect to find such magic/voodoo/unexpected behaviour?
Since you're basically calling "magic/voodoo" something which is perfectly defined in the language specification, the answer is "everywhere".
So the question is actually why Java made this design decision. From my point of view, int division resulting in an int is a perfectly sound design decision for a strongly typed language. Pure int arithmetic is used very often, so if int division resulted in a float or double, you'd need a lot of rounding, which would not be good.
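As a concrete illustration of how often truncating int division is exactly what you want, think of index arithmetic, for example a (hypothetical) binary-search midpoint:

int lo = 0;
int hi = 9;
int mid = lo + (hi - lo) / 2; // 4 -- a usable array index; 4.5 would be useless here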
package demo;

public class ChocolatesPurchased
{
    public static void main(String[] args)
    {
        float p = 3;
        float cost = 2.5f;
        p *= cost;
        System.out.println(p); // prints 7.5
    }
}
It is well documented that using a double can lead to inaccuracies and that BigDecimal guarantees accuracy so long as there are no doubles in the mix.
However, is accuracy guaranteed if the double in question is a small whole number?
For example, although the following will be inaccurate/unsafe:
BigDecimal bdDouble = new BigDecimal(0.1d); // 0.1000000000000000055511151231257827021181583404541015625
will the following always be accurate/safe?
BigDecimal bdDouble = new BigDecimal(1.0d); // 1
Is it safe to assume that small whole number doubles are safe to use with BigDecimals - if so, what is the smallest whole number that would introduce an inaccuracy?
Additional info in response to initial answers:
Thanks for the answers. Very helpful.
Just to add a little more detail, I have a legacy interface which supplies doubles, but I can be certain that these doubles will represent whole numbers, having themselves been converted from Strings to doubles via Double.parseDouble(String), where the String is guaranteed to represent a whole number.
I do not want to create a new interface which passes me Strings or BigDecimals if I can avoid it.
I can immediately convert the double to a BigDecimal on my side of the interface and make all internal calculations using BigDecimal calls, but I want to be sure that is as safe as creating a new BigDecimal/String interface.
Given that in my original example using 0.1d does not accurately result in 0.1, as shown by the fact that the actual BigDecimal is 0.1000000000000000055511151231257827021181583404541015625, it appears that some fractions will introduce an inaccuracy.
On the other hand, given that in my original example using 1.0d does accurately result in 1, it appears that whole numbers retain accuracy. It also appears that this is guaranteed up to a value of 2^53, if I understand your answers correctly.
Is that a correct assumption?
The BigDecimal aspect isn't as relevant to this question as "what is the range of integers that can be exactly represented in double?" - in that every finite double value can be represented exactly by BigDecimal, and that's the value you'll get if you call the BigDecimal(double) constructor. So you can be confident that if the value you wish to represent is an integer which is exactly representable by a double, if you pass that double to the BigDecimal constructor, you'll get a BigDecimal which exactly represents the same integer.
The significand of a double is 52 bits. Due to normalization, that means you should expect to be able to store integer values in the range [-2^53, 2^53] exactly. Those are pretty large numbers.
Of course, if you're only in the business of representing integers, it's questionable as to why you're using double at all... and you need to make sure that any conversions you're using from the original source data to double aren't losing any information - but purely on the matter of "what range of integers is exactly representable as double values" I believe the above is correct...
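A quick sketch of that boundary, assuming a standard 64-bit IEEE-754 double (2^53 is the first point at which the gaps between representable values become larger than 1):

import java.math.BigDecimal;

long limit = 1L << 53;                     // 9007199254740992 == 2^53
double exact = (double) limit;             // exactly representable
double rounded = (double) (limit + 1);     // 2^53 + 1 is not representable and rounds back to 2^53
System.out.println(exact == rounded);      // true
System.out.println(new BigDecimal(exact)); // 9007199254740992, no error introduced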
A short answer is no. Because of the way a floating-point value is stored in memory, there is no "small" value: 0.000001 uses the same number of bits as 100000; every value is represented in the same way, 0.xxx..eyy.
A better way to initialize a BigDecimal is to initialize it with a string.
BigDecimal bdDouble = new BigDecimal("0.1");
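To see the difference side by side (BigDecimal.valueOf(double) goes through the double's canonical string form, so for this value it behaves like the string constructor):

import java.math.BigDecimal;

System.out.println(new BigDecimal(0.1));     // 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new BigDecimal("0.1"));   // 0.1
System.out.println(BigDecimal.valueOf(0.1)); // 0.1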
Double milisecondsInYear = 365*24*3600*1000;
It comes out as 1.471228928E9, which is wrong.
But if I used
Double milisecondsInYear = 365*24*3600*1000.;
I got the correct answer, 3.1536E10.
Because 365, 24, 3600 and 1000 are all int literals, the calculation is done using ints. The multiplication overflows because the true value exceeds Integer.MAX_VALUE. By putting a dot at the end you turn that last literal into a double literal. This is not a very robust way to correct it, because the multiplication of the first three numbers is still carried out using ints (it just happens not to overflow here). The best way to deal with this is to make the first number a long or double literal.
365L*24*3600*1000
or
365.0*24*3600*1000
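A small check that prints all three variants (the first value is what int wrap-around produces for this particular product):

System.out.println(365 * 24 * 3600 * 1000);   // 1471228928 -- overflowed int result
System.out.println(365L * 24 * 3600 * 1000);  // 31536000000 -- long arithmetic
System.out.println(365.0 * 24 * 3600 * 1000); // 3.1536E10 -- double arithmetic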
Because of overflow. 365*24*3600*1000 does not fit in an int (which is a signed 32-bit value). If you write that as 365L*24*3600*1000 then the necessary promotions will happen in the proper order and the result will be a long, which can fit that number.
In the second line you have an extra character, the dot at the end: this makes the last literal a floating-point (double) number, so the final multiplication is carried out in double arithmetic, which has limited precision but a much larger range, and it no longer overflows.
Whole-number literals in Java are ints unless you specify otherwise.
When you add the ., the calculation involving 1000. is done as a double instead of an int, and a double can hold the result (unlike an int).
The first performs integer math: because all of the numbers are ints, the result is an int (which is then widened to a double). The range of an int isn't sufficient for the result; the range of a double or a long is. So you could also use
double millisecondsInYear = (365L * 24 * 3600 * 1000);
System.out.println(millisecondsInYear);
to widen to long first. The above also outputs "3.1536E10".
The types used in the evaluation differ depending on the literals you write. That's why you get different answers.
Hey all, I am a total newbie developing an Android application. I've been reading 'Sams Teach Yourself Java in 24 Hours' and it's a great book, but I have been stuck on a bit where I get the values of decimal-number-only EditTexts and use Java maths to work out my end value.
Is there a way to have an EditText input go straight into a float or double variable, rather than into a string and then from a string to a double?
Are there any real issues with converting between a string and a double or float, or will the values remain the same and not be polluted?
Differences / pros and cons of using a double as opposed to a float.
Best way to input a fraction value from the user?
Thanks for any help. Have a good day.
No, you can't.
Yes. If your string is, say, an ID and reads like "0029482", after you turn it into an integer it will read "29482" and will probably be invalid. Strings can also hold more digits than a double or float can represent: if you have a value like "0.12345678901234567890123456789" in a string, you will lose a lot of precision by converting it to a double.
Doubles use twice as many bits as floats (hence the name), and can therefore hold more precision.
Accept the numerator and denominator as integers, and store them in a custom class.
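A minimal sketch of such a class (the name and methods are just an illustration):

// Hypothetical holder for a user-entered fraction.
class Fraction {
    final int numerator;
    final int denominator;

    Fraction(int numerator, int denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    // Convert to a double only when needed for display or further maths.
    double toDouble() {
        return (double) numerator / denominator; // cast first to avoid integer division
    }
}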
No. You could write your own subclass that makes it seem like that is what's happening, but at some point somewhere in the chain you have to do a conversion from character/text data to numerical data.
Yes. Primitive floating-point types use IEEE-754 to encode decimal numbers in binary. The encoding provides very good precision, but it is not exact and cannot exactly represent many possible numbers. So if you parse from a string to a primitive floating-point type, and then back to a string again, you may get something that is different from your input string.
A double uses twice as many bits to encode the number as a float, and thus is able to maintain a greater degree of precision. It will not, however, remove the issues discussed in #2. If you want to remove those issues, consider using something like BigDecimal to represent your numbers instead of primitive types like float or double.
Read the whole thing as a string, split() it on the '/' character, and then store each part as an integer (or BigInteger). If you need to display it as a decimal, use BigDecimal to perform the division.
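A sketch of that approach, assuming input of the form "3/4" (the scale of 10 and the rounding mode are arbitrary choices):

import java.math.BigDecimal;
import java.math.RoundingMode;

String input = "3/4";
String[] parts = input.split("/");            // ["3", "4"]
int numerator = Integer.parseInt(parts[0]);
int denominator = Integer.parseInt(parts[1]);
BigDecimal decimal = BigDecimal.valueOf(numerator)
        .divide(BigDecimal.valueOf(denominator), 10, RoundingMode.HALF_UP);
System.out.println(decimal);                  // 0.7500000000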
I'd just like to add that if you are looking for an alternative to double or float that doesn't entail loss of precision when converting between strings and numeric form, look at these:
The standard java.math.BigDecimal class represents an arbitrary precision scaled number; i.e. an arbitrary precision integer multiplied (scaled) by a fixed integer power of 10.
The Apache dfp package contains implementations of decimal-based floating-point numbers.
However, I'd steer clear of both of these for now, and implement using float or double. (I take it that your real aim is to learn how to write Java, not to build the world's greatest calculator app.)