So, long story short: I've spent the last two or three days troubleshooting a bug in my graphing calculator that arose when I implemented a new window resize listener. The bug was that when I resized my window slowly the functions wouldn't be transformed properly, but if I resized it fast they would transform just fine. I looked at all of my formulas and algorithms and everything was spot on (the same as it was with my previous window resizing method). Whenever there was a change in width I'd take the difference, divide it by 2, and move the graphs by that amount. Really simple:
double changeX = (newCanvasWidth - canvasWidth)/2;
double changeY = (newCanvasHeight - canvasHeight)/2;
This had worked fine and made all of the logical sense I needed to ignore it as the culprit for nearly three days. It looked so innocent that I almost rewrote my entire program trying to fix the issue, from compensation algorithms to all-new methods to predict these errors and correct them. It was becoming a nightmare and was incredibly annoying. Before giving up hope completely I decided to investigate the problem once again with a thorough trace of every single calculation involved, outputting the results, and I found something odd: whenever the difference (newCanvasWidth - canvasWidth) was odd, the .5 at the end of the result was missing.
So if the difference between them was, say, 15, changeX would come out as 7. Most troubling, when the difference was 1, changeX would be 0.
Upon discovering this I of course tried the obvious thing and cast the subtraction:
double changeX = (double)(newCanvasWidth - canvasWidth)/2;
double changeY = (double)(newCanvasHeight - canvasHeight)/2;
And lo and behold my issue was solved!
What I don't understand, though, is why this didn't happen automatically. And if this is just something I'm going to have to make accommodations for all the time, where is the limit? Is there any way to know when you're going to need to cast simple calculations like this?
Java doesn't automatically widen integral expressions to floating-point because doing so is computationally expensive and because you can lose precision. Yes, if you have an integral value that you want divided into a non-integral quotient, you'll always need to tell Java (and C/C++) that. The Java Language Specification has comprehensive rules about what type a math expression has.
A shortcut when using a numeric literal like this is to make the literal a floating-point type:
double changeX = (newCanvasWidth - canvasWidth) / 2.0;
It wasn't happening automatically because the calculation on the right-hand side (RHS) of the assignment, i.e. (newCanvasHeight - canvasHeight)/2, takes place as a separate operation before the assignment to changeY. Since all terms on the RHS are integers, the result is an integer with the decimal part truncated (not rounded), and that integer is then stored as a double (so instead of 7.5 you get 7, which is stored as 7.0). Since you were using a constant term on the RHS, you could make it a double (as @Clown suggested), thereby making the result of the calculation a double before it is stored. If all terms on the RHS were variables, however, then you would cast.
So, yes, there is a way to know when you need to cast (or otherwise convert) in situations like these: when the most precise term on the RHS of the assignment is less precise than the LHS.
Because newCanvasWidth and canvasWidth are declared as ints, you don't automatically get a decimal result when dividing by another whole number. If you don't want to cast, you should have been using 2.0. Integer division in Java always discards the decimals unless you tell it otherwise.
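To make that truncation visible, here is a minimal demo; the variable names are just for illustration:
int a = 15, b = 2;
System.out.println(a / b);            // 7   : integer division truncates
System.out.println(a / 2.0);          // 7.5 : a double literal makes the division floating-point
System.out.println((double) a / b);   // 7.5 : casting one operand first also works
System.out.println((double) (a / b)); // 7.0 : too late, the truncation already happened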
If the result has a chance of becoming a decimal number, for example when using division, you should always make sure the result is a double. Here you would cast when necessary. But generally speaking, you use type casting a lot more often in different contexts, such as going from Object to something else, or, as an even better example, when going from a View to a TextView in Android.
Related
Just noticed that Python and JavaScript can compare arbitrary-precision integers and floating-point numbers exactly.
Example in Python:
>>> 2**1023+1 > 8.98846567431158E307
True
>>> 2**1023-1 < 8.98846567431158E307
True
And JavaScript:
> 2n**1023n+1n > 8.98846567431158E307
true
> 2n**1023n-1n < 8.98846567431158E307
true
Anything similar available for Java, except converting both arguments to BigDecimal?
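For reference, the exact-but-costly BigDecimal route the question wants to avoid looks something like this in jshell (BigInteger.TWO requires Java 9+):
jshell> import java.math.*
jshell> new BigDecimal(BigInteger.TWO.pow(1023).add(BigInteger.ONE)).compareTo(new BigDecimal(8.98846567431158E307)) > 0
$2 ==> true
Note that the new BigDecimal(double) constructor is exact, so the comparison itself is correct; the objection in the answer below is about its cost, not its accuracy.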
Preliminary answer, i.e. a verbal solution sketch:
I am skeptical about a solution that converts to BigDecimal, since this conversion shifts the base from base 2 to base 10. As soon as the exponent of the Java double floating-point value differs from the binary precision, this leads to additional digits and lengthy pow() operations, which one can verify by inspecting some open-source BigDecimal(double) constructor implementation.
One can get the mantissa via Double.doubleToRawLongBits(d). If the Java double floating-point value is not subnormal, all that needs to be done is (raw & DOUBLE_SNIF_MASK) + (DOUBLE_SNIF_MASK + 1), where DOUBLE_SNIF_MASK = 0x000fffffffffffffL. This means the primitive integer type long should be enough to carry the mantissa. The challenge is then to perform a comparison that also takes the exponent of the double into account.
But I must admit I haven't had time yet to work out the Java code. I also have in mind some optimizations using bitLength() of the other argument, which in this setting is a BigInteger. The use of bitLength() would speed up the comparison. A simple heuristic can implement a fast path in which the mantissa can be ignored: the exponent of the double and the bitLength() of the BigInteger already give enough information for a comparison result.
As soon as I have time and a prototype running, I might publish some Java code fragment here. But maybe somebody has faced the problem already. My general hypothesis is that a fast or even ultra-fast routine is possible, but I didn't have much time to search the internet for an implementation; that's why I deferred the problem to Stack Overflow. Maybe somebody else has had the same problem and/or can point to a complete solution?
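Here is a minimal sketch of that idea, assuming for simplicity that b is positive and d is a positive, finite, normal double (zero, subnormals, negatives, NaN and infinities would all need extra cases):
import java.math.BigInteger;

// Sketch only: assumes b > 0 and d is positive, finite and normal.
static int compare(BigInteger b, double d) {
    int exp  = Math.getExponent(d); // unbiased binary exponent: d lies in [2^exp, 2^(exp+1))
    int blen = b.bitLength();       // b lies in [2^(blen-1), 2^blen)
    // Fast path: if the two intervals don't overlap, the exponents alone decide.
    if (blen - 1 > exp) return 1;   // b >= 2^(blen-1) >= 2^(exp+1) > d
    if (blen - 1 < exp) return -1;  // b < 2^blen <= 2^exp <= d
    // Slow path (same binade): recover the 53-bit significand, so that
    // d == mant * 2^(exp - 52), then compare exactly as integers.
    long raw  = Double.doubleToRawLongBits(d);
    long mant = (raw & 0x000fffffffffffffL) | 0x0010000000000000L;
    BigInteger m = BigInteger.valueOf(mant);
    return exp >= 52 ? b.compareTo(m.shiftLeft(exp - 52))
                     : b.shiftLeft(52 - exp).compareTo(m);
}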
Why the inconsistency?
There is no inconsistency: the methods are simply designed to follow different specifications.
long round(double a)
Returns the closest long to the argument.
double floor(double a)
Returns the largest (closest to positive infinity) double value that is less than or equal to the argument and is equal to a mathematical integer.
Compare with double ceil(double a)
double rint(double a)
Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
So by design round rounds to a long and rint rounds to a double. This has always been the case since JDK 1.0.
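A quick demonstration of the two behaviors side by side:
System.out.println(Math.round(2.5));  // 3   (long; halfway cases round up, toward positive infinity)
System.out.println(Math.round(-2.5)); // -2  (not -3, for the same reason)
System.out.println(Math.rint(2.5));   // 2.0 (double; halfway cases round to the even integer)
System.out.println(Math.rint(3.5));   // 4.0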
Other methods were added in JDK 1.2 (e.g. toRadians, toDegrees); others were added in 1.5 (e.g. log10, ulp, signum, etc), and yet some more were added in 1.6 (e.g. copySign, getExponent, nextUp, etc) (look for the Since: metadata in the documentation); but round and rint have always had each other the way they are now since the beginning.
Arguably, perhaps instead of long round and double rint, it'd be more "consistent" to name them double round and long rlong, but this is debatable. That said, if you insist on categorically calling this an "inconsistency", then the reason may be as unsatisfying as "because it was inevitable".
Here's a quote from Effective Java 2nd Edition, Item 40: Design method signatures carefully:
When in doubt, look to the Java library APIs for guidance. While there are plenty of inconsistencies -- inevitable, given the size and scope of these libraries -- there is also a fair amount of consensus.
Distantly related questions
Why does int num = Integer.getInteger("123") throw NullPointerException?
Most awkward/misleading method in Java Base API ?
Most Astonishing Violation of the Principle of Least Astonishment
floor would have been chosen to match the standard C routine in math.h (rint, mentioned in another answer, is also present in that library, and returns a double, as in Java).
But round was not a standard function in C at that time (it's not mentioned in C89 - C identifiers and standards; C99 does define round, and it returns a double, as you would expect). It's normal for language designers to "borrow" ideas, so maybe it comes from some other language? Fortran 77 doesn't have a function of that name, and I am not sure what else would have been used back then as a reference. Perhaps VB - that does have Round but, unfortunately for this theory, it returns a double (PHP too). Interestingly, Perl deliberately avoids defining round.
[Update: hmmm, it looks like Smalltalk returns integers. I don't know enough about Smalltalk to know if that is correct and/or general, and the method is called rounded, but it might be the source. Smalltalk did influence Java in some ways (although more conceptually than in details).]
If it's not Smalltalk, then we're left with the hypothesis that someone simply chose poorly (given the implicit conversions possible in Java, it seems to me that returning a double would have been more useful, since then it could be used both when converting types and when doing floating-point calculations).
In other words: functions common to Java and C tend to be consistent with the C library standard at the time; the rest seem to be arbitrary, but this particular wrinkle may have come from Smalltalk.
I agree that it is odd that Math.round(double) returns long. If large double values are cast to long (which is what Math.round effectively does), Long.MAX_VALUE is returned. One alternative is to use Math.rint() to avoid that. However, Math.rint() has a somewhat surprising rounding behavior: ties are settled by rounding to the even integer, i.e. 4.5 is rounded down to 4.0 but 5.5 is rounded up to 6.0. Another alternative is Math.floor(x + 0.5), but be aware that 1.5 is rounded to 2 while -1.5 is rounded to -1, not -2. Yet another alternative is to use Math.round, but only if the number is in the range between Long.MIN_VALUE and Long.MAX_VALUE; double-precision floating-point values outside this range are integers anyhow.
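A quick check of the tie-breaking and clamping behaviors mentioned above:
System.out.println(Math.rint(4.5));         // 4.0  (ties round to even)
System.out.println(Math.rint(5.5));         // 6.0
System.out.println(Math.floor(1.5 + 0.5));  // 2.0
System.out.println(Math.floor(-1.5 + 0.5)); // -1.0 (not -2.0)
System.out.println(Math.round(Double.MAX_VALUE)); // 9223372036854775807, i.e. Long.MAX_VALUE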
Unfortunately, why Math.round() returns long is unknown. Somebody made that decision, and they probably never gave an interview to tell us why. My guess is that Math.round was designed to provide a better way (i.e., with rounding) of converting doubles to longs.
Like everyone else here I don't know the answer either, but thought someone might find this useful. I noticed that if you want to round a double to an int without casting, you can chain the two round implementations, long round(double) and int round(float), together:
double d = something;
// The inner round(double) returns a long, which widens to float, so the outer
// call resolves to the round(float) overload and returns an int. Note that the
// long -> float widening is itself lossy for magnitudes above about 2^24.
int i = Math.round(Math.round(d));
Why are the "F" and "L" suffixes needed when declaring a long or float? According to the documentation:
An integer literal is of type long if it ends with the letter L or l; otherwise it is of type int.
A floating-point literal is of type float if it ends with the letter F or f; otherwise its type is double.
So, from that, obviously the compiler is treating the values as either an int data type or a double data type, by default. That doesn't quite explain things for me.
I dug a bit deeper and found a discussion where a user describes the conversion from a 64-bit double into a 32-bit float would result in data loss, and the designers didn't want to make assumptions.
Questions I still have:
Why would the compiler allow one to write byte myByte = 100;, automatically converting 100, an int as described above, into a byte, but won't allow long myLong = 3_000_000_000;? Why will it not auto-convert 3_000_000_000 into a long, despite it being well within the range of a long?
As discussed above, when designing Java, the designers wouldn't allow a double to be assigned to a float because of the data loss. While this may be true for a value that is outside the range of a float, obviously something like 3.14 is small enough for a float. So then, why does the compiler throw an error on the assignment float myFloat = 3.14;?
Ultimately, I'm failing to fully understand why the suffixes are needed and what the rules are surrounding automatic conversion (if that's what's happening under the hood).
I know this topic has been discussed before, but the answers given only raise more questions, so I am deciding to create a new post.
In answer to your specific questions:
The problem with long myLong = 3_000_000_000; is that 3_000_000_000 is not a legal int literal because 3,000,000,000 does not fit into 4 bytes. The fact that you want to promote it to a long in order to initialize myLong is irrelevant. (Yes, the language designers could have designed the language so that in this context 3_000_000_000 could have been parsed as a long, but they didn't, probably to keep the language simpler and to avoid ambiguities in other contexts.)
The problem with 3.14 is not a matter of range but of loss of precision. In particular, while 3.14 terminates in base 10 representation, it does not have a finite representation in binary floating point. So converting from a double to a float (in order to initialize myFloat) would involve truncating significant, non-zero bits of the representation. (But just to be clear: Java considers every narrowing conversion from double to float to be lossy, regardless of the actual values involved. So float myFloat = 3.0; would also fail. However, float myFloat = 3; succeeds because conversion from an int value to a float is considered a widening conversion.)
In both cases, the right thing to do is to indicate exactly to the compiler what you are trying to do by appending the appropriate suffix to the numeric literal.
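Concretely, with the suffixes in place (the commented-out lines are the ones the compiler rejects; the variable names are just for illustration):
long myLong = 3_000_000_000L;  // OK: the literal itself is now a long
float myFloat = 3.14f;         // OK: the literal itself is now a float
float fromInt = 3;             // OK: int -> float is a widening conversion
// long bad1 = 3_000_000_000;  // error: integer number too large
// float bad2 = 3.14;          // error: possible lossy conversion from double to float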
Why would the compiler allow one to write byte myByte = 100;, automatically converting 100, an int as described above, into a byte, but won't allow long myLong = 3_000_000_000;?
Because the spec says so. Note that byte myByte = 100; does work, yes, but that is a special case, explicitly mentioned in the Java Language Specification; ordinarily, 100 as a literal in a .java file is always interpreted as an int first, and never silently converts itself to a byte, except in two cases, both explicitly mentioned in the JLS: the cast is 'implied' in compound assignment (someByte += anyNumber; always works and implies the cast - again, why? because the spec says so), and the same explicit exception is made when declaring a variable: byte b = 100; works, assuming the int literal is in fact in byte range (-128 to +127).
The JLS does not make an explicit rule that such concepts are applied in a long x = veryLargeLiteral;. And that is where your quest really ought to end. The spec says so. End of story.
If you'd like to ask instead: "Surely whichever person or persons added this (or rather, failed to add this explicit case to the JLS) had their reasons for it, and those reasons are more technical and merit-based than 'because they thought of it in a dream' or 'because they flipped a coin'" - then we get to a pure guess (because you'd have to ask them, so probably James Gosling, why he made a decision 25 years ago):
Because it would be considerably more complex to implement for the javac codebase.
Right now literals are first considered as an int, and only then, much later in the process, if the code is structured such that the JLS says no cast is needed, can they be 'downcast'. With the long scenario this does not work: once you try to treat 3_000_000_000 as an int, you have already lost the game, because it does not fit. The parser would instead need to create some sort of bizarro 'Schrödinger's cat' style node, which represents 3_000_000_000 accurately but nevertheless gets turned into a parse error downstream UNLESS it is used in one of the explicit scenarios where silently-treat-as-long is allowed. That's certainly possible, but slightly more complex.
Presumably the same argument explains why, in 25 years, Java has not seen an update here. It could get one at some point, but I doubt it will have high priority.
As discussed above, when designing Java, the designers won't allow a double to be assigned to a float because of the data loss.
This really isn't related at all. long -> int is lossy, but double -> float mostly isn't (it's floating point: you lose a little every time you do anything with them, but that's baked into the contract when you use them at all, so it should not stop you).
obviously something like 3.14 is small-enough for a float.
Long and int are easy: ints go from about -2 billion to about +2 billion, and longs go a lot further. But float/double is not like that. They represent roughly the same range (which is HUGE - numbers with 300+ digits are fine), but their accuracy goes down as you get away from 0, and for floats it goes down a lot faster. Almost every number, including 3.14, cannot be perfectly represented by either a float or a double, so we're really just arguing about how much error is acceptable. Thus, Java does not as a rule silently convert stuff to a float, because, hey, you picked double, presumably for a reason, so you need to explicitly tell the compiler: "Yup, I get it, I want you to convert and I will accept the potential loss; it is what I want" - because once the compiler starts guessing at what you meant, that is an excellent source of hard-to-find bugs. Java has loads of places where it is designed like this. Contrast with languages like JavaScript or PHP, where tons of code is legal even if it is bizarre and seems to make no sense, because the compiler will just try to guess at what you wanted.
Java is much better than that - it draws a line; once your code is sufficiently weird that the odds that javac knows what you wanted drop below a threshold, Java will actively refuse to take a wild stab in the dark at what you meant and will just flat out ask you to be more clear about it. In a 20-year coding career I cannot stress enough how useful that is :)
I know this topic has been discussed before, but the answers given only raise more questions, so I am deciding to create a new post.
And yet you asked the same question again instead of the 'more questions' that this raised. Shouldn't you have asked about those?
First, we need to understand how declaration happens in Java. Java is a statically-typed language: once we declare a variable, we can't change its data type afterwards. Let's look at an example:
long myLong = 3_000_000_000;
Integer literals (used for byte, short, int and long) are int by default; the integral types differ only in size (byte < short < int < long).
When we declare the variable we are telling Java that myLong's type should be long (an integer type with a larger size). Then we try to initialize it with the literal 3_000_000_000, which is an int, BUT int's maximum value is 2,147,483,647, and our literal is bigger than that. That's why we should write "L" or "l" at the end of the literal. After adding "L", the literal is a long and can be assigned to our declared long: long myLong = 3_000_000_000L;
int myInt = 300L; => (an error will appear)
In this example our literal (300L) is long. As I mentioned before, long's size is bigger than the other integral types, so it cannot be assigned to an int without a cast. When we delete the "L" from the end of the literal, 300 is an int again.
Here is another example, for float and double:
float myFloat = 5.5; (error)
float myFloat = 5.5F; (correct version)
Floating-point literals are double by default; the difference is that double is bigger than float. myFloat is declared as a float, but 5.5 is a double, so an error appears because the assignment could lose precision. That is why we should add "F" or "f" to the end of the 5.5. We can also use "D" or "d" for double, but that is optional, since there is no bigger floating-point type than double.
Hope it's clear :)
I'm trying to create a physical calculation program in Java. I used some formulas, but they always returned a wrong value. I split them up and found (I was using long so far):
8 * 830584000 = -1945262592
which is obviously wrong. There are fractions and very large numbers in the formulas, such as 6.095E23 and 4.218E-10.
So what datatype would fit best to get a correct result?
Unless you have a very good reason not to, double is the best type for physical calculations. It was good enough for the wormhole modelling in the film Interstellar so, dare I say it, is probably good enough for you. Note well, though, that as a rough guide it gives you only about 15 significant decimal digits of precision.
But you need to help the Java compiler:
Write 8.0 * 830584000 for that expression to be evaluated in double precision. 8.0 is a double literal, and it causes the other operand to be promoted to double as well.
Currently you are using integer arithmetic, and are observing wrap-around effects.
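To see all three behaviors next to each other:
System.out.println(8 * 830584000);   // -1945262592 : int arithmetic wraps around
System.out.println(8L * 830584000);  // 6644672000  : long arithmetic is exact here
System.out.println(8.0 * 830584000); // 6.644672E9  : double arithmetic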
Reference: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
If you need perfect accuracy for large decimal numbers, BigDecimal is the way to go, though it comes at a cost in performance. If you know your numbers are not that large, you can use long instead, which is faster, but it has a much more limited range and requires you to convert to and from decimal representations.
As physics calculations involve a lot of floating-point operations, the float data type can be an option in such calculations. I hope it helps. :)
I'm working on some tool that gets to compute numbers that can get close to 1e-25 in the worst cases, and compare them together, in Java. I'm obviously using double precision.
I have read in another answer that I shouldn't expect more than 1e-15 to 1e-17 precision, and this other question deals with getting better precision when ordering operations in a "better" order.
Which double-precision operations are most prone to losing precision along the way? Should I try to work with numbers as big as possible, or as small as possible? Do divisions first, before multiplications?
I'd rather not use the BigDecimal classes or equivalent, as the code is already slow enough ;) (unless they don't impact speed too much, of course).
Any information will be greatly appreciated!
EDIT: The fact that the numbers are "small" in absolute value (1e-25) does not matter, since a double can go down to about 4.9e-324. What matters is that when they are very similar (both around 1e-25) I have to compare, let's say, 4.64563824048517606458e-21 to 4.64563824048517606472e-21 (the difference is in the 19th and 20th digits). When computing these numbers, the difference is so small that I might hit rounding error, where the remaining digits are meaningless.
The question is: how should the computation be ordered so that this loss of precision is minimized? It might mean doing divisions before multiplications, or additions first.
If it is important to get the correct answer, you should use BigDecimal. It is slower than double, but for most cases it is fast enough. I can't think of many cases where you do a lot of calculations with such small numbers and it does not matter whether the answer is correct - at least with Java.
If this is a super performance sensitive application, I would consider using a different language.
Thanks to @John for pointing out a very complete article about floating-point arithmetic.
It turns out that, when precision is needed, operations should be reordered and formulas adapted to avoid loss of precision, as explained in the Cancellation chapter: when comparing numbers that are very close to each other (which is my case), "catastrophic cancellation" may occur, inducing a huge loss of precision. Often, rewriting the formula or reordering operations according to a priori knowledge of the operand values leads to greater accuracy in the calculation.
What I'll remember from this article is:
be careful when subtracting two nearly identical quantities
try to re-arrange operations to avoid catastrophic cancellation
For the latter case, remember that computing (x - y) * (x + y) gives more accurate results than x * x - y * y.
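A small illustration of that last point; here 1e8 + 1, 1e8 and every intermediate of the factored form are exactly representable, so the outputs are reproducible:
double x = 1e8 + 1, y = 1e8;
System.out.println(x * x - y * y);     // 2.0E8        : the trailing 1 is lost when x * x is rounded
System.out.println((x - y) * (x + y)); // 2.00000001E8 : exact, no cancellation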