Hi, I have the following equation in a piece of Java code:
double z = 0.002378 * (Math.pow((1 - (Math.pow(6.875, -6) * y)), 4.2561));
When I set y to very large values, e.g. 200000, I get NaN (Not a Number). It works okay at slightly lower values, e.g. 130000.
Can anyone tell me why that is?
Additionally I've tried to port the above code from an original BASIC program:
.002378*(1-(6.875*10^-6*ALT))^4.2561
Have I ported it wrong? The order of operations isn't very explicit in the BASIC code.
Thanks
As the Javadoc for Math.pow explains:
If the first argument is finite and less than zero [… and] the second argument is finite and not an integer, then the result is NaN.
So whenever your y is great enough that 1 - (Math.pow(6.875, -6) * y) is negative, you'll get NaN.
(This makes sense when you consider the underlying math. A negative number to a non-integer power is not a real number, and double has no way to represent complex numbers.)
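For instance, this minimal check (not from the original code) demonstrates the rule:
System.out.println(Math.pow(-0.5, 4.2561)); // NaN: negative finite base, non-integer exponent
System.out.println(Math.pow(-0.5, 4.0));    // 0.0625: integer exponents are fine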
Edited for updated question:
Your BASIC code has 6.875*10^-6 (meaning 6.875 × 10⁻⁶), but your Java code has Math.pow(6.875, -6) (meaning 6.875⁻⁶), which is a somewhat greater value, so your Java code triggers this problem for somewhat smaller values of y. This may be why you're seeing this problem now. To match the BASIC code, you should change Math.pow(6.875, -6) to 6.875e-6.
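So the corrected Java line would look something like this (assuming y plays the role of ALT):
double z = 0.002378 * Math.pow(1 - 6.875e-6 * y, 4.2561);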
Raising a negative number to a non-integer power results in an imaginary number in complex number mathematics, a NaN in Java arithmetic. If you really need to do that calculation, you need a complex number package. However, it is more likely that there is an error in your equation or you are trying to use it outside its range of validity.
A negative number raised to a non-integer power gives NaN.
I am writing tests for code performing calculations on floating point numbers. Quite expectedly, the results are rarely exact, and I would like to set a tolerance between the calculated and expected result. I have verified that in practice, with double precision, the results are always correct after rounding of the last two significant decimals, but usually after rounding only the last decimal. I am aware of the format in which doubles and floats are stored, as well as the two main methods of rounding (precise via BigDecimal, and faster via multiplication, Math.round and division). Since the mantissa is stored in binary, however, is there a way to perform rounding in base 2 rather than base 10?
Just clearing the last 3 bits almost always yields equal results, but if I could push it and instead 'add 2' to the mantissa if its second least significant bit is set, I could probably reach the limit of accuracy. This would be easy enough, except I have no idea how to handle overflow (when all bits 52-1 are set).
A Java solution would be preferred, but I could probably port one for another language if I understood it.
EDIT:
As part of the problem was that my code was generic with regard to arithmetic (relying on the scala.Numeric type class), what I did was incorporate the rounding suggested in the answer into a new numeric type, which carried the calculated number (a floating-point value in this case) together with the rounding error, essentially representing a range instead of a point. I then overrode equals so that two numbers are equal if their error ranges overlap (and they share the same arithmetic, i.e. the same number type).
Yes, rounding off binary digits makes more sense than going through BigDecimal and can be implemented very efficiently if you are not worried about being within a small factor of Double.MAX_VALUE.
You can round a floating-point double value x with the following sequence in Java (untested):
double t = 9 * x; // beware: this overflows if x is too close to Double.MAX_VALUE
double y = x - t + t;
After this sequence, y should contain the rounded value. Adjust the distance between the two set bits in the constant 9 in order to adjust the number of bits that are rounded off. The value 3 rounds off one bit. The value 5 rounds off two bits. The value 17 rounds off four bits, and so on.
This sequence of instructions is attributed to Veltkamp and is typically used in “Dekker multiplication”. This page has some references.
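As an illustration only (untested, and subject to the same overflow caveat), the trick can be wrapped in a small helper where the constant is 2^bits + 1:
// Rounds off the lowest `bits` bits of x's mantissa; bits = 3 uses the constant 9 as above.
static double roundOffBits(double x, int bits) {
    double c = (1L << bits) + 1;  // 3 -> 9, 2 -> 5, 4 -> 17, ...
    double t = c * x;             // beware: overflows if x is too close to Double.MAX_VALUE
    return x - t + t;             // evaluated as (x - t) + t
}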
I am new to Java. I am writing a small program to calculate the value of a number raised to a power, when I ran into a problem with negative numbers raised to a fractional exponent.
System.out.println(Math.pow(-8, 1/3f));
The output is NaN, while I'm expecting -2.
What am I doing wrong? Are there any alternatives to calculate problems like this?
Any help appreciated.
This case is described in the documentation:
If the first argument is finite and less than zero <...>
if the second argument is finite and not an integer, then the result is NaN.
As far as I know there is no method in the Java standard library to do it, so you have to implement it manually.
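For the specific case of odd roots you can work around it by hand; here is a small sketch (not from the question's code), using the fact that the real cube root of a negative number is minus the cube root of its absolute value:
double x = -8.0;
System.out.println(Math.cbrt(x));             // -2.0: Math.cbrt accepts negative arguments
System.out.println(-Math.pow(-x, 1.0 / 3.0)); // approximately -2.0: root of |x| with the sign restored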
I understand that, due to the nature of a float/double, one should not use them for precision-important calculations. However, I'm a little confused about their limitations because of mixed answers on similar questions: will floats and doubles always be inaccurate regardless of significant digits, or are they only inaccurate up to the 16th digit?
I've run a few examples in Java:
System.out.println(Double.parseDouble("999999.9999999999"));
// this outputs correctly w/ 16 digits
System.out.println(Double.parseDouble("9.99999999999999"));
// This also outputs correctly w/ 15 digits
System.out.println(Double.parseDouble("9.999999999999999"));
// But this doesn't output correctly w/ 16 digits. Outputs 9.999999999999998
I can't find the link to another answer that stated that values like 1.98 and 2.02 would round down to 2.0 and therefore create inaccuracies, but testing shows that those values are printed correctly. So my first question is: will floating/double values always be inaccurate, or is there a lower limit below which you can be assured of precision?
My second question is in regard to using BigDecimal. I know that I should be using BigDecimal for precision-important calculations, and therefore I should be using BigDecimal's methods for arithmetic and comparing. However, BigDecimal also includes a doubleValue() method which converts the BigDecimal to a double. Would it be safe for me to do a comparison between double values that I know for sure have fewer than 16 digits? There will be no arithmetic done on them at all, so the underlying values should not have changed.
For example, is it safe for me to do the following?
BigDecimal myDecimal = new BigDecimal("123.456");
BigDecimal myDecimal2 = new BigDecimal("234.567");
if (myDecimal.doubleValue() < myDecimal2.doubleValue()) System.out.println("myDecimal is smaller than myDecimal2");
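(For reference, the "BigDecimal's methods for comparing" mentioned above would look like this with compareTo, avoiding the double conversion entirely; shown purely as an illustration.)
if (myDecimal.compareTo(myDecimal2) < 0) System.out.println("myDecimal is smaller than myDecimal2");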
Edit: After reading some of the responses to my own answer I've realized my understanding was incorrect and have deleted it. Here are some snippets from it that might help in the future.
"A double cannot hold 0.1 precisely. The closest representable value to 0.1 is 0.1000000000000000055511151231257827021181583404541015625. Java Double.toString only prints enough digits to uniquely identify the double, not the exact value." - Patricia Shanahan
Sources:
https://stackoverflow.com/a/5749978 - States that a double can hold up to 15 digits
I suggest you read this page:
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Once you've read and understood it, and perhaps converted several examples to their binary representations in the 64-bit floating-point format, you'll have a much better idea of how many significant digits a double can hold.
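One possible way to inspect those representations directly from Java (just a sketch, not part of the linked page):
double d = 9.999999999999999;
System.out.println(Long.toBinaryString(Double.doubleToLongBits(d))); // raw sign/exponent/mantissa bits
System.out.println(new java.math.BigDecimal(d));                     // the exact value the double actually stores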
As a (perhaps trivial) side note, a nice and reliable way to store a value to a known precision is to simply multiply it by the relevant factor and store it in an integral type, which is completely precise.
For example:
double costInPounds = <something>; //e.g. 3.587
int costInPence = (int)(costInPounds * 100 + 0.5); //359
Plainly some precision can be lost, but if a required/desired precision is known, this can save a lot of bother with floating point values, and once this has been done, no precision can be lost by further manipulations.
The + 0.5 is to ensure that rounding works as expected: the (int) cast truncates the provided double value towards zero (the 'floor' for positive values), so adding 0.5 first makes it round to the nearest whole number.
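An alternative sketch using Math.round, which also behaves sensibly for negative amounts:
double costInPounds = 3.587;
long costInPence = Math.round(costInPounds * 100); // 359; rounds to the nearest whole number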
I have written a system that is able to convert any base (2-36) to another base with whole numbers, and it can convert any real number from base 10 to any other base (2-36).
My problem arises with converting a rational/irrational number from any base besides 10 to another base.
I use the following algorithm for converting the part to the right of the decimal point (a Java sketch of the steps follows the list):
1) Take the right side of the decimal point (0.xxxxxx...) in the input and multiply it by the base you are converting to.
2) Take the integer part of the product (the part left of the point) and append it as the next digit of the converted fraction.
3) Take the fractional part of the product and use it as the multiplicand in the next repetition (again multiplying by the base).
4) Repeat until satisfied, or until you are left with a whole number (0 on the right side).
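Here is a rough Java sketch of those steps (a hypothetical helper, with the digit count limited so non-terminating fractions don't loop forever):
static String fractionToBase(double fraction, int base, int maxDigits) {
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < maxDigits && fraction != 0; i++) {
        fraction *= base;                        // steps 1/3: multiply by the target base
        int digit = (int) fraction;              // step 2: the integer part is the next digit
        out.append(Character.forDigit(digit, base));
        fraction -= digit;                       // keep only the fractional part for the next round
    }
    return out.toString();                       // e.g. fractionToBase(0.75, 16, 8) -> "c"
}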
This works nicely for converting any floating point number from decimal to another base, but obviously you can't convert FROM a base that isn't decimal.
So what I tried is converting that initial value to the right of the decimal to base 10, performing the math part, and then converting it back to the original base for when I add it to the output value (it's converted to the new base before being added).
Unfortunately, this returns incorrect results for the right side of the decimal point. So, I have answers that are always correct on the left side, but incorrect on the right if converting from a base that is not base 10.
Does anyone have any ideas for how to make this work? Or perhaps it just won't?
EDIT
Alternatively, can anyone link me/show me how to convert a rational hexadecimal value into decimal? That alone would be sufficient for me to work around this issue.
SOLUTION
I found a fairly easy workaround to this problem for anyone else in the future who reads this question.
All you have to do is take the number on the right side of the decimal point (whatever base it may be) and convert it to decimal as an integer (you can see how to convert integers here). Then take that number and divide it by the base raised to the number of fractional digits (16 for one hex digit, 256 for two, and so on). For instance:
A.C
C == 12 (dec)
12 / 16 = .75 (this is the fractional value in decimal)
You can then take that fractional decimal value and run it through the algorithm I discussed above.
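In Java the workaround might look roughly like this (a hypothetical snippet for the A.C example above):
String[] parts = "A.C".split("\\.");
long intPart = Long.parseLong(parts[0], 16);                   // 10
long fracPart = Long.parseLong(parts[1], 16);                  // 12
double fraction = fracPart / Math.pow(16, parts[1].length());  // 12 / 16 = 0.75
System.out.println(intPart + fraction);                        // 10.75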
Thanks for everyone's help on this issue!
Using floating point implies that you do not want to perform accurate computation.
Only numbers written in bases 2, 4, 8, 16,... can ever be accurately represented in Java floating point values (leaving integers aside). This is due to the limitations of the floating point representation.
Only numbers written in bases 2, 4, 5, 8, 10, 16, 20, 25, 32,... can be accurately printed in the decimal base. This is due to the limitation of our decimal number system.
I expect that you should therefore adopt some rules for rounding results and implement them throughout the algorithm. Make sure that you round rather than truncate; otherwise going through floating point will give you incorrect results even in cases where the precision of the double type is sufficient for your purposes, or where the number can be accurately represented.
If you want to perform the computation in much higher precision, look at the BigInteger class and redesign your algorithm exclusively in integers. Alternatively, use a library for working with fractions; this is useful because the inputs to your algorithm can always be accurately represented as a fraction. However, in the end it always boils down to defining result rounding rules and implementing them correctly.
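A sketch of that integer-only approach for the hex-to-decimal case (a hypothetical helper; the hex fraction "C" is the exact rational 12/16, and multiplying the remainder by 10 emits one exact decimal digit per step):
import java.math.BigInteger;

static String hexFractionToDecimal(String hexDigits, int maxDigits) {
    BigInteger num = new BigInteger(hexDigits, 16);                   // e.g. "C" -> 12
    BigInteger den = BigInteger.valueOf(16).pow(hexDigits.length());  // 16^(number of hex digits)
    StringBuilder out = new StringBuilder("0.");
    for (int i = 0; i < maxDigits && num.signum() != 0; i++) {
        num = num.multiply(BigInteger.TEN);
        BigInteger[] qr = num.divideAndRemainder(den);
        out.append(qr[0]);                                            // next decimal digit
        num = qr[1];                                                  // exact remainder carried forward
    }
    return out.toString();                                            // hexFractionToDecimal("C", 20) -> "0.75"
}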
Edit:
As I learned from the comments, you prefer to emit output digits gradually, before the whole input is read. This is basically possible, but:
- You need to keep an interval, rather than a single number, as the "accumulator"; for example, if you have so far read 0.1111 in ternary, then you know that the output lies between 0.49382716 and 0.50617284, and you cannot emit even the first decimal digit after the decimal point at this stage. This is necessary to avoid seeing outputs like 0.4999999992 on the most "rational" of inputs.
- When the full input is read, it is safer to "round up" and emit output based on the upper bound of the interval rather than on the lower bound. This way 0.1111 in ternary will be converted to 0.5 in decimal. (This can be ignored if you are limited to hex-to-decimal conversion.)
- Keep track of the maximum precision achieved by the input (the logarithm of the width of the interval) and make sure you emit no more output digits than the input guarantees.
- Use an internal representation of the interval endpoints (lower and upper bounds) that can safely deal with the maximum precision you need.
Keep in mind that even quite popular software occasionally gets the details of this algorithm wrong. Stay away from representing any intermediate results in floating-point data types, or truncate the input to a number of digits they can safely represent if it is longer.
You mention irrational numbers in the question, but every number that can be expressed with a finite (or periodically repeating) expansion, regardless of the base used, is necessarily a rational number.
In conversions from hex to decimal, the output can even always be represented accurately, which allows some simplifications, such as waiting indefinitely for the lower and upper bounds to converge.
I couldn't really come up with a proper title for my question, but allow me to present my case: I want to calculate a significance ratio of the form p = 1 - X / Y.
Here X comes from an iterative process; the process takes a large number of steps and counts how many different ways the process can end up in different states (stored in a HashMap). Once the iteration is over, I select a number of states and sum their values. It's hard to tell how large these numbers are so I am intending to implement the sum as BigInteger.
Y, on the other hand, comes from a binomial coefficient with numbers on the scale of thousands. I am inclined to use logGamma to calculate these coefficients, which as a result gives me the natural logarithm of the value.
What I am interested in is doing the division X / Y in the best/most effective way. If I can get X as a natural logarithm as well, then I can subtract the exponents and obtain my result as 1 - e^(ln X - ln Y).
I see that a BigInteger can't be passed to Math.log; what can I do in this case?
You may be able to use doubles. A double can be extremely large, about 1.7e308. What it lacks is precision: it only supports about 15 digits. But if you can live with 15 digits of precision (in other words, if you don't care about the difference between 1,000,000,000,000,000 and 1,000,000,000,000,001) then doubles might get you close enough.
If you are calculating binomial coefficients on numbers in the thousands, then doubles will not be good enough.
Instead I would be inclined to call the toString method on the number, and compute the log as log(10) * number.toString().length() + log(asFloat("0." + number.toString())), where asFloat takes a string representation of a number and converts it to a float.
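That idea could be sketched like this (a hypothetical helper; Double.parseDouble plays the role of asFloat):
static double lnBigInteger(java.math.BigInteger n) {
    String s = n.toString();
    double mantissa = Double.parseDouble("0." + s);        // between 0.1 and 1 for a positive n
    return s.length() * Math.log(10) + Math.log(mantissa); // ln(n) = digits * ln(10) + ln(0.digits)
}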
If you need maximum precision, how about converting the BigIntegers into BigDecimals and doing algebra on them? If precision isn't paramount, then perhaps you can convert your BigIntegers into doubles and do simple algebra with them. Perhaps you can tell us more about your problem domain and why you feel logarithms are the best way to go.
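If you do go the BigDecimal route, a minimal sketch might look like this (bigX and bigY stand in for your BigInteger values X and Y):
import java.math.BigDecimal;
import java.math.MathContext;

BigDecimal x = new BigDecimal(bigX);   // bigX, bigY: the BigInteger values of X and Y
BigDecimal y = new BigDecimal(bigY);
BigDecimal p = BigDecimal.ONE.subtract(x.divide(y, MathContext.DECIMAL64)); // 1 - X/Y to ~16 significant digits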