I would like to take an arbitrary float or double in Java and convert it to a rational number - i.e. a number of the form a/b where a and b are long integers. How can I do this in a reasonably efficient way?
(BTW - I already have code for simplifying fractions, so it doesn't matter whether a/b is in its simplest form.)
First, see how a double (or float, but I refer only to double below) is constructed according to the IEEE-754 rules:
Then convert the double to bits with Double.doubleToLongBits.
Compute the significand as 1 + bit_0 * 2^(-1) + bit_1 * 2^(-2) + ..., where bit_i are the fraction bits.
Multiply the result by 2^exponent and apply the sign.
Here is the code:
double number = -0.15625;
// Code below doesn't work for 0 and NaN - just check before
long bits = Double.doubleToLongBits(number);
long sign = bits >>> 63;
long exponent = ((bits >>> 52) ^ (sign << 11)) - 1023;
long fraction = bits << 12; // bits are "reversed" but that's not a problem
long a = 1L;
long b = 1L;
for (int i = 63; i >= 12; i--) {
    a = a * 2 + ((fraction >>> i) & 1);
    b *= 2;
}
if (exponent > 0)
    a *= 1L << exponent;   // use 1L so the shift isn't truncated to int width
else
    b *= 1L << -exponent;
if (sign == 1)
    a *= -1;
// Here you have to simplify the fraction
System.out.println(a + "/" + b);
But be careful - with big exponents you may run into numbers that won't fit into your variables. In fact, you may consider storing the exponent alongside the fraction and only multiplying it in when the exponent is small enough. If it isn't, and you have to display the fraction to the user, you may use scientific notation (which requires solving the equation 2^n = x * 10^m, where m is your decimal exponent and x is the number you have to multiply the fraction by - but that's a matter for another question).
Let long bits = Double.doubleToLongBits(double). From the Javadoc of Double.longBitsToDouble:
...let s, e, and m be three values that can be computed from the argument:
int s = ((bits >> 63) == 0) ? 1 : -1;
int e = (int)((bits >> 52) & 0x7ffL);
long m = (e == 0) ?
(bits & 0xfffffffffffffL) << 1 :
(bits & 0xfffffffffffffL) | 0x10000000000000L;
Then the floating-point result equals the value of the mathematical expression s · m · 2^(e-1075).
That result is most certainly a rational number.
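For illustration (this sketch is mine, not part of the quoted Javadoc), you can turn that s · m · 2^(e-1075) decomposition directly into an exact numerator/denominator pair with BigInteger. The method name toFraction is just a placeholder, and NaN and infinities are deliberately not handled:

import java.math.BigInteger;

static BigInteger[] toFraction(double d) {
    // Decompose exactly as in the Javadoc quoted above.
    long bits = Double.doubleToLongBits(d);
    int s = ((bits >> 63) == 0) ? 1 : -1;
    int e = (int) ((bits >> 52) & 0x7ffL);
    long m = (e == 0)
            ? (bits & 0xfffffffffffffL) << 1
            : (bits & 0xfffffffffffffL) | 0x10000000000000L;
    // The double's exact value is s * m * 2^(e - 1075).
    BigInteger num = BigInteger.valueOf(s * m);
    BigInteger den = BigInteger.ONE;
    int exp = e - 1075;
    if (exp >= 0) {
        num = num.shiftLeft(exp);
    } else {
        den = den.shiftLeft(-exp);
    }
    BigInteger g = num.gcd(den);   // reduce to lowest terms
    return new BigInteger[] { num.divide(g), den.divide(g) };
}

// toFraction(-0.15625) -> {-5, 32}, i.e. -5/32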
The rational number corresponding to any finite FP value is mantissa · 2^exponent (equivalently mantissa/2^-exponent when the exponent is negative), where the mantissa and exponent are as defined in IEEE 754 (Wiki reference). You can then divide numerator and denominator by their GCD to get a canonical rational number.
The various concepts contained under the rubric continued fractions yield best-possible rational approximations for a given maximum denominator. Specifically, you're asking about calculating a convergent sequence. At some point, when your denominator is large enough according to whatever criteria you want (or are forced upon you by finite integer implementation lengths), terminate calculating the convergent terms and use the last one. Algorithms are described in rather good detail on the linked Wikipedia pages.
To address one concern you raised, the fractions generated in the convergent sequence are always in reduced form. They are also provably the best possible approximations for a given denominator. Precisely, a convergent term of the form m/n is closer to the target number than any other fraction with denominator < n. In other words, the convergent algorithm yields better approximations than trial and error.
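To make the convergent idea concrete, here is a rough sketch (mine, with made-up names, no overflow checks) of the standard recurrence p_k = a_k·p_(k-1) + p_(k-2), q_k = a_k·q_(k-1) + q_(k-2), stopping once the next denominator would exceed a given limit:

static long[] bestApproximation(double x, long maxDenominator) {
    long p0 = 1, q0 = 0;                       // convergent "minus one"
    long p1 = (long) Math.floor(x), q1 = 1;    // first convergent: floor(x) / 1
    double frac = x - Math.floor(x);
    while (frac > 1e-15) {
        double inv = 1.0 / frac;
        long a = (long) Math.floor(inv);       // next continued-fraction term
        long p2 = a * p1 + p0;
        long q2 = a * q1 + q0;
        if (q2 > maxDenominator) break;        // keep the last convergent that still fits
        p0 = p1; q0 = q1;
        p1 = p2; q1 = q2;
        frac = inv - a;
    }
    return new long[] { p1, q1 };              // numerator, denominator (already reduced)
}

// bestApproximation(Math.PI, 1000) -> 355/113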
As you know, floating point numbers cannot store even simple numbers such as 0.1 exactly. If you use a naïve approach for converting floating point numbers, you might end up with huge numerators and denominators.
However, there are algorithms that might help: the Dragon4 and Grisu3 algorithms are designed to create the most readable output for floating point numbers. They take advantage of the fact that certain floating point bit sequences can be expressed by several decimal fractions, and they choose the shortest of these.
For a first implementation, I would use Dragon4 and/or Grisu3 to create the shortest decimal fraction out of the floating point. For example the floating point number with the bits cd cc cc cc cc cc f4 3f would result in the decimal fraction 1.3 instead of 1.29999999. I then would express the decimal fraction in the form a/b and simplify it. In the given example, this would be 13/10, with no further simplification.
Please note that the conversion into a decimal fraction may be a disadvantage. For example, the rational number 1/3 cannot be expressed exactly either as a decimal fraction or as a binary floating point number. So, the best solution would be to modify an algorithm such as Dragon4 to use arbitrary fractional denominators and not just 10. Alas, this almost certainly will require quite a lot of work and some CS background.
I was wondering about the differences between positive and negative zero in different numeric types.
I understand the IEEE-754 standard for floating point arithmetic and the bit representation in double precision, so the following didn't come as a surprise:
double posz = 0.0;
double negz = -0.0;
System.out.println(Long.toBinaryString(Double.doubleToLongBits(posz)));
System.out.println(Long.toBinaryString(Double.doubleToLongBits(negz)));
// output
>>> 0
>>> 1000000000000000000000000000000000000000000000000000000000000000
What did surprise me - and showed me that I'm clueless about the bit representation of the long type in Java - is that even if I shift right (unsigned, >>>), the binary representation of both positive and negative zero is the same:
long posz = 0L;
long negz = -0L;
for (int i = 63; i >= 0; i--) {
    System.out.print((posz >>> i) & 1);
}
System.out.println();
for (int i = 63; i >= 0; i--) {
    System.out.print((negz >>> i) & 1);
}
// output
>>> 0000000000000000000000000000000000000000000000000000000000000000
>>> 0000000000000000000000000000000000000000000000000000000000000000
So I am wondering what Java does at the bit-representation level when I write the following:
long posz = 0L;
long negz = -0L;
Does the compiler understand that they are both zero and disregard the sign (and so assign 0 to the sign bit), or is there other magic here?
or is there other magic here?
Yes. 2's complement.
2's complement is a bit magical. It accomplishes 2 major objectives. Before getting into that, let's first stew on the notion of negative zero for a moment.
Negative zero is kinda weird. Why does it exist at all?
Negative zero isn't actually a thing. Ask any mathematician "Hey, so, what's up with negative zero?" and they'll just look at you in befuddlement. It's not a thing. Mathematically, 0 and -0 are utterly identical. Not just 'nearly identical', but 100%, fully, in all possible ways, identical. We don't generally want our numbers to be capable of representing both 5.0 as well as 5.00 - as those two are entirely, 100%, identical. If you don't think that a value system ought to waste bits trying to differentiate between 5.0 and 5.00, then it's equally bizarro to want the ability to represent -0.0 and +0.0 as distinct entities.
So, wanting -0 in the first place is kinda weird. None of the integral primitives (long, int, short, byte, and I guess char, which is technically numeric too) can represent this number. Instead, long z = -0 boils down to:
Take the constant "0".
Apply the unary 'negate' operation to this number. Just like 2+5 makes the system calculate the binary operation of "addition" on elements 2 and 5, -x makes the system calculate the unary operation of "negation" on element x. Applying the negation operation to 0 produces 0. It's no different from writing, say, int x = 5 + 0; - that +0 part doesn't do anything, and the - in front of -0 doesn't do anything either. This is in contrast to -0.0, where it does do something (it gets you negative zero, the double value, instead of positive zero).
Store this result in z (so, just 0 then).
There is no way to tell whether that minus was there. Both result in ALL-ZERO bits, and hence there is no way for the computer to tell whether you initialized that variable with the expression -0 or with +0. Again, this is in contrast to double, where, as you noticed, there is a one-bit difference.
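A quick check (mine, not part of the original answer) makes the contrast visible: for long the two zeros are bit-for-bit identical, while for double the sign bit survives and even shows up in arithmetic:

long la = -0L, lb = 0L;
System.out.println(Long.toBinaryString(la).equals(Long.toBinaryString(lb))); // true - same bits

double da = -0.0, db = 0.0;
System.out.println(da == db);                                                    // true - == ignores the sign of zero
System.out.println(Double.doubleToLongBits(da) == Double.doubleToLongBits(db)); // false - different bits
System.out.println(1.0 / da);                                                    // -Infinity
System.out.println(1.0 / db);                                                    // Infinity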
So why does double have it then?
Let's stew a bit on the notion of doubles and IEEE-754 math.
A double takes 64 bits. From basic mathematical principles, then, a double is as incapable of representing more than 2^64 different values as you are of breaking the speed of light or making 1+1=3.
And yet, a double aims to represent all numbers. There are way more numbers between 0 and 1 than 2^64 options (in fact, there are infinitely many numbers between 0 and 1), and that's just 0 to 1.
So, how doubles actually work is different. A few less than 2^64 numbers are chosen from the entire number line. Let's call these the blessed numbers.
The blessed numbers are not equally distributed. The closer you are to 0, the more blessed numbers exist; in other words, the distance between two adjacent blessed numbers grows as you move away from 0. For example, if you start at, say, 1e100 (a 1 with a hundred zeroes) and want to find the next blessed number, it's quite a way off - the gap is in fact larger than 1.0! 1e100+1 is in fact 1e100 again, because the way double math works is that after every single mathematical operation you do, the end result is rounded to the nearest blessed number.
Let's try it!
double d = 1e100;
System.out.println(d);
System.out.println(d + 1);
// prints: 1.0E100
// 1.0E100
But that means.. double values don't actually represent a single number!!. What any given double represents is in fact this concept:
An unknown number whose value lies between [D - 𝛿, D + 𝛿], where D is the blessed number closest to the unknown number this value represents, and 𝛿 is half the distance between D and the nearest blessed number on either side.
Given that usually 𝛿 is incredibly small, this is 'good enough'. But this weirdness does explain why you really, really do not want any business at all with double if accuracy is important (such as with currencies. Don't store those in doubles, ever!)
Given that, what does -0.0 represent? Not actually just 0. It represents, specifically: an unknown number whose value lies between [-𝛿, 0], where 0 is real zero (and thus has no sign), and 𝛿 is Double.MIN_VALUE: the smallest non-zero positive number representable with a double.
That's why -0.0 and +0.0 both exist: They are in fact different concepts. Rarely relevant, but sometimes it is. In contrast to e.g. long where 5 just means 5 and not "between 4.5 and 5.5", because longs fundamentally don't recognize that fractional parts exist in the first place. Given that 5 just means 5, then 0 just means 0, and there is no such thing as negative zero in the first place.
Now we get to 2's complement
2's complement is a cool system. It has two neat properties:
It only has the one zero.
It does not matter whether you treat the bit sequence as signed-by-way-of-2s-complement or as unsigned for the purposes of these operations: addition, subtraction, increment, decrement, zero-check. The modifications you do to the bits to implement those operations are identical.
It DOES matter for greater than, less than, and divide.
2's complement works like this: To negate a number, take all bits and flip them (i.e. do a NOT operation on the bits). Then, add 1.
Let's try it!
int x = 5;
int y = -x;
for (int i = 31; i >= 0; i--) {
    System.out.print((x >>> i) & 1);
}
System.out.println();
for (int i = 31; i >= 0; i--) {
    System.out.print((y >>> i) & 1);
}
System.out.println();
// prints 00000000000000000000000000000101
// 11111111111111111111111111111011
As we can see, the 'flip all bits and add 1' algorithm was applied.
2s complement is, of course, reversible: If you do 'flip all bits and add 1' twice in a row you get the same number out.
Now let's try -0. 0 is 32 0 bits, then flip them all, then add 1:
00000000000000000000000000000000
11111111111111111111111111111111 // flip all
100000000000000000000000000000000 // add 1
00000000000000000000000000000000 // that 1 fell off
and because ints can only store 32 bits, that final '1' falls off of the end. And we're left with zero again.
Now let's go with bytes (a bit smaller) and try to add, say, 200 and 50 together.
11001000 // 200 in binary
00110010 // 50 in binary
-------- +
11111010 // 250 in binary.
Now let's instead say: oh wait, whoops, that was an error - actually these numbers are in 2s complement. That wasn't 200, no no: 11001000 is a bit sequence that, in 2s complement, actually means -56 (apply the 'flip all bits, add 1' scheme and you get 00111000, which is 56). So the operation was meant to represent '-56 + 50', which is -6. -6 in binary is (write out 6, flip the bits, add 1):
00000110
11111001
11111010
hey now, look at that, nothing changed! It's the same result! So, when the computer does x + y, where x and y are numbers, the computer does not care. Whether x is "an unsigned number" or "a signed with 2s complement number", the operation is identical.
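You can reproduce that little byte experiment in Java (my demo, not from the answer) by masking the sums down to 8 bits:

int unsignedView = (200 + 50) & 0xFF;    // "200 + 50" read as unsigned bytes
int signedView   = (-56 + 50) & 0xFF;    // the same bit patterns read as 2s complement
System.out.println(Integer.toBinaryString(unsignedView)); // 11111010
System.out.println(Integer.toBinaryString(signedView));   // 11111010 - identical bits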
That's why 2s complement is applied. It makes math MUCH faster. The CPU doesn't have to futz about with branching out to deal with sign bits.
In this sense it is more correct to say that in Java, int, long, char, byte and short are neither signed nor unsigned - they just are, at least for the purposes of +, -, ++, and --. No, the idea that int is signed is fundamentally a property of e.g. System.out.println(int) - that method chooses to render the bit sequence 11111111111111111111111111111111 as "-1" instead of as 4294967295.
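A tiny demo of that last point (mine): the bits are the same, only the rendering differs.

int allOnes = -1;                                       // 32 one-bits
System.out.println(allOnes);                            // -1
System.out.println(Integer.toUnsignedString(allOnes));  // 4294967295
System.out.println(Integer.toBinaryString(allOnes));    // 11111111111111111111111111111111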
long has no such thing as negative zero. Only float and double have a different representation of positive and negative zero.
I want to create a setter for a double variable num, but I would only like to update it if the input is a multiple of 0.5.
Here's what I have, but I'm worried about floating-point errors.
public void setNum(double num) {
    if (num % 0.5 == 0.0) {
        this.num = num;
    }
}
I assume that for some inputs that actually are a multiple of 0.5, it might return some 0.0000003 or 0.49999997, thus not 0.0.
What can I do to remedy this? Or is this not a problem in this case?
Unless you're dealing with really big floating point numbers, you won't lose accuracy for something that actually is an exact multiple of 0.5, because 0.5 is exactly expressible in binary. But for a number that is close enough to a multiple of 0.5, you might find that (e.g.) 10.500000000000000001 has been stored as 10.5.
So (num % 0.5 == 0.0) will definitely be true if num is a multiple of 0.5, but it might also be true if num is a slightly inaccurate representation of a number that is close to a multiple of 0.5.
Java's % operator never introduces any rounding error here, because the exact remainder is always small enough to be exactly representable.
The Java Language Specification, Java SE 11 Edition, 15.7.3 defines % for cases not involving NaNs, infinities, or zeros:
In the remaining cases, where neither an infinity, nor a zero, nor NaN is involved, the floating-point remainder r from the division of a dividend n by a divisor d is defined by the mathematical relation r = n - (d ⋅ q) where q is an integer that is negative only if n/d is negative and positive only if n/d is positive, and whose magnitude is as large as possible without exceeding the magnitude of the true mathematical quotient of n and d.
Thus the magnitude of r is not greater than the magnitude of n (because we subtract some d ⋅ q from n that is smaller than n in magnitude and that is zero or has the same sign as n) and is less than the magnitude of d (because otherwise q could be one larger in magnitude). This means r is at least as fine as n and q—its exponent is at least as small as n’s exponent and as q’s exponent. And that means no significant bits in the binary representation of n - (d ⋅ q) are below the position value of r’s lowest bit. Therefore, no significant bits were beyond the point where r had to be rounded. So nothing was lost in rounding. So r is an exact result.
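A quick sanity check of both answers (my snippet, not from either answer): exact multiples of 0.5 always pass the test, and a literal that is merely very close to a multiple is already stored as that multiple, so it passes too.

System.out.println(2.5 % 0.5 == 0.0);                   // true - exact multiple
System.out.println(1000000.5 % 0.5 == 0.0);             // true - still exact
System.out.println(10.500000000000000001 % 0.5 == 0.0); // true - the literal is stored as 10.5
System.out.println(0.3 % 0.5 == 0.0);                   // false - not a multiple of 0.5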
This is partly academic, as for my purposes I only need it rounded to two decimal places; but I am keen to know what is going on to produce two slightly different results.
This is the test that I wrote to narrow it to the simplest implementation:
@Test
public void shouldEqual() {
    double expected = 450.00d / (7d * 60); // 1.0714285714285714
    double actual = 450.00d / 7d / 60;     // 1.0714285714285716
    assertThat(actual).isEqualTo(expected);
}
But it fails with this output:
org.junit.ComparisonFailure:
Expected :1.0714285714285714
Actual :1.0714285714285716
Can anyone explain in detail what is going on under the hood to make the two results differ in their last decimal digit?
Some of the points I'm looking for in an answer are:
Where is the precision lost?
Which method is preferred, and why?
Which is actually correct? (In pure maths, both can't be right. Perhaps both are wrong?)
Is there a better solution or method for these arithmetic operations?
I see a bunch of questions that tell you how to work around this problem, but not one that really explains what's going on, other than "floating-point roundoff error is bad, m'kay?" So let me take a shot at it. Let me first point out that nothing in this answer is specific to Java. Roundoff error is a problem inherent to any fixed-precision representation of numbers, so you get the same issues in, say, C.
Roundoff error in a decimal data type
As a simplified example, imagine we have some sort of computer that natively uses an unsigned decimal data type, let's call it float6d. The length of the data type is 6 digits: 4 dedicated to the mantissa, and 2 dedicated to the exponent. For example, the number 3.142 can be expressed as
3.142 x 10^0
which would be stored in 6 digits as
503142
The first two digits are the exponent plus 50, and the last four are the mantissa. This data type can represent any number from 0.001 x 10^-50 to 9.999 x 10^+49.
Actually, that's not true. It can't store any number. What if you want to represent 3.141592? Or 3.1415926535? Or 3.14159265358979? Tough luck, the data type can't store more than four digits of precision, so the compiler has to round anything with more digits to fit into the constraints of the data type. If you write
float6d x = 3.141592;
float6d y = 3.1415926535;
float6d z = 3.14159265358979;
then the compiler converts each of these three values to the same internal representation, 3.142 x 10^0 (which, remember, is stored as 503142), so that x == y == z will hold true.
The point is that there is a whole range of real numbers which all map to the same underlying sequence of digits (or bits, in a real computer). Specifically, any x satisfying 3.1415 <= x <= 3.1425 (assuming half-even rounding) gets converted to the representation 503142 for storage in memory.
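You can emulate this toy float6d behaviour in Java (my sketch, not part of the original answer) with BigDecimal and a 4-significant-digit MathContext; all three literals above collapse to the same stored value:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

MathContext float6d = new MathContext(4, RoundingMode.HALF_EVEN); // 4 significant digits
System.out.println(new BigDecimal("3.141592").round(float6d));          // 3.142
System.out.println(new BigDecimal("3.1415926535").round(float6d));      // 3.142
System.out.println(new BigDecimal("3.14159265358979").round(float6d));  // 3.142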
This rounding happens every time your program stores a floating-point value in memory. The first time it happens is when you write a constant in your source code, as I did above with x, y, and z. It happens again whenever you do an arithmetic operation that increases the number of digits of precision beyond what the data type can represent. Either of these effects is called roundoff error. There are a few different ways this can happen:
Addition and subtraction: if one of the values you're adding has a different exponent from the other, you will wind up with extra digits of precision, and if there are enough of them, the least significant ones will need to be dropped. For example, 2.718 and 121.0 are both values that can be exactly represented in the float6d data type. But if you try to add them together:
1.210 x 10^2
+ 0.02718 x 10^2
-------------------
1.23718 x 10^2
which gets rounded off to 1.237 x 10^2, or 123.7, dropping two digits of precision.
Multiplication: the number of digits in the result is approximately the sum of the number of digits in the two operands. This will produce some amount of roundoff error, if your operands already have many significant digits. For example, 121 x 2.718 gives you
1.210 x 10^2
x 0.02718 x 10^2
-------------------
3.28878 x 10^2
which gets rounded off to 3.289 x 10^2, or 328.9, again dropping two digits of precision.
However, it's useful to keep in mind that, if your operands are "nice" numbers, without many significant digits, the floating-point format can probably represent the result exactly, so you don't have to deal with roundoff error. For example, 2.3 x 140 gives
1.40 x 10^2
x 0.023 x 10^2
-------------------
3.22 x 10^2
which has no roundoff problems.
Division: this is where things get messy. Division will pretty much always result in some amount of roundoff error unless the number you're dividing by happens to be a power of the base (in which case the division is just a digit shift, or bit shift in binary). As an example, take two very simple numbers, 3 and 7, divide them, and you get
3. x 10^0
/ 7. x 10^0
----------------------------
0.428571428571... x 10^0
The closest value to this number which can be represented as a float6d is 4.286 x 10^-1, or 0.4286, which distinctly differs from the exact result.
As we'll see in the next section, the error introduced by rounding grows with each operation you do. So if you're working with "nice" numbers, as in your example, it's generally best to do the division operations as late as possible because those are the operations most likely to introduce roundoff error into your program where none existed before.
Analysis of roundoff error
In general, if you can't assume your numbers are "nice", roundoff error can be either positive or negative, and it's very difficult to predict which direction it will go just based on the operation. It depends on the specific values involved. Look at this plot of the roundoff error for 2.718 z as a function of z (still using the float6d data type):
In practice, when you're working with values that use the full precision of your data type, it's often easier to treat roundoff error as a random error. Looking at the plot, you might be able to guess that the magnitude of the error depends on the order of magnitude of the result of the operation. In this particular case, when z is of the order of 10^-1, 2.718 z is also on the order of 10^-1, so it will be a number of the form 0.XXXX. The maximum roundoff error is then half of the last digit of precision; in this case, by "the last digit of precision" I mean 0.0001, so the roundoff error varies between -0.00005 and +0.00005. At the point where 2.718 z jumps up to the next order of magnitude, which is 1/2.718 = 0.3679, you can see that the roundoff error also jumps up by an order of magnitude.
You can use well-known techniques of error analysis to analyze how a random (or unpredictable) error of a certain magnitude affects your result. Specifically, for multiplication or division, the "average" relative error in your result can be approximated by adding the relative error in each of the operands in quadrature - that is, square them, add them, and take the square root. With our float6d data type, the relative error varies between 0.0005 (for a value like 0.101) and 0.00005 (for a value like 0.995).
Let's take 0.0001 as a rough average for the relative error in values x and y. The relative error in x * y or x / y is then given by
sqrt(0.0001^2 + 0.0001^2) = 0.0001414
which is a factor of sqrt(2) larger than the relative error in each of the individual values.
When it comes to combining operations, you can apply this formula multiple times, once for each floating-point operation. So for instance, for z / (x * y), the relative error in x * y is, on average, 0.0001414 (in this decimal example) and then the relative error in z / (x * y) is
sqrt(0.0001^2 + 0.0001414^2) = 0.0001732
Notice that the average relative error grows with each operation, specifically as the square root of the number of multiplications and divisions you do.
Similarly, for z / x * y, the average relative error in z / x is 0.0001414, and the relative error in z / x * y is
sqrt(0.0001414^2 + 0.0001^2) = 0.0001732
So, the same, in this case. This means that for arbitrary values, on average, the two expressions introduce approximately the same error. (In theory, that is. I've seen these operations behave very differently in practice, but that's another story.)
Gory details
You might be curious about the specific calculation you presented in the question, not just an average. For that analysis, let's switch to the real world of binary arithmetic. Floating-point numbers in most systems and languages are represented using IEEE standard 754. For 64-bit numbers, the format specifies 52 bits dedicated to the mantissa, 11 to the exponent, and one to the sign. In other words, when written in base 2, a floating point number is a value of the form
1.1100000000000000000000000000000000000000000000000000 x 2^00000000010
52 bits 11 bits
The leading 1 is not explicitly stored, and constitutes a 53rd bit. Also, you should note that the 11 bits stored to represent the exponent are actually the real exponent plus 1023. For example, this particular value is 7, which is 1.75 x 2^2. The mantissa is 1.75 in binary, or 1.11, and the exponent is 1023 + 2 = 1025 in binary, or 10000000001, so the content stored in memory is
0100000000011100000000000000000000000000000000000000000000000000
 ^          ^
 exponent   mantissa
but that doesn't really matter.
Your example also involves 450,
1.1100001000000000000000000000000000000000000000000000 x 2^00000001000
and 60,
1.1110000000000000000000000000000000000000000000000000 x 2^00000000101
You can play around with these values using this converter or any of many others on the internet.
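If you'd rather stay in Java than use a web converter, a few lines (mine, not from the answer) will print the same sign / exponent / mantissa split for any double:

long bits = Double.doubleToLongBits(450.0);
String sign = Long.toString(bits >>> 63);
String exp  = String.format("%11s", Long.toBinaryString((bits >>> 52) & 0x7ffL)).replace(' ', '0');
String mant = String.format("%52s", Long.toBinaryString(bits & 0xfffffffffffffL)).replace(' ', '0');
System.out.println(sign + " " + exp + " " + mant);
// 0 10000000111 1100001000000000000000000000000000000000000000000000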
When you compute the first expression, 450/(7*60), the processor first does the multiplication, obtaining 420, or
1.1010010000000000000000000000000000000000000000000000 x 2^00000001000
Then it divides 450 by 420. This produces 15/14, which is
1.0001001001001001001001001001001001001001001001001001001001001001001001...
in binary. Now, the Java language specification says that
Inexact results must be rounded to the representable value nearest to the infinitely precise result; if the two nearest representable values are equally near, the one with its least significant bit zero is chosen. This is the IEEE 754 standard's default rounding mode known as round to nearest.
and the nearest representable value to 15/14 in 64-bit IEEE 754 format is
1.0001001001001001001001001001001001001001001001001001 x 2^00000000000
which is approximately 1.0714285714285714 in decimal. (More precisely, this is the least precise decimal value that uniquely specifies this particular binary representation.)
On the other hand, if you compute 450 / 7 first, the result is 64.2857142857..., or in binary,
1000000.01001001001001001001001001001001001001001001001001001001001001001...
for which the nearest representable value is
1.0000000100100100100100100100100100100100100100100101 x 2^00000000110
which is 64.28571428571429180465... Note the change in the last digit of the binary mantissa (compared to the exact value) due to roundoff error. Dividing this by 60 gives you
1.000100100100100100100100100100100100100100100100100110011001100110011...
Look at the end: the pattern is different! It's 0011 that repeats, instead of 001 as in the other case. The closest representable value is
1.0001001001001001001001001001001001001001001001001010 x 2^00000000000
which differs from the other order of operations in the last two bits: they're 10 instead of 01. The decimal equivalent is 1.0714285714285716.
The specific rounding that causes this difference should be clear if you look at the exact binary values:
1.0001001001001001001001001001001001001001001001001001001001001001001001...
1.0001001001001001001001001001001001001001001001001001100110011001100110...
^ last bit of mantissa
It works out in this case that the former result, numerically 15/14, happens to be the most accurate representation of the exact value. This is an example of how leaving division until the end benefits you. But again, this rule only holds as long as the values you're working with don't use the full precision of the data type. Once you start working with inexact (rounded) values, you no longer protect yourself from further roundoff errors by doing the multiplications first.
It has to do with how the double type is implemented and the fact that the floating-point types don't make the same precision guarantees as other simpler numerical types. Although the following answer is more specifically about sums, it also answers your question by explaining how there is no guarantee of infinite precision in floating-point mathematical operations: Why does changing the sum order returns a different result?. Essentially you should never attempt to determine the equality of floating-point values without specifying an acceptable margin of error. Google's Guava library includes DoubleMath.fuzzyEquals(double, double, double) to determine the equality of two double values within a certain precision. If you wish to read up on the specifics of floating-point equality this site is quite useful; the same site also explains floating-point rounding errors. In summation: the expected and actual values of your calculation differ because of the rounding differing between the calculations due to the order of operations.
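If Guava is already on your classpath, the fuzzyEquals call mentioned above looks like this (my illustration, reusing the values from the question; the 1e-9 tolerance is an arbitrary choice):

import com.google.common.math.DoubleMath;

double expected = 450.00d / (7d * 60);
double actual   = 450.00d / 7d / 60;
System.out.println(expected == actual);                              // false
System.out.println(DoubleMath.fuzzyEquals(expected, actual, 1e-9));  // true within the tolerance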
Let's simplify things a bit. What you want to know is why 450d / 420 and 450d / 7 / 60 (specifically) give different results.
Let's see how division is performed in IEEE double-precision floating point format. Without going deep into implementation details, it's basically XOR-ing the sign bits, subtracting the exponent of the divisor from the exponent of the dividend, dividing the mantissas, and normalizing the result.
First, we should represent our numbers in the proper format for double:
450 is 0 10000000111 1100001000000000000000000000000000000000000000000000
420 is 0 10000000111 1010010000000000000000000000000000000000000000000000
7 is 0 10000000001 1100000000000000000000000000000000000000000000000000
60 is 0 10000000100 1110000000000000000000000000000000000000000000000000
Let's first divide 450 by 420
First comes the sign bit, it's 0 (0 xor 0 == 0).
Then comes the exponent. 10000000111b - 10000000111b + 1023 == 10000000111b - 10000000111b + 01111111111b == 01111111111b
Looking good, now the mantissa:
1.1100001000000000000000000000000000000000000000000000 / 1.1010010000000000000000000000000000000000000000000000 == 1.1100001 / 1.101001. There are a couple of different ways to do this, I'll talk a bit about them later. The result is 1.0(001) (you can verify it here).
Now we should normalize the result. Let's see the guard, round and sticky bit values:
0001001001001001001001001001001001001001001001001001 0 0 1
Guard bit's 0, we don't do any rounding. The result is, in binary:
0 01111111111 0001001001001001001001001001001001001001001001001001
Which gets represented as 1.0714285714285714 in decimal.
Now let's divide 450 by 7 by analogy.
Sign bit = 0
Exponent = 10000000111b - 10000000001b + 01111111111b == -01111111001b + 01111111111b + 01111111111b == 10000000101b
Mantissa = 1.1100001 / 1.11 == 1.00000(001)
Rounding:
0000000100100100100100100100100100100100100100100100 1 0 1
Guard bit is set, and so is the sticky bit (the exact quotient keeps repeating 100100..., so the remainder is non-zero). That puts the true result above the halfway point between the two nearest representable values, and with round-to-nearest (the default mode for IEEE) we round up, adding 1 to the last bit. This gives us the rounded mantissa:
0000000100100100100100100100100100100100100100100101
The result is
0 10000000101 0000000100100100100100100100100100100100100100100101
Which gets represented as 64.28571428571429 in decimal.
Now we will have to divide it by 60... But you already know that we have lost some precision. Dividing 450 by 420 didn't require rounding at all, but here, we already had to round the result at least once. But, for completeness's sake, let's finish the job:
Dividing 64.28571428571429 by 60
Sign bit = 0
Exponent = 10000000101b - 10000000100b + 01111111111b == 10000000000b
Mantissa = 1.0000000100100100100100100100100100100100100100100101 / 1.111 == 0.10001001001001001001001001001001001001001001001001001100110011
Round and shift:
0.1000100100100100100100100100100100100100100100100100 1 1 0 0 1 1 ...
1.0001001001001001001001001001001001001001001001001001 1 0 1
Again the guard bit is set and so is the sticky bit (the quotient keeps repeating 1001 1001...), so the exact result lies above the halfway point and, rounding just as in the previous case, we round up and get the mantissa: 0001001001001001001001001001001001001001001001001010.
As we shifted the mantissa left by 1, we subtract 1 from the exponent, getting
Exponent = 01111111111b
So, the result is:
0 01111111111 0001001001001001001001001001001001001001001001001010
Which gets represented as 1.0714285714285716 in decimal.
Tl;dr:
The first division gave us:
0 01111111111 0001001001001001001001001001001001001001001001001001
And the last division gave us:
0 01111111111 0001001001001001001001001001001001001001001001001010
The difference is in the last 2 bits only, but we could have lost more - after all, to get the second result, we had to round two times instead of none!
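You can confirm the one-bit-pattern difference yourself (my check, not part of the answer):

double viaProduct = 450.00d / (7d * 60);
double viaTwoDivisions = 450.00d / 7d / 60;
System.out.println(Long.toBinaryString(Double.doubleToLongBits(viaProduct)));
System.out.println(Long.toBinaryString(Double.doubleToLongBits(viaTwoDivisions)));
// The two bit strings are identical except for the last two bits: ...01 vs ...10.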
Now, about mantissa division. Floating-point division is implemented in two major ways.
One way is long division, as mandated by the IEEE standard (here are some good examples; it's basically the regular long division, but in binary instead of decimal), and it's pretty slow. That is what your computer did.
There is also a faster but less accurate option: multiplication by the inverse. First, a reciprocal of the divisor is found, and then a multiplication is performed.
That's because double division often leads to a loss of precision. That loss can vary depending on the order of the divisions.
When you divide by 7d first, you have already lost some precision relative to the exact result. Then you divide an already-erroneous result by 60.
When you divide by 7d * 60, you only have to use division once, thus losing precision only once.
Note that double multiplication can sometimes fail too, but that's much less common.
It's certainly the order of the operations, combined with the fact that doubles aren't exact:
450.00d / (7d * 60) --> a = 7d * 60 --> result = 450.00d / a
vs
450.00d / 7d / 60 --> a = 450.00d /7d --> result = a / 60
The code review tool I use complains with the below when I start comparing two float values using equality operator. What is the correct way and how to do it? Is there a helper function (commons-*) out there which I can reuse?
Description
Cannot compare floating-point values using the equals (==) operator
Explanation
Comparing floating-point values by using either the equality (==) or inequality (!=) operators is not always accurate because of rounding errors.
Recommendation
Compare the two float values to see if they are close in value.
float a;
float b;
if(a==b)
{
..
}
IBM has a recommendation for comparing two floats, using division rather than subtraction - this makes it easier to select an epsilon that works for all ranges of input.
if (abs(a/b - 1) < epsilon)
As for the value of epsilon, I would use 5.96e-08 as given in this Wikipedia table, or perhaps 2x that value.
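In Java that recommendation might look roughly like this (my sketch, not an established helper); note that the division form needs extra care when b is zero or when a and b have opposite signs, so guard those cases separately:

static boolean relativelyEqual(float a, float b, float epsilon) {
    if (a == b) return true;                 // covers the exact case, including both zeros
    return Math.abs(a / b - 1.0f) < epsilon; // relative difference, per the recommendation above
}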
It wants you to compare them to within the amount of accuracy you need. For example if you require that the first 4 decimal digits of your floats are equal, then you would use:
if(-0.00001 <= a-b && a-b <= 0.00001)
{
..
}
Or:
if(Math.abs(a-b) < 0.00001){ ... }
An equivalent form adds the desired precision to the difference of the two numbers and compares that against twice the desired precision (i.e. it checks that a - b + 0.00001 lies between 0 and 0.00002).
Whatever you think is more readable. I prefer the first one myself as it clearly shows the precision you are allowing on both sides.
a = 5.43421 and b = 5.434205 will pass the comparison
private static final float EPSILON = <very small positive number>;
if (Math.abs(a-b) < EPSILON)
...
As floating point offers you variable but uncontrollable precision (that is, you can't set the precision other than when you choose between using double and float), you have to pick your own fixed precision for comparisons.
Note that this isn't a true equivalence operator any more, as it isn't transitive. You can easily get a equals b and b equals c but a not equals c.
Edit: also note that if a is negative and b is a very large positive number, the subtraction can overflow and the result will be negative infinity, but the test will still work, as the absolute value of negative infinity is positive infinity, which will be bigger than EPSILON.
Use commons-lang
org.apache.commons.lang.math.NumberUtils#compare
Also commons-math (in your situation more appropriate solution):
http://commons.apache.org/math/apidocs/org/apache/commons/math/util/MathUtils.html#equals(double, double)
The float type is an approximate value - there's an exponent portion and a value portion with finite accuracy.
For example:
System.out.println((0.6 / 0.2) == 3); // false
The risk is that a tiny rounding error can make a comparison false, when mathematically it should be true.
The workaround is to compare floats allowing a minor difference to still be "equal":
static float e = 0.00000000000001f;
if (Math.abs(a - b) < e)
Apache commons-math to the rescue: MathUtils.equals(double x, double y, int maxUlps)
Returns true if both arguments are equal or within the range of allowed error (inclusive). Two float numbers are considered equal if there are (maxUlps - 1) (or fewer) floating point numbers between them, i.e. two adjacent floating point numbers are considered equal.
Here's the actual code from the Commons Math implementation:
private static final int SGN_MASK_FLOAT = 0x80000000;

public static boolean equals(float x, float y, int maxUlps) {
    int xInt = Float.floatToIntBits(x);
    int yInt = Float.floatToIntBits(y);
    if (xInt < 0)
        xInt = SGN_MASK_FLOAT - xInt;
    if (yInt < 0)
        yInt = SGN_MASK_FLOAT - yInt;
    final boolean isEqual = Math.abs(xInt - yInt) <= maxUlps;
    return isEqual && !Float.isNaN(x) && !Float.isNaN(y);
}
This gives you the number of floats that can be represented between your two values at the current scale, which should work better than an absolute epsilon.
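Usage of the method above is straightforward - for example (my illustration), two adjacent floats count as equal with maxUlps = 1:

float a = 1.0f;
float b = Math.nextUp(a);            // the very next representable float above 1.0f
System.out.println(a == b);          // false
System.out.println(equals(a, b, 1)); // true - they are exactly one ulp apart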
I took a stab at this based on the way java implements == for doubles. It converts to the IEEE 754 long integer form first and then does a bitwise compare. Double also provides the static doubleToLongBits() to get the integer form. Using bit fiddling you can 'round' the mantissa of the double by adding 1/2 (one bit) and truncating.
In keeping with supercat's observation, the function first tries a simple == comparison and only rounds if that fails. Here is what I came up with, with some (hopefully) helpful comments.
I did some limited testing, but can't say I've tried all edge cases. Also, I did not test performance. It shouldn't be too bad.
I just realized that this is essentially the same solution as the one offered by Dmitri. Perhaps a bit more concise.
static public boolean nearlyEqual(double lhs, double rhs) {
    // This rounds to the 6th mantissa bit from the end. So the numbers must have the same sign
    // and exponent, and the mantissas (as integers) need to be within 32 of each other
    // (the bottom 5 of the 52 bits can differ).
    // To allow 'n' bits of difference, create an additive value of 1L<<(n-1) and a mask of
    // 0xffffffffffffffffL<<n. E.g. for the 5 bits used here: additive = 0x10L = 0x1L << 4
    // and mask = 0xffffffffffffffe0L = 0xffffffffffffffffL << 5.
    //int bitsToIgnore = 5;
    //long additive = 1L << (bitsToIgnore - 1);
    //long mask = ~0x0L << bitsToIgnore;
    //return ((Double.doubleToLongBits(lhs)+additive) & mask) == ((Double.doubleToLongBits(rhs)+additive) & mask);
    return lhs == rhs
            ? true
            : ((Double.doubleToLongBits(lhs) + 0x10L) & 0xffffffffffffffe0L)
              == ((Double.doubleToLongBits(rhs) + 0x10L) & 0xffffffffffffffe0L);
}
The following modification handles the change in sign case where the value is on either side of 0.
return lhs==rhs?true:((Double.doubleToLongBits(lhs)+0x10L) & 0x7fffffffffffffe0L) == ((Double.doubleToLongBits(rhs)+0x10L) & 0x7fffffffffffffe0L);
There are many cases where one wants to regard two floating-point numbers as equal only if they are absolutely equivalent, and a "delta" comparison would be wrong. For example, if f is a pure function, and one knows that q=f(x) and y==x, then one should know that q=f(y) without having to compute it. Unfortunately, == has two defects in this regard.
If one value is positive zero and the other is negative zero, they will compare as equal even though they are not necessarily equivalent. For example if f(d)=1/d, a=0 and b=-1*a, then a==b but f(a)!=f(b).
If either value is a NaN, the comparison will always yield false even if one value was assigned directly from the other.
Although there are many cases where checking floating-point numbers for exact equivalence is right and proper, I'm not sure about any cases where the actual behavior of == should be considered preferable. Arguably, all tests for equivalence should be done via a function that actually tests equivalence (e.g. by comparing bitwise forms).
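One way to get that "true equivalence" behaviour (my sketch; it matches what Double.equals is documented to do) is to compare the bit patterns, which makes NaN equal to itself and keeps +0.0 distinct from -0.0:

static boolean equivalent(double a, double b) {
    // doubleToLongBits collapses all NaNs to one canonical pattern, so NaN is equivalent to NaN.
    return Double.doubleToLongBits(a) == Double.doubleToLongBits(b);
}

// equivalent(Double.NaN, Double.NaN) -> true,  while Double.NaN == Double.NaN is false
// equivalent(0.0, -0.0)              -> false, while 0.0 == -0.0 is true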
First, a few things to note:
The "Standard" way to do this is to choose an constant epsilon, but constant epsilons do not work correctly for all number ranges.
If you want to use a constant epsilon sqrt(EPSILON) the square root of the epsilon from float.h is a generally considered a good value. (this comes from an infamous "orange book" who's name escapes me at the moment).
Floating point division is going to be slow, so you probably want to avoid it for comparisons even if it behaves like picking an epsilon that is custom made for the numbers' magnitudes.
What do you really want to do? something like this:
Compare how many representable floating point numbers the values differ by.
This code comes from this really great article by Bruce Dawson. The article has since been updated here. The main difference is that the old article breaks the strict-aliasing rule (casting a float pointer to an int pointer and dereferencing it). While C/C++ purists will quickly point out the flaw, in practice this works, and I consider the code more readable. The new article uses unions, so C/C++ gets to keep its dignity. For brevity I give the code that breaks strict aliasing below.
// Usable AlmostEqual function
bool AlmostEqual2sComplement(float A, float B, int maxUlps)
{
    // Make sure maxUlps is non-negative and small enough that the
    // default NAN won't compare as equal to anything.
    assert(maxUlps > 0 && maxUlps < 4 * 1024 * 1024);
    int aInt = *(int*)&A;
    // Make aInt lexicographically ordered as a twos-complement int
    if (aInt < 0)
        aInt = 0x80000000 - aInt;
    // Make bInt lexicographically ordered as a twos-complement int
    int bInt = *(int*)&B;
    if (bInt < 0)
        bInt = 0x80000000 - bInt;
    int intDiff = abs(aInt - bInt);
    if (intDiff <= maxUlps)
        return true;
    return false;
}
The basic idea in the code above is to notice that, given the IEEE 754 floating point format {sign bit, biased exponent, mantissa}, the numbers are lexicographically ordered if interpreted as sign-magnitude ints: the sign bit stays the sign bit, and the exponent, because it comes first, always completely outranks the mantissa in determining the magnitude of the number interpreted as an int.
So, we interpret the bit representation of the floating point number as a signed-magnitude int. We then convert the signed-magnitude ints to a two's complement ints by subtracting them from 0x80000000 if the number is negative. Then we just compare the two values as we would any signed two's complement ints, and seeing how many values they differ by. If this amount is less than the threshold you choose for how many representable floats the values may differ by and still be considered equal, then you say that they are "equal." Note that this method correctly lets "equal" numbers differ by larger values for larger magnitude floats, and by smaller values for smaller magnitude floats.
I'm creating an RPN calculator for a school project and having trouble with the modulus operator. Since we're using the double data type, modulus won't work on floating-point numbers. For example, 0.5 % 0.3 should return 0.2, but I'm getting a division by zero exception.
The instruction says to use fmod(). I've looked everywhere for fmod(), including javadoc, but I can't find it. I'm starting to think it's a method I'm going to have to create?
Edit: Hmmm, strange. I just plugged in those numbers again and it seems to be working fine… but just in case. Do I need to watch out for using the mod operator in Java when using floating types? I know something like this can't be done in C++ (I think).
You probably had a typo when you first ran it.
Evaluating 0.5 % 0.3 returns 0.2 (a double), as expected.
Mindprod has a good overview of how modulus works in Java.
Unlike C, Java allows using the % for both integer and floating point and (unlike C89 and C++) it is well-defined for all inputs (including negatives):
From JLS §15.17.3:
The result of a floating-point remainder operation is determined by the rules of IEEE arithmetic:
If either operand is NaN, the result is NaN.
If the result is not NaN, the sign of the result equals the sign of the dividend.
If the dividend is an infinity, or the divisor is a zero, or both, the result is NaN.
If the dividend is finite and the divisor is an infinity, the result equals the dividend.
If the dividend is a zero and the divisor is finite, the result equals the dividend.
In the remaining cases, where neither an infinity, nor a zero, nor NaN is involved, the floating-point remainder r from the division of a dividend n by a divisor d is defined by the mathematical relation r = n - (d · q), where q is an integer that is negative only if n/d is negative and positive only if n/d is positive, and whose magnitude is as large as possible without exceeding the magnitude of the true mathematical quotient of n and d.
So for your example, 0.5/0.3 = 1.66...; q has the same sign (positive) as 0.5 (the dividend), and its magnitude is 1 (the integer with largest magnitude not exceeding the magnitude of 1.66...), so r = 0.5 - (0.3 * 1) = 0.2.
I thought the regular modulus operator would work for this in Java, but it can't be hard to code. Just divide the numerator by the denominator, and take the integer portion of the result. Multiply that by the denominator, and subtract the result from the numerator.
x = n/d
xint = Integer portion of x
result = n - d*xint
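A direct Java translation of that recipe might look like this (my sketch; in practice the built-in % operator already does the job for doubles, and the long cast breaks down for quotients larger than Long.MAX_VALUE):

static double fmod(double n, double d) {
    double x = n / d;
    double xint = (double) (long) x;  // integer portion, truncated toward zero
    return n - d * xint;
}

// fmod(0.5, 0.3) -> 0.2, the same as 0.5 % 0.3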
fmod is the standard C function for handling floating-point modulus; I imagine your source was saying that Java handles floating-point modulus the same as C's fmod function. In Java you can use the % operator on doubles the same as on integers:
int x = 5 % 3; // x = 2
double y = .5 % .3; // y = .2