Dealing with Roundoff Error when writing RREF - Java

I'm trying to write a program that computes the reduced row echelon form of a given matrix; basically, a program that solves systems of equations. However, because division sometimes produces repeating decimals (such as 2/3 = 0.66666...) and Java rounds off at a certain digit, a pivot that should be 0 (meaning no pivot) sometimes comes out as something like 0.0000001, and that breaks my whole program.
My first question: if I were to use some sort of if statement, what is the best way to express "if this number is less than 0.00001 away from an integer, round it to that closest integer"?
My second question: does anyone have ideas for handling this situation in a better way than scattering rounding if statements all over the place?
Thank you very much.

You say that you are writing a program that solves systems of equations. This is quite a complicated problem. If you only want to use such a program, you are better off using a library written by somebody else. I will assume that you really want to write the program yourself, for fun and/or education.
You identified the main problem: using floating point numbers leads to rounding and thus to inexact results. There are two solutions for this.
The first solution is not to use floating point numbers. Use only integers and reduce the matrix to row echelon form (not reduced); this can be done without divisions. Since all computations with integers are exact, a pivot that should be 0 will be exactly 0 (actually, there may be a problem with overflow). Of course, this will only work if the matrix you start with consists of integers. You can generalize this approach by working with fractions instead of integers.
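A minimal sketch of such a division-free reduction on an integer matrix might look like this (long arithmetic, so very large inputs can still overflow, as noted):
public class IntegerRowEchelon {
    // Reduce an integer matrix to row echelon form without any division.
    // Each row below the pivot is replaced by pivot*row - entry*pivotRow,
    // which keeps every entry an integer.
    static void rowEchelon(long[][] a) {
        int rows = a.length, cols = a[0].length;
        int pivotRow = 0;
        for (int col = 0; col < cols && pivotRow < rows; col++) {
            int p = pivotRow;
            while (p < rows && a[p][col] == 0) p++;   // find a nonzero entry in this column
            if (p == rows) continue;                  // no pivot in this column
            long[] tmp = a[p]; a[p] = a[pivotRow]; a[pivotRow] = tmp;   // swap rows
            for (int r = pivotRow + 1; r < rows; r++) {
                long factor = a[r][col];
                for (int c = 0; c < cols; c++) {
                    // cross-multiplication: the entry under the pivot becomes exactly 0
                    a[r][c] = a[pivotRow][col] * a[r][c] - factor * a[pivotRow][c];
                }
            }
            pivotRow++;
        }
    }
}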
The second solution is to use floating point numbers and be very careful. This is a topic of a whole branch of mathematics / computer science called numerical analysis. It is too complicated to explain in an answer here, so you have to get a book on numerical analysis. In simple terms, what you want to do is to say that if Math.abs(pivot) < some small value, then you assume that the pivot should be zero, but that it is something like .0000000001 because of rounding errors, so you just act as if the pivot is zero. The problem is finding out what "some small value" is.
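For example, a minimal sketch of such a check (the threshold 1e-9 here is an assumption; you would tune it, possibly scaling it by the magnitude of your matrix entries):
public class PivotTolerance {
    static final double EPS = 1e-9;   // assumed threshold, not a universal constant

    // Treat values within EPS of zero as zero when looking for a pivot.
    static boolean isEffectivelyZero(double value) {
        return Math.abs(value) < EPS;
    }

    // The check from the question: snap a value to the nearest integer
    // if it is within EPS of that integer, otherwise leave it unchanged.
    static double snapToInteger(double value) {
        double nearest = Math.rint(value);
        return Math.abs(value - nearest) < EPS ? nearest : value;
    }
}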

Related

How to test the correct implementation of special polynomials?

I need to implement the calculation of some special polynomials in Java (the language is not really important). These are calculated as a weighted sum of a number of base polynomials with fixed coefficients.
Each base polynomial has 2 to 10 coefficients and there are typically 10 base polynomials considered, giving a total of, say 20-50 coefficients.
Basically the calculation is no big deal, but I am worried about typos. I only have a printed document as a template, so I would like to implement unit tests for the calculations. The issue is: how do I get reliable testing data? I do have another piece of software that is supposed to calculate these functions, but the process is complicated and also error prone - I would have to scale the input values, go through a number of menu selections in the software to produce the output, and then paste it into my testing code.
I guess there is no way around using the external software to generate some testing data, but maybe you have recommendations for making this type of testing procedure safer or for minimizing the required number of test cases.
I am also worried about providing suitable input values: Depending on the value of the independent variable, certain terms will only have a tiny contribution to the output, while for other values they might dominate.
The types of errors I expect (and need to avoid) are:
Typos in coefficients
Coefficients applied to wrong power (i.e. a_7*x^6 instead of a_7*x^7 - just for demonstration, I am not calculating this way but am using Horner's scheme)
Off-by-one errors (i.e. missing zero order or highest order term)
Since you have a polynomial of degree 10, testing at 11 distinct points gives certainty: two distinct polynomials of degree at most 10 can agree at no more than 10 points.
However, even a test at a single well-randomized point, say x = 1.23004 (away from small fractions like 2/3 or 4/5), will with high probability reveal an error, because it is unlikely that the difference between the wrong and the true polynomial has a root at exactly that place.
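A minimal sketch of such a spot check (the coefficients below are placeholders, not your actual ones): it evaluates the same coefficient set once with Horner's scheme and once with a naive power sum, and compares the two at several randomized points. This catches mistakes in the evaluation code (wrong power, off-by-one); typos in the coefficients themselves still need reference values from the external software.
import java.util.Random;

public class PolynomialSpotCheck {

    // Horner's scheme: coeffs[i] is the coefficient of x^i
    static double horner(double[] coeffs, double x) {
        double result = 0.0;
        for (int i = coeffs.length - 1; i >= 0; i--) {
            result = result * x + coeffs[i];
        }
        return result;
    }

    // Independent, naive evaluation used only as a cross-check
    static double naive(double[] coeffs, double x) {
        double result = 0.0;
        for (int i = 0; i < coeffs.length; i++) {
            result += coeffs[i] * Math.pow(x, i);
        }
        return result;
    }

    public static void main(String[] args) {
        double[] coeffs = {1.5, -2.0, 0.25, 3.0};   // placeholder coefficients
        Random rnd = new Random();
        for (int k = 0; k < 11; k++) {              // 11 points would pin down degree <= 10
            double x = 0.5 + rnd.nextDouble();      // "well-randomized" points around 1
            double diff = Math.abs(horner(coeffs, x) - naive(coeffs, x));
            if (diff > 1e-9 * Math.abs(naive(coeffs, x))) {
                throw new AssertionError("Mismatch at x=" + x + ", diff=" + diff);
            }
        }
        System.out.println("Spot-check passed");
    }
}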

How to deal with float rounding errors

I'm trying to implement basic 2D vector math functions for a game, in Java. They will be intensively used by the game, so I want them to be as fast as possible.
I started with integers as the vector coordinates because the game needs nothing more precise for the coordinates, but for all calculations I still would have to change to double vectors to get a clear result (e.g. the intersection between two lines).
Using doubles, there are rounding errors. I could simply ignore them and use something like
d1 - d2 <= 0.0001
to compare the values, but I assume that with further calculations the error could add up until it becomes significant. So I thought I could round after every possibly imprecise operation, but that turned out to produce much worse results, presumably because the program also rounds inexact values (e.g. 0.33333333... -> 0.3333300...).
Using BigDecimal would be far too slow.
What is the best way to solve this problem?
Inaccurate Method
When you are using numbers that require precise calculations, you need to be sure that you aren't doing something like the following (and this is what it seems like you are currently doing):
This will result in the accumulation of rounding errors as the process continues, giving you extremely inaccurate data in the long term. In the above example, you are actually rounding off the starting value four times; each time it becomes more and more inaccurate!
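Something along these lines, sketched with made-up numbers (only the pattern matters):
public class AccumulatedRounding {
    public static void main(String[] args) {
        double original = 2.0 / 3.0;
        // every intermediate result is forced back into an int,
        // so the value gets rounded four times in a row
        int step1 = (int) Math.round(original * 10);
        int step2 = (int) Math.round(step1 * 1.1);
        int step3 = (int) Math.round(step2 * 1.1);
        int step4 = (int) Math.round(step3 * 1.1);
        System.out.println(step4);                            // 10
        System.out.println(original * 10 * 1.1 * 1.1 * 1.1);  // ~8.87, the value it should be near
    }
}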
Accurate Method
A better and more accurate way of obtaining numbers is to do this:
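For instance, a sketch of the same made-up calculation, keeping full precision throughout and converting only once at the end:
public class SingleConversion {
    public static void main(String[] args) {
        double original = 2.0 / 3.0;
        // do all the math on the full-precision value ...
        double result = original * 10 * 1.1 * 1.1 * 1.1;
        // ... and convert exactly once, at the point where an int is actually needed
        int asInt = (int) Math.round(result);
        System.out.println(result);   // ~8.87
        System.out.println(asInt);    // 9
    }
}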
This will help you avoid the accumulation of rounding errors, because each result is based on only one conversion and the error from that conversion is not compounded into the next calculation.
The best method of attack is to start at the highest precision that is necessary, then convert on an as-needed basis, but leave the original intact. I would suggest following the second approach shown above.
You wrote: "I started with integers as the vector coordinates because the game needs nothing more precise for the coordinates, but for all calculations I still would have to change to double vectors to get a clear result (e.g. the intersection between two lines)."
It's important to note that you should not attempt to perform any type of rounding of your values if there is no noticeable impact on your end result; you will simply be doing more work for little to no gain, and may even suffer a performance decrease if done often enough.
This is a minor addition to the prior answer. When converting the float to an integer, it is important to round rather than just casting. In the following program, d is the largest double that is strictly less than 1.0. It could easily arise as the result of a calculation that would have result 1.0 in infinitely precise real number arithmetic.
The simple cast gets result 0. Rounding first gets result 1.
public class Test {
    public static void main(String[] args) {
        double d = Math.nextDown(1.0);
        System.out.println(d);
        System.out.println((int) d);
        System.out.println((int) Math.round(d));
    }
}
Output:
0.9999999999999999
0
1

Unit testing a discrete Fourier transformation

Several months ago I had to implement a two-dimensional Fourier transformation in Java. While the results seemed sane for a few manual checks, I wondered what a good test-driven approach would look like.
Basically, what I did was look at reasonable values of the DC components and check whether the AC components roughly matched the Mathematica output.
My question is: Which unit tests would you implement for a discrete Fourier transformation? How would you validate results returned by your calculation?
As with other unit tests, you should consider small fixed input test vectors for which the results can easily be computed manually and compared against. For more involved input test vectors, a direct DFT implementation should be easy enough to write and use to cross-validate results (possibly on top of your own manual computations).
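For example, a direct O(N^2) DFT like the following sketch (separate real and imaginary arrays, forward transform, no normalization assumed) can serve as the reference; compare your FFT output against it element by element with a tolerance on the order of 1e-9 times N.
public class DirectDft {
    // Direct O(N^2) forward DFT of a complex vector, used only as a test reference.
    // re and im hold the real and imaginary parts of the input;
    // the result is returned as {realPart, imaginaryPart}.
    static double[][] dft(double[] re, double[] im) {
        int n = re.length;
        double[] reOut = new double[n];
        double[] imOut = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double angle = -2.0 * Math.PI * k * t / n;
                reOut[k] += re[t] * Math.cos(angle) - im[t] * Math.sin(angle);
                imOut[k] += re[t] * Math.sin(angle) + im[t] * Math.cos(angle);
            }
        }
        return new double[][] { reOut, imOut };
    }
}
A two-dimensional transform can then be cross-checked by applying this reference row by row and column by column on a small input.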
As far as specific test vectors for one-dimensional FFT, you can start with the following from dsprelated, which they selected to exercise common flaws:
Single FFT tests - N inputs and N outputs
Input random data
Inputs are all zeros
Inputs are all ones (or some other nonzero value)
Inputs alternate between +1 and -1.
Input is e^(8*j*2*pi*i/N) for i = 0,1,2, ...,N-1. (j = sqrt(-1))
Input is cos(8*2*pi*i/N) for i = 0,1,2, ...,N-1.
Input is e^((43/7)*j*2*pi*i/N) for i = 0,1,2, ...,N-1. (j = sqrt(-1))
Input is cos((43/7)*2*pi*i/N) for i = 0,1,2, ...,N-1.
Multi FFT tests - run continuous sets of random data
Data sets start at times 0, N, 2N, 3N, 4N, ....
Data sets start at times 0, N+1, 2N+2, 3N+3, 4N+4, ....
For two-dimensional FFT, you can then build on the above. The first three cases are still directly applicable (random data, all zeros, all ones). Others require a bit more work but are still manageable for small input sizes.
Finally, Google searches should yield some reference images (before and after the transform) for a few common cases such as black & white squares, rectangles, and circles, which can be used as references (see for example http://www.fmwconcepts.com/misc_tests/FFT_tests/).
99.9% of the numerical and coding issues you are likely to find will be caught by testing with random complex vectors and comparing against a direct DFT to a tolerance on the order of floating point precision.
Zero, constant, or sinusoidal vectors may help you understand a failure by letting your eye catch issues like initialization, clipping, folding, or scaling, but they will not typically find anything that the random case does not.
My kissfft library does a few extra tests related to fixed point issues -- not an issue if you are working in floating point.

Double as close to 0 as possible?

I need a value as close to 0 as possible. I need to be able to divide by this value, but it should be effectively 0.
Does Java provide an easy way of generating a double with only the least significant bit set? Or do I have to calculate it myself?
//EDIT: A little background information, because someone requested it. I know that my solution is not a particularly clean one, but here you are:
I am writing a program for homework. It calculates the resistance of a circuit consisting of multiple resistors in parallel and serial circuits.
It is a 2nd year programming class. Our teacher still designs classes for us, we need to implement them according to his design.
Parallel circuits involve calculating 1/resistance, so my program prohibits the creation of resistors with 0 Ohm. Physics tells you that this is impossible anyway (every metal has at least a tiny resistance).
However, the example circuit we should use to test the program contains a 0 Ohm resistor. It is placed in a serial circuit, but resistors do not know where they are (the teacher designed it that way), so I cannot change my program to allow resistors with 0 Ohm resistance in serial circuits only.
Two solutions:
Allow 0 Ohm resistors in any case - if division by 0 occurs, well, bad luck
Set the resistor not to 0, but to a resistance one can neglect.
Neither option is very good, but I had to decide, and my more or less random choice (the second) is what threw up this problem. I could not let it go without solving it, so switching to the first one was not an option anymore ;-)
Use Double.MIN_VALUE:
A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
If you would like to divide by "zero" you can actually just use Double.POSITIVE_INFINITY as the result.
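A small sketch of how these behave (note that dividing by Double.MIN_VALUE already overflows to infinity, and that floating-point division by 0.0 does not throw in Java):
public class MinValueDemo {
    public static void main(String[] args) {
        System.out.println(Double.MIN_VALUE);        // 4.9E-324
        System.out.println(1.0 / Double.MIN_VALUE);  // Infinity: the quotient exceeds Double.MAX_VALUE
        System.out.println(1.0 / 0.0);               // Infinity as well, no exception thrown
    }
}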

Java resources to do number crunching?

What are the best resources for learning 'number crunching' in Java? I am referring to things like correct methods of decimal number processing, best practices, APIs, notable idioms for performance, and common pitfalls (and their solutions) when coding for number processing in Java.
This question seems a bit open ended and open to interpretation. As such, I will just give two short things.
1) Decimal precision - never assume that two floating point (or double) numbers are equal, even if you went through the exact same steps to calculate them both. Due to a number of rounding issues in various situations, you often cannot be certain that a decimal number is exactly what you expect. If you do double myNumber = calculateMyNumber(), then do a bunch of other things and later check if (myNumber == calculateMyNumber()), that comparison can evaluate to false even if you have not changed the calculations done in calculateMyNumber().
2) There are limitations on the size and precision of the numbers you can keep track of. If you have int myNumber = 2000000000, then if (myNumber * 2 < myNumber) will actually evaluate to true: the memory allocated for an int isn't big enough to hold a number that large, so the multiplication overflows and wraps around to a value smaller than myNumber (in fact a negative one). Look into classes that encapsulate large numbers, such as BigInteger and BigDecimal.
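A tiny illustration of the first point:
public class DoubleEqualityDemo {
    public static void main(String[] args) {
        double a = 0.1 + 0.2;
        double b = 0.3;
        System.out.println(a == b);                  // false
        System.out.println(a);                       // 0.30000000000000004
        System.out.println(Math.abs(a - b) < 1e-9);  // true: compare with a tolerance instead
    }
}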
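And of the second point (the wrapped-around value is what makes the comparison true):
import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        int myNumber = 2000000000;
        System.out.println(myNumber * 2);             // -294967296: the int wrapped around
        System.out.println(myNumber * 2 < myNumber);  // true, as described above

        // BigInteger has no fixed size, so the same product stays exact
        BigInteger big = BigInteger.valueOf(myNumber);
        System.out.println(big.multiply(BigInteger.valueOf(2)));  // 4000000000
    }
}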
You will figure stuff like this out as a side effect if you study the computer representations of numbers, or binary representations of numbers.
First, you should learn about floating point math. This is not specific to java, but it will allow you to make informed decisions later about, for example, when it's OK to use Java primitives such as float and double. Relevant topics include (copied from a course that I took on scientific computing):
Sources of error: roundoff, truncation error, incomplete convergence, statistical error, program bugs.
Computer floating point arithmetic and the IEEE standard.
Error amplification through cancellation (see the sketch after this list).
Conditioning, condition number, and error amplification.
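As a small taste of the cancellation topic above (a sketch, nothing Java-specific about the phenomenon):
public class CancellationDemo {
    public static void main(String[] args) {
        double x = 1e-9;
        // Naive formula: cos(x) rounds to exactly 1.0, so the subtraction loses the answer
        System.out.println(1.0 - Math.cos(x));   // 0.0
        // Algebraically equivalent form that avoids the cancellation
        double s = Math.sin(x / 2);
        System.out.println(2 * s * s);           // ~5.0E-19, the correct magnitude
    }
}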
This leads you to decisions about whether to use Java's BigDecimal, BigInteger, etc. There are lots of questions and answers about this already.
Next, you're going to hit performance, including both CPU and memory. You probably will find various rules of thumb, such as "autoboxing a lot is a serious performance problem." But as always, the best thing to do is profile your own application. For example, let's say somebody insists that some optimization is important, even though it affects legibility. If that optimization doesn't matter for your application, then don't do it!
Finally, where possible, it often pays to use a library for numerical work. Your first attempt will probably be slower and buggier than existing libraries. For example, for goodness' sake, don't implement your own linear programming routine.
