Implementing exponential moving average in Java

I essentially have an array of values like this:
0.25, 0.24, 0.27, 0.26, 0.29, 0.34, 0.32, 0.36, 0.32, 0.28, 0.25, 0.24, 0.25
The above array is oversimplified; in my real code I'm collecting one value per millisecond, and I need to feed the output into an algorithm I wrote that finds the closest peak before a point in time. My logic fails because, in my example above, 0.36 is the real peak, but my algorithm would look backwards and report the very last number, 0.25, as the peak, since there's a decrease to 0.24 just before it.
The goal is to take these values and apply an algorithm to them which will "smooth" them out a bit so that I have more linear values (i.e., I'd like my results to be curvy, not jagged).
I've been told to apply an exponential moving average filter to my values. How can I do this? It's really hard for me to read mathematical equations; I deal much better with code.
How do I process the values in my array, applying an exponential moving average calculation to even them out?
float[] mydata = ...
mySmoothedData = exponentialMovingAverage(mydata, 0.5);

float[] exponentialMovingAverage(float[] input, float alpha) {
    // what do I do here?
    return result;
}

To compute an exponential moving average, you need to keep some state around and you need a tuning parameter. This calls for a little class (assuming you're using Java 5 or later):
class ExponentialMovingAverage {
    private double alpha;
    private Double oldValue;

    public ExponentialMovingAverage(double alpha) {
        this.alpha = alpha;
    }

    public double average(double value) {
        if (oldValue == null) {
            oldValue = value;
            return value;
        }
        double newValue = oldValue + alpha * (value - oldValue);
        oldValue = newValue;
        return newValue;
    }
}
Instantiate with the decay parameter you want (it may take tuning; it should be between 0 and 1), then use average(…) to filter.
When reading a page on some mathematical recurrence, all you really need to know when turning it into code is that mathematicians like to write indexes into arrays and sequences as subscripts. (They have a few other notations as well, which doesn't help.) The EMA is pretty simple, though, as you only need to remember one old value; no complicated state arrays required.
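Folding that state back into the array-based signature from the question, a minimal sketch (same recurrence as the class above; the first sample seeds the filter, and alpha is the knob you tune):

float[] exponentialMovingAverage(float[] input, float alpha) {
    // assumes input is non-empty
    float[] result = new float[input.length];
    result[0] = input[0];  // seed with the first sample
    for (int i = 1; i < input.length; i++) {
        // newValue = oldValue + alpha * (value - oldValue)
        result[i] = result[i - 1] + alpha * (input[i] - result[i - 1]);
    }
    return result;
}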

I am having a hard time understanding your questions, but I will try to answer anyway.
1) If your algorithm found 0.25 instead of 0.36, then it is wrong. It is wrong because it assumes a monotonic increase or decrease (that is "always going up" or "always going down"). Unless you average ALL your data, your data points---as you present them---are nonlinear. If you really want to find the maximum value between two points in time, then slice your array from t_min to t_max and find the max of that subarray.
2) Now, the concept of "moving averages" is very simple: imagine that I have the following list: [1.4, 1.5, 1.4, 1.5, 1.5]. I can "smooth it out" by taking the average of two numbers: [1.45, 1.45, 1.45, 1.5]. Notice that the first number is the average of 1.5 and 1.4 (second and first numbers); the second (new list) is the average of 1.4 and 1.5 (third and second old list); the third (new list) the average of 1.5 and 1.4 (fourth and third), and so on. I could have made it "period three" or "four", or "n". Notice how the data is much smoother. A good way to "see moving averages at work" is to go to Google Finance, select a stock (try Tesla Motors; pretty volatile (TSLA)) and click on "technicals" at the bottom of the chart. Select "Moving Average" with a given period, and "Exponential moving average" to compare their differences.
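A tiny sketch of that period-two average in Java, mirroring the list above (the variable names are just illustrative):

double[] data = {1.4, 1.5, 1.4, 1.5, 1.5};
double[] smoothed = new double[data.length - 1];
for (int i = 1; i < data.length; i++) {
    smoothed[i - 1] = (data[i - 1] + data[i]) / 2.0; // average of each adjacent pair
}
// smoothed is {1.45, 1.45, 1.45, 1.5}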
Exponential moving average is just another elaboration of this, but it weights the "older" data less than the "new" data; this is a way to bias the smoothing toward the most recent values. Please read the Wikipedia entry.
So, this is more a comment than an answer, but the little comment box was just too tiny. Good luck.

Take a look at this.
If your noise has zero average, consider also the use of a Kalman filter.
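In one dimension a Kalman filter collapses to just a few lines. A minimal sketch, assuming a slowly drifting underlying value with zero-mean measurement noise; the class name and the q/r tuning values here are purely illustrative:

class ScalarKalman {
    private final double q;       // process noise: how fast the true value may drift
    private final double r;       // measurement noise variance
    private double estimate;      // current state estimate
    private double p;             // estimate error covariance
    private boolean seeded = false;

    ScalarKalman(double q, double r) { this.q = q; this.r = r; }

    double update(double measurement) {
        if (!seeded) {            // first sample seeds the state
            estimate = measurement;
            p = r;
            seeded = true;
            return estimate;
        }
        p += q;                                   // predict
        double k = p / (p + r);                   // Kalman gain
        estimate += k * (measurement - estimate); // correct
        p *= (1 - k);
        return estimate;
    }
}

With q much smaller than r this behaves like a heavy EMA; the difference is that the effective smoothing factor adapts over time.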

In a rolling manner... I also use the Apache Commons Math library (DescriptiveStatistics and SynchronizedDescriptiveStatistics from org.apache.commons.math3.stat.descriptive):

public LinkedList<Double> EMA(int dperiods, double alpha) throws IOException {
    String line;
    int i = 0;
    DescriptiveStatistics stats = new SynchronizedDescriptiveStatistics();
    stats.setWindowSize(dperiods);
    File f = new File("");
    BufferedReader in = new BufferedReader(new FileReader(f));
    LinkedList<Double> ema1 = new LinkedList<Double>();
    // compute a (1 - alpha)-weighted average over the window for each line read
    while ((line = in.readLine()) != null) {
        double sum = 0;
        double den = 0;
        System.out.println("line: " + " " + line);
        stats.addValue(Double.parseDouble(line.trim()));
        i++;
        if (i > dperiods) {
            for (int j = 0; j < dperiods; j++) {
                double var = Math.pow((1 - alpha), j);
                den += var;
                sum += stats.getElement(j) * var;
                System.out.println("elements: " + stats.getElement(j));
                System.out.println("sum: " + sum);
            }
        } else {
            for (int j = 0; j < i; j++) {
                double var = Math.pow((1 - alpha), j);
                den += var;
                sum += stats.getElement(j) * var;
            }
        }
        ema1.add(sum / den);
        System.out.println("EMA: " + sum / den);
    }
    return ema1;
}

public class MovingAverage {

    public static void main(String[] args) {
        double[] array = {1.2, 3.4, 4.5, 4.5, 4.5};
        double St = 0D;
        for (int i = 0; i < array.length; i++) {
            St = movingAverage(St, array[i]);
        }
        System.out.println(St);
    }

    // S_t = alpha * Y_t + (1 - alpha) * S_{t-1}
    // Note: St <= 0 is used as an "uninitialized" sentinel,
    // so this only works for strictly positive data.
    private static double movingAverage(double St, double Yt) {
        double alpha = 0.01, oneMinusAlpha = 0.99;
        if (St <= 0D) {
            St = Yt;  // first value seeds the average
        } else {
            St = alpha * Yt + oneMinusAlpha * St;
        }
        return St;
    }
}

If you're having trouble with the math, you could go with a simple moving average instead of an exponential one. Then each output is just the sum of the last x terms divided by x. Untested sketch for a period of 5:
int[] data = getFilled();
int[] outdata = new int[data.length];
for (int y = 4; y < data.length; y++) {
    int sum = 0;
    for (int x = y - 4; x <= y; x++) { // the last 5 terms, data[y-4]..data[y]
        sum += data[x];
    }
    outdata[y] = sum / 5;
}
Note that you still need to decide what to do at the start of the data (here the first four outputs are simply left at zero), since clearly you can't average the last 5 terms when you are on your 2nd data point. Also, there are more efficient ways of calculating this moving average (sum = sum - oldest + newest); a sketch of that follows, but the version above is to get the concept of what's happening across.
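A sketch of that running-sum variant (same untested caveat; window is the period and data is the input array from above):

int window = 5;
int[] out = new int[data.length];
int sum = 0;
for (int y = 0; y < data.length; y++) {
    sum += data[y];              // newest value enters the window
    if (y >= window) {
        sum -= data[y - window]; // oldest value leaves the window
    }
    if (y >= window - 1) {
        out[y] = sum / window;   // window is full, emit an average
    }
}

This is O(n) regardless of the window size, instead of O(n * window).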


Adjust list by normalizing in Java

When analysing data sets, such as data for human heights or for human weights, a common step is to adjust the data. This adjustment can be done by normalizing to values between 0 and 1, or throwing away outliers.
For this program, adjust the values by dividing all values by the largest value. The input begins with an integer indicating the number of floating-point values that follow. Assume that the list will always contain fewer than 20 floating-point values.
Output each floating-point value with two digits after the decimal point, which can be achieved as follows:
System.out.printf("%.2f", yourValue);
Ex: If the input is:
5 30.0 50.0 10.0 100.0 65.0
the output is:
0.30 0.50 0.10 1.00 0.65
The 5 indicates that there are five floating-point values in the list, namely 30.0, 50.0, 10.0, 100.0, and 65.0. 100.0 is the largest value in the list, so each value is divided by 100.0.
For coding simplicity, follow every output value by a space, including the last one.
This is my code so far:
import java.util.Scanner;

public class LabProgram {
    public static void main(String[] args) {
        Scanner scnr = new Scanner(System.in);
        double numElements;
        numElements = scnr.nextDouble();
        double[] userList = new double[numElements];
        int i;
        double maxValue;
        for (i = 0; i < userList.length; ++i) {
            userList[i] = scnr.nextDouble();
        }
        maxValue = userList[i];
        for (i = 0; i < userList.length; ++i) {
            if (userList[i] > maxValue) {
                maxValue = userList[i];
            }
        }
        for (i = 0; i < userList.length; ++i) {
            userList[i] = userList[i] / maxValue;
            System.out.print(userList[i] + " ");
            System.out.printf("%.2f", userList[i]);
        }
    }
}
I keep getting this error:
LabProgram.java:8: error: incompatible types: possible lossy conversion from double to int
double [] userList = new double [numElements];
^
1 error
I think my variable is messed up. I read through my book and could not find help. Can someone please help me here? Thank you so much! This has been very stressful for me.
The specific error message appears because the size of an array (like an index) must be an int. So declare and assign at once: int numElements = scnr.nextInt(); (Once it compiles, also note that maxValue = userList[i]; reads one past the end of the array, because i has already run off the end of the previous loop; initialize from userList[0] instead.)
Better ways of programming things:
skip manual input (i.e., Scanner and friends) at first. It drives you crazy and makes testing a hundred million times slower.
you can integrate the interactive part later, once the method is done. You already know how; your code shows it.
use an explicit method to do your work. Don't throw everything into the main method. This way you can run multiple examples/tests against the method, and you have a better implementation for later.
check for invalid input INSIDE the method that you implement. Once you can rely on such a method, you can keep using it later on.
you could even move the example numbers into their own test method, so you can run multiple test methods. You will learn about unit testing later on.
Example code:
public class LabProgram {

    public static void main(final String[] args) {
        final double[] initialValues = new double[] { 30.0, 50.0, 10.0, 100.0, 65.0 };
        final double[] adjustedValues = normalizeValuesByHighest(initialValues);
        System.out.println("Adjusted values:");
        for (final double d : adjustedValues) {
            System.out.printf("%.2f ", Double.valueOf(d));
        }
        // expected output is 0.30 0.50 0.10 1.00 0.65
        System.out.println();
        System.out.println("All done.");
    }

    static public double[] normalizeValuesByHighest(final double[] pInitialValues) {
        if (pInitialValues == null) throw new IllegalArgumentException("Invalid double[] given!");
        if (pInitialValues.length < 1) throw new IllegalArgumentException("double[] given contains no elements!");

        // detect valid max value
        double tempMaxValue = -Double.MAX_VALUE;
        boolean hasValues = false;
        for (final double d : pInitialValues) {
            if (Double.isNaN(d)) continue;
            tempMaxValue = Math.max(tempMaxValue, d);
            hasValues = true;
        }
        if (!hasValues) throw new IllegalArgumentException("double[] given contains no valid elements, only NaNs!");

        // create return array
        final double maxValue = tempMaxValue; // final from here on
        final double[] ret = new double[pInitialValues.length];
        for (int i = 0; i < pInitialValues.length; i++) {
            ret[i] = pInitialValues[i] / maxValue; // NaN will stay NaN
        }
        return ret;
    }
}
Output:
Adjusted values:
0,30 0,50 0,10 1,00 0,65
All done.
(The decimal commas come from printf formatting with the default locale; run with an English locale, or call System.out.printf(Locale.US, "%.2f ", d), to get the 0.30 0.50 ... form the exercise expects.)

Optimisation in Java Using Apache Commons Math

I'm trying to minimise a value in Java using commons-math. I've had a look at their documentation but I don't really get how to implement it.
Basically, in my code below, I have a Double holding the expected goals in a soccer match, and I'd like to find the expected-goals value for which the probability of under 3 goals occurring in a game is 0.5.
import org.apache.commons.math3.distribution.PoissonDistribution;

public class Solver {
    public static void main(String[] args) {
        final Double expectedGoals = 2.9d;
        final PoissonDistribution poissonGoals = new PoissonDistribution(expectedGoals);
        Double probabilityUnderThreeGoals = 0d;
        for (int score = 0; score < 15; score++) {
            final Double probability = poissonGoals.probability(score);
            if (score < 3) {
                probabilityUnderThreeGoals = probabilityUnderThreeGoals + probability;
            }
        }
        System.out.println(probabilityUnderThreeGoals); // prints 0.44596319855718064, I want to optimise this to 0.5
    }
}
The cumulative probability (<= x) of a Poisson random variable with mean λ is
P(X <= x) = e^(-λ) * Σ_{i=0..x} λ^i / i!
In your case, x is 2 and you want to find λ (the mean) such that this equals 0.5. You can type this into WolframAlpha and have it solve it for you. So rather than an optimisation problem, this is just a root-finding problem (though one could argue that optimisation problems are just root-finding.)
You can also do this with Apache Commons Math, with one of the root finders.
int maximumGoals = 2;
double expectedProbability = 0.5;
UnivariateFunction f = x -> {
    double sum = 0;
    for (int i = 0; i <= maximumGoals; i++) {
        sum += Math.pow(x, i) / CombinatoricsUtils.factorialDouble(i);
    }
    return sum * Math.exp(-x) - expectedProbability;
};

// the four parameters that "solve" takes are:
// the maximum number of evaluations, the function to solve, and the min and max bracketing the root
// I've put some somewhat sensible values as an example. Feel free to change them
double answer = new BisectionSolver().solve(Integer.MAX_VALUE, f, 0, maximumGoals / expectedProbability);

System.out.println("Solved: " + answer);
System.out.println("Cumulative Probability: " + new PoissonDistribution(answer).cumulativeProbability(maximumGoals));
This prints:
Solved: 2.674060344696045
Cumulative Probability: 0.4999999923623868
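For comparison, a sketch letting commons-math evaluate the Poisson CDF itself instead of hand-rolling the sum (the lower bracket must stay strictly positive, since PoissonDistribution rejects a non-positive mean; the bracket values here are just examples):

UnivariateFunction g = lambda ->
        new PoissonDistribution(lambda).cumulativeProbability(2) - 0.5;
double lambda = new BisectionSolver().solve(100_000, g, 1e-9, 10);
// lambda is again about 2.674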

Quadratic Time for 4-sum Implementation

Given an array with x elements, I must find four numbers that, when summed, equal zero. I also need to determine how many such sums exist.
The cubic-time version involves three nested loops, and then we just have to look up the fourth number (with binary search).
Instead, by using the Cartesian product (same array for X and Y), we can store the sums of all pairs in a secondary array. Then for each sum d we just have to look for -d.
This should look something like the following, for (close to) quadratic time:
public static int quad(Double[] S) {
    ArrayList<Double> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(d + di);
        }
    }
    Collections.sort(pairs);
    for (Double d : pairs) {
        int index = Collections.binarySearch(pairs, -d);
        if (index > 0) count++; // -d was found so increment
    }
    return count;
}
With x being 353 (for our specific array input), the solution should be 528, but instead I only find 257 using this solution. With our cubic-time version we are able to find all 528 4-sums:
public static int count(Double[] a) {
    Arrays.sort(a);
    int N = a.length;
    int count = 0;
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            for (int k = 0; k < N; k++) {
                int l = Arrays.binarySearch(a, -(a[i] + a[j] + a[k]));
                if (l > 0) count++;
            }
        }
    }
    return count;
}
Is the precision of double lost by any chance?
EDIT: Using BigDecimal instead of double was discussed, but we were afraid it would have an impact on performance. We are only dealing with 353 elements in our array, so would it matter much for us?
EDIT 2: I apologize if I'm using BigDecimal incorrectly; I have never dealt with the class before. After multiple suggestions I tried using BigDecimal instead:
public static int quad(Double[] S) {
    ArrayList<BigDecimal> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(new BigDecimal(d + di));
        }
    }
    Collections.sort(pairs);
    for (BigDecimal d : pairs) {
        int index = Collections.binarySearch(pairs, d.negate());
        if (index >= 0) count++;
    }
    return count;
}
So instead of 257 it was able to find 261 solutions. This might indicate there is a problem with double and that I am in fact losing precision. However, 261 is still far from 528, and I am unable to locate the cause.
LAST EDIT: I believe this is horrible and ugly code, but it seems to be working nonetheless. We had already experimented with while loops, but with BigDecimal we are now able to get all 528 matches.
I am not sure if it's close enough to quadratic time; time will tell.
I present you the monster:
public static int quad(Double[] S) {
    ArrayList<BigDecimal> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(new BigDecimal(d + di));
        }
    }
    Collections.sort(pairs);
    for (BigDecimal d : pairs) {
        BigDecimal negation = d.negate();
        int index = Collections.binarySearch(pairs, negation);
        // walk back to before the first occurrence, then count every duplicate
        // (note: BigDecimal.equals also compares scale; compareTo(...) == 0
        // would be the safer equality test here)
        while (index >= 0 && negation.equals(pairs.get(index))) {
            index--;
        }
        index++;
        while (index >= 0 && index < pairs.size() && negation.equals(pairs.get(index))) {
            count++;
            index++;
        }
    }
    return count;
}
You should use the BigDecimal class instead of double here, since exact precision of the floating point numbers in your array adding up to 0 is a must for your solution. If one of your decimal values was .1, you're in trouble. That binary fraction cannot be precisely represented with a double. Take the following code as an example:
double counter = 0.0;
while (counter != 1.0) {
    System.out.println("Counter = " + counter);
    counter = counter + 0.1;
}
You would expect this to execute 10 times, but it is an infinite loop since counter will never be precisely 1.0.
Example output:
Counter = 0.0
Counter = 0.1
Counter = 0.2
Counter = 0.30000000000000004
Counter = 0.4
Counter = 0.5
Counter = 0.6
Counter = 0.7
Counter = 0.7999999999999999
Counter = 0.8999999999999999
Counter = 0.9999999999999999
Counter = 1.0999999999999999
Counter = 1.2
Counter = 1.3
Counter = 1.4000000000000001
Counter = 1.5000000000000002
Counter = 1.6000000000000003
When you search for either pairs or individual elements, you need to count with multiplicity. That is, if you find the element -d in your array of either singletons or pairs, then you need to increase the count by the number of matches found, not just by 1. This is probably why you're not getting the full number of results when you search over pairs, and it could mean that 528 is not the true full count from the singleton search either. And in general, you should not use double precision arithmetic for exact arithmetic; use an arbitrary-precision rational number package instead. A sketch of the multiplicity idea follows.
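A minimal sketch of counting with multiplicity via a frequency map, assuming the exactness problem is solved separately (here by pretending the inputs have been scaled to exact long values; the method name is illustrative):

static long countFourSums(long[] a) {
    // frequency of every pair sum (ordered pairs, matching the quadratic approach above)
    java.util.Map<Long, Long> pairSums = new java.util.HashMap<>();
    for (long x : a)
        for (long y : a)
            pairSums.merge(x + y, 1L, Long::sum);
    long count = 0;
    for (java.util.Map.Entry<Long, Long> e : pairSums.entrySet()) {
        Long opposite = pairSums.get(-e.getKey());
        if (opposite != null) count += e.getValue() * opposite; // every combination counts
    }
    return count;
}

Like the posted quad version, this counts ordered pair-of-pairs combinations, including ones that reuse an index; deduplicating those is a separate exercise.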

Reduce the processing time of the FFT

I'm currently working in Java for Android. I'm trying to implement an FFT in order to build a kind of frequency viewer.
I got it working, but the display is not fluid at all.
I added some traces to check the processing time of each part of my code, and the FFT takes about 300 ms on my complex array of 4096 elements. I need it to take less than 100 ms, as the thread that displays the frequencies is refreshed every 100 ms. I reduced the initial array so that the FFT processes only 1024 elements, and that is fast enough, but the result is degraded.
Does someone have an idea?
I used the default FFT.java and Complex.java classes that can be found on the internet.
For information, my code computing the FFT is the following:
int bytesPerSample = 2;
Complex[] x = new Complex[bufferSize / 2];
for (int index = 0; index < bufferReadResult - bytesPerSample + 1; index += bytesPerSample) {
    // 16 bits = 2 bytes: reassemble each sample from its bytes
    double sample = 0;
    for (int b = 0; b < bytesPerSample; b++) {
        int v = buffer[index + b];
        if (b < bytesPerSample - 1 || bytesPerSample == 1) {
            v &= 0xFF; // zero-extend all but the most significant byte (which keeps its sign)
        }
        sample += v << (b * 8);
    }
    double sample32 = 100 * (sample / 32768.0); // don't know the use of this computation...
    x[index / bytesPerSample] = new Complex(sample32, 0);
}

Complex[] tx = new Complex[1024];
// reduce the size of the signal (keep every 4th sample) to improve the FFT processing time
for (int i = 0; i < x.length / 4; i++) {
    tx[i] = new Complex(x[i * 4].re(), 0);
}
// spectrum retrieval thanks to the FFT
fftRes = FFT.fft(tx);
I don't know Java, but your way of converting between your input data and an array of complex values seems very convoluted. You're building two arrays of complex data where only one is necessary.
Also it smells like your complex real and imaginary values are doubles. That's way over the top for what you need, and ARMs are veeeery slow at double arithmetic anyway. Is there a complex class based on single precision floats?
Thirdly you're performing a complex fft on real data by filling the imaginary part of your complexes with zero. Whilst the result will be correct it is twice as much work straight off (unless the routine is clever enough to spot that, which I doubt). If possible perform a real fft on your data and save half your time.
And then as Simon says there's the whole issue of avoiding garbage collection and memory allocation.
Also it looks like your FFT has no preparatory step. This mean that the routine FFT.fft() is calculating the complex exponentials every time. The longest part of the FFT calculation is working out the complex exponentials, which is a shame because for any given FFT length the exponentials are constants. They don't depend on your input data at all. In the real time world we use FFT routines where we calculate the exponentials once at the start of the program and then the actual fft itself takes that const array as one of its inputs. Don't know if your FFT class can do something similar.
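As a rough illustration of that preparatory step (the names here are illustrative, not from the poster's FFT class): precompute the exponentials once for the largest transform length N, then index into the table instead of calling Math.cos/Math.sin inside the transform.

static final int N = 4096;                 // fixed frame size, known up front
static final double[] W_COS = new double[N / 2];
static final double[] W_SIN = new double[N / 2];
static {
    for (int j = 0; j < N / 2; j++) {
        double angle = -2 * Math.PI * j / N;
        W_COS[j] = Math.cos(angle);
        W_SIN[j] = Math.sin(angle);
    }
}
// In the combine loop of a sub-transform of size n, the factor
// e^(-2*pi*i*k/n) is W_COS[k * (N / n)] and W_SIN[k * (N / n)],
// so no trigonometric calls remain in the hot path.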
If you do end up going to something like FFTW then you're going to have to get used to calling C code from your Java. Also make sure you get a version that supports (I think) NEON, ARM's answer to SSE, AVX and Altivec. It's worth ploughing through their release notes to check. Also I strongly suspect that FFTW will only be able to offer a significant speed up if you ask it to perform an FFT on single precision floats, not doubles.
Google luck!
--Edit--
I meant of course 'good luck'. Give me a real keyboard quick, these touchscreen ones are unreliable...
First, thanks for all your answers.
I followed them and ran two tests:
first, I replaced the double used in my Complex class with float. The result is just a bit better, but not enough.
then I rewrote the fft method so it no longer uses Complex, but a two-dimensional float array instead. For each row of this array, the first column contains the real part, and the second one the imaginary part.
I also changed my code to instantiate the float array only once, in the onCreate method.
And the result... is worse!! Now it takes a little more than 500 ms instead of 300 ms.
I don't know what to do now.
You can find below the initial fft function, and then the one I rewrote.
Thanks for your help.
// compute the FFT of x[], assuming its length is a power of 2
public static Complex[] fft(Complex[] x) {
    int N = x.length;

    // base case
    if (N == 1) return new Complex[] { x[0] };

    // radix 2 Cooley-Tukey FFT
    if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

    // fft of even terms
    Complex[] even = new Complex[N/2];
    for (int k = 0; k < N/2; k++) {
        even[k] = x[2*k];
    }
    Complex[] q = fft(even);

    // fft of odd terms
    Complex[] odd = even; // reuse the array
    for (int k = 0; k < N/2; k++) {
        odd[k] = x[2*k + 1];
    }
    Complex[] r = fft(odd);

    // combine
    Complex[] y = new Complex[N];
    for (int k = 0; k < N/2; k++) {
        double kth = -2 * k * Math.PI / N;
        Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
        y[k] = q[k].plus(wk.times(r[k]));
        y[k + N/2] = q[k].minus(wk.times(r[k]));
    }
    return y;
}
public static float[][] fftf(float[][] x) {
    /**
     * x[][0] = real part
     * x[][1] = imaginary part
     */
    int N = x.length;

    // base case
    if (N == 1) return new float[][] { x[0] };

    // radix 2 Cooley-Tukey FFT
    if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

    // fft of even terms
    float[][] even = new float[N/2][2];
    for (int k = 0; k < N/2; k++) {
        even[k] = x[2*k];
    }
    float[][] q = fftf(even);

    // fft of odd terms
    float[][] odd = even; // reuse the array
    for (int k = 0; k < N/2; k++) {
        odd[k] = x[2*k + 1];
    }
    float[][] r = fftf(odd);

    // combine
    float[][] y = new float[N][2];
    double kth, wkcos, wksin;
    for (int k = 0; k < N/2; k++) {
        kth = -2 * k * Math.PI / N;
        // Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
        wkcos = Math.cos(kth); // real part
        wksin = Math.sin(kth); // imaginary part
        // y[k] = q[k].plus(wk.times(r[k]));
        y[k][0] = (float) (q[k][0] + wkcos * r[k][0] - wksin * r[k][1]);
        y[k][1] = (float) (q[k][1] + wkcos * r[k][1] + wksin * r[k][0]);
        // y[k + N/2] = q[k].minus(wk.times(r[k]));
        y[k + N/2][0] = (float) (q[k][0] - (wkcos * r[k][0] - wksin * r[k][1]));
        y[k + N/2][1] = (float) (q[k][1] - (wkcos * r[k][1] + wksin * r[k][0]));
    }
    return y;
}
Actually, I think I don't understand everything yet.
First, about Math.cos and Math.sin: how do you want me not to compute them each time? Do you mean that I should compute all the values only once (e.g. store them in an array) and reuse them for each computation?
Second, about the N % 2 test: indeed it's not very useful; I could do that check before calling the function.
Third, about Simon's advice: I mixed what he said with what you said; that's why I replaced Complex with a two-dimensional float[][]. If that was not what he suggested, then what was it?
Last, I'm not an FFT expert, so what do you mean by a "real FFT"? Do you mean that my imaginary part is useless? If so, I'm not sure it is, because later in my code I compute the magnitude of each frequency, i.e. sqrt(real[i]*real[i] + imag[i]*imag[i]), and I think my imaginary part is not equal to zero...
Thanks!

How to alter double by its smallest increment

Is something broken, or do I fail to understand what is happening?
static String getRealBinary(double val) {
    long tmp = Double.doubleToLongBits(val);
    StringBuilder sb = new StringBuilder();
    for (long n = 64; --n > 0; tmp >>= 1)
        if ((tmp & 1) == 0)
            sb.insert(0, ('0'));
        else
            sb.insert(0, ('1'));
    sb.insert(0, '[').insert(2, "] [").insert(16, "] [").append(']');
    return sb.toString();
}

public static void main(String[] argv) {
    for (int j = 3; --j >= 0;) {
        double d = j;
        for (int i = 3; --i >= 0;) {
            d += Double.MIN_VALUE;
            System.out.println(d + getRealBinary(d));
        }
    }
}
With output:
2.0[1] [00000000000] [000000000000000000000000000000000000000000000000000]
2.0[1] [00000000000] [000000000000000000000000000000000000000000000000000]
2.0[1] [00000000000] [000000000000000000000000000000000000000000000000000]
1.0[0] [11111111110] [000000000000000000000000000000000000000000000000000]
1.0[0] [11111111110] [000000000000000000000000000000000000000000000000000]
1.0[0] [11111111110] [000000000000000000000000000000000000000000000000000]
4.9E-324[0] [00000000000] [000000000000000000000000000000000000000000000000001]
1.0E-323[0] [00000000000] [000000000000000000000000000000000000000000000000010]
1.5E-323[0] [00000000000] [000000000000000000000000000000000000000000000000011]
The general idea is first convert the double to its long representation (using doubleToLongBits as you have done in getRealBinary), increment that long by 1, and finally convert the new long back to the double it represents via longBitsToDouble.
EDIT: Java (since 1.5) provides Math.ulp(double), which I'm guessing you can use to compute the next higher value directly thus: x + Math.ulp(x).
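A sketch of that bits round-trip, assuming a positive, finite input (negative values would need bits - 1 instead, and NaN/infinity need guarding; the method name is illustrative):

static double nextLarger(double d) {
    long bits = Double.doubleToLongBits(d);
    return Double.longBitsToDouble(bits + 1);
}

For example, nextLarger(2.0) gives the same value as 2.0 + Math.ulp(2.0).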
Floating point numbers are not spread out uniformly over the number line like integer types are. They are more densely packed near 0 and very far apart as you approach infinity. Therefore there is no constant that you can add to a floating point number to get to the next floating point number.
Your code does not do what you expect: you add the smallest possible double value and assume the result will differ from the original. The problem is that Double.MIN_VALUE is so small that the result is rounded back and the value is unaffected.
Suggested reading: http://en.wikipedia.org/wiki/Machine_epsilon
On the Wikipedia article there is Java code too. Epsilon is by definition the smallest number such that (X + eps * X != X), and eps*X is called the "relative epsilon".
Since Java 6 there is java.lang.Math.nextUp(double) doing exactly what you want. There is also the opposite, java.lang.Math.nextDown(double), since Java 8.
In case you want to use the BigDecimal class, there is the BigDecimal.ulp() method as well.
