Java Sine Oscillator for a Flanger effect

For a coursework exercise I need to create a sine oscillator to modulate the delay time used when playing back an echo of the sound (a flanger). This oscillator needs to have an adjustable frequency.
The value returned by the function should be between -1 and 1, which I achieved with this function:
public void oscillateNumber(){
    for (int i = 0; i < 200; i++){
        oscResult = Math.sin((Number1 * Math.PI) / 180.0);
        updateNumber();
    }
}
and by having Number1 vary between -180 and 180 (I found this solution here: How to use a Sine / Cosine wave to return an oscillating number).
How could I go about incorporating a frequency into this oscillator? The frequency needs to be adjustable between 0 and 5 Hz.
I am completely new to this material, so I am not entirely grasping the mechanics of it. Another function I found is:
public void oscillateNumber3(){
    for (int i = 0; i < 400; i++){
        oscResult = (float) Math.sin(angle);
        angle += (float) (2 * Math.PI) * frequency / 44100f;
        java.lang.System.out.println(oscResult);
    }
}
If I add this value to the delay, the result resembles the effect a bit more, but I am not sure it's actually correct.
Any pointers would be really appreciated.
UPDATE
OK, so following Oli's pointer I came up with this function for continuously modulating the delay with a number produced by the oscillator. I'm not quite sure about the loop, though:
public void oscillatorNumber(int frequency, int sampleRate){
    for (int t = 0; t < (sampleRate * frequency); t++){
        oscResult = (float) Math.sin(angle);
        angle += (float) (2 * Math.PI) * 2 * (t / 44100); // sin(2*pi* f *(t/Fs))
        java.lang.System.out.println(oscResult);
    }
}
Does this look about right?

The general expression for a sinusoidal oscillator is:
y(t) = sin(2*pi*f*t)
where f is the frequency in Hz, and t is the time in seconds.
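As an illustration (not from the original answer), a per-sample oscillator built from that expression might look roughly like this in Java; the class name, the phase-wrapping detail and the sample rate are my own assumptions:

// Sketch of an adjustable-frequency sine LFO: y(t) = sin(2*pi*f*t), with t = n / sampleRate.
public class SineLfo {
    private final float sampleRate;   // e.g. 44100
    private float frequency;          // adjustable, e.g. 0..5 Hz for a flanger
    private double phase;             // current phase in radians

    public SineLfo(float sampleRate, float frequency) {
        this.sampleRate = sampleRate;
        this.frequency = frequency;
    }

    public void setFrequency(float frequency) {
        this.frequency = frequency;   // change the LFO rate at any time
    }

    // Call once per audio sample; returns a value in [-1, 1].
    public float nextSample() {
        float value = (float) Math.sin(phase);
        phase += 2.0 * Math.PI * frequency / sampleRate;
        if (phase > 2.0 * Math.PI) {
            phase -= 2.0 * Math.PI;   // keep the phase bounded
        }
        return value;
    }
}

The returned value can then be scaled and added to the base delay time once per sample to sweep the flanger.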

Related

Optimisation in Java Using Apache Commons Math

I'm trying to minimise a value in Java using commons-math. I've had a look at their documentation, but I don't really get how to implement it.
Basically, in my code below, I have a Double holding the expected goals in a soccer match, and I'd like to adjust it so that the probability of under 3 goals occurring in a game is 0.5.
import org.apache.commons.math3.distribution.PoissonDistribution;

public class Solver {
    public static void main(String[] args) {
        final Double expectedGoals = 2.9d;
        final PoissonDistribution poissonGoals = new PoissonDistribution(expectedGoals);
        Double probabilityUnderThreeGoals = 0d;
        for (int score = 0; score < 15; score++) {
            final Double probability = poissonGoals.probability(score);
            if (score < 3) {
                probabilityUnderThreeGoals = probabilityUnderThreeGoals + probability;
            }
        }
        System.out.println(probabilityUnderThreeGoals); // prints 0.44596319855718064, I want to optimise this to 0.5
    }
}
The cumulative probability P(X <= x) of a Poisson random variable with mean lambda can be calculated as:
P(X <= x) = e^(-lambda) * sum over i = 0..x of lambda^i / i!
In your case, x is 2 and you want to find lambda (the mean) such that this is 0.5. You can type this into WolframAlpha and have it solve it for you. So rather than an optimisation problem, this is just a root-finding problem (though one could argue that optimisation problems are just finding roots.)
You can also do this with Apache Commons Math, with one of the root finders.
int maximumGoals = 2;
double expectedProbability = 0.5;
UnivariateFunction f = x -> {
    double sum = 0;
    for (int i = 0; i <= maximumGoals; i++) {
        sum += Math.pow(x, i) / CombinatoricsUtils.factorialDouble(i);
    }
    return sum * Math.exp(-x) - expectedProbability;
};

// the four parameters that "solve" takes are:
// the maximum number of evaluations, the function to solve, and the min and max bracketing the root
// I've put some somewhat sensible values as an example. Feel free to change them
double answer = new BisectionSolver().solve(Integer.MAX_VALUE, f, 0, maximumGoals / expectedProbability);

System.out.println("Solved: " + answer);
System.out.println("Cumulative Probability: " + new PoissonDistribution(answer).cumulativeProbability(maximumGoals));
This prints:
Solved: 2.674060344696045
Cumulative Probability: 0.4999999923623868

Monte Carlo Simulation

I'm a student in a Java programming class. My problem deals with an interpretation of the Monte Carlo simulation. I'm supposed to find the probability that three quarters or three pennies will be picked out of a purse that has 3 quarters and 3 pennies. Once a coin is picked it is not replaced. The probability should be 0.1XXXXXXX. I keep getting 0 or 1 for my answer. This is what I have so far.
public class CoinPurse {
    public static void main(String[] args) {
        System.out.print("Probability of Drawing 3 coins of the Same Type - ");
        System.out.println(coinPurseSimulation(100));
    }

    /**
     * Runs numTrials trials of a Monte Carlo simulation of drawing
     * 3 coins out of a purse containing 3 pennies and 3 quarters.
     * Coins are not replaced once drawn.
     * @param numTrials - the number of times the method will attempt to draw 3 coins
     * @return a double - the fraction of times 3 coins of the same type were drawn.
     */
    public static double coinPurseSimulation(int numTrials) {
        final int P = 1;
        final int Q = 2;
        int[] purse = {Q, Q, Q, P, P, P};
        int[] drawCoins = new int[3];
        for (int draw = 0; draw < 3; draw++) {
            int index = (int) (Math.random() * purse.length);
            drawCoins[draw] = purse[index];
            int[] newPurse = new int[purse.length - 1];
            int j = 0;
            for (int i = 0; i < purse.length; i++) {
                if (i == index) {
                    continue;
                }
                newPurse[j] = purse[i];
                j++;
            }
            purse = newPurse;
        }
        double number = 0.0;
        double result = 0.0;
        for (int i = 0; i < numTrials; i++) {
            result++;
            for (int j = 0; j < numTrials; j++) {
                if (drawCoins[0] == drawCoins[1] && drawCoins[1] == drawCoins[2]) {
                    number++;
                }
            }
        }
        return number / result;
    }
}
The reason you only ever get 0 or 1 is that you only draw (or pick) coins from the purse once, but you then test that single draw numTrials * numTrials times. You have two loops (with indices i and j) each iterating numTrials times - your logic is a little mixed up there.
You can put the first loop (for drawing coins) inside a second loop (for running trials) and your code will work. I've put a minimal refactor below (keeping your code as close to the original as possible), with two comments afterwards that might help you some more.
public class CoinPurse
{
    public static void main(String[] args)
    {
        System.out.print("Probability of Drawing 3 coins of the Same Type - ");
        System.out.println(coinPurseSimulation(100));
    }

    /**
     * Runs numTrials trials of a Monte Carlo simulation of drawing 3 coins out
     * of a purse containing 3 pennies and 3 quarters. Coins are not replaced
     * once drawn.
     *
     * @param numTrials - the number of times the method will attempt to draw 3 coins
     * @return a double - the fraction of times 3 coins of the same type were drawn.
     */
    public static double coinPurseSimulation(int numTrials)
    {
        final int P = 1;
        final int Q = 2;
        double number = 0.0;
        double result = 0.0;
        // Changed your loop index to t to avoid conflict with i in your draw
        // loop
        for (int t = 0; t < numTrials; t++)
        {
            result++;
            // Moved your draw without replacement code here
            int[] purse = { Q, Q, Q, P, P, P };
            int[] drawCoins = new int[3];
            for (int draw = 0; draw < 3; draw++)
            {
                int index = (int) (Math.random() * purse.length);
                drawCoins[draw] = purse[index];
                int[] newPurse = new int[purse.length - 1];
                int j = 0;
                for (int i = 0; i < purse.length; i++)
                {
                    if (i == index)
                    {
                        continue;
                    }
                    newPurse[j] = purse[i];
                    j++;
                }
                purse = newPurse;
            }
            // Deleted the loop with index j - you don't need to test the same
            // combination numTrials times...
            if (drawCoins[0] == drawCoins[1] && drawCoins[1] == drawCoins[2])
            {
                number++;
            }
        }
        return number / result;
    }
}
Picking coins code
I have some comments on your routine for drawing coins:
1. It works correctly.
2. It is rather cumbersome.
3. It would have been easier for you to spot the problem if you had broken this bit of code into a separate method.
I'm going to address 3 and then 2.
Break the drawing code out into a method
private static int[] pickCoins(int[] purse, int numPicks)
{
    // A little error check
    if (numPicks > purse.length)
    {
        System.err.println("Can't pick " + numPicks +
                " coins from a purse with only " + purse.length + " coins!");
    }
    int[] samples = new int[numPicks];
    // Your sampling code here
    return samples;
}
You can now simply call it from within your second loop, i.e.
drawCoins = pickCoins(purse, 3);
Sampling algorithm
#pjs's answer recommends using Collections.shuffle() and then taking the first 3 coins in your collection (e.g. an ArrayList). This is a good suggestion, but I'm guessing you haven't been introduced to Collections yet, and may not be 'allowed' to use them. If you are - do use them. If not (as I assume), you might want to think about better ways to randomly draw n items from an r length array without replacement.
One (well accepted) way is the Fisher-Yates shuffle and its derivatives. In effect it involves picking randomly from the unpicked subset of an array.
In Java, a working example could be as follows - it works by moving already-picked coins to the "end" of the purse and picking only from the coins that have not yet been picked.
private static int[] pickCoins(int[] purse, int numCoins)
{
    int[] samples = new int[numCoins];
    int maxInd = purse.length - 1;   // index of the last unpicked coin
    for (int i = 0; i < numCoins; i++)
    {
        // pick uniformly from the unpicked range [0, maxInd]
        int index = (int) (Math.random() * (maxInd + 1));
        int draw = purse[index];
        samples[i] = draw;
        // swap the already drawn sample with the one at maxInd and decrement maxInd
        purse[index] = purse[maxInd];
        purse[maxInd] = draw;
        maxInd -= 1;
    }
    return samples;
}
Expected results
You say your expected result is 0.1XXXXXXX. As you're learning Monte Carlo simulation - you might need to think about that a little more. The expected result depends on how many trials you do.
First, in this simple example, you can consider the analytic (or in some sense exact) result. Consider the procedure:
1. You draw your first coin - it doesn't matter which one it is.
2. Whichever coin it was, there are 2 left in the bag that are the same - the probability of picking one of those is 2 / 5.
3. If you picked one of the matching coins in step 2, there is now 1 matching coin left in the bag. The probability of picking that is 1 / 4.
So, the probability of getting 3 matching coins (of either denomination) is 2 / 5 * 1 / 4 == 2 / 20 == 0.1
Your Monte Carlo programme is trying to estimate that probability. You would expect it to converge on 0.1 given sufficient estimates (i.e. with numTrials high enough). It won't always give a value equal to, or even starting with, 0.1. With a sufficient number of trials, it's likely to give something starting 0.09 or 0.1. However, if numTrials == 1, it will give either 0 or 1, because it will draw once and the draw will either match or not. If numTrials == 2, the results can only be 0, 0.5 or 1, and so on.
One of the lessons of doing Monte Carlo simulation to estimate probabilities is to have a sufficiently high sample count to get a good estimate. That in turn depends on the accuracy you want - you can use your code to investigate this once it's working.
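If you want to see that in practice, a quick driver along these lines (my sketch, reusing the coinPurseSimulation method from the refactor above) prints the estimate for increasing trial counts:

for (int numTrials : new int[] { 1, 10, 100, 1000, 10000, 100000 }) {
    // the estimate should settle towards 0.1 as numTrials grows
    System.out.println(numTrials + " trials -> " + coinPurseSimulation(numTrials));
}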
You need to move the loop where you generate draws down into the numTrials loop. The way you've written it you're making a single draw, and then checking that one result numTrials times.
I haven't checked the logic for your draw carefully, but that's because I'd recommend a different (and much simpler) approach. Use Collections.shuffle() on your set of quarters and pennies, and check the first three elements after each shuffle.
If done correctly, the answer should be 2 * (3/6) * (2/5) * (1/4), which is 0.1.
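For illustration only (this is my sketch of the suggestion above, not pjs's code; the class name and coin encoding are arbitrary), the shuffle-based trial could look something like this:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CoinPurseShuffle {

    // Estimate P(first three draws are the same type) by shuffling the purse each trial.
    public static double coinPurseSimulation(int numTrials) {
        List<Integer> purse = new ArrayList<>(Arrays.asList(1, 1, 1, 2, 2, 2)); // 3 pennies, 3 quarters
        int matches = 0;
        for (int t = 0; t < numTrials; t++) {
            Collections.shuffle(purse);
            // the first three elements of the shuffled purse are the three draws
            if (purse.get(0).equals(purse.get(1)) && purse.get(1).equals(purse.get(2))) {
                matches++;
            }
        }
        return (double) matches / numTrials;
    }

    public static void main(String[] args) {
        System.out.println(coinPurseSimulation(1000000)); // should come out close to 0.1
    }
}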

Reduce the processing time of the FFT

I'm currently working in Java on Android. I'm trying to implement the FFT in order to build a kind of frequency viewer.
I was able to do it, but the display is not fluid at all.
I added some traces in order to check the processing time of each part of my code, and the FFT takes about 300 ms to be applied to my complex array, which has 4096 elements. I need it to take less than 100 ms, as my thread (which displays the frequencies) is refreshed every 100 ms. I reduced the initial array so that the FFT input has only 1024 elements, and then it is fast enough, but the result is degraded.
Does someone have an idea?
I used the default FFT.java and Complex.java classes that can be found on the internet.
For information, my code computing the FFT is the following:
int bytesPerSample = 2;
Complex[] x = new Complex[bufferSize / 2];
for (int index = 0; index < bufferReadResult - bytesPerSample + 1; index += bytesPerSample)
{
    // 16 BITS = 2 BYTES
    float asFloat = Float.intBitsToFloat(asInt);
    double sample = 0;
    for (int b = 0; b < bytesPerSample; b++) {
        int v = buffer[index + b];
        if (b < bytesPerSample - 1 || bytesPerSample == 1) {
            v &= 0xFF;
        }
        sample += v << (b * 8);
    }
    double sample32 = 100 * (sample / 32768.0); // don't know the use of this compute...
    x[index / bytesPerSample] = new Complex(sample32, 0);
}

Complex[] tx = new Complex[1024]; // size = 2048

///// reduction of the size of the signal in order to improve the fft processing time
for (int i = 0; i < x.length / 4; i++)
{
    tx[i] = new Complex(x[i * 4].re(), 0);
}

// Signal retrieval thanks to the FFT
fftRes = FFT.fft(tx);
I don't know Java, but your way of converting between your input data and an array of complex values seems very convoluted. You're building two arrays of complex data where only one is necessary.
Also it smells like your complex real and imaginary values are doubles. That's way over the top for what you need, and ARMs are veeeery slow at double arithmetic anyway. Is there a complex class based on single precision floats?
Thirdly you're performing a complex fft on real data by filling the imaginary part of your complexes with zero. Whilst the result will be correct it is twice as much work straight off (unless the routine is clever enough to spot that, which I doubt). If possible perform a real fft on your data and save half your time.
And then as Simon says there's the whole issue of avoiding garbage collection and memory allocation.
Also it looks like your FFT has no preparatory step. This mean that the routine FFT.fft() is calculating the complex exponentials every time. The longest part of the FFT calculation is working out the complex exponentials, which is a shame because for any given FFT length the exponentials are constants. They don't depend on your input data at all. In the real time world we use FFT routines where we calculate the exponentials once at the start of the program and then the actual fft itself takes that const array as one of its inputs. Don't know if your FFT class can do something similar.
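To make that concrete, here is a rough sketch (my own, not part of the poster's FFT class) of precomputing the twiddle factors once for a fixed length and reusing them:

// Precompute the complex exponentials (twiddle factors) for a fixed FFT length n,
// so Math.cos/Math.sin are not called again on every FFT invocation.
class TwiddleTable {
    final float[] cos;
    final float[] sin;

    TwiddleTable(int n) {
        cos = new float[n / 2];
        sin = new float[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double kth = -2 * Math.PI * k / n;
            cos[k] = (float) Math.cos(kth);
            sin[k] = (float) Math.sin(kth);
        }
    }
}
// An FFT routine written for length n would then read cos[...] and sin[...] from this
// table in its combine step instead of computing them in the inner loop.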
If you do end up going to something like FFTW then you're going to have to get used to calling C code from your Java. Also make sure you get a version that supports (I think) NEON, ARM's answer to SSE, AVX and Altivec. It's worth ploughing through their release notes to check. Also I strongly suspect that FFTW will only be able to offer a significant speed up if you ask it to perform an FFT on single precision floats, not doubles.
Google luck!
--Edit--
I meant of course 'good luck'. Give me a real keyboard quick, these touchscreen ones are unreliable...
First, thanks for all your answers.
I followed them and made two tests:
First, I replaced the doubles used in my Complex class with floats. The result is just a bit better, but not enough.
Then I rewrote the fft method so that it no longer uses Complex, but a two-dimensional float array instead. For each row of this array, the first column contains the real part, and the second one the imaginary part.
I also changed my code to instantiate the float array only once, in the onCreate method.
And the result... is worse! Now it takes a little more than 500 ms instead of 300 ms.
I don't know what to do now.
You can find below the initial fft function, and then the one I have rewritten.
Thanks for your help.
// compute the FFT of x[], assuming its length is a power of 2
public static Complex[] fft(Complex[] x) {
    int N = x.length;

    // base case
    if (N == 1) return new Complex[] { x[0] };

    // radix 2 Cooley-Tukey FFT
    if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

    // fft of even terms
    Complex[] even = new Complex[N/2];
    for (int k = 0; k < N/2; k++) {
        even[k] = x[2*k];
    }
    Complex[] q = fft(even);

    // fft of odd terms
    Complex[] odd = even; // reuse the array
    for (int k = 0; k < N/2; k++) {
        odd[k] = x[2*k + 1];
    }
    Complex[] r = fft(odd);

    // combine
    Complex[] y = new Complex[N];
    for (int k = 0; k < N/2; k++) {
        double kth = -2 * k * Math.PI / N;
        Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
        y[k] = q[k].plus(wk.times(r[k]));
        y[k + N/2] = q[k].minus(wk.times(r[k]));
    }
    return y;
}
public static float[][] fftf(float[][] x) {
    /**
     * x[][0] = real part
     * x[][1] = imaginary part
     */
    int N = x.length;

    // base case
    if (N == 1) return new float[][] { x[0] };

    // radix 2 Cooley-Tukey FFT
    if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

    // fft of even terms
    float[][] even = new float[N/2][2];
    for (int k = 0; k < N/2; k++) {
        even[k] = x[2*k];
    }
    float[][] q = fftf(even);

    // fft of odd terms
    float[][] odd = even; // reuse the array
    for (int k = 0; k < N/2; k++) {
        odd[k] = x[2*k + 1];
    }
    float[][] r = fftf(odd);

    // combine
    float[][] y = new float[N][2];
    double kth, wkcos, wksin;
    for (int k = 0; k < N/2; k++) {
        kth = -2 * k * Math.PI / N;
        // Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
        wkcos = Math.cos(kth); // real part
        wksin = Math.sin(kth); // imaginary part
        // y[k] = q[k].plus(wk.times(r[k]));
        y[k][0] = (float) (q[k][0] + wkcos * r[k][0] - wksin * r[k][1]);
        y[k][1] = (float) (q[k][1] + wkcos * r[k][1] + wksin * r[k][0]);
        // y[k + N/2] = q[k].minus(wk.times(r[k]));
        y[k + N/2][0] = (float) (q[k][0] - (wkcos * r[k][0] - wksin * r[k][1]));
        y[k + N/2][1] = (float) (q[k][1] - (wkcos * r[k][1] + wksin * r[k][0]));
    }
    return y;
}
Actually, I think I don't understand everything.
First, about Math.cos and Math.sin: how do you want me not to compute them each time? Do you mean that I should compute all the values only once (e.g. store them in an array) and use them for each computation?
Second, about the N % 2 check: indeed it's not very useful, I could make the test before the call of the function.
Third, about Simon's advice: I mixed what he said and what you said, that's why I replaced the Complex with a two-dimensional float[][]. If that was not what he suggested, then what was it?
Last, I'm not an FFT expert, so what do you mean by doing a "real FFT"? Do you mean that my imaginary part is useless? If so, I'm not sure, because later in my code I compute the magnitude of each frequency, i.e. sqrt(real[i]*real[i] + imag[i]*imag[i]). And I think that my imaginary part is not equal to zero...
Thanks!

Calculating Standard Deviation of Angles?

So I'm working on an application using compass angles (in degrees). I've managed to work out how to calculate the mean of the angles, using the following (found at http://en.wikipedia.org/wiki/Directional_statistics#The_fundamental_difference_between_linear_and_circular_statistics):
double calcMean(ArrayList<Double> angles){
    double sin = 0;
    double cos = 0;
    for(int i = 0; i < angles.size(); i++){
        sin += Math.sin(angles.get(i) * (Math.PI/180.0));
        cos += Math.cos(angles.get(i) * (Math.PI/180.0));
    }
    sin /= angles.size();
    cos /= angles.size();
    double result = Math.atan2(sin, cos) * (180/Math.PI);
    if(cos > 0 && sin < 0) result += 360;
    else if(cos < 0) result += 180;
    return result;
}
So I get my mean/average values correctly, but I can't get proper variance/stddev values. I'm fairly certain I'm calculating my variance incorrectly, but can't think of a correct way to do it.
Here's how I'm calculating variance:
double calcVariance(ArrayList<Double> angles){
    //THIS IS WHERE I DON'T KNOW WHAT TO PUT
    ArrayList<Double> normalizedList = new ArrayList<Double>();
    for(int i = 0; i < angles.size(); i++){
        double sin = Math.sin(angles.get(i) * (Math.PI/180));
        double cos = Math.cos(angles.get(i) * (Math.PI/180));
        normalizedList.add(Math.atan2(sin, cos) * (180/Math.PI));
    }
    double mean = calcMean(angles);
    ArrayList<Double> squaredDifference = new ArrayList<Double>();
    for(int i = 0; i < normalizedList.size(); i++){
        squaredDifference.add(Math.pow(normalizedList.get(i) - mean, 2));
    }
    double result = 0;
    for(int i = 0; i < squaredDifference.size(); i++){
        result += squaredDifference.get(i);
    }
    return result/squaredDifference.size();
}
While it's the proper way to calculate variance, I'm not sure what I'm supposed to use here. I presume that I'm supposed to use arctangent, but the standard deviation/variance values seem off. Help?
EDIT:
Example: inputting the values 0, 350, 1, 0, 0, 0, 1, 358, 9, 1 results in an average angle of 0.0014 (since the angles are so close to zero), but if you just take a non-angle average you get 72... which is way off. Since I don't know how to manipulate the individual values to be what they should be, the variance calculated is 25074, giving a standard deviation of 158 degrees, which is insane!! (It should only be a few degrees.) What I think I need to do is properly normalize the individual values so I can get correct variance/stddev values.
By the Wikipedia page you link to the circular standard deviation is sqrt(-log R²), where R = |mean of samples|, if you consider the samples as complex numbers on the unit circle. So the calculation of standard deviation is very similar to the calculation of the mean angle:
double calcStddev(ArrayList<Double> angles){
    double sin = 0;
    double cos = 0;
    for(int i = 0; i < angles.size(); i++){
        sin += Math.sin(angles.get(i) * (Math.PI/180.0));
        cos += Math.cos(angles.get(i) * (Math.PI/180.0));
    }
    sin /= angles.size();
    cos /= angles.size();
    double stddev = Math.sqrt(-Math.log(sin*sin + cos*cos));
    return stddev;
}
And if you think about it for a minute it makes sense: When you average a bunch of points close to each other on the unit circle the result is not too far off from the circle, so R will be close to 1 and the stddev near 0. If the points are distributed evenly along the circle their average will be close to 0, so R will be close to 0 and the stddev very large.
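As a quick, hypothetical sanity check of those two extremes using the calcStddev method above (my snippet, not part of the answer; requires java.util.ArrayList and java.util.Arrays, and note the result is in radians, as discussed further down):

// Tightly clustered angles (the asker's example data): R is close to 1, stddev is small.
ArrayList<Double> clustered = new ArrayList<>(
        Arrays.asList(0.0, 350.0, 1.0, 0.0, 0.0, 0.0, 1.0, 358.0, 9.0, 1.0));
// Angles spread around the circle: R is close to 0, stddev is large.
ArrayList<Double> spread = new ArrayList<>(Arrays.asList(0.0, 85.0, 190.0, 275.0));

System.out.println(calcStddev(clustered)); // small value (in radians)
System.out.println(calcStddev(spread));    // large value (in radians)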
When you use Math.atan(sin/cos) you get an angle between -90 and 90 degrees. If you have a 120 degree angle, you get cos = -0.5 and sin = 0.866, and then atan(-1.7) = -60 degrees. Thus you put wrong angles into your normalized list.
Assuming that variance is a linear deviation, I'd recommend that you rotate your angles array by -calcMean(angles), adding/subtracting 360 to/from angles above/below 180/-180 (damn my writing!), while finding the maximum and minimum angle. That will give you the desired deviations. Like this:
Double meanAngle = calcMean(angles);
Double positiveDeviation = new Double(0);
Double negativeDeviation = new Double(0);
Iterator<Double> it = angles.iterator();
while (it.hasNext())
{
    Double deviation = it.next() - meanAngle;
    // wrap the deviation into the (-180, 180] range
    if (deviation > 180) deviation -= 360;
    if (deviation <= -180) deviation += 360;
    if (deviation > positiveDeviation) positiveDeviation = deviation;
    if (deviation < negativeDeviation) negativeDeviation = deviation;
}
return positiveDeviation - negativeDeviation;
For average squared deviations you should use your method (with the angles themselves, not the "normalized" ones), and keep the deviations in the (-180, 180) range!
The math library remainder function (Math.IEEEremainder in Java) is handy for dealing with angles.
A simple change would be to replace
normalizedList.get(i) - mean
with
Math.IEEEremainder(normalizedList.get(i) - mean, 360.0)
However, your first loop is then redundant, as the call to IEEEremainder takes care of all the normalisation. Moreover, it's simpler just to sum up the squared differences rather than store them. Personally I like to avoid pow() when plain arithmetic will do. So your function could be:
double calcVariance(ArrayList<Double> angles){
    double mean = calcMean(angles);
    double result = 0;
    for(int i = 0; i < angles.size(); i++){
        double diff = Math.IEEEremainder(angles.get(i) - mean, 360.0);
        result += diff * diff;
    }
    return result/angles.size();
}
A good way to deal with this nowadays is to use the two functions implemented in SciPy:
circmean: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.circmean.html
circstd: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.circstd.html
A couple of great things they include:
vectorization for fast computation
NaN handling
high/low thresholds, typically for angles between 0 and 360 degrees vs. between 0 and 2*pi.
The accepted answer by Joni does an excellent job of answering this question, but as Brian Hawkins noted:
Mind the units. The function as written takes angles in degrees as input and returns the standard deviation in radians.
Here's a version that fixes that issue by using degrees for both its arguments and its return value. It also has more flexibility, as it allows for a variable number of arguments.
public static double calcStdDevDegrees(double... angles) {
    double sin = 0;
    double cos = 0;
    for (int i = 0; i < angles.length; i++) {
        sin += Math.sin(angles[i] * (Math.PI/180.0));
        cos += Math.cos(angles[i] * (Math.PI/180.0));
    }
    sin /= angles.length;
    cos /= angles.length;
    double stddev = Math.sqrt(-Math.log(sin*sin + cos*cos));
    return Math.toDegrees(stddev);
}

Implementing exponential moving average in Java

I essentially have an array of values like this:
0.25, 0.24, 0.27, 0.26, 0.29, 0.34, 0.32, 0.36, 0.32, 0.28, 0.25, 0.24, 0.25
The above array is oversimplified; in my real code I'm collecting 1 value per millisecond, and I need to process the output with an algorithm I wrote to find the closest peak before a point in time. My logic fails because in my example above, 0.36 is the real peak, but my algorithm would look backwards and see the very last number 0.25 as the peak, as there's a decrease to 0.24 before it.
The goal is to take these values and apply an algorithm to them which will "smooth" them out a bit so that I have more linear values (i.e. I'd like my results to be curvy, not jagged).
I've been told to apply an exponential moving average filter to my values. How can I do this? It's really hard for me to read mathematical equations; I deal much better with code.
How do I process the values in my array, applying an exponential moving average calculation to even them out?
float[] mydata = ...
mySmoothedData = exponentialMovingAverage(mydata, 0.5);

float[] exponentialMovingAverage(float[] input, float alpha) {
    // what do I do here?
    return result;
}
To compute an exponential moving average, you need to keep some state around and you need a tuning parameter. This calls for a little class (assuming you're using Java 5 or later):
class ExponentialMovingAverage {
    private double alpha;
    private Double oldValue;

    public ExponentialMovingAverage(double alpha) {
        this.alpha = alpha;
    }

    public double average(double value) {
        if (oldValue == null) {
            oldValue = value;
            return value;
        }
        double newValue = oldValue + alpha * (value - oldValue);
        oldValue = newValue;
        return newValue;
    }
}
Instantiate with the decay parameter you want (may take tuning; should be between 0 and 1) and then use average(…) to filter.
When reading a page on some mathematical recurrence, all you really need to know when turning it into code is that mathematicians like to write indexes into arrays and sequences with subscripts. (They have a few other notations as well, which doesn't help.) However, the EMA is pretty simple, as you only need to remember one old value; no complicated state arrays required.
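For instance (my glue code, not part of the answer), the exponentialMovingAverage(...) stub from the question could simply delegate to that class:

// Hypothetical wrapper filling in the stub from the question.
float[] exponentialMovingAverage(float[] input, float alpha) {
    ExponentialMovingAverage ema = new ExponentialMovingAverage(alpha);
    float[] result = new float[input.length];
    for (int i = 0; i < input.length; i++) {
        result[i] = (float) ema.average(input[i]);   // filter each sample in order
    }
    return result;
}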
I am having a hard time understanding your questions, but I will try to answer anyway.
1) If your algorithm found 0.25 instead of 0.36, then it is wrong. It is wrong because it assumes a monotonic increase or decrease (that is, "always going up" or "always going down"). Unless you average ALL your data, your data points, as you present them, are nonlinear. If you really want to find the maximum value between two points in time, then slice your array from t_min to t_max and find the max of that subarray.
2) Now, the concept of "moving averages" is very simple: imagine that I have the following list: [1.4, 1.5, 1.4, 1.5, 1.5]. I can "smooth it out" by taking the average of two numbers at a time: [1.45, 1.45, 1.45, 1.5]. Notice that the first number is the average of 1.5 and 1.4 (the second and first numbers); the second (in the new list) is the average of 1.4 and 1.5 (the third and second of the old list); the third (new list) is the average of 1.5 and 1.4 (fourth and third), and so on (see the tiny sketch below). I could have made the period three, four, or "n". Notice how the data is much smoother. A good way to see moving averages at work is to go to Google Finance, select a stock (try Tesla Motors; pretty volatile (TSLA)), and click on "technicals" at the bottom of the chart. Select "Moving Average" with a given period, and "Exponential moving average" to compare their differences.
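A tiny sketch of that period-2 smoothing (my example, just to show the arithmetic):

// Period-2 simple moving average of the list [1.4, 1.5, 1.4, 1.5, 1.5].
double[] values = {1.4, 1.5, 1.4, 1.5, 1.5};
double[] smoothed = new double[values.length - 1];
for (int i = 1; i < values.length; i++) {
    smoothed[i - 1] = (values[i] + values[i - 1]) / 2.0; // average of each adjacent pair
}
// smoothed is now [1.45, 1.45, 1.45, 1.5]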
The exponential moving average is just another elaboration of this, but it weights "older" data less than "new" data; this is a way to "bias" the smoothing toward the back. Please read the Wikipedia entry.
So, this is more a comment than an answer, but the little comment box was just too tiny. Good luck.
Take a look at this.
If your noise has zero average, consider also the use of a Kalman filter.
In a rolling manner... I also use the Apache Commons Math library:
public LinkedList<Double> EMA(int dperiods, double alpha)
        throws IOException {
    String line;
    int i = 0;
    DescriptiveStatistics stats = new SynchronizedDescriptiveStatistics();
    stats.setWindowSize(dperiods);
    File f = new File("");
    BufferedReader in = new BufferedReader(new FileReader(f));
    LinkedList<Double> ema1 = new LinkedList<Double>();
    // Compute some statistics
    while ((line = in.readLine()) != null) {
        double sum = 0;
        double den = 0;
        System.out.println("line: " + " " + line);
        stats.addValue(Double.parseDouble(line.trim()));
        i++;
        if (i > dperiods)
            for (int j = 0; j < dperiods; j++) {
                double var = Math.pow((1 - alpha), j);
                den += var;
                sum += stats.getElement(j) * var;
                System.out.println("elements:" + stats.getElement(j));
                System.out.println("sum:" + sum);
            }
        else
            for (int j = 0; j < i; j++) {
                double var = Math.pow((1 - alpha), j);
                den += var;
                sum += stats.getElement(j) * var;
            }
        ema1.add(sum / den);
        System.out.println("EMA: " + sum / den);
    }
    return ema1;
}
public class MovingAvarage {

    public static void main(String[] args) {
        double[] array = {1.2, 3.4, 4.5, 4.5, 4.5};
        double St = 0D;
        for(int i = 0; i < array.length; i++) {
            St = movingAvarage(St, array[i]);
        }
        System.out.println(St);
    }

    private static double movingAvarage(double St, double Yt) {
        double alpha = 0.01, oneMinusAlpha = 0.99;
        if(St <= 0D) {
            St = Yt;
        } else {
            St = alpha*Yt + oneMinusAlpha*St;
        }
        return St;
    }
}
If you're having trouble with the math, you could go with a simple moving average instead of an exponential one. The output would then be the average of the last x terms. Untested pseudocode:
int data[] = getFilled();
int outdata[] = initializeme();
for (int y = 0; y < data.length; y++) {
    int sum = 0;
    for (int x = y - 4; x <= y; x++)   // the last 5 terms, ending at index y
        sum += data[x];
    outdata[y] = sum / 5;
}
Note that you will need to handle the start and end parts of the data, since clearly you can't average the last 5 terms when you are on your 2nd data point. Also, there are more efficient ways of calculating this moving average (sum = sum - oldest + newest), but this is just to get the concept across.
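For what it's worth, a sketch of that more efficient rolling-sum variant (my example, reusing the data/outdata arrays from the pseudocode above; the first few outputs are simply skipped until the window is full):

int window = 5;
int sum = 0;
for (int y = 0; y < data.length; y++) {
    sum += data[y];                // add the newest sample
    if (y >= window) {
        sum -= data[y - window];   // drop the sample that just left the window
    }
    if (y >= window - 1) {
        outdata[y] = sum / window; // average of the last 5 samples
    }
}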
