I found a Dynamic Time Warping (DTW) library written in Java. It takes floating-point arrays (float[]) as input, which are passed in from a JSP page.
I have confirmed repeatedly that the huge memory usage is caused by the DTW library.
Java keeps throwing "java.lang.OutOfMemoryError", and the DTW computation consistently uses around 2-3 GB of memory.
I suspect the root cause is the instance (global) variables; is that correct?
Could you please help me improve the program so that it keeps memory usage low?
JSP Code:
...
float[] integer_hum_UDS = new float[hum_uds.length];
float[] integer_midi_UDS = new float[midi_uds.length];
...
DTW dtw = new DTW(integer_midi_UDS, integer_hum_UDS); //<- Huge Memory usage statement
float distance = dtw.getDistance();
Java Code:
public class DTW {
protected float[] seq1;
protected float[] seq2;
protected int[][] warpingPath;
protected int n;
protected int m;
protected int K;
protected float warpingDistance;
/**
* Constructor
*
* @param sample the query sequence
* @param templete the template sequence
*/
public DTW(float[] sample, float[] templete) {
seq1 = sample;
seq2 = templete;
n = seq1.length;
m = seq2.length;
K = 1;
warpingPath = new int[n + m][2]; // max(n, m) <= K < n + m
warpingDistance = 0;
this.compute();
}
public void clear() {
seq1 = null;
seq2 = null;
warpingPath = null;
n = 0;
m = 0;
K = 0;
warpingDistance = 0;
System.gc();
Runtime.getRuntime().freeMemory();
}
public void compute() {
float accumulatedDistance = 0;
float[][] d = new float[n][m]; // local distances
float[][] D = new float[n][m]; // global distances
for (int i = 0; i < n; i++) {
for (int j = 0; j < m; j++) {
d[i][j] = distanceBetween(seq1[i], seq2[j]);
}
}
D[0][0] = d[0][0];
for (int i = 1; i < n; i++) {
D[i][0] = d[i][0] + D[i - 1][0];
}
for (int j = 1; j < m; j++) {
D[0][j] = d[0][j] + D[0][j - 1];
}
for (int i = 1; i < n; i++) {
for (int j = 1; j < m; j++) {
accumulatedDistance = Math.min(Math.min(D[i-1][j], D[i-1][j-1]), D[i][j-1]);
accumulatedDistance += d[i][j];
D[i][j] = accumulatedDistance;
}
}
accumulatedDistance = D[n - 1][m - 1];
int i = n - 1;
int j = m - 1;
int minIndex = 1;
warpingPath[K - 1][0] = i;
warpingPath[K - 1][1] = j;
while ((i + j) != 0) {
if (i == 0) {
j -= 1;
} else if (j == 0) {
i -= 1;
} else { // i != 0 && j != 0
float[] array = { D[i - 1][j], D[i][j - 1], D[i - 1][j - 1] };
minIndex = this.getIndexOfMinimum(array);
if (minIndex == 0) {
i -= 1;
} else if (minIndex == 1) {
j -= 1;
} else if (minIndex == 2) {
i -= 1;
j -= 1;
}
} // end else
K++;
warpingPath[K - 1][0] = i;
warpingPath[K - 1][1] = j;
} // end while
warpingDistance = accumulatedDistance / K;
//this.reversePath(warpingPath);
//Clear
this.clear();
//d = null;
//D = null;
//warpingPath = null;
//accumulatedDistance = 0;
}
/**
* Changes the order of the warping path (increasing order)
*
* @param path the warping path in reverse order
*/
/*
protected void reversePath(int[][] path) {
int[][] newPath = new int[K][2];
for (int i = 0; i < K; i++) {
for (int j = 0; j < 2; j++) {
newPath[i][j] = path[K - i - 1][j];
}
}
warpingPath = newPath;
}
*/
/**
* Returns the warping distance
*
* @return the warping distance
*/
public float getDistance() {
return warpingDistance;
}
/**
* Computes a distance between two points
*
* @param p1 the first point
* @param p2 the second point
* @return the squared distance between the two points
*/
protected float distanceBetween(float p1, float p2) {
return (p1 - p2) * (p1 - p2);
}
/**
* Finds the index of the minimum element from the given array
*
* @param array the array containing numeric values
* @return the index of the minimum element
*/
protected int getIndexOfMinimum(float[] array) {
int index = 0;
float val = array[0];
for (int i = 1; i < array.length; i++) {
if (array[i] < val) {
val = array[i];
index = i;
}
}
return index;
}
/**
* Returns a string that displays the warping distance and path
*/
public String toString() {
String retVal = "Warping Distance: " + warpingDistance + "\n";
/*
retVal += "Warping Path: {";
for (int i = 0; i < K; i++) {
retVal += "(" + warpingPath[i][0] + ", " +warpingPath[i][1] + ")";
retVal += (i == K - 1) ? "}" : ", ";
}
*/
return retVal;
}
/**
* Tests this class
*
* @param args ignored
*/
public static void main(String[] args) {
float[] n2 = {1, 2, 3, 4, 5, 9, 19, 49.7555f};
float[] n1 = {1, 2, 3, 4};
DTW dtw = new DTW(n1, n2);
System.out.println(dtw);
}
}
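For scale, the two n-by-m float matrices allocated in compute() dominate the memory, not the fields themselves: with two sequences of, say, 20,000 samples each (a hypothetical size), d and D would each take 20,000 * 20,000 * 4 bytes, roughly 1.6 GB apiece, which lines up with the 2-3 GB you observe. Below is a minimal sketch, not part of the library, of how the warping distance alone can be computed with two rolling rows (O(m) memory). It drops the warping path, which the posted code clears at the end of compute() anyway, and it does not track the path length K that the original uses for normalization.
public static float warpingDistance(float[] seq1, float[] seq2) {
    int n = seq1.length, m = seq2.length;
    float[] prev = new float[m]; // row i - 1 of the accumulated-cost matrix
    float[] curr = new float[m]; // row i
    // first row: D[0][j] = d[0][j] + D[0][j - 1]
    for (int j = 0; j < m; j++) {
        float d = (seq1[0] - seq2[j]) * (seq1[0] - seq2[j]);
        prev[j] = (j == 0) ? d : d + prev[j - 1];
    }
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < m; j++) {
            float d = (seq1[i] - seq2[j]) * (seq1[i] - seq2[j]);
            curr[j] = (j == 0)
                    ? d + prev[0]
                    : d + Math.min(Math.min(prev[j], prev[j - 1]), curr[j - 1]);
        }
        float[] tmp = prev; prev = curr; curr = tmp; // reuse the two rows
    }
    return prev[m - 1]; // accumulated distance D[n - 1][m - 1]
}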
Given an array of integers, I need to display a graph representing each integer.
For example, given an array {2,4,6,-3,-4,5,-1,-2,3}, the output on the console should be like:
*
* *
* * *
* * * *
* * * * *
* * * * *
* * * *
* * *
* *
*
I took the matrix approach, but so far I have only been able to produce the horizontal version of the desired graph (a 90° clockwise rotated version of it, to be precise).
How can I rotate my graph 90° anticlockwise to get the desired result? I tried transposing my result matrix, but then the graph becomes the exact inverse of the desired result.
Here is the code:
public static char[][] plot(int[] array) {
int max = Integer.MIN_VALUE;
int min = Integer.MAX_VALUE;
for (int i = 0; i < array.length; i++) {
if (array[i] > max) {
max = array[i];
}
if (array[i] < min) {
min = array[i];
}
}
if (min < 0) {
min *= (-1);
}
char[][] tempArray = new char[array.length][max + min + 1];
for (int i = 0; i < tempArray.length; i++) {
if (array[i] > 0) {
for (int j = (min + 1); j <= min + array[i]; j++) {
tempArray[i][j] = '*';
}
System.out.println();
} else {
for (int j = min - 1; j >= min + array[i]; j--) {
tempArray[i][j] = '*';
}
}
}
return tempArray;
}
public static void main(String...s){
int[] arr = {2,4,6,-3,-4,5,-1,-2,3};
System.out.println(Arrays.deepToString(plot(arr)));
}
Also, I'm not able to nicely format the output matrix (I'm still looking into what is going on there), so I used Arrays.deepToString().
I made my own plotter (plot0) which writes directly to the output stream, char by char, line by line:
private static void plot0(final int... numbers) {
final int cols = numbers.length;
final IntSummaryStatistics stat = Arrays.stream(numbers).summaryStatistics();
final int max = stat.getMax();
final int min = stat.getMin();
final int rows;
if (min < 0) {
// ~min + 2 == -min + 1: rows covers the range max..min inclusive (the zero row is skipped below)
rows = Math.max(0, max) + ~min + 2;
} else {
rows = max;
}
for (int i = 0; i < rows; i++) {
final int val = Math.max(0, max) - i;
if (0 == val) {
continue; // do not plot zero
}
for (int j = 0; j < cols; j++) {
final int num = numbers[j];
System.out.print(
0 < num && 0 < val && num >= val
|| 0 > num && 0 > val && num <= val
? '*'
: ' ');
}
System.out.println();
}
}
I made some small changes to your code (renamed to plot1), but your original approach basically works:
changed Arrays.deepToString(...) to new String(tempArray[i]) when printing
a two-dimensional array is not directly printable, so print one line of chars at a time
filled the unset cells with ' ' via Arrays.fill(tempArray[i], ' ')
added the missing check that skips 0 values when printing; without it the baseline is visible
moved the end-of-line print to after all the '*' values are filled, not before
the '*' values overwrite the ' ' placeholders, so you cannot write directly to the output here.
private static void plot1(final int... array) {
int max = Integer.MIN_VALUE;
int min = Integer.MAX_VALUE;
for (final int element : array) {
if (element > max) {
max = element;
}
if (element < min) {
min = element;
}
}
if (min < 0) {
min *= -1;
}
final char[][] tempArray = new char[array.length][max + min + 1];
for (int i = 0; i < tempArray.length; i++) {
Arrays.fill(tempArray[i], ' ');
if (array[i] > 0) {
for (int j = min + 1; j <= min + array[i]; j++) {
tempArray[i][j] = '*';
}
} else if (array[i] < 0) {
for (int j = min - 1; j >= min + array[i]; j--) {
tempArray[i][j] = '*';
}
}
System.out.println(new String(tempArray[i]));
}
}
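If it helps, a hypothetical driver in the same class (mirroring the main from the question) that exercises both versions:
public static void main(String... s) {
    int[] arr = {2, 4, 6, -3, -4, 5, -1, -2, 3};
    plot0(arr); // streams the graph directly to System.out
    plot1(arr); // builds each line in a char[] and prints it
}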
I was trying to solve the XOR problem, but the output always converged to 0.5, so I tried a simpler problem like NOT, and the same thing happened.
I really don't know what's going on. I checked the code a million times and everything seems right; when I debugged it by saving the neural network's state, I saw that either the weight values or the bias values were getting really large. To write it I followed the 3Blue1Brown YouTube series on neural networks and some other videos, too.
This is my code:
PS: I put the entire code here, but I think the main problem is inside the backPropag function.
class NeuralNetwork {
int inNum, hiddenLayersNum, outNum, netSize;
int[] hiddenLayerSize;
Matrix[] weights;
Matrix[] biases;
Matrix[] sums;
Matrix[] activations;
Matrix[] error;
Matrix inputs;
long samples = 0;
float learningRate;
//Constructor------------------------------------------------------------------------------------------------------
NeuralNetwork(int inNum, int hiddenLayersNum, int[] hiddenLayerSize, int outNum, float learningRate) {
this.inNum = inNum;
this.hiddenLayersNum = hiddenLayersNum;
this.hiddenLayerSize = hiddenLayerSize;
this.outNum = outNum;
this.netSize = hiddenLayersNum + 1;
this.learningRate = learningRate;
//output layer plus the hidden layer size
//Note: I'm not adding the input layer because it doesn't have weights
weights = new Matrix[netSize];
//no biases added to the output layer
biases = new Matrix[netSize - 1];
sums = new Matrix[netSize];
activations = new Matrix[netSize];
error = new Matrix[netSize];
initializeHiddenLayer();
initializeOutputLayer();
}
//Initializing Algorithms------------------------------------------------------------------------------------------
void initializeHiddenLayer() {
for (int i = 0; i < hiddenLayersNum; i++) {
if (i == 0) {//only the first hidden layer takes the inputs
weights[i] = new Matrix(hiddenLayerSize[i], inNum);
} else {
weights[i] = new Matrix(hiddenLayerSize[i], hiddenLayerSize[i - 1]);
}
biases[i] = new Matrix(hiddenLayerSize[i], 1);
sums[i] = new Matrix(hiddenLayerSize[i], 1);
activations[i] = new Matrix(hiddenLayerSize[i], 1);
error[i] = new Matrix(hiddenLayerSize[i], 1);
}
}
void initializeOutputLayer() {
//the output layer takes the last hidden layer activation values
weights[netSize - 1] = new Matrix(outNum, hiddenLayerSize[hiddenLayerSize.length - 1]);
activations[netSize - 1] = new Matrix(outNum, 1);
sums[netSize - 1] = new Matrix(outNum, 1);
error[netSize - 1] = new Matrix(outNum, 1);
for (Matrix m : weights) {
for (int i = 0; i < m.i; i++) {
for (int j = 0; j < m.j; j++) {
m.values[i][j] = random(-1, 1);
}
}
}
for (Matrix m : biases) {
for (int i = 0; i < m.i; i++) {
for (int j = 0; j < m.j; j++) {
m.values[i][j] = 1;
}
}
}
for (Matrix m : sums) {
for (int i = 0; i < m.i; i++) {
for (int j = 0; j < m.j; j++) {
m.values[i][j] = 0;
}
}
}
}
//Calculation------------------------------------------------------------------------------------------------------
void calculate(float[] inputs) {
this.inputs = new Matrix(0, 0);
this.inputs = this.inputs.arrayToCollumn(inputs);
sums[0] = (weights[0].matrixMult(this.inputs)).sum(biases[0]);
activations[0] = sigM(sums[0]);
for (int i = 1; i < netSize - 1; i++) {
sums[i] = weights[i].matrixMult(activations[i - 1]);
activations[i] = sigM(sums[i]).sum(biases[i]);
}
//there are no biases in the output layer
//and the output layer uses the sigmoid function
sums[netSize - 1] = weights[netSize - 1].matrixMult(activations[netSize - 1 - 1]);
activations[netSize - 1] = sigM(sums[netSize - 1]);
}
//Sending outputs--------------------------------------------------------------------------------------------------
Matrix getOuts() {
return activations[netSize - 1];
}
//Backpropagation--------------------------------------------------------------------------------------------------
void calcError(float[] exp) {
Matrix expected = new Matrix(0, 0);
expected = expected.arrayToCollumn(exp);
//E = (output - expected)
error[netSize - 1] = this.getOuts().diff(expected);
samples++;
}
void backPropag(int layer) {
if (layer == netSize - 1) {
error[layer].scalarDiv(samples);
for (int i = layer - 1; i >= 0; i--) {
prevLayerCost(i);
}
weightError(layer);
backPropag(layer - 1);
} else {
weightError(layer);
biasError(layer);
if (layer != 0)
backPropag(layer - 1);
}
}
void weightError(int layer) {
if (layer != 0) {
for (int i = 0; i < weights[layer].i; i++) {
for (int j = 0; j < weights[layer].j; j++) {
float changeWeight = 0;
if (layer != netSize - 1)
changeWeight = activations[layer - 1].values[j][0] * deriSig(sums[layer].values[i][0]) * error[layer].values[i][0];
else
changeWeight = activations[layer - 1].values[j][0] * deriSig(sums[layer].values[i][0]) * error[layer].values[i][0];
weights[layer].values[i][j] += -learningRate * changeWeight;
}
}
} else {
for (int i = 0; i < weights[layer].i; i++) {
for (int j = 0; j < weights[layer].j; j++) {
float changeWeight = this.inputs.values[j][0] * deriSig(sums[layer].values[i][0]) * error[layer].values[i][0];
weights[layer].values[i][j] += -learningRate * changeWeight;
}
}
}
}
void biasError(int layer) {
for (int i = 0; i < biases[layer].i; i++) {
for (int j = 0; j < biases[layer].j; j++) {
float changeBias = 0;
if (layer != netSize - 1)
changeBias = deriSig(sums[layer].values[i][0]) * error[layer].values[i][0];
biases[layer].values[i][j] += -learningRate * changeBias;
}
}
}
void prevLayerCost(int layer) {
for (int i = 0; i < activations[layer].i; i++) {
for (int j = 0; j < activations[layer + 1].j; j++) {//for all connections of that neuron to the next layer
if (layer != netSize - 1)
error[layer].values[i][0] += weights[layer + 1].values[j][i] * deriSig(sums[layer + 1].values[j][0]) * error[layer + 1].values[j][0];
else
error[layer].values[i][0] += weights[layer + 1].values[j][i] * deriSig(sums[layer + 1].values[j][0]) * error[layer + 1].values[j][0];
}
}
}
//Activation Functions---------------------------------------------------------------------------------------------
Matrix reLUM(Matrix m) {
Matrix temp = m.copyM();
for (int i = 0; i < temp.i; i++) {
for (int j = 0; j < temp.j; j++) {
temp.values[i][j] = ReLU(m.values[i][j]);
}
}
return temp;
}
float ReLU(float x) {
return max(0, x);
}
float deriReLU(float x) {
if (x <= 0)
return 0;
else
return 1;
}
Matrix sigM(Matrix m) {
Matrix temp = m.copyM();
for (int i = 0; i < temp.i; i++) {
for (int j = 0; j < temp.j; j++) {
temp.values[i][j] = sig(m.values[i][j]);
}
}
return temp;
}
float sig(float x) {
return 1 / (1 + exp(-x));
}
float deriSig(float x) {
return sig(x) * (1 - sig(x));
}
//Saving Files-----------------------------------------------------------------------------------------------------
void SaveNeuNet() {
for (int i = 0; i < weights.length; i++) {
weights[i].saveM("weights\\weightLayer" + i);
}
for (int i = 0; i < biases.length; i++) {
biases[i].saveM("biases\\biasLayer" + i);
}
for (int i = 0; i < activations.length; i++) {
activations[i].saveM("activations\\activationLayer" + i);
}
for (int i = 0; i < error.length; i++) {
error[i].saveM("errors\\errorLayer" + i);
}
}
}
And this is the Matrix code:
class Matrix {
int i, j, size;
float[][] values;
Matrix(int i, int j) {
this.i = i;
this.j = j;
this.size = i * j;
values = new float[i][j];
}
Matrix sum (Matrix other) {
if (other.i == this.i && other.j == this.j) {
for (int x = 0; x < this.i; x++) {
for (int z = 0; z < this.j; z++) {
values[x][z] += other.values[x][z];
}
}
return this;
}
return null;
}
Matrix diff(Matrix other) {
if (other.i == this.i && other.j == this.j) {
for (int x = 0; x < this.i; x++) {
for (int z = 0; z < this.j; z++) {
values[x][z] -= other.values[x][z];
}
}
return this;
}
return null;
}
Matrix scalarMult(float k) {
for (int i = 0; i < this.i; i++) {
for (int j = 0; j < this.j; j++) {
values[i][j] *= k;
}
}
return this;
}
Matrix scalarDiv(float k) {
if (k != 0) {
for (int i = 0; i < this.i; i++) {
for (int j = 0; j < this.j; j++) {
values[i][j] /= k;
}
}
return this;
} else
return null;
}
Matrix matrixMult(Matrix other) {
if (this.j != other.i)
return null;
else {
Matrix temp = new Matrix(this.i, other.j);
for (int i = 0; i < temp.i; i++) {
for (int j = 0; j < temp.j; j++) {
for (int k = 0; k < this.j; k++) {
temp.values[i][j] += this.values[i][k] * other.values[k][j];
}
}
}
return temp;
}
}
Matrix squaredValues(){
for (int i = 0; i < this.i; i++){
for (int j = 0; j < this.j; j++){
values[i][j] = sq(values[i][j]);
}
}
return this;
}
void printM() {
for (int x = 0; x < this.i; x++) {
print("| ");
for (int z = 0; z < this.j; z++) {
print(values[x][z] + " | ");
}
println();
}
}
void saveM(String name) {
String out = "";
for (int x = 0; x < this.i; x++) {
out += "| ";
for (int z = 0; z < this.j; z++) {
out += values[x][z] + " | ";
}
out += "\n";
}
saveStrings("outputs\\" + name + ".txt", new String[] {out});
}
Matrix arrayToCollumn(float[] array) {
Matrix temp = new Matrix(array.length, 1);
for (int i = 0; i < array.length; i++)
temp.values[i][0] = array[i];
return temp;
}
Matrix arrayToLine(float[] array) {
Matrix temp = new Matrix(1, array.length);
for (int j = 0; j < array.length; j++)
temp.values[0][j] = array[j];
return temp;
}
Matrix copyM(){
Matrix temp = new Matrix(i, j);
for (int i = 0; i < this.i; i++){
for (int j = 0; j < this.j; j++){
temp.values[i][j] = this.values[i][j];
}
}
return temp;
}
}
As I said, the outputs always converge to 0.5 instead of the expected value of 1 or 0.
I rewrote the code and it is working now! I have no idea what was wrong with the code before but this one works:
class NeuralNetwork {
int netSize;
float learningRate;
Matrix[] weights;
Matrix[] biases;
Matrix[] activations;
Matrix[] sums;
Matrix[] errors;
NeuralNetwork(int inNum, int hiddenNum, int[] hiddenLayerSize, int outNum, float learningRate) {
netSize = hiddenNum + 1;
this.learningRate = learningRate;
weights = new Matrix[netSize];
biases = new Matrix[netSize - 1];
activations = new Matrix[netSize];
sums = new Matrix[netSize];
errors = new Matrix[netSize];
initializeMatrices(inNum, hiddenNum, hiddenLayerSize, outNum);
}
//INITIALIZING MATRICES
void initializeMatrices(int inNum, int hiddenNum, int[] layerSize, int outNum) {
for (int i = 0; i < hiddenNum; i++) {
if (i == 0)
weights[i] = new Matrix(layerSize[0], inNum);
else
weights[i] = new Matrix(layerSize[i], layerSize[i - 1]);
biases[i] = new Matrix(layerSize[i], 1);
activations[i] = new Matrix(layerSize[i], 1);
errors[i] = new Matrix(layerSize[i], 1);
sums[i] = new Matrix(layerSize[i], 1);
weights[i].randomize(-1, 1);
biases[i].randomize(-1, 1);
activations[i].randomize(-1, 1);
}
weights[netSize - 1] = new Matrix(outNum, layerSize[layerSize.length - 1]);
activations[netSize - 1] = new Matrix(outNum, 1);
errors[netSize - 1] = new Matrix(outNum, 1);
sums[netSize - 1] = new Matrix(outNum, 1);
weights[netSize - 1].randomize(-1, 1);
activations[netSize - 1].randomize(-1, 1);
}
//---------------------------------------------------------------------------------------------------------------
void forwardPropag(float[] ins) {
Matrix inputs = new Matrix(0, 0);
inputs = inputs.arrayToCollumn(ins);
sums[0] = (weights[0].matrixMult(inputs)).sum(biases[0]);
activations[0] = sigM(sums[0]);
for (int i = 1; i < netSize - 1; i++) {
sums[i] = (weights[i].matrixMult(activations[i - 1])).sum(biases[i]);
activations[i] = sigM(sums[i]);
}
//output layer does not have biases
sums[netSize - 1] = weights[netSize - 1].matrixMult(activations[netSize - 2]);
activations[netSize - 1] = sigM(sums[netSize - 1]);
}
Matrix predict(float[] inputs) {
forwardPropag(inputs);
return activations[netSize - 1].copyM();
}
//SUPERVISED LEARNING - BACKPROPAGATION
void train(float[] inps, float[] expec) {
Matrix expected = new Matrix(0, 0);
expected = expected.arrayToCollumn(expec);
errors[netSize - 1] = predict(inps).diff(expected);
calcErorrPrevLayers();
adjustWeights(inps);
adjustBiases();
for (Matrix m : errors){
m.reset();
}
}
void calcErorrPrevLayers() {
for (int l = netSize - 2; l >= 0; l--) {
for (int i = 0; i < activations[l].i; i++) {
for (int j = 0; j < activations[l + 1].i; j++) {
errors[l].values[i][0] += weights[l + 1].values[j][i] * dSig(sums[l + 1].values[j][0]) * errors[l + 1].values[j][0];
}
}
}
}
void adjustWeights(float[] inputs) {
for (int l = 0; l < netSize; l++) {
if (l == 0) {
//for every neuron n in the first layer
for (int n = 0; n < activations[l].i; n++) {
//for every weight w of the first layer
for (int w = 0; w < inputs.length; w++) {
float weightChange = inputs[w] * dSig(sums[l].values[n][0]) * errors[l].values[n][0];
weights[l].values[n][w] += -learningRate * weightChange;
}
}
} else {
//for every neuron n in layer l
for (int n = 0; n < activations[l].i; n++) {
//for every weight w of layer l
for (int w = 0; w < activations[l - 1].i; w++) {
float weightChange = activations[l - 1].values[w][0] * dSig(sums[l].values[n][0]) * errors[l].values[n][0];
weights[l].values[n][w] += -learningRate * weightChange;
}
}
}
}
}
void adjustBiases() {
for (int l = 0; l < netSize - 1; l++) {
//for every neuron n in layer l
for (int n = 0; n < activations[l].i; n++) {
float biasChange = dSig(sums[l].values[n][0]) * errors[l].values[n][0];
biases[l].values[n][0] += -learningRate * biasChange;
}
}
}
//ACTIVATION FUNCTION
float sig(float x) {
return 1 / (1 + exp(-x));
}
float dSig(float x) {
return sig(x) * (1 - sig(x));
}
Matrix sigM(Matrix m) {
Matrix temp = m.copyM();
for (int i = 0; i < m.i; i++) {
for (int j = 0; j < m.j; j++) {
temp.values[i][j] = sig(m.values[i][j]);
}
}
return temp;
}
}
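Two notes on running this: the rewritten class calls Matrix.randomize(...) and Matrix.reset(), which are not in the Matrix listing above, and the code relies on Processing's built-in helpers (random, exp, print, ...). Minimal versions of the missing methods, as a sketch consistent with the rest of the Matrix class, might look like:
void randomize(float lo, float hi) {
    for (int x = 0; x < this.i; x++) {
        for (int z = 0; z < this.j; z++) {
            values[x][z] = random(lo, hi); // Processing's random(low, high)
        }
    }
}
void reset() {
    for (int x = 0; x < this.i; x++) {
        for (int z = 0; z < this.j; z++) {
            values[x][z] = 0;
        }
    }
}
And a hypothetical XOR training loop against the rewritten class (network size and iteration count are arbitrary choices):
NeuralNetwork nn = new NeuralNetwork(2, 1, new int[] {4}, 1, 0.1f);
float[][] ins = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
float[][] outs = {{0}, {1}, {1}, {0}};
for (int epoch = 0; epoch < 10000; epoch++) {
    for (int s = 0; s < ins.length; s++) {
        nn.train(ins[s], outs[s]);
    }
}
for (int s = 0; s < ins.length; s++) {
    nn.predict(ins[s]).printM(); // expected to approach 0, 1, 1, 0 per the author's report
}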
I have been trying to sort a 2D array of type double with the code provided below. My array does not contain any 0s, but after sorting there are several 0s.
Sorting code:
Arrays.sort(merge, new Comparator<double[]>() {
@Override
public int compare(double[] o1, double[] o2) {
return Double.compare(o1[4], o2[4]);
}
});
Here is my complete code. I want to sort on the basis of array[i][4],
i.e. the 4th index:
public class EvoAlgo {
static int k = 40;//generations
static double al = 0, ah = 1, bl = -2, bh = 2, cl = -2, ch = 1, dl = 0.5, dh = 1;
static double m1 = 0.02, m2 = 0.22, m3 = -0.11, m4 = 0.22;
static double f(double a, double b, double c, double d) {
return (2 * a * a) - (2.5 * a * b * c * c) + (4 * b * c * d) + (0.25 * c * d) - (0.2 * d * d);
}
static double[] mutate(double[] child) {
double m = Math.random();
if (m <= 0.25) {
child[0] = child[0] * m1;
} else if (m <= 0.5) {
child[1] = child[1] * m2;
} else if (m <= 0.75) {
child[2] = child[2] * m3;
} else {
child[3] = child[3] * m4;
}
child[4] = f(child[0], child[1], child[2], child[3]);
return child;
}
static double[][] initRandomPop() {
double[][] initPop = new double[25][5];
for (int i = 0; i < initPop.length; i++) {
double ax = al + (Math.random() * ((ah - al) + 1));
double bx = bl + (Math.random() * ((bh - bl) + 1));
double cx = cl + (Math.random() * ((ch - cl) + 1));
double dx = dl + (Math.random() * ((dh - dl) + 1));
initPop[i][0] = ax;
initPop[i][1] = bx;
initPop[i][2] = cx;
initPop[i][3] = dx;
initPop[i][4] = f(ax, bx, cx, dx);
}
return initPop;
}
static double[][] merge(double[][] a, double[][] b) {
double[][] merge = new double[a.length + b.length][5];
for (int i = 0; i < a.length; i++) {
merge[i] = a[i];
}
for (int i = a.length; i < b.length; i++) {
merge[i] = a[i];
}
return merge;
}
static double[][] newPop(double[][] a) {
double[][] newPop = new double[25][5];
for (int i = 0; i < 25; i++) {
newPop[i] = a[i];
}
return newPop;
}
public static void main(String[] args) {
double[][] initPop = initRandomPop();
double[][] child = new double[40][5];
int cc = 0;
for (int i = 0; i < k; i++) {
for (int j = 0; j < 20; j++) {
int ri1 = 0 + (int) (Math.random() * ((24 - 0) + 1));
int ri2 = 0 + (int) (Math.random() * ((24 - 0) + 1));
double[] ind1 = initPop[ri1];
double[] ind2 = initPop[ri2];
double[] c1 = new double[5];
double[] c2 = new double[5];
c1[0] = ind1[0];
c1[1] = ind1[1];
c1[2] = ind2[2];
c1[3] = ind2[3];
c1[4] = f(ind1[0], ind1[1], ind2[2], ind2[3]);
c2[0] = ind2[0];
c2[1] = ind2[1];
c2[2] = ind1[2];
c2[3] = ind1[3];
c2[4] = f(ind2[0], ind2[1], ind1[2], ind1[3]);
c1 = mutate(c1);
c2 = mutate(c2);
child[cc++] = c1;
child[cc++] = c2;
}
double[][] merge = merge(child, initPop);
Arrays.sort(merge, new Comparator<double[]>() {
@Override
public int compare(double[] o1, double[] o2) {
return Double.compare(o1[4], o2[4]);
}
});
for (int j = 0; j < merge.length; j++) {
System.out.println(merge[j][4]);
}
initPop = newPop(merge);
cc = 0;
}
System.out.println("Fittest Person On Earth " + initPop[0][4]);
}
}
0 is the default value of double. It plays the same role as null does for object references, so most likely some array elements are never assigned a value.
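A quick illustration of that point (hypothetical sizes): every element of a freshly allocated double[][] starts at 0.0, so any cell that is never overwritten shows up as 0 after sorting.
double[][] rows = new double[2][5];
System.out.println(Arrays.deepToString(rows)); // [[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]]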
Sorry for the misleading question! The problem was in the merge() method.
Here's the fix:
static double[][] merge(double[][] a, double[][] b) {
double[][] merge = new double[a.length + b.length][5];
for (int i = 0; i < a.length; i++) {
merge[i] = a[i];
}
int p = 0;
for (int i = a.length; i < merge.length; i++) {
merge[i] = b[p++];
}
return merge;
}
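As a side note, the same merge can also be written with System.arraycopy; this is just an equivalent sketch of the fixed loop above:
static double[][] merge(double[][] a, double[][] b) {
    double[][] merged = new double[a.length + b.length][];
    System.arraycopy(a, 0, merged, 0, a.length);        // copy the row references of a
    System.arraycopy(b, 0, merged, a.length, b.length); // then the row references of b
    return merged;
}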
I want to apply the DCT to an image, but before doing that on such a big matrix, I wanted to try the DCT and IDCT on a 2x2 matrix.
Following is the code I have written to perform the DCT and IDCT on a 2x2 matrix, but I am not getting back the original matrix after the IDCT.
Where have I gone wrong?
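For reference, the transform the code below aims to implement is the standard 2-D DCT-II pair (textbook definitions, with N = 2 here):

F(u,v) = \frac{2}{N}\, C(u)\, C(v) \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} f(i,j)\, \cos\frac{(2i+1)u\pi}{2N}\, \cos\frac{(2j+1)v\pi}{2N}

f(i,j) = \frac{2}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} C(u)\, C(v)\, F(u,v)\, \cos\frac{(2i+1)u\pi}{2N}\, \cos\frac{(2j+1)v\pi}{2N}

with C(0) = 1/\sqrt{2} and C(k) = 1 for k > 0; in the inverse transform the sums run over the frequency indices u and v, while i and j index the reconstructed samples.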
package dct;
/**
*
* @author jain
*/
public class try4 {
public static void main(String[] args)
{
int n = 2;
double[][] ob = new double[n][n];
double[][] dct = new double[n][n];
double[][] rb = new double[n][n];
double[] c = new double[2];
// initialize co-efficients
c[0] = 1/Math.sqrt(2);
c[1] = 1;
ob[0][0] = 54.0;
ob[0][1] = 35.0;
ob[1][0] = 28.0;
ob[1][1] = 45.0;
for(int u = 0; u < 2;u++)
{
for(int v =0; v < 2;v++)
{
double sum = 0;
for(int j = 0;j < 2; j++)
{
for(int i = 0;i < 2;i++)
{
//sum += Math.cos(((2*i+1)/(2.0*n))*u*Math.PI)*Math.cos(((2*j+1)/(2.0*n))*v*Math.PI)*ob[i][j];
sum += Math.cos(( (2*i + 1) * (u*Math.PI) ) / (2*n) ) * Math.cos(( (2*j + 1) * (v*Math.PI) ) / (2*n) ) * ob[i][j];
}
}
sum = sum * (2/n) * c[u]*c[v];
dct[u][v] = sum;
}
}
System.out.println("The DCT matrix is ");
for(int i= 0; i < 2;i++)
{
for(int j = 0;j < 2; j++)
{
System.out.print(dct[i][j] + "\t");
}
System.out.println();
}
for(int u = 0; u < 2;u++)
{
for(int v =0; v < 2;v++)
{
double sum = 0;
for(int j = 0;j < 2; j++)
{
for(int i = 0;i < 2;i++)
{
//sum +=c[u]*c[v]*dct[u][v] * Math.cos( ((2*i+1)/(2.0*n))) *Math.cos(((2*j+1)/(2.0*n))*v*Math.PI);
sum += c[u]*c[v]*dct[u][v] * Math.cos( ( (2*i + 1) * (u*Math.PI) ) / (2*n) ) * Math.cos(( (2*j + 1) * (v*Math.PI) ) / (2*n) );
}
}
sum = sum * (2/n);
rb[u][v] = sum;
}
}
System.out.println("The retrieved matrix is ");
for(int i= 0; i < 2;i++)
{
for(int j = 0;j < 2; j++)
{
System.out.print(rb[i][j] + "\t");
}
System.out.println();
}
}// main ends
}// class ends
I have written code using Lucene which first indexes XML documents and then finds the number of unique terms in the index.
Say there are n unique terms.
I want to generate an n x n matrix, where
m[i][j] = (co-occurrence value of terms (i, j)) / (occurrence value of term i)
co-occurrence of terms (i, j) = the number of documents in which both the i-th and the j-th term occur
occurrence of a term = the number of documents in which that term occurs.
My code works fine, but it is not efficient: for a large number of files, where there are more than 2000 terms, it takes more than 10 minutes.
Here is my code for finding the co-occurrence:
int cooccurrence(IndexReader reader, String term_one, String term_two) throws IOException {
int common_doc_no = 0, finaldocno_one = 0, finaldocno_two = 0;
int termdocid_one[] = new int[6000];
int termdocid_two[] = new int[6000];
int first_docids[] = new int[6000];
int second_docids[] = new int[6000];
int k = 0;
for (java.util.Iterator<String> it = reader.getFieldNames(
FieldOption.ALL).iterator(); it.hasNext();) {
String fieldname = (String) it.next();
TermDocs t = reader.termDocs(new Term(fieldname, term_one));
while (t.next()) {
int x = t.doc();
if (termdocid_one[x] != 1) {
finaldocno_one++;
first_docids[k] = x;
k++;
}
termdocid_one[x] = 1;
}
}
/*
* System.out.println("value of finaldoc_one - " + finaldocno_one); for
* (int i = 0; i < finaldocno_one; i++) { System.out.println("" +
* first_docids[i]); }
*/
k = 0;
for (java.util.Iterator<String> it = reader.getFieldNames(
FieldOption.ALL).iterator(); it.hasNext();) {
String fieldname = (String) it.next();
TermDocs t = reader.termDocs(new Term(fieldname, term_two));
while (t.next()) {
int x = t.doc();
if (termdocid_two[x] != 1) {
finaldocno_two++;
second_docids[k] = x;
k++;
}
termdocid_two[x] = 1;
}
}
/*
* System.out.println("value of finaldoc_two - " + finaldocno_two);
*
* for (int i = 0; i < finaldocno_two; i++) { System.out.println("" +
* second_docids[i]); }
*/
int max;
int search = 0;
if (finaldocno_one > finaldocno_two) {
max = finaldocno_one;
search = 1;
} else {
max = finaldocno_two;
search = 2;
}
if (search == 1) {
for (int i = 0; i < max; i++) {
if (termdocid_two[first_docids[i]] == 1)
common_doc_no++;
}
} else if (search == 2) {
for (int i = 0; i < max; i++) {
if (termdocid_one[second_docids[i]] == 1)
common_doc_no++;
}
}
return common_doc_no;
}
Code for calculating the knowledge matrix:
void knowledge_matrix(double matrix[][], IndexReader reader, double avg_matrix[][]) throws IOException {
ArrayList<String> unique_terms_array = new ArrayList<>();
int totallength = unique_term_count(reader, unique_terms_array);
int co_occur_matrix[][] = new int[totallength + 3][totallength + 3];
double rowsum = 0;
for (int i = 1; i <= totallength; i++) {
rowsum = 0;
for (int j = 1; j <= totallength; j++) {
int co_occurence;
int occurence = docno_single_term(reader,
unique_terms_array.get(j - 1));
if (i > j) {
co_occurence = co_occur_matrix[i][j];
} else {
co_occurence = cooccurrence(reader,
unique_terms_array.get(i - 1),
unique_terms_array.get(j - 1));
co_occur_matrix[i][j] = co_occurence;
co_occur_matrix[j][i] = co_occurence;
}
matrix[i][j] = (float) co_occurence / (float) occurence;
rowsum += matrix[i][j];
if (i > 1)
{
avg_matrix[i - 1][j] = matrix[i - 1][j] - matrix[i - 1][0];
}
}
matrix[i][0] = rowsum / totallength;
}
for (int j = 1; j <= totallength; j++) {
avg_matrix[totallength][j] = matrix[totallength][j]
- matrix[totallength][0];
}
}
Could anyone please suggest a more efficient way to implement this?
I think you can put the lookups of term_one and term_two into a single loop, use two HashSets to save the doc IDs you have found, and then use termOneSet.retainAll(termTwoSet) to get the documents that contain both term_one and term_two.
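A minimal sketch of that idea, reusing the same (old) Lucene API calls already used in the question (getFieldNames, termDocs, Term); the method body below is illustrative only:
int cooccurrence(IndexReader reader, String termOne, String termTwo) throws IOException {
    java.util.Set<Integer> docsOne = new java.util.HashSet<Integer>();
    java.util.Set<Integer> docsTwo = new java.util.HashSet<Integer>();
    // single pass over the fields, collecting doc ids for both terms
    for (String fieldname : reader.getFieldNames(FieldOption.ALL)) {
        TermDocs one = reader.termDocs(new Term(fieldname, termOne));
        while (one.next()) {
            docsOne.add(one.doc());
        }
        TermDocs two = reader.termDocs(new Term(fieldname, termTwo));
        while (two.next()) {
            docsTwo.add(two.doc());
        }
    }
    docsOne.retainAll(docsTwo); // intersection: docs containing both terms
    return docsOne.size();
}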