Digital image processing in Java: histogram not working
I am trying to display a few different histograms so I can compare the original image with the output image after convolving it. The image and the source histogram show up fine, but when I build the histogram for the destination image it throws an error and nothing is displayed. If anyone could help, it would be much appreciated!
Stack trace -
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 853
at iptoolkit.Histogram.<init>(Histogram.java:15)
at assignment2.main(assignment2.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Process finished with exit code 0
Code -
public static void main(String[] args) throws Exception
{
MainWindow mw = new MainWindow();
mw.println("Testing...");
IntImage src = new IntImage("C:\\Users\\scott_000\\Documents\\Digital Imaging\\Digital Imaging\\Images\\Baboon.bmp"); //load the source (input) image
int nRows = src.getRows(); //number of rows and columns in the source image
int nCols = src.getCols();
IntImage dst = new IntImage(nRows, nCols); //destination (output) images, same size as the source
IntImage dst1 = new IntImage(nRows, nCols);
src.displayImage(400, 300); //display the source image (input)
int [][] mask = new int[][] { {-1, -2, -1}, {0, 0, 0}, {1, 2, 1} }; //Sobel mask (currently unused)
int [][] meanMask = new int[][] { {1, 1, 1}, {1, 1, 1}, {1, 1, 1} }; //3x3 mean mask
int [][] mask5x5 = new int[][] { //5x5 mean mask
{1, 1, 1, 1, 1},
{1, 1, 1, 1, 1},
{1, 1, 1, 1, 1},
{1, 1, 1, 1, 1},
{1, 1, 1, 1, 1}
};
convolve(src, mask5x5, dst); //convolve the input image (src) with the 5x5 mean mask, writing into the output image (dst)
dst.setScaling(true); //scale the image values down into the 0-255 range
dst.displayImage(); //display the output image
convolve(src, meanMask, dst1); //convolve the input image (src) with the 3x3 mean mask, writing into the output image (dst1)
dst1.setScaling(true); //scale the image values down into the 0-255 range
dst1.displayImage(); //display the output image
Histogram h = new Histogram(src); //histogram of the source image
IntImage histImage = h.makeImage(); //build and display the histogram image
histImage.displayImage();
Histogram y = new Histogram(dst1); //histogram of the convolved image - this is where the exception is thrown
IntImage histImage1 = y.makeImage();
histImage1.displayImage();
}
static IntImage convolve(IntImage in, int[][] template, IntImage out) //convolve the input image with the given mask, writing the result into out
{
int nRows = in.getRows(); //dimensions of the input image
int nCols = in.getCols();
int nMaskRows = template.length; //dimensions of the mask
int nMaskCols = template[0].length;
int rBorder = nMaskRows / 2; //border of the output image left untouched by the mask
int cBorder = nMaskCols / 2;
int sum; //accumulator for each output pixel
for (int r = 0; r < (nRows - nMaskRows + 1); r++) //slide the mask over every position where it fits entirely inside the image, top-left to bottom-right
{
for (int c = 0; c < (nCols - nMaskCols + 1); c++)
{
sum = 0; //reset the sum for this output pixel
for (int mr = 0; mr < nMaskRows; mr++)
{
for (int mc = 0; mc < nMaskCols; mc++)
{
sum += in.pixels[r + mr][c + mc] * template[mr][mc]; //multiply-and-accumulate; change this for edge-preserving smoothing (mean, median etc.)
}
}
out.pixels[r + rBorder][c + cBorder] = sum; //note: the sum is not normalised, so values can fall outside the 0-255 range
}
}
return out;
}
}
Your ArrayIndexOutOfBoundsException is probably being caused by the fact that your convolution produces pixel values outside the range your Histogram expects. The Sobel masks can output negative values, and the un-normalised mean masks can produce sums well above 255 (a 3x3 all-ones mask over bright pixels can reach 9 * 255 = 2295, which would explain an out-of-range index like 853). Most likely your Histogram is not expecting that.
I can't find any information about the library you're using (iptoolkit?), but I'd read the Histogram documentation to see if it gives any clues about how values outside 0-255 are handled.
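To confirm where the out-of-range values come from, it can help to print the value range of the convolved image before building its histogram. A minimal diagnostic sketch, assuming only the public pixels array and the getRows()/getCols() accessors already used in the question's code (the helper name is just illustrative):
//Hypothetical helper: print the minimum and maximum pixel value of an image,
//so you can see whether anything falls outside the 0-255 range a histogram usually expects.
static void printRange(IntImage img)
{
    int min = img.pixels[0][0];
    int max = img.pixels[0][0];
    for (int r = 0; r < img.getRows(); r++)
    {
        for (int c = 0; c < img.getCols(); c++)
        {
            min = Math.min(min, img.pixels[r][c]);
            max = Math.max(max, img.pixels[r][c]);
        }
    }
    System.out.println("pixel range: " + min + " .. " + max);
}
Calling printRange(dst1) just before new Histogram(dst1) would be expected to show values outside 0-255 if that is indeed the cause, matching the index in the stack trace.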
Had to scale the image down, and used this method to do so. Thanks for the help, whiskeyspider!
static IntImage scale(IntImage in, int newMin, int newMax) {
int oldMin, oldMax;
double scaleFactor;
int nRows = in.getRows();
int nCols = in.getCols();
IntImage out = new IntImage(nRows, nCols);
oldMin = oldMax = in.pixels[0][0];
//find the minimum and maximum pixel values in the input image
for (int r = 0; r < nRows; r++)
{
for (int c = 0; c < nCols; c++)
{
if (in.pixels[r][c] < oldMin)
{
oldMin = in.pixels[r][c];
} else if (in.pixels[r][c] > oldMax)
{
oldMax = in.pixels[r][c];
}
}
}
//Calculate scaling factor
scaleFactor = (double) (newMax - newMin) / (double) (oldMax - oldMin);
for (int r = 0; r < nRows; r++)
{
for (int c = 0; c < nCols; c++)
{
out.pixels[r][c] = (int) Math.round(newMin +
(in.pixels[r][c] - oldMin) * scaleFactor);
}
} //scale
return out;
}
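For reference, a minimal sketch of how the scale method above can be wired into the original main method, replacing the histogram code for dst1, so the Histogram constructor only ever sees values in the 0-255 range (assuming the IntImage and Histogram classes behave as shown in the question):
//Rescale the convolved image into 0-255 before building its histogram.
IntImage dst1Scaled = scale(dst1, 0, 255);
Histogram y = new Histogram(dst1Scaled); //no longer indexes outside the histogram array
IntImage histImage1 = y.makeImage();
histImage1.displayImage();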
Related
OpenCL compression of binary data
I'd like to ask for your help if you're experienced in OpenCL. The task is so trivial that it's a shame I can't see what's wrong, but I haven't been able to solve this for two days now. We have binary 3D volume data stored as 2D slices. On the CPU side in Java, each slice is compressed into a bit array, so each slice's size is calculated as:
sliceSize = (width*height+31)/32;
The Java code for compressing slices from 1 byte/voxel down to 1 int per 32 voxels is:
hostUncompressed = new byte[depth * height * width];
hostCompressed = new int[depth * sliceSize];
deviceUncompressed = new byte[depth * height * width];
deviceCompressed = new int[depth * sliceSize];
int numOnes = 0;
int k = 0;
for (int i = 0; i < depth; ++i) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            hostUncompressed[k++] = (byte) (((int) (Math.random() * 1000)) % 2);
            numOnes += (hostUncompressed[k - 1] == 1) ? 1 : 0;
        }
    }
}
for (int i = 0; i < depth; ++i) {
    int start = i * sliceSize;
    int index = start;
    int targetIndex = 0;
    int mask = 1;
    int buffer = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (hostUncompressed[index] > 0) {
                buffer |= mask;
            }
            ++index;
            if ((index & 31) == 0) {
                hostCompressed[start + targetIndex++] = buffer;
                buffer = 0;
                mask = 1;
            } else {
                mask <<= 1;
            }
        }
    }
}
My OpenCL port of it looks like this:
public void compress(cl_mem vol, int[] size3, int[] voxels) {
    int totalCompressedSize = voxels.length;
    cl_mem devCompressed = CL.clCreateBuffer(cl.getContext(), CL.CL_MEM_WRITE_ONLY, Sizeof.cl_int * totalCompressedSize, null, null);
    int[] sliceSizeInts = new int[]{(size3[0] * size3[1] + 31) / 32};
    int[] dimensions = new int[]{size3[0], size3[1], size3[2], 0};
    long[] localWorkSize = new long[]{1, 1, 1};
    long[] globalWorkSize = new long[]{sliceSizeInts[0], size3[2], 1};
    cl.calcLocalWorkSize(globalWorkSize, localWorkSize);
    CLUtils.round_size(localWorkSize, globalWorkSize);
    int k = 0;
    CL.clSetKernelArg(kernels[1], k++, Sizeof.cl_mem, Pointer.to(devCompressed));
    CL.clSetKernelArg(kernels[1], k++, Sizeof.cl_mem, Pointer.to(vol));
    CL.clSetKernelArg(kernels[1], k++, Sizeof.cl_int4, Pointer.to(dimensions));
    CL.clEnqueueNDRangeKernel(cl.getCommandQueue(), kernels[1], 2, null, globalWorkSize, localWorkSize, 0, null, null);
    CL.clEnqueueReadBuffer(cl.getCommandQueue(), devCompressed, CL.CL_TRUE, 0, Sizeof.cl_int * totalCompressedSize, Pointer.to(voxels), 0, null, null);
    CL.clReleaseMemObject(devCompressed);
    CL.clFinish(cl.getCommandQueue());
}
kernel void roiVolume_dataCompress( global int* compressed, global char* raw, int4 dimensions)
{
    int comprSubId = get_global_id(0);
    int sliceIndex = get_global_id(1);
    int rawSliceSize = dimensions.y * dimensions.x;
    int comprSliceSize = (rawSliceSize + 31)/32;
    if ( sliceIndex < 0 || sliceIndex >= dimensions.z || comprSubId < 0 || comprSubId >= comprSliceSize )
        return;
    int rawIndex;
    int rawSubIndex;
    int value = 0;
    for (int i = 0; i < 32; ++i) {
        rawSubIndex = comprSubId*32+i;
        if ( rawSubIndex < rawSliceSize) {
            rawIndex = sliceIndex * rawSliceSize + rawSubIndex;
            if (raw[rawIndex] != 0)
                value |= (1 << i);
        }
    }
    int comprIndex = sliceIndex * comprSliceSize + comprSubId;
    compressed[comprIndex] = value;
}
It works if depth = 1, i.e. when executed on a single slice, but from the second slice onward the output is wrong and I can't see any pattern in the array that could help. Any help would really be appreciated. Thank you.
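As a point of reference for the packing scheme described above (one bit per voxel, 32 voxels per int, bit i of each int holding voxel 32*k + i of the slice), here is a small self-contained sketch of that packing in plain Java; the method and array names are illustrative and not part of the question's code:
//Illustrative only: pack a byte-per-voxel slice (values 0 or 1) into one int per 32 voxels.
static int[] packSlice(byte[] slice) {
    int[] packed = new int[(slice.length + 31) / 32];
    for (int v = 0; v < slice.length; v++) {
        if (slice[v] != 0) {
            packed[v / 32] |= 1 << (v % 32); //bit (v % 32) of int (v / 32) represents voxel v
        }
    }
    return packed;
}
Comparing the host-compressed and device-compressed buffers slice by slice against a simple reference like this usually makes it quicker to see whether the indexing problem is on the CPU or the GPU side.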
Code does not work with arrays (multiple arrays in an ArrayList)
Hi, I'm having a little problem with arrays. Here's the code:
int frame_size = 410;
int frame_shift = 320;
ArrayList<double[]> frames = new ArrayList<double[]>();
for (int i = 0; i + frame_size < inbuf.length; i = i + frame_shift) {
    double[] frame = new double[frame_size];
    System.arraycopy(inbuf, i, frame, 0, frame_size);
    frames.add(frame);
}
Here I split a large array into several smaller ones and add them to an ArrayList. I then need to take each array out of the ArrayList, pass it to a function, take the result back, and assemble the processed arrays into a single one:
int[] Cover = new int[frames.size() * nParam];
for (int i = 0; i < frames.size(); i++) {
    double[] finMc = Gos.getVek(frames.get(i));
    for (int c = 0; c < finMc.length; c++) {
        int mc = (int) finMc[c];
        for (int m = 0; m < Cover.length; m++) {
            Cover[m] = mc;
        }
    }
}
None of this works - all elements of the Cover array are zero:
Cover[0] = 0
Cover[1] = 0
Cover[2] = 0
...
Help me solve the problem, please! Thank you in advance.
Update
int frame_size = 410;
int frame_shift = 320;
ArrayList<double[]> frames = new ArrayList<double[]>();
for (int i = 0; i + frame_size < inbuf.length; i = i + frame_shift) {
    double[] frame = new double[frame_size];
    System.arraycopy(inbuf, i, frame, 0, frame_size);
    frames.add(frame);
}
int[] Cover = new int[frames.size() * nParam];
for (int i = 0; i < frames.size(); i++) {
    double[] finMc = Gos.getVek(frames.get(i));
    for (int c = 0; c < finMc.length; c++) {
        int mc = (int) finMc[c];
        Cover[i * frames.size() + c] = (int) finMc[c];
    }
}
This code still does not work.
UPDATE 2
double[] inbuf = new double[Size];
inbuf = toDoubleArray(Gos.data);
inbuf[2] = 10;
inbuf[4] = 14;
toDoubleArray:
public static double[] toDoubleArray(byte[] byteArray) {
    int times = Double.SIZE / Byte.SIZE;
    double[] doubles = new double[byteArray.length / times];
    for (int i = 0; i < doubles.length; i++) {
        doubles[i] = ByteBuffer.wrap(byteArray, i * times, times).getDouble();
    }
    return doubles;
}
This code does not work either:
int frame_size = 410;
int frame_shift = 320;
ArrayList<double[]> frames = new ArrayList<double[]>();
for (int i = 0; i + frame_size < inbuf.length; i = i + frame_shift) {
    double[] frame = new double[frame_size];
    System.arraycopy(inbuf, i, frame, 0, frame_size);
    frames.add(frame);
}
double[] Cover = new double[frames.size() * nParam];
for (int i = 0; i < frames.size(); i++) {
    double[] finMc = Gos.getVek(frames.get(i));
    for (int c = 0; c < finMc.length; c++) {
        Cover[i * frames.size() + c] = finMc[c];
    }
}
A couple of thoughts spring to mind immediately:
1)
for (int m = 0; m < Cover.length; m++) {
    Cover[m] = mc;
}
This block starts m over at 0 every time through the loop, which means you're always writing over the same portion of the Cover array, so effectively only the last frame's data gets stored. You probably meant:
for (int m = i * frames.size(); m < (i + 1) * frames.size(); m++) {
    Cover[m] = mc;
}
But this raises a further issue -- you're writing the same value (mc) into the entire area allocated for a whole frame of data. You probably want to merge this loop with the previous one so that this doesn't happen:
for (int c = 0; c < finMc.length; c++) {
    Cover[i * frames.size() + c] = (int) finMc[c];
}
2)
int mc = (int) finMc[c];
This line casts the value to an int, which truncates the value stored at finMc[c]. If finMc[c] is between 0 and 1, this yields 0 when the data is copied and cast. It is compounded by the previous issue, which ensures that only the last frame's data ever gets copied. This is simply solved by removing the cast and declaring Cover as an array of doubles instead of ints.
So, in sum, the code might work a bit better written this way:
double[] Cover = new double[frames.size() * nParam];
for (int i = 0; i < frames.size(); i++) {
    double[] finMc = Gos.getVek(frames.get(i));
    for (int c = 0; c < finMc.length; c++) {
        Cover[i * frames.size() + c] = finMc[c];
    }
}
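For what it's worth, the same flattening pattern can be sketched in isolation. This is only an illustration with made-up sizes; note that the stride used to place each frame's block is the length of one processed frame (nParam in the question's terms), which every per-frame result is assumed to match:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FlattenDemo {
    public static void main(String[] args) {
        int frameLength = 4; //length of each processed frame (made-up for the example)
        List<double[]> results = new ArrayList<double[]>();
        results.add(new double[]{0.1, 0.2, 0.3, 0.4});
        results.add(new double[]{1.1, 1.2, 1.3, 1.4});
        //the flat array holds one contiguous block per frame
        double[] flat = new double[results.size() * frameLength];
        for (int i = 0; i < results.size(); i++) {
            //block i starts at i * frameLength, so frames never overwrite each other
            System.arraycopy(results.get(i), 0, flat, i * frameLength, frameLength);
        }
        System.out.println(Arrays.toString(flat));
    }
}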
2D Array to Rectangles
Is there a way to parse a 2-dimensional array like the one below into rectangle objects (x, y, width, height)? I need the array of all possible rectangles...
{0,0,0,0,0}
{0,0,0,0,0}
{0,1,1,0,0}
{0,1,1,0,0}
{0,0,0,0,0}
This would give 4 rectangles (we are looking at the 0s):
0,0,5,2
0,0,1,5
3,0,2,5
0,5,5,1
I have tried something like the following, but it only gives the area of the biggest rectangle...
public static int findMaxRectangleArea(int[][] A, int m, int n) { // m = rows & n = cols according to the question
    int corX = 0, corY = 0;
    int[] single = new int[n];
    int largeX = 0, largest = 0;
    for (int i = 0; i < m; i++) {
        single = new int[n]; // one-dimensional array of size n, used to check line by line
        for (int k = i; k < m; k++) { // run until i contains the element
            int a = 0;
            int y = k - i + 1; // used for the row and col of the coming array
            int shrt = 0, ii = 0, small = 0;
            int mix = 0;
            int findX = 0;
            for (int j = 0; j < n; j++) {
                single[j] = single[j] + A[k][j]; // position elements are added
                if (single[j] == y) { // element position equals
                    shrt = (a == 0) ? j : shrt; // shortcut
                    a = a + 1;
                    if (a > findX) {
                        findX = a;
                        mix = shrt;
                    }
                } else {
                    a = 0;
                }
            }
            a = findX;
            a = (a == y) ? a - 1 : a;
            if (a * y > largeX * largest) { // here I am checking the values with xy
                largeX = a;
                largest = y;
                ii = i;
                small = mix;
            }
        }
    } // end of loop
    return largeX * largest;
}
This code works with 1s, but that is not the point right now.
Is there some empirical mode decomposition library in Java? [closed]
I would like to ask about any empirical mode decomposition library written in Java. I cannot find any. Best if it is open source. Thank you.
I have just found a C implementation ( https://code.google.com/p/realtime-emd/ ) and translated it to Java. So please note that this code snippet is not Java-styled code, it is just Java code that compiles and runs.
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package tryout.emd;

/**
 *
 * @author Krusty
 */
public class Emd {

    private void emdSetup(EmdData emd, int order, int iterations, int locality) {
        emd.iterations = iterations;
        emd.order = order;
        emd.locality = locality;
        emd.size = 0;
        emd.imfs = null;
        emd.residue = null;
        emd.minPoints = null;
        emd.maxPoints = null;
        emd.min = null;
        emd.max = null;
    }

    private void emdResize(EmdData emd, int size) {
        int i;
        // emdClear(emd);
        emd.size = size;
        emd.imfs = new double[emd.order][]; // cnew(double*, emd->order);
        for(i = 0; i < emd.order; i++)
            emd.imfs[i] = new double[size]; // cnew(double, size);
        emd.residue = new double[size]; // cnew(double, size);
        emd.minPoints = new int[size / 2]; // cnew(int, size / 2);
        emd.maxPoints = new int[size / 2]; // cnew(int, size / 2);
        emd.min = new double[size]; // cnew(double, size);
        emd.max = new double[size]; // cnew(double, size);
    }

    private void emdCreate(EmdData emd, int size, int order, int iterations, int locality) {
        emdSetup(emd, order, iterations, locality);
        emdResize(emd, size);
    }

    private void emdDecompose(EmdData emd, double[] signal) {
        int i, j;
        System.arraycopy(signal, 0, emd.imfs[0], 0, emd.size); // memcpy(emd->imfs[0], signal, emd->size * sizeof(double));
        System.arraycopy(signal, 0, emd.residue, 0, emd.size); // memcpy(emd->residue, signal, emd->size * sizeof(double));
        for(i = 0; i < emd.order - 1; i++) {
            double[] curImf = emd.imfs[i]; // double* curImf = emd->imfs[i];
            for(j = 0; j < emd.iterations; j++) {
                emdMakeExtrema(emd, curImf);
                if(emd.minSize < 4 || emd.maxSize < 4)
                    break; // can't fit splines
                emdInterpolate(emd, curImf, emd.min, emd.minPoints, emd.minSize);
                emdInterpolate(emd, curImf, emd.max, emd.maxPoints, emd.maxSize);
                emdUpdateImf(emd, curImf);
            }
            emdMakeResidue(emd, curImf);
            System.arraycopy(emd.residue, 0, emd.imfs[i+1], 0, emd.size); // memcpy(emd->imfs[i + 1], emd->residue, emd->size * sizeof(double));
        }
    }

    // Currently, extrema within (locality) of the boundaries are not allowed.
    // A better algorithm might be to collect all the extrema, and then assume
    // that extrema near the boundaries are valid, working toward the center.
    private void emdMakeExtrema(EmdData emd, double[] curImf) {
        int i, lastMin = 0, lastMax = 0;
        emd.minSize = 0;
        emd.maxSize = 0;
        for(i = 1; i < emd.size - 1; i++) {
            if(curImf[i - 1] < curImf[i]) {
                if(curImf[i] > curImf[i + 1] && (i - lastMax) > emd.locality) {
                    emd.maxPoints[emd.maxSize++] = i;
                    lastMax = i;
                }
            } else {
                if(curImf[i] < curImf[i + 1] && (i - lastMin) > emd.locality) {
                    emd.minPoints[emd.minSize++] = i;
                    lastMin = i;
                }
            }
        }
    }

    private void emdInterpolate(EmdData emd, double[] in, double[] out, int[] points, int pointsSize) {
        int size = emd.size;
        int i, j, i0, i1, i2, i3, start, end;
        double a0, a1, a2, a3;
        double y0, y1, y2, y3, muScale, mu;
        for(i = -1; i < pointsSize; i++) {
            i0 = points[mirrorIndex(i - 1, pointsSize)];
            i1 = points[mirrorIndex(i, pointsSize)];
            i2 = points[mirrorIndex(i + 1, pointsSize)];
            i3 = points[mirrorIndex(i + 2, pointsSize)];
            y0 = in[i0];
            y1 = in[i1];
            y2 = in[i2];
            y3 = in[i3];
            a0 = y3 - y2 - y0 + y1;
            a1 = y0 - y1 - a0;
            a2 = y2 - y0;
            a3 = y1;
            // left boundary
            if(i == -1) {
                start = 0;
                i1 = -i1;
            } else
                start = i1;
            // right boundary
            if(i == pointsSize - 1) {
                end = size;
                i2 = size + size - i2;
            } else
                end = i2;
            muScale = 1.f / (i2 - i1);
            for(j = start; j < end; j++) {
                mu = (j - i1) * muScale;
                out[j] = ((a0 * mu + a1) * mu + a2) * mu + a3;
            }
        }
    }

    private void emdUpdateImf(EmdData emd, double[] imf) {
        int i;
        for(i = 0; i < emd.size; i++)
            imf[i] -= (emd.min[i] + emd.max[i]) * .5f;
    }

    private void emdMakeResidue(EmdData emd, double[] cur) {
        int i;
        for(i = 0; i < emd.size; i++)
            emd.residue[i] -= cur[i];
    }

    private int mirrorIndex(int i, int size) {
        if(i < size) {
            if(i < 0)
                return -i - 1;
            return i;
        }
        return (size - 1) + (size - i);
    }

    public static void main(String[] args) {
        /*
        This code implements empirical mode decomposition in C.
        Required parameters include:
        - order: the number of IMFs to return
        - iterations: the number of iterations per IMF
        - locality: in samples, how near two extrema may be.
          If it is not specified, there is no limit (locality = 0).
        Typical use consists of calling emdCreate(), followed by emdDecompose(),
        and then using the struct's "imfs" field to retrieve the data.
        Call emdClear() to deallocate memory inside the struct.
*/ double[] data = new double[]{229.49,231.94,232.97,234,233.36,235.15,235.64,235.78,238.95,242.09,240.61,240.29,237.88,252.11,259.16,263.4,262.1,254.85,254.42,261.27,253.92,259.04,251.58,248.96,239.49,229.39,247.02,249.48,254.9,251.27,246.85,245.43,241.52,231.23,235.67,239.99,238.49,237.41,246.4,249.83,253.67,256.71,255.9,248.93,244.05,242.49,236.52,243.63,246.55,247.3,252.56,259.91,264.41,266.55,262.75,266.33,263.53,261.62,259.38,260.94,249.14,244.63,241.66,240.16,241.81,251.57,251.01,252.49,250.23,244.89,245.79,244.55,243.04,238.84,244.98,247.26,251.91,252.81,252.16,256.83,253.8,251.03,250.19,254.66,254.74,255.76,254.52,252.95,254.57,252.29,243.32,244.88,242.26,240.84,245.05,246.12,243.02,242.79,239.05,233.34,236.22,233.69,234.99,235.84,236.43,243.46,245.25,251.67,250.73,255.7,255.85,256.18,259.71,260.7,262.8,268.98,267.81,275.46,275.98,279.85,280.99,284.3,283.17,278.99,279.48,275.96,274.77,270.99,281.01,281.25,281.28,286,287.25,290.35,291.9,294.01,306.1,309.27,301,302.01,301.02,299.03,300.36,299.59,299.38,296.86,292.72,295.83,300.87,304.21,309.53,308.43,309.87,307.4,309.3,307.96,299.58,298.61,293.31,292.25,299.96,298.31,304.76,300.26,306.16,306.35,308.17,302.61,307.72,309.42,308.73,311.36,309.48,312.2,310.98,311.76,312.84,311.5,311.57,312.43,311.81,313.37,315.3,316.24,314.72,315.77,316.54,316.36,314.78,313.71,320.52,322.2,324.83,324.57,326.89,333.05,332.26,334.97,336.19,338.92,331.3,329.54,323.55,317.75,328.19,332.03,334.41,333.79,326.88,330.01,335.56,334.87,334.01,336.99,342.22,345.45,348.33,344.81,347.06,349.32,350.02,353.16,348.47,340.94,329.32,333.22,333.47,338.6,343.52,339.72,342.46,349.69,350.12,345.61,346,342.8,337.15,342.33,343.86,335.95,320.95,325.46,321.59,329.99,331.84,329.88,335.5,341.89,340.82,341.33,339.06,338.94,335.1,331.83,329.59,328.76,328.8,325.86,321.72,323.28,326.9,323.3,318.47,322.74,328.59,333.01,341.07,343.32,340.8,340.54,337.23,340.52,336.78,338.64,339.98,337.23,337.15,338.06,339.86,337.7,337.06,331.15,324.15,326.91,330.54,331.18,326.02,325.22,323.07,327.54,325.81,328.15,338.28,336.03,336.6,334.01,328.76,322.93,323.12,322.39,316.96,317.64,323.32,317.78,316.24,311.47,306.67,316.37,313.76,322.14,317.39,322.93,326.06,324.87,326.46,333.84,339.84,342.11,347.4,349.84,344.28,344.04,348.19,347.95,354.9,363.54,366.51,376.28,376.66,382.51,387.56,392.34,381.81,381.07,379.76,385.86,378.24,381.8,367.01,363.37,343.52,363.74,353.71,363.44,366.64,372.89,370.04,370,356,346.26,346.66,363.35,365.85,363.46,373.05,379.27,379.29,374.27,370.57,363.78,369.32,373.39,373.6,367.12,369.51,374.06,378.61,382.17,389.51,400.33,402.1,400.83,390.79,393.2,392.1,388.3,386.11,379.85,370.85,364.32,362.28,367.87,367.01,359.65,378.14,389.3,391.15,397.22,410.42,408.46,410.65,387.68,384.46,382.09,394.63,386.85,389.6,393.58,393.84,393.67,385.63,386.5,392.01,389.25,388.76,395.08,384.43,374.65,374.06,368.85,378.16,374.21,367.05,364.65,358.88,366.18,356.92,353.59,365.8,362.96,371.71,377.28,379,382.22,380.22,378.41,379.94,382.82,381.09,378.14,369.75,368.54,370.56,371.72,385.08,385.57,387.61,392.26,395.37,391.59,394,393.88,399.94,402.09,406.56,410.81,410.15,411.62,410.95,409.82,408.29,413.04,417.33,416.01,408.76,415.68,408.87,434.4,432.43,435,440.58,443.95,443.67,442.63,447.06,451.24,455.96,463.6,479.63,479.88,488.81,495.48,484.01,488.43,488.34,500.72,498.96,502.22,508.07,511.33,520.71,527.55,529.53,530.22,518.53,515.71,516.12,527.11,530.21,536.85,552.51,573.4,569.49,569.5,584.6,589.33,585.96,582.89,579.69,590.32,597.61,600.67,593.12,583.09,601.65,612.05,607.17,616.29,618.77,611.19,609.01,605.68,588.62,564.21
,592.97,591.64,571.32,557.25,556.01,544.9,593.26,591.02,586.45,567.95,566.15,569.9,565.85,549.74,553.85,552.59,553.56,554.86,551.16,542.9,537.99,531.09,515.57,515.82,545.87,541.68,554.9,549.8,546.86,556.56,563.27,561.87,545.59,548.8,547.38,555.78,556.03,564.39,555.49,560.35,556.46,555.84,558.37,569.7,571.29,569.66,561.81,566.12,555.1,556.33,558.73,553.43,567.97,576.26,582.96,593.2,589.25,597.04,591.52,587.84,582.46,588.37,590.25,590.28,589.62,597.46,587.71,587.26,584.43,559.19,559.1,569.1}; Emd emd = new Emd(); EmdData emdData = new EmdData(); int order = 4; emd.emdCreate(emdData, data.length, order, 20, 0); emd.emdDecompose(emdData, data); for (int i=0;i<data.length;i++) { System.out.print(data[i]+";"); for (int j=0;j<order; j++) System.out.print(emdData.imfs[j][i] + ";"); System.out.println(); } } private static class EmdData { protected int iterations, order, locality; protected int[] minPoints, maxPoints; protected double[] min, max, residue; protected double[][] imfs; protected int size, minSize, maxSize; } }
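One property of this decomposition that is handy for testing: the IMFs stored in emdData.imfs (with the last slot holding the final residue) add back up to the input signal, so a quick sanity check could be appended to the end of the main() method above. Sketch only, reusing the data, order and emdData variables already defined there:
//Sanity check: the IMFs plus the final residue should reconstruct the original
//signal up to floating-point rounding error.
double maxError = 0.0;
for (int i = 0; i < data.length; i++) {
    double sum = 0.0;
    for (int j = 0; j < order; j++) {
        sum += emdData.imfs[j][i];
    }
    maxError = Math.max(maxError, Math.abs(sum - data[i]));
}
System.out.println("max reconstruction error: " + maxError);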
Port Matlab's FFT to native Java
I want to port Matlab's Fast Fourier transform function fft() to native Java code. As a starting point I am using the code of JMathLib, where the FFT is implemented as follows:
// given double[] x as the input signal
n = x.length; // assume n is a power of 2
nu = (int)(Math.log(n)/Math.log(2));
int n2 = n/2;
int nu1 = nu - 1;
double[] xre = new double[n];
double[] xim = new double[n];
double[] mag = new double[n2];
double tr, ti, p, arg, c, s;
for (int i = 0; i < n; i++) {
    xre[i] = x[i];
    xim[i] = 0.0;
}
int k = 0;
for (int l = 1; l <= nu; l++) {
    while (k < n) {
        for (int i = 1; i <= n2; i++) {
            p = bitrev (k >> nu1);
            arg = 2 * (double) Math.PI * p / n;
            c = (double) Math.cos (arg);
            s = (double) Math.sin (arg);
            tr = xre[k+n2]*c + xim[k+n2]*s;
            ti = xim[k+n2]*c - xre[k+n2]*s;
            xre[k+n2] = xre[k] - tr;
            xim[k+n2] = xim[k] - ti;
            xre[k] += tr;
            xim[k] += ti;
            k++;
        }
        k += n2;
    }
    k = 0;
    nu1--;
    n2 = n2/2;
}
k = 0;
int r;
while (k < n) {
    r = bitrev (k);
    if (r > k) {
        tr = xre[k];
        ti = xim[k];
        xre[k] = xre[r];
        xim[k] = xim[r];
        xre[r] = tr;
        xim[r] = ti;
    }
    k++;
}
// The result
// -> real part stored in xre
// -> imaginary part stored in xim
Unfortunately it doesn't give me the right results when I unit test it, for example with the array
double[] x = { 1.0d, 5.0d, 9.0d, 13.0d };
The result in Matlab:
28.0
-8.0 - 8.0i
-8.0
-8.0 + 8.0i
The result in my implementation:
28.0
-8.0 + 8.0i
-8.0
-8.0 - 8.0i
Note how the signs are wrong in the complex part. When I use longer, more complex signals, the differences between the implementations affect the magnitudes as well, so the discrepancy is not just a sign error.
My question: how can I adapt my implementation to match the Matlab one? Or: is there already a library that does exactly this?
In order to use JTransforms for an FFT on a matrix, you need to run the FFT column by column and then join the columns back into a matrix. Here is my code, which I compared with Matlab's fft:
double[][] newRes = new double[samplesPerWindow*2][Matrixres.numberOfSegments];
double[] colForFFT = new double[samplesPerWindow*2];
DoubleFFT_1D fft = new DoubleFFT_1D(samplesPerWindow);
for (int y = 0; y < Matrixres.numberOfSegments; y++)
{
    //copy the original col into a col and a col of zeros before the FFT
    for (int x = 0; x < samplesPerWindow; x++)
    {
        colForFFT[x] = Matrixres.res[x][y];
    }
    //fft on each col of the matrix
    fft.realForwardFull(colForFFT); //Y=fft(y,nfft);
    //copy the output of col*2 size into a new matrix
    for (int x = 0; x < samplesPerWindow*2; x++)
    {
        newRes[x][y] = colForFFT[x];
    }
}
Hope this is what you are looking for. Note that JTransforms represents complex numbers as array[2*k] = Re[k], array[2*k+1] = Im[k].
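To check a result against the small test vector from the original question, a minimal 1-D sketch with JTransforms could look like this. The class name is illustrative, and depending on the JTransforms version the import is org.jtransforms.fft.DoubleFFT_1D or edu.emory.mathcs.jtransforms.fft.DoubleFFT_1D:
import org.jtransforms.fft.DoubleFFT_1D;

public class FftCheck {
    public static void main(String[] args) {
        double[] x = { 1.0, 5.0, 9.0, 13.0 };
        //realForwardFull expects an array of length 2*n with the real input in the first n slots
        double[] a = new double[2 * x.length];
        System.arraycopy(x, 0, a, 0, x.length);
        new DoubleFFT_1D(x.length).realForwardFull(a);
        //the output is interleaved: a[2k] = Re[k], a[2k+1] = Im[k]
        for (int k = 0; k < x.length; k++) {
            System.out.println(a[2 * k] + " + " + a[2 * k + 1] + "i");
        }
    }
}
The printed real/imaginary pairs can then be compared element by element with Matlab's fft(x).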