Fast Fourier transform function for image transformation problems - Java

I have coded the following function to apply a fast Fourier transform to an input image. I got a NullPointerException at the line "F[2*u] = F[2*u].plus(w_ux.mul(f_even));".
Would anyone give me some advice, please? [solved]
It also takes quite a long time to finish running the transform, about as long as a plain (non-fast) Fourier transform, and the result image is not as expected.
private Complex[] fft(byte[] img, int width, int height) {
    // M - height, N - width; u - height index, v - width index
    Complex[] F = new Complex[width * height]; // one single point
    Complex w;
    int size = F.length;
    double w_ux_exp, w_u_exp;
    double f_even, f_odd;
    for (int u = 0; u < size / 2; u++) {
        for (int k = 0; k < size / 2; k++) {
            f_even = (double) (img[2*k] & 0xFF) * Math.pow(-1, k);   // f(x) for even, centering
            f_odd = (double) (img[2*k+1] & 0xFF) * Math.pow(-1, k);  // f(x) for odd, centering
            w_u_exp = -2.0 * Math.PI * 2 * (img[u] & 0xFF) / size;
            w_ux_exp = -2.0 * Math.PI * (2*k) * (img[u] & 0xFF) / size; // even
            Complex w_ux = Complex.fromPolar(1, w_ux_exp);
            Complex w_u = Complex.fromPolar(1, w_u_exp);
            F[2*u] = F[2*u].plus(w_ux.mul(f_even));
            F[2*u+1] = F[2*u+1].plus(w_u.mul(f_odd));
        }
    }
    return F;
}
Thank you very much for your help.
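The NullPointerException is explained by the array allocation alone: new Complex[width*height] creates an array of null references, so the first F[2*u].plus(...) call dereferences null. A minimal sketch of the fix, assuming the Complex class used above has a two-argument (real, imaginary) constructor:
// Initialize every element so F[i].plus(...) has a receiver to call.
// Assumes a Complex(real, imag) constructor; adapt to your Complex class.
Complex[] F = new Complex[width * height];
for (int i = 0; i < F.length; i++) {
    F[i] = new Complex(0, 0);
}
As for the running time: the nested loops above perform on the order of (size/2)^2 complex multiplications, which is exactly the cost profile of a plain DFT. An FFT only earns its speed by recursively (or stage by stage) reusing the even/odd partial sums, so with this loop structure a DFT-like running time is expected.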

Related

Perlin Noise repeating pattern

My problem is that my Perlin noise is repeating itself very obviously in very small spaces. Here is an image of what is going on. I know that this does happen after a certain point with all Perlin noise, but it seems to be happening almost immediately with mine. I believe that it is caused by my really awful pseudorandom gradient generator, but I'm not sure. My code is below.
As a side note, my Perlin noise seems to generate very small values, between -0.2 and 0.2, and I think this is also caused by my pseudorandom gradient generator.
If anyone has any advice on improving this part of my code, please feel free to tell me. Any ideas would be helpful right now.
Thanks to everyone in advance!
import java.util.Random;

public class Perlin {
    int[] p = new int[255];

    public Perlin() {
        for (int i = 0; i < p.length; i++)
            p[i] = i;
        shuffle(p);
    }

    int grads[][] = {
        {1,0},{0,1},{-1,0},{0,-1},
        {1,1},{-1,1},{1,-1},{-1,-1}
    };

    public double perlin(double x, double y) {
        int unitX = (int) Math.floor(x) & 255; // decide unit square
        int unitY = (int) Math.floor(y) & 255; // decide unit square
        double relX = x - Math.floor(x); // relative x position
        double relY = y - Math.floor(y); // relative y position
        // bad pseudorandom gradient -- what I think is causing the problems
        int units = unitX + unitY;
        int[] gradTL = grads[p[(units)] % (grads.length)];
        int[] gradTR = grads[p[(units + 1)] % (grads.length)];
        int[] gradBL = grads[p[(units + 1)] % (grads.length)];
        int[] gradBR = grads[p[(units + 2)] % (grads.length)];
        // distance from corners to point, relative x and y inside the unit square
        double[] vecTL = {relX, relY};
        double[] vecTR = {relX - 1, relY};
        double[] vecBL = {relX, relY - 1};
        double[] vecBR = {relX - 1, relY - 1};
        // dot product
        double tl = dot(gradTL, vecTL);
        double tr = dot(gradTR, vecTR);
        double bl = dot(gradBL, vecBL);
        double br = dot(gradBR, vecBR);
        // Perlin's fade curve
        double u = fade(relX);
        double v = fade(relY);
        // lerping the faded values
        double x1 = lerp(tl, tr, u);
        double y1 = lerp(bl, br, u);
        // ditto
        return lerp(x1, y1, v);
    }

    public double dot(int[] grad, double[] dist) {
        return (grad[0] * dist[0]) + (grad[1] * dist[1]);
    }

    public double lerp(double start, double end, double rate) {
        return start + rate * (end - start);
    }

    public double fade(double t) {
        return t * t * t * (t * (t * 6 - 15) + 10);
    }

    public void shuffle(int[] p) {
        Random r = new Random();
        for (int i = 0; i < p.length; i++) {
            int n = r.nextInt(p.length - i);
            // do swap thing
            int place = p[i];
            p[i] = p[i + n];
            p[i + n] = place;
        }
    }
}
A side note on my gradient generator: I know Ken Perlin used 255 because he was using bits, I just randomly picked it. I don't think it has any effect on the patterns if it is changed.
Your intuition is correct. You calculate:
int units = unitX+unitY;
and then use that as the base of all your gradient table lookups. This guarantees that you get the same values along lines with slope -1, which is exactly what we see assuming (0, 0) is the upper-left corner.
I would suggest using a real hash function to combine your coordinates: xxHash, Murmur3, or even things like CRC32 (which isn't meant to be a hash) would be much better than what you're doing. You could also implement Perlin's original hash function, although it has known issues with anisotropy.
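As a concrete illustration, here is a minimal sketch of a Perlin-style nested permutation lookup that gives each lattice corner an independent index instead of the shared unitX + unitY base. It assumes a table p2 (an int[512] holding a shuffled 0..255 permutation twice, so the inner lookup never goes out of bounds), which is not in your code as posted:
// Sketch: p2 is an int[512] containing a shuffled 0..255 permutation twice,
// so p2[p2[x & 255] + (y & 255)] always stays in bounds.
int hash(int unitX, int unitY) {
    return p2[p2[unitX & 255] + (unitY & 255)];
}

// Per-corner gradients, replacing the shared units-based lookups:
// int[] gradTL = grads[hash(unitX,     unitY)     % grads.length];
// int[] gradTR = grads[hash(unitX + 1, unitY)     % grads.length];
// int[] gradBL = grads[hash(unitX,     unitY + 1) % grads.length];
// int[] gradBR = grads[hash(unitX + 1, unitY + 1) % grads.length];
This removes the diagonal correlation because neighboring corners no longer share a single sum as their table index.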

Java Convolution

Hi, I am in need of some help. I need to write a convolution method from scratch that takes in the following inputs: int[][] kernel and BufferedImage inputImage. I can assume that the kernel has size 3x3.
My approach is to do the following:
convolve inner pixels
convolve corner pixels
convolve outer pixels
In the program that I will post below I believe I convolve the inner pixels, but I am a bit lost at how to convolve the corner and outer pixels. I am aware that corner pixels are at (0,0), (width-1,0), (0,height-1) and (width-1,height-1). I think I know how to approach the problem but am not sure how to execute it in writing. Please be aware that I am very new to programming :/ Any assistance will be very helpful to me.
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;

public class Problem28 {
    // maximum value of a sample
    private static final int MAX_VALUE = 255;
    // minimum value of a sample
    private static final int MIN_VALUE = 0;

    public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
        // not yet implemented; this is what I am asking about
    }

    public BufferedImage convolveInner(double center, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // inner pixels
        for (int x = 1; x < width - 1; x++) {
            for (int y = 1; y < height - 1; y++) {
                // get pixel at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) center * red;
                int innergreen = (int) center * green;
                int innerblue = (int) center * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage1.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage1;
    }

    public BufferedImage convolveEdge(double edge, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // edge pixels
        for (int x = 0; x < width - 1; x++) {
            for (int y = 0; y < height - 1; y++) {
                // get pixel at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) edge * red;
                int innergreen = (int) edge * green;
                int innerblue = (int) edge * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage2.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage2;
    }

    public BufferedImage convolveCorner(double corner, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage3 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // corner pixels
        for (int x = 0; x < width - 1; x++) {
            for (int y = 0; y < height - 1; y++) {
                // get pixel at x, y
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                int innerred = (int) corner * red;
                int innergreen = (int) corner * green;
                int innerblue = (int) corner * blue;
                Color newPixelColor = new Color(innerred, innergreen, innerblue);
                int newRgbvalue = newPixelColor.getRGB();
                inputImage3.setRGB(x, y, newRgbvalue);
            }
        }
        return inputImage3;
    }

    public static void main(String[] args) {
        DrawingKit dk = new DrawingKit("Compositor", 1000, 1000);
        BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
        Problem28 c = new Problem28();
        BufferedImage p5 = c.convolve();
        dk.drawPicture(p5, 0, 100);
    }
}
I changed the code a bit, but the output comes out as black. What did I do wrong?
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;

public class Problem28 {
    // maximum value of a sample
    private static final int MAX_VALUE = 255;
    // minimum value of a sample
    private static final int MIN_VALUE = 0;

    public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
        int width = inputImage.getWidth();
        int height = inputImage.getHeight();
        BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        // for every pixel
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int colorValue = inputImage.getRGB(x, y);
                Color pixelColor = new Color(colorValue);
                int red = pixelColor.getRed();
                int green = pixelColor.getGreen();
                int blue = pixelColor.getBlue();
                double gray = 0;
                // multiply every value of the kernel with the corresponding image pixel
                for (int i = 0; i < 3; i++) {
                    for (int j = 0; j < 3; j++) {
                        int imageX = (x - 3/2 + i + width) % width;
                        int imageY = (x - 3/2 + j + height) % height;
                        int RGB = inputImage.getRGB(imageX, imageY);
                        int GRAY = (RGB) & 0xff;
                        gray += (GRAY * kernel[i][j]);
                    }
                }
                int out;
                out = (int) Math.min(Math.max(gray * 1, 0), 255);
                inputImage1.setRGB(x, y, new Color(out, out, out).getRGB());
            }
        }
        return inputImage1;
    }

    public static void main(String[] args) {
        int[][] newArray = {{1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}};
        DrawingKit dk = new DrawingKit("Problem28", 1000, 1000);
        BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
        Problem28 c = new Problem28();
        BufferedImage p2 = c.convolve(newArray, p1);
        dk.drawPicture(p2, 0, 100);
    }
}
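Before the general advice below, two details in this revision are worth flagging: imageY is computed from x rather than y, and 1/9 is integer division in Java, so every kernel entry is zero, which by itself produces a black output. A sketch of the corrected lines (note the kernel then needs a floating-point element type, or apply the 1/9 normalization after accumulating instead):
// imageY should be based on y, not x:
int imageY = (y - 3/2 + j + height) % height;
// 1/9 == 0 in int arithmetic; use floating-point literals instead:
double[][] newArray = {{1/9.0, 1/9.0, 1/9.0}, {1/9.0, 1/9.0, 1/9.0}, {1/9.0, 1/9.0, 1/9.0}};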
Welcome ewuzz! I wrote a convolution using CUDA about a week ago, and the majority of my experience is with Java, so I feel qualified to provide advice for this problem.
Rather than writing all of the code for you, the best way to solve this large program is to discuss individual elements. You mentioned you are very new to programming. As the programs you write become more complex, it's essential to write small working snippets before combining them into a large successful program (or iteratively add snippets). With this being said, it's already apparent you're trying to debug a ~100 line program, and this approach will cost you time in most cases.
The first point to discuss is the general approach you mentioned. If you think about the program, what is the simplest and most repeated step? Obviously this is the kernel/mask step, so we can start from here. When you convolve each pixel, you are performing a similar operation, regardless of the position (corner, edge, inside). While there are special steps necessary for these edge cases, they share similar underlying steps. If you try to write code for each of these cases separately, you will have to update the code in multiple (three) places with each adjustment, and it will make the whole program more difficult to grasp.
To support my point above, when I pasted your code into IntelliJ, the IDE immediately flagged the duplicated method bodies. This illustrates the (yellow) red flag of using the same code in multiple places.
The concrete way to fix this problem is to combine the three convolve methods into a single one and use if statements for edge-cases as necessary.
Our pseudocode with this change:
convolve(kernel, inputImage)
for each pixel in the image
convolve the single pixel and check edge cases
endfor
end
That seems pretty basic, right? If we are able to successfully check edge cases, then this extremely simple logic will work. The reason I left it so general above is to show that "convolve the single pixel and check edge cases" is a logically grouped step. This means it's a good candidate for extracting into a method, which could look like:
private void convolvePixel(int x, int y, int[][] kernel, BufferedImage input, BufferedImage output)
Now, to implement the method above, we will need to break it into a few steps, which we may then break into further steps as necessary. We'll need to look at the input image, accumulate the values using the kernel for each pixel where possible, and then set the result in the output image. For brevity I will only write pseudocode from here.
convolvePixel(x, y, kernel, input, output)
accumulation = 0
for each row of kernel applicable pixels
for each column of kernel applicable pixels
if this neighboring pixel location is within the image boundaries then
input color = get the color at this neighboring pixel
adjusted value = input color * relative kernel mask value
accumulation += adjusted value
else
//handle this somehow, mentioned below
endif
endfor
endfor
set output pixel as accumulation, assuming this convolution method does not require normalization
end
The pseudocode above is already relatively long. When implementing it you could write methods for the if and the else cases, but you should be fine with this structure. A rough Java sketch follows.
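For concreteness, here is a rough Java sketch of that pseudocode, using the MIN_VALUE/MAX_VALUE constants from your class. It assumes a grayscale value taken from the low byte of the packed RGB int (as in your revised code) and crop-style skipping for the else case; treat it as an outline rather than the assignment's required implementation:
private void convolvePixel(int x, int y, int[][] kernel,
                           BufferedImage input, BufferedImage output) {
    int radius = kernel.length / 2; // 1 for a 3x3 kernel
    double accumulation = 0;
    for (int i = 0; i < kernel.length; i++) {         // kernel rows
        for (int j = 0; j < kernel[i].length; j++) {  // kernel columns
            int nx = x + j - radius;                  // neighboring pixel location
            int ny = y + i - radius;
            if (nx >= 0 && nx < input.getWidth()
                    && ny >= 0 && ny < input.getHeight()) {
                int gray = input.getRGB(nx, ny) & 0xFF; // low byte as grayscale
                accumulation += gray * kernel[i][j];
            }
            // else: neighbor is outside the image; see the
            // Extend/Wrap/Crop options discussed below
        }
    }
    // clamp and write, assuming no extra normalization is required
    int value = (int) Math.min(Math.max(accumulation, MIN_VALUE), MAX_VALUE);
    output.setRGB(x, y, new Color(value, value, value).getRGB());
}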
There are a few ways to handle the edge case of the else above. Your assignment probably specifies a requirement, but the fancy way is to wrap around and pretend there's another instance of the same image next to this input image. Wikipedia explains three possibilities (see the indexing sketch after the list):
Extend - The nearest border pixels are conceptually extended as far as necessary to provide values for the convolution. Corner pixels are extended in 90° wedges. Other edge pixels are extended in lines.
Wrap - (The method I mentioned) The image is conceptually wrapped (or tiled) and values are taken from the opposite edge or corner.
Crop - Any pixel in the output image which would require values from beyond the edge is skipped. This method can result in the output image being slightly smaller, with the edges having been cropped.
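Continuing the sketch above (where nx/ny are the neighbor coordinates and width/height are the image dimensions), the Wrap option is a small indexing change: fold out-of-range coordinates back with modular arithmetic instead of skipping them.
// Wrap-style neighbor indexing: ((n % size) + size) % size keeps the
// result in [0, size) even when n is negative.
int wrappedX = ((nx % width) + width) % width;
int wrappedY = ((ny % height) + height) % height;
int gray = input.getRGB(wrappedX, wrappedY) & 0xFF;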
A huge part of becoming a successful programmer is researching on your own. If you read about these methods, work through them on paper, run your convolvePixel method on single pixels, and compare the output to your results by hand, you will find success.
Summary:
Start by cleaning up your code before anything else.
Group the same code into one place.
Hammer out a small chunk (convolving a single pixel). Print out the result and the input values and verify they are correct.
Draw out edge/corner cases.
Read about ways to solve edge cases and decide what fits your needs.
Try implementing the else case through the same form of testing.
Call your convolveImage method with the loop, using the convolvePixel method you know works. Done!
You can look up pseudocode and even specific code to solve the exact problem, so I focused on providing general insight and strategies I have developed through my degree and personal experience. Good luck and please let me know if you want to discuss anything else in the comments below.

Area Under Curve - 1D Array (Java)

I have a quick question that in most languages (such as Python) would be straightforward.
I am looking to obtain the integral (the area under the curve) from a 1D array of fixed points. Java apparently has many numerical integration libraries, all of which seem to require a function (double f(double x)) as input.
However, I cannot seem to find any which accommodate arrays (double[]) such as [1, 4, 10, 11]. I would be integrating over the entirety of the array (x values 1 to n, where n represents the size of the array).
Any help is greatly appreciated.
Well, they expect functions because it's normal to use them with continuous input.
Since you only have a different height at every step (1, 2, 3, 4, ...?), you have rectangles with triangles on top of them. The height of the triangle is the difference between the current height and the previous height; the rectangle's height is therefore the current point's height minus the triangle's height.
Write a function which calculates and adds both areas.
Do this for every point/item in your array and you will get the integral of your "function".
EDIT: I wrote a little code. No guarantee; I just coded an easy-to-understand version of the idea behind this integral problem. Further improvements have to be done.
public static double getIntegralFromArray(double[] ar, double xDist) {
    double base = 0;
    double prev = 0;
    double triHeight = 0;
    double rectHeight = 0;
    double tri = 0;
    double rect = 0;
    double integral = 0;
    for (int i = 0; i < ar.length; i++) {
        triHeight = Math.abs(ar[i] - prev);            // height of the triangle
        tri = xDist * triHeight / 2;                   // area of the triangle
        if (ar[i] <= prev) {
            rectHeight = Math.abs(base - ar[i]);       // height of the rectangle
        } else {
            rectHeight = Math.abs(base - (ar[i] - triHeight)); // height of the rectangle
        }
        rect = xDist * rectHeight;                     // area of the rectangle
        integral += (rect + tri);                      // add both areas to the integral
        prev = ar[i];
    }
    return integral;
}
double[] ar = new double[]{1, 2, 3, 2, 2, 3, 1, 3, 0, 3, 3};
System.out.println(MyMath.getIntegralFromArray(ar, 1));
Area under 'curve': 21.5. (Note that prev starts at 0, so the run-up from 0 to the first sample contributes an extra 0.5 compared with integrating between the samples only.)
By using the trapezoidal rule you can simply call the method below to get the (approximate) area under a graph:
public static double trapz(double[] ar, double xDist) {
    if (ar.length == 1 || ar.length == 0)
        return 0;
    double integral = 0;
    double prev = ar[0];
    for (int i = 1; i < ar.length; i++) {
        integral += xDist * (prev + ar[i]) / 2.0;
        prev = ar[i];
    }
    return integral;
}
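For the same sample array as above, this gives 21.0; the trapezoidal version has no leading segment from 0, hence the difference from 21.5:
double[] ar = new double[]{1, 2, 3, 2, 2, 3, 1, 3, 0, 3, 3};
System.out.println(trapz(ar, 1)); // 21.0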

Optimizing nested for loop in Java

I'm going crazy trying to optimize the following function in Java with OpenCV:
static Mat testPossibleCentersFormula(int x, int y, Mat weight, double gx, double gy, Mat outSum) {
    Mat out = outSum; // new Mat(weight.rows(), weight.cols(), CvType.CV_64F);
    float[] weight_array = new float[weight.rows() * weight.cols()];
    weight.get(0, 0, weight_array);
    double[] out_array = new double[weight.rows() * weight.cols()];
    out.get(0, 0, out_array);
    for (int cy = 0; cy < out.rows(); ++cy) {
        for (int cx = 0; cx < out.cols(); ++cx) {
            if (x == cx && y == cy) {
                continue;
            }
            // create a vector from the possible center to the gradient origin
            double dx = x - cx;
            double dy = y - cy;
            // normalize d
            double magnitude = Math.sqrt((dx * dx) + (dy * dy));
            dx = dx / magnitude;
            dy = dy / magnitude;
            double dotProduct = dx * gx + dy * gy;
            dotProduct = Math.max(0.0, dotProduct);
            // square and multiply by the weight
            if (kEnableWeight) {
                out_array[cy * out.cols() + cx] += dotProduct * dotProduct * (weight_array[cy * out.cols() + cx] / kWeightDivisor);
            } else {
                out_array[cy * out.cols() + cx] += dotProduct * dotProduct;
            }
        }
    }
    out.put(0, 0, out_array);
    return out;
}
The function accesses the picture's values pixel by pixel, for each frame in a video, which makes it impossible to use in real time.
I've already converted the Mat operations into array operations, and that has made a great difference, but it is still very, very slow. Do you see any way to replace the nested for loop?
Thank you very much.
As I have alluded to in my comment above, I think that the allocation of weight_array and out_array is very suspicious. The Javadoc I can find for Mat is unhelpfully silent on what is put into an array larger than a single pixel's worth of data when you call mat.get(...), and it feels like an abuse of the API to assume that it will return the entire image's data.
Allocating such large arrays each time you call the method is also unnecessary. You can allocate a much smaller array, and just reuse that on each iteration:
float[] weight_array = new float[weight.channels()]; // channels(), not depth(): one pixel's worth of samples
double[] out_array = new double[out.channels()];

for (int cy = 0; cy < out.rows(); ++cy) {
    for (int cx = 0; cx < out.cols(); ++cx) {
        // Use weight.get(cy, cx, weight_array)  (row, col order)
        // instead of weight_array[cy*out.cols()+cx].
        // Use out.get(cy, cx, out_array) and out.put(cy, cx, out_array)
        // instead of out_array[cy*out.cols()+cx] += ...
    }
}
Note that this does still allocate (probably very small) arrays on each iteration. If you needed to, you could allocate weight_array and out_array outside the method and pass them in as parameters; but I would try it as suggested here first, and optimize further when/if necessary. A filled-in sketch of the loop follows.
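Filling in those comments, the inner loop could look like the following sketch. It assumes a single-channel CV_32F weight and CV_64F out (as the original flat arrays imply) and uses OpenCV's (row, col, buffer) order for Mat.get/put:
// Sketch: per-pixel get/put with small reusable buffers.
float[] weightBuf = new float[weight.channels()];
double[] outBuf = new double[out.channels()];
for (int cy = 0; cy < out.rows(); ++cy) {
    for (int cx = 0; cx < out.cols(); ++cx) {
        if (x == cx && y == cy) {
            continue;
        }
        double dx = x - cx;
        double dy = y - cy;
        double magnitude = Math.sqrt((dx * dx) + (dy * dy));
        double dotProduct = Math.max(0.0, (dx * gx + dy * gy) / magnitude);
        out.get(cy, cx, outBuf);
        if (kEnableWeight) {
            weight.get(cy, cx, weightBuf);
            outBuf[0] += dotProduct * dotProduct * (weightBuf[0] / kWeightDivisor);
        } else {
            outBuf[0] += dotProduct * dotProduct;
        }
        out.put(cy, cx, outBuf);
    }
}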

Android Fourier Transform Realtime - RenderScript

I am trying to apply a 2D Fourier transform to incoming preview camera frames.
So here is my RenderScript code that executes on each onSurfaceTextureUpdated:
#pragma version(1)
#pragma rs java_package_name(foo.camerarealtimefilters)

rs_allocation inPixels;
int height;
int width;

void root(const uchar4 *in, uchar4 *out, uint32_t x, uint32_t y) {
    float3 fourierPixel;
    for (int k = 0; k <= width; k++) {
        for (int l = 0; l <= height; l++) {
            float3 pixel = convert_float4(rsGetElementAt_uchar4(inPixels, k, l)).rgb;
            float greyOrigPixel = (pixel.r + pixel.g + pixel.b) / 3;
            float angle = 2 * M_PI * (((x * k) / width) + ((y * l) / height));
            fourierPixel.rgb = greyOrigPixel * cos(angle);
        }
    }
    out->xyz = convert_uchar3(fourierPixel);
}
The inPixels allocation is set by this method:
public void setInAllocation(Bitmap bmp) {
    inAllocation = Allocation.createFromBitmap(rs, bmp);
    fourierScript.set_inPixels(inAllocation);
}
Now, the maths behind my code: basically I apply Euler's formula, ignore the phase term (as I can't do much with imaginary numbers), and draw only the magnitude, that is, the real (cosine) part. I of course grayscale the image, as you can see.
Here are my resources:
1) http://homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm
"...In image processing, often only the magnitude of the Fourier Transform is displayed, as it contains most of the information of the geometric structure of the spatial domain image.."
2) http://www.nayuki.io/page/how-to-implement-the-discrete-fourier-transform
Where I got the Euler formula, and how I applied it.
My problem is that when I start my app, it gives me the original image, whatever the camera sees, and nothing more. It also freezes after 2 to 3 seconds.
What is wrong with my code? Is it too much to handle? Is what I am asking possible (I am running this on a Samsung Galaxy S4 Mini)? I just want to apply a simple realtime DFT to a camera frame.
It's tough to say why your image would not be showing updates without seeing the Java code. However, here are a few things you might try to help.
If you can handle lower precision, use float instead of double as this will improve performance
If you can handle lower precision, use #pragma rs_fp_relaxed which will help performance
You can re-structure your RS to have a setup function which is called once, before the kernel runs for the first time. Use it to set up the width/height and pre-calculate the fixed parts of the transform's equation
It will look something like this:
rs_allocation angles;
uint32_t width;
uint32_t height;
uint32_t total;
void setupPreCalc(uint32_t w, uint32_t h) {
uint32_t x;
uint32_t y;
float curAngle;
width = w;
height = h;
total = w * h;
for (x = 0; x < width; x++) {
for (y = 0; y < height; y++) {
curAngle = 2 * M_PI * (y * width + x);
rsSetElementAt_float(angles, curAngle, x, y);
}
}
}
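On the Java side, the angles allocation must be created and bound before the setup call. A sketch, assuming the rs context and fourierScript instance from the question's code; set_angles and invoke_setupPreCalc are the accessors RenderScript reflection generates for the globals and function above (uint32_t parameters reflect as long):
// Create a width x height float allocation for the pre-calculated angles,
// bind it to the script, then run the one-time setup.
Allocation angles = Allocation.createTyped(rs,
        Type.createXY(rs, Element.F32(rs), width, height));
fourierScript.set_angles(angles);
fourierScript.invoke_setupPreCalc(width, height);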
Re-structure your kernel so that it returns the output element and receives the x and y coordinates being operated on (note the return type; assigning to a by-value out parameter would have no effect):
uchar4 __attribute__((kernel)) doFft(uint32_t x, uint32_t y)
Before each frame, set the input allocation as you have been doing, then re-structure your loop to use the pre-calculated parts of the angle.
Previously, the kernel looped over all coordinates in the input, calculated a greyscale pixel value, ran it through something similar to the equation you found, and set it as a new pixel value, so only the final iteration of the loop survived as the output value. This isn't exactly what you want. RS is already giving you a specific location in the output Allocation, so you need to do the summation of all input points in relation to that specific output point.
Using the pre-calc Allocation and the new form of the kernel, it could look like this:
uchar4 __attribute__((kernel)) doFft(uint32_t x, uint32_t y) {
    // Loop over all input allocation points
    uint32_t inX;
    uint32_t inY;
    float curAngle;
    float4 curPixel;
    float4 curSum = 0.0f;
    for (inX = 0; inX < width; inX++) {
        for (inY = 0; inY < height; inY++) {
            // fetch the input point being summed, not the output coordinate
            curPixel = convert_float4(rsGetElementAt_uchar4(inPixels, inX, inY));
            curPixel.rgb = (curPixel.r + curPixel.g + curPixel.b) / 3;
            curAngle = rsGetElementAt_float(angles, inX, inY);
            // cast to float: pure uint32_t division would truncate to zero here
            curAngle = curAngle * ((float)(x + (y * width)) / total);
            curSum += curPixel * cos(curAngle);
        }
    }
    return convert_uchar4(curSum);
}
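Per frame, after updating inPixels as before, the kernel reflects as a forEach that takes only the output allocation. A sketch, where outAllocation is assumed to be a uchar4 allocation matching the frame size:
// Run the kernel for one frame and copy the result into a Bitmap for display.
fourierScript.forEach_doFft(outAllocation);
outAllocation.copyTo(outputBitmap);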
