OpenCV Mat put method doesn't work as expected - Java

I am trying to open a grayscale image via an OpenCV Mat object and I am getting odd results.
My code:
Mat source = imageLoader.loadImage(srcImage);
Mat mask = imageLoader.loadImage(srcImageMask);
if (!source.size().equals(mask.size())) {
    throw new RuntimeException("Size of mask and source differ");
}
// Convert to grayscale format
Mat sourceGrayScaleFormat = ImageConverter.convertToGrayscale(source);
Mat maskGrayScaleFormat = ImageConverter.convertToGrayscale(mask);
int rows = sourceGrayScaleFormat.rows();
int cols = sourceGrayScaleFormat.cols();
for (int row = 0; row < rows; ++row) {
    for (int col = 0; col < cols; ++col) {
        double[] maskPixel = maskGrayScaleFormat.get(row, col);
        double[] data = sourceGrayScaleFormat.get(row, col);
        if (holeUpperBound > maskPixel[0] / normalizeFactor) { // According to the instructor, values below 128 in the mask are treated as hole pixels
            data[0] = -1.0; // Treat it as a hole
        } else {
            data[0] = data[0] / 255.0;
        }
        sourceGrayScaleFormat.put(row, col, data);
        double[] data1 = sourceGrayScaleFormat.get(row, col);
    }
}
I expect data1 to be equal to data, but it is not. What am I doing wrong?
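If imageLoader returns an 8-bit Mat (an assumption, since that code is not shown), one likely explanation is that put() saturates the doubles to the Mat's element type, so -1.0 is stored as 0 and fractions like data[0]/255.0 are truncated. A minimal sketch of converting to a float Mat first, so the written values survive the round trip:
Mat sourceFloat = new Mat();
// CV_32FC1 can hold -1.0 and fractional values; CV_8UC1 cannot (put() saturates to 0..255 integers)
sourceGrayScaleFormat.convertTo(sourceFloat, CvType.CV_32FC1);
// ... run the loop above against sourceFloat instead of sourceGrayScaleFormat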

Related

Convolution produces a very dark image

Edit:
I have included an example of the k value. Also, to be clear, I produce three separate arrays from an RGB image. I have also included the code for loading the image.
public static final int[][] SHARPEN = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
Load image
BufferedImage inputImage = ImageIO.read(new File("bridge-rgb.png")); // load the image from this current folder
When I convolve an image in Java using a 3x3 kernel, the resulting image has some of the properties you would expect from the given kernel, but it is extremely dark, with black being the dominant colour. If I process the image with an identity kernel, the identity is returned, so I guess that means I've selected the correct settings for creating a BufferedImage, and hence the problem must be with my convolution algorithm. However, I did test the convolution algorithm with a test array and it does seem to produce accurate output. Could anyone comment on what I have, or point me in the right direction?
for (int j = 0; j < kernelWidth; ++j) {
    try {
        output += (input[y - 1][x - 1 + j] * k[0][j]);
        counter++;
    } catch (Exception e) {
        continue;
    }
}
for (int j = 0; j < kernelWidth; ++j) {
    try {
        output += (input[y][x - 1 + j] * k[1][j]);
        counter1++;
    } catch (Exception e) {
        continue;
    }
}
for (int j = 0; j < kernelWidth; ++j) {
    try {
        output += (input[y + 1][x - 1 + j] * k[2][j]);
        counter2++;
    } catch (Exception e) {
        continue;
    }
}
if ((output >> bitshiftValue) > 255) {
    return ((255 & 0xff) << bitshiftValue);
} else if ((output >> bitshiftValue) < 0) {
    return 0;
} else {
    return output;
}
}
I got the arrays to be convolved with the following method
private static int[][] convertTo2DWithoutUsingGetRGBgreen(BufferedImage image) {
    final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    final int width = image.getWidth();
    final int height = image.getHeight();
    int[][] result = new int[height][width];
    final int pixelLength = 4;
    for (int pixel = 0, row = 0, col = 0; pixel + 3 < pixels.length; pixel += pixelLength) {
        result[row][col] = ((int) (pixels[pixel + 2] & 0xff)) << 8;
        col++;
        if (col == width) {
            col = 0;
            row++;
        }
    }
    return result;
}
and after convolution I simply added them together like so
int[][] finalConv = new int[convRedArray.length][convRedArray[0].length];
for (int c = 0; c < convRedArray.length; c++) {
    for (int p = 0; p < convRedArray[0].length; p++) {
        finalConv[c][p] = convBlueArray[c][p] + convGreenArray[c][p] + convRedArray[c][p];
    }
}
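For comparison, here is a minimal single-channel convolution sketch that works directly on 0..255 values and clamps the result, rather than on values pre-shifted into their packed bit positions; the border handling and clamping are assumptions, not taken from the code above:
// Sketch: convolve one channel (values 0..255) with a 3x3 kernel and clamp the sum.
// Border pixels are skipped for brevity.
static int[][] convolveChannel(int[][] input, int[][] k) {
    int h = input.length, w = input[0].length;
    int[][] out = new int[h][w];
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int ky = 0; ky < 3; ky++) {
                for (int kx = 0; kx < 3; kx++) {
                    sum += input[y - 1 + ky][x - 1 + kx] * k[ky][kx];
                }
            }
            out[y][x] = Math.min(255, Math.max(0, sum)); // clamp instead of bit-shifting
        }
    }
    return out;
}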

Converting Mat to one dimensional float array for EM?

I'm trying to convert Python code to Java. However, I'm unable to find a way to create the samples to train Expectation Maximization, as they should form a one-channel matrix with 2 values per row (S and V from the HSV color space), as below:
row 0: S, V
row 1: S, V
row 2: S, V
row 3: S, V
In Python, I was able to do it as follows:
def convert_to_samples(image, height, width):
    samples = []
    for y in range(0, height):
        for x in range(0, width):
            samples.append(image[y, x])
    samples = np.float32(np.vstack(samples))
    return samples
I have tried the following without success, as the result is not a Mat and I can't find a way to transform it back.
public double[][] convert_to_samples(Mat image) {
    double[][] samples = new double[image.height()][];
    for (int i = 0; i < image.height(); i++) {
        for (int j = 0; j < image.width(); j++) {
            samples[i] = image.get(i, j);
        }
    }
    return sortRowWise(samples);
}

private static double[][] sortRowWise(double[][] m) {
    for (double[] values : m) Arrays.sort(values);
    return m;
}
Could someone help me transform the Mat?
public Mat convert_to_samples(Mat image) {
    double[][] samples = new double[image.height()][];
    for (int i = 0; i < image.height(); i++) {
        for (int j = 0; j < image.width(); j++) {
            samples[i] = image.get(i, j);
        }
    }
    sortRowWise(samples);
    Mat matSamples = new Mat(image.height(), 2, CvType.CV_64FC1);
    for (int i = 0; i < image.height(); i++) {
        matSamples.put(i, 0, samples[i]);
    }
    return matSamples;
}

private static void sortRowWise(double[][] m) {
    for (double[] values : m) Arrays.sort(values);
}

I am getting an array out of bound exception for this piece of code

public CompressImage() {
}

// compress image method
public static short[] compress(short image[][]) {
    // get image dimensions
    int imageLength = image.length;   // row length
    int imageWidth = image[0].length; // column length
    // convert vertical to horizontal
    // store transposed image
    short[][] transposeImage = new short[imageWidth][imageLength];
    // rotate by +90
    for (int i = 0; i < imageWidth; i++) {
        for (int j = 0; j < imageLength; j++) {
            short temp = image[i][j];
            transposeImage[i][j] = image[j][i];
            transposeImage[j][i] = temp;
        }
    }
short temp = image[i][j];
transposeImage[i][j] = image[j][i];
transposeImage[j][i] = temp;
Why are you swapping here? That doesn't make sense: transposeImage is a new matrix, so you don't have to do in-place editing. This is guaranteed to break if imageWidth != imageLength - see if you can figure out why.
And, actually, you're not even swapping. The three lines above are equivalent to:
transposeImage[i][j] = image[j][i];
transposeImage[j][i] = image[i][j];
The body of the nested for loop should really just be:
transposeImage[i][j] = image[j][i];
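Putting that together, a sketch of the corrected loop (assuming image is indexed as image[row][column], i.e. image[imageLength][imageWidth] as declared above):
short[][] transposeImage = new short[imageWidth][imageLength];
for (int i = 0; i < imageWidth; i++) {       // i walks the columns of the original image
    for (int j = 0; j < imageLength; j++) {  // j walks the rows of the original image
        transposeImage[i][j] = image[j][i];  // no swap: transposeImage is a fresh matrix
    }
}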

Convert a 2D array of doubles to a BufferedImage

I've got a two-dimensional array of doubles holding the filtered values of an image. I want to convert this array back to a BufferedImage. How is it possible to convert a double[][] to a BufferedImage?
BufferedImage b = new BufferedImage(arr.length, arr[0].length, 3);
Graphics c = b.getGraphics();
PrintWriter writer = new PrintWriter("the-file-name.txt", "UTF-8");
for (int i = 0; i < arr.length; i++) {
    for (int j = 0; j < arr[0].length; j++) {
        c.drawString(String.valueOf(arr[i][j]), i, j);
        writer.print(arr[i][j] + " \t");
    }
    writer.println();
}
ImageIO.write(b, "jpg", new File("CustomImage.jpg"));
System.out.println("end");
When I plot the-file-name.txt in MATLAB with imshow, I can see my filtered image. However, CustomImage.jpg contains just one color. Any idea why?
The result with c.drawString(String.valueOf(arr[i][j]), i, j):
The result with c.drawString(String.valueOf(arr[i][j]), 0+(i*10), 0+(j*10)):
MATLAB plot of the double array (first) and the initial grayscale image (second):
Your Code
BufferedImage b = new BufferedImage(arr.length, arr[0].length, 3);
Graphics c = b.getGraphics();
for (int i = 0; i < arr.length; i++) {
    for (int j = 0; j < arr[0].length; j++) {
        c.drawString(String.valueOf(arr[i][j]), 0 + (i * 10), 0 + (i * 10));
    }
}
ImageIO.write(b, "Doublearray", new File("Doublearray.jpg"));
System.out.println("end");
After Refactoring
int xLength = arr.length;
int yLength = arr[0].length;
BufferedImage b = new BufferedImage(xLength, yLength, 3);
for (int x = 0; x < xLength; x++) {
    for (int y = 0; y < yLength; y++) {
        int rgb = (int) arr[x][y] << 16 | (int) arr[x][y] << 8 | (int) arr[x][y];
        b.setRGB(x, y, rgb);
    }
}
ImageIO.write(b, "jpg", new File("Doublearray.jpg"));
System.out.println("end");
You need to set the BufferedImage like this:
double[][] values; // your 2D double array
for (int y = 0; y < values.length; y++) {
    for (int x = 0; x < values[y].length; x++) {
        int pixel = (int) values[y][x] << 16 | (int) values[y][x] << 8 | (int) values[y][x];
        img.setRGB(x, y, pixel);
    }
}
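If the doubles are not already in the 0..255 range, the casts above truncate them to nearly identical values, which is one way to end up with a single-color image. A hedged sketch that normalizes the array before packing it, keeping the question's arr[x][y] indexing; the min/max scan is an assumption, not part of the original code:
// Scale arbitrary doubles to 0..255 grayscale, then pack into an RGB BufferedImage.
double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
for (double[] column : arr)
    for (double v : column) { min = Math.min(min, v); max = Math.max(max, v); }
BufferedImage img = new BufferedImage(arr.length, arr[0].length, BufferedImage.TYPE_INT_RGB);
for (int x = 0; x < arr.length; x++) {
    for (int y = 0; y < arr[0].length; y++) {
        int g = (int) Math.round(255 * (arr[x][y] - min) / (max - min)); // assumes max > min
        img.setRGB(x, y, (g << 16) | (g << 8) | g); // gray pixel
    }
}
ImageIO.write(img, "png", new File("CustomImage.png"));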

Reading tiff raster data

I'm reading a 2048x2048-pixel TIFF file using the method below:
private static int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
    final short[] pixels = ((DataBufferUShort) image.getRaster().getDataBuffer()).getData();
    int[][] data = new int[2048][2048];
    int col = 0;
    int row = 0;
    int blockSize = 2048;
    for (int i = 0; i < pixels.length; i++) {
        data[col][row] = pixels[i];
        row++;
        if (row == blockSize) {
            col++;
            row = 0;
        }
    }
    return data;
}
But I keep getting negative values in my array. If I use GDAL with Python, for example:
import gdal  # TIFF image read

def getArrayFromImage(fileName):
    img = gdal.Open(fileName)
    return img.ReadAsArray().astype(int)
I get only positive values. In the Java method above, is there some treatment needed on the raw value for it to be a valid pixel of the TIFF image?
Not sure why, but I solved the issue by adding 65536 to the value if it is negative.
int j = pixels[i];
if (j < 0) {
    j += 65536;
}
data[col][row] = j;
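For what it's worth, this works because Java's short is signed while the TIFF raster stores unsigned 16-bit samples; masking gives the same result in one step. A minimal equivalent sketch:
// Mask to recover the unsigned 0..65535 range (equivalent to adding 65536 when negative).
int j = pixels[i] & 0xFFFF;
data[col][row] = j;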
