Converting Grayscale values from .csv to BufferedImage - java

I'm attempting to convert a .csv file containing grayscale values to an image using BufferedImage.
The CSV is read into pixArray[] initially, in which all values are doubles.
I am attempting to use BufferedImage to create a 100x100px output image with the following code:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        image.setRGB(x, y, (int) Math.round(pixArray[y]));
    }
}
File file_out = new File("output.png");
try {
    ImageIO.write(image, "png", file_out);
} catch (IOException e) {
    e.printStackTrace();
}
but all I get as output is a 100x100 black square.
I've tried alternatives to TYPE_BYTE_GRAY, as well as to the PNG output format, with no success, and I can't find what is producing this error.

It should be
int g = (int) Math.round(pixArray[y]);
image.setRGB(x, y, new Color(g, g, g).getRGB());
What your current code is doing is passing the raw gray value as if it were a packed ARGB int, so it lands in the low-order (blue) bits while the alpha and the other color components stay zero, which is why every pixel comes out essentially black.
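For context, a minimal sketch of the complete loop with that fix applied (assuming, as in the question, that pixArray holds values in the 0-255 range):

BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        // Replicate the gray level into R, G and B; Color.getRGB() then
        // yields the packed ARGB value (with opaque alpha) that setRGB() expects.
        int g = (int) Math.round(pixArray[y]);
        image.setRGB(x, y, new Color(g, g, g).getRGB());
    }
}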

Posting an alternative solution. While Jim's answer is correct and works, it is also one of the slowest* ways to put sample values into a grayscale BufferedImage.
A BufferedImage of TYPE_BYTE_GRAY doesn't need all the conversion to and from RGB colors. To put the gray values directly into the image, do it through the image's raster:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
WritableRaster raster = image.getRaster();
for (int y = 0; y < height; y++) {
    int value = (int) Math.round(pixArray[y]);
    for (int x = 0; x < width; x++) {
        raster.setSample(x, y, 0, value);
    }
}
*) Slow because of the excessive throw-away Color instances it creates, but mostly because of the color space conversion to and from sRGB. Probably not very noticeable in a 100x100 image, but if you try 1000x1000 or larger, you will notice.
PS: I also re-arranged the loops so that x runs in the inner loop. This is normally faster, especially when reading values, due to data locality and caching in modern CPUs. In your case it matters mostly because you then only need to compute (round, cast) the value once per row.

Related

What is the best approach to images with changing hue?

I have a completely desaturated BufferedImage in Java that I need to display to a JFrame many times a second; however, the image needs to be saturated with a certain hue before being rendered. The problem I am facing is that I don't know an approach that is fast enough to give a smooth frame rate while the hue changes each frame.
I have tried generating an image with the desired hue during the rendering pass, but that gives a slow frame rate unless the image is quite small. I have also tried caching a bunch of pre-saturated images and choosing one per frame, but with just 256 cached images it creates very long load times.
The image is likely to be around 1000x500 pixels in size.
The code I currently have to recolor the image is the following:
private BufferedImage recolored(BufferedImage image, float hue) {
    int width = image.getWidth();
    int height = image.getHeight();
    WritableRaster raster = image.getRaster();
    BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster resRast = res.getRaster();

    for (int xx = 0; xx < width; xx++) {
        for (int yy = 0; yy < height; yy++) {
            Color color = new Color(Color.HSBtoRGB(hue, 0.7f, 0.7f));
            int[] pixels = raster.getPixel(xx, yy, (int[]) null);
            pixels[0] = color.getRed();
            pixels[1] = color.getGreen();
            pixels[2] = color.getBlue();
            resRast.setPixel(xx, yy, pixels);
        }
    }
    return res;
}
So, my question is:
What is the standard, generally accepted, or best way to display an image with a constantly changing hue?
Is it reasonable to do so with an image of my size?
Also, I doubt that it matters, but the hue changes linearly over time and wraps back to zero after overflowing. Basically it cycles through all colors and then repeats.
Although your question is not very specific, I will point out a few flaws in your code. If you fix these, you should see a reasonable improvement in speed.
You have a function that takes a BufferedImage and a hue value as input.
for (int xx = 0; xx < width; xx++) {
    for (int yy = 0; yy < height; yy++) {
        Color color = new Color(Color.HSBtoRGB(hue, 0.7f, 0.7f));
        int[] pixels = raster.getPixel(xx, yy, (int[]) null);
        pixels[0] = color.getRed();
        pixels[1] = color.getGreen();
        pixels[2] = color.getBlue();
        resRast.setPixel(xx, yy, pixels);
    }
}
In this nested loop you iterate over every pixel in the image. For every coordinate you calculate an RGB color from the provided hue value.
You then read a pixel from your input image, change its colors, and set that color on a pixel in your output image.
All this function does is fill an image with a certain hue. In particular:
- You read width * height pixels without using their information.
- You calculate an RGB color width * height times, although it is the same for every pixel.
- You read that color's RGB values 3 * width * height times.
- You assign these values 3 * width * height times.
All you needed to do is calculate a single RGB tuple once, outside the loop, and then set every pixel in your output image to those values, as sketched below.
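A minimal sketch of that fix (the recolored signature and the 0.7f saturation/brightness values are taken from the question):

private BufferedImage recolored(BufferedImage image, float hue) {
    int width = image.getWidth();
    int height = image.getHeight();
    BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);

    // Compute the packed ARGB value once; it is identical for every pixel.
    // Color.HSBtoRGB already returns it with an opaque alpha.
    int rgb = Color.HSBtoRGB(hue, 0.7f, 0.7f);

    for (int yy = 0; yy < height; yy++) {
        for (int xx = 0; xx < width; xx++) {
            res.setRGB(xx, yy, rgb);
        }
    }
    return res;
}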

Converting a double-subscripted array of grayscale int values to a BufferedImage

So after hours of searching I am ready to pull my hair out on this one.
I am doing some research in Computer Vision and am working with grayscale images. I need to end up with an "image" (a double-subscripted double array) of Sobel-filtered values. My Sobel converter is set up to take in a double-subscripted int array (int[][]) and go from there.
I am reading in a buffered image and I gather the grayscale int values via a method that I am 99% sure works perfectly (I can present it if need be).
Next I am attempting to convert this matrix of int values to a BufferedImage with the method below:
private BufferedImage getBIFromIntArr(int[][] matrix) {
    BufferedImage img = new BufferedImage(matrix.length * 4, matrix[0].length, BufferedImage.TYPE_INT_ARGB);

    // Gather the pixels in the form of [alpha, r, g, b];
    // multiply the size of the array by 4 for the model.
    int[] pixels = new int[matrix.length * 4 * matrix[0].length];
    int index = 0;
    for (int i = 0; i < matrix.length; i++) {
        for (int j = 0; j < matrix[0].length; j++) {
            int pixel = matrix[i][j];
            pixels[index] = pixel;
            index++;
            for (int k = 0; k < 3; k++) {
                pixels[index] = 0;
                index++;
            }
        }
    }

    // Get the raster.
    WritableRaster raster = img.getRaster();

    // Output the amount of pixels and a sample of the array.
    System.out.println(pixels.length);
    for (int i = 0; i < pixels.length; i++) {
        System.out.print(pixels[i] + " ");
    }

    // Set the pixels of the raster.
    raster.setPixels(0, 0, matrix.length, matrix[0].length, pixels);

    // Paint the image via an external routine to check it works (it does not).
    p.panel.setNewImage(img);
    return img;
}
Here is my understanding: the ARGB type consists of four values, alpha, red, green, and blue. I am guessing that setting the alpha values in the new BufferedImage to the grayscale int values (the matrix values passed in) will reproduce the image. Please correct me if I am wrong. So, as you can see, I create an array of pixels that stores the int values like this: [intValue, 0, 0, 0], repeatedly, to try to stay with the four-value model.
Then I create a writable raster and set the gathered pixels in it. The only thing is that I get nothing in the BufferedImage: no error with the code above, and I'm sure my indices are correct.
What am I doing wrong? I'm sure it is obvious, but any help is appreciated because I can't see it. Perhaps my assumption about the model is wrong?
Thanks,
Chronic
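For reference, a minimal sketch of the layout that setPixels actually expects for a TYPE_INT_ARGB raster: its bands are ordered red, green, blue, alpha (not alpha first), samples are given per pixel in scanline order, and an alpha of zero makes a pixel fully transparent. Assuming matrix holds 0-255 gray levels:

// The width is matrix.length; the factor of 4 belongs only in the
// sample array, which holds 4 samples (R, G, B, A) per pixel.
int width = matrix.length;
int height = matrix[0].length;
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);

int[] samples = new int[width * height * 4];
int index = 0;
for (int y = 0; y < height; y++) {      // scanline order: row by row
    for (int x = 0; x < width; x++) {
        int gray = matrix[x][y];
        samples[index++] = gray; // red
        samples[index++] = gray; // green
        samples[index++] = gray; // blue
        samples[index++] = 255;  // alpha: opaque, not zero
    }
}
img.getRaster().setPixels(0, 0, width, height, samples);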

What order does PixelGrabber put pixels into the array in Java?

What order does PixelGrabber put pixels into the array in Java? Does it take the pixels along the width of the image first, or along the height of the image first?
public static int[] convertImgToPixels(Image img, int width, int height) {
    int[] pixel = new int[width * height];
    PixelGrabber pixels = new PixelGrabber(img, 0, 0, width, height, pixel, 0, width);
    try {
        pixels.grabPixels();
    } catch (InterruptedException e) {
        throw new IllegalStateException("Interrupted Waiting for Pixels");
    }
    if ((pixels.getStatus() & ImageObserver.ABORT) != 0) {
        throw new IllegalStateException("Image Fetch Aborted");
    }
    return pixel;
}
The code example provided by the documentation has the following for loops:
for (int j = 0; j < h; j++) {
    for (int i = 0; i < w; i++) {
        handlesinglepixel(x + i, y + j, pixels[j * w + i]);
    }
}
The access pixels[j * w + i] shows that it goes along the row first, then along the columns. It grabs the pixels along the width first.
I'm pretty sure it uses row-major order, but the easiest way to check is to actually grab the pixels, set a sequence of them to a particular color (for easy identification) and then save them out to an image. If the colored strip appears vertical then the order is column-major; otherwise it is row-major. You can use code like the following to convert the int[] back to an image:
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    // Note: image.getData() returns a copy of the raster, so writing to it
    // would not change the image; use setRGB (or getRaster()) instead.
    image.setRGB(0, 0, width, height, pixels, 0, width);
    return image;
}
Also, I use ((DataBufferInt) img.getRaster().getDataBuffer()).getData() to quickly grab the pixels of the image. Any modifications to that int[] are reflected in the image and vice versa. And that array is in row-major order for sure.
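A sketch of that trick (it assumes an int-backed image such as TYPE_INT_ARGB or TYPE_INT_RGB; other image types use different DataBuffer subclasses and the cast would fail):

// Direct view of the image's backing array; pixel (x, y) sits at
// index y * width + x (row-major). Writes show up in the image.
int[] data = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
data[0] = 0xFFFF0000; // e.g. paint the top-left pixel opaque red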

Create image from 2D Color Array

I have an array called image[][] and I want to create a BufferedImage out of it so I can have the player store it in a file.
// Initialize Color[][] however you were already doing so.
Color[][] image;

// Initialize BufferedImage, assuming Color[][] is already properly populated.
BufferedImage bufferedImage = new BufferedImage(image.length, image[0].length,
        BufferedImage.TYPE_INT_RGB);

// Set each pixel of the BufferedImage to the color from the Color[][].
for (int x = 0; x < image.length; x++) {
    for (int y = 0; y < image[x].length; y++) {
        bufferedImage.setRGB(x, y, image[x][y].getRGB());
    }
}
This is a straightforward way of creating (and potentially storing) an image, if that's what you're trying to get at. However, it is not efficient by any means; try it with a larger image and you'll see a noticeable speed difference.
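A sketch of a faster variant under the same assumptions (an int-backed TYPE_INT_RGB image): write the packed RGB values straight into the image's backing array instead of calling setRGB once per pixel.

int width = image.length;
int height = image[0].length;
BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);

// Direct view of the pixel buffer; pixel (x, y) is at index y * width + x.
int[] data = ((DataBufferInt) bufferedImage.getRaster().getDataBuffer()).getData();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        data[y * width + x] = image[x][y].getRGB();
    }
}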

Convert 2D array in Java to Image

I need to convert a 2D array of pixel intensity data of a grayscale image back to an image. I tried this:
BufferedImage img = new BufferedImage(
        regen.length, regen[0].length, BufferedImage.TYPE_BYTE_GRAY);
for (int x = 0; x < regen.length; x++) {
    for (int y = 0; y < regen[x].length; y++) {
        img.setRGB(x, y, (int) Math.round(regen[x][y]));
    }
}
File imageFile = new File("D:\\img\\conv.bmp");
ImageIO.write(img, "bmp", imageFile);
where "regen" is a 2D double array. I am getting an output which is similar, but not exact. There are a few pixels that are the total opposite of what they should be (I get black for a pixel whose value is 255), and a few gray shades are also rendered as white. Can you tell me what mistake I am making?
Try some code like this:
public void writeImage(int name) {
    String path = "res/world/PNGLevel_" + name + ".png";
    BufferedImage image = new BufferedImage(color.length, color[0].length, BufferedImage.TYPE_INT_RGB);
    // Use the array's own dimensions rather than hard-coded bounds.
    for (int x = 0; x < color.length; x++) {
        for (int y = 0; y < color[0].length; y++) {
            image.setRGB(x, y, color[x][y]);
        }
    }
    File imageFile = new File(path);
    try {
        ImageIO.write(image, "png", imageFile);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
BufferedImage.TYPE_BYTE_GRAY is unsigned and non-indexed. Moreover,
When data with non-opaque alpha is stored in an image of this type, the color data must be adjusted to a non-premultiplied form and the alpha discarded, as described in the AlphaComposite documentation.
At a minimum you need to preclude sign extension and mask off all but the lowest eight bits of the third parameter to setRGB(). Sample data that reproduces the problem would be dispositive.
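A sketch of that masking, which also replicates the gray level into all three color components (and sets an opaque alpha) so the packed value passed to setRGB() is a true gray; variable names follow the question:

for (int x = 0; x < regen.length; x++) {
    for (int y = 0; y < regen[x].length; y++) {
        // Mask to the lowest eight bits to preclude sign extension,
        // then replicate the gray level into R, G and B.
        int v = ((int) Math.round(regen[x][y])) & 0xFF;
        img.setRGB(x, y, 0xFF000000 | (v << 16) | (v << 8) | v);
    }
}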
