Filling spots in an image in Java where data is equal to 0 - java

Right now I have a composite code that produces an image from recorded data. I am trying to figure out a way to fill the spots in the image where no data was recorded (aka where it reads 0.0) with a new color. I have been experimenting a little with Graphics, but am not finding a way to just fill these empty spots. I would post an image if I had enough points...
But I hope you understand what I am trying to say.

As the comment suggests, we really need more info.
If you use a BufferedImage, you can simply set a single pixel's color with setRGB(int x, int y, int rgb).
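For example, a minimal sketch, assuming the recorded data lives in a double[][] the same size as the image (the array, method and fill color here are just placeholders for whatever your composite code actually uses):

import java.awt.Color;
import java.awt.image.BufferedImage;

// Paint every pixel whose recorded value is 0.0 with a fill color.
// Assumes 'data' has the same dimensions as 'image'.
static void fillEmptySpots(BufferedImage image, double[][] data, Color fillColor) {
    int rgb = fillColor.getRGB();
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            if (data[y][x] == 0.0) {
                image.setRGB(x, y, rgb); // overwrite the "no data" pixel
            }
        }
    }
}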

Related

What does the PBO buffer content mean?

I was trying to implement a color picking system by using a PBO(pixel buffer object using OpenGL), and when I finished, I realized the numbers that came out of the PBO when mapped didn't make any sense at all. I made my application render big squares of different colors and the result DID change between these different colors, but even after analyzing the numbers, I can't make sense of it.
For example, clicking on pure red gave me bytes of
(-1,0,0), while pure blue gave (0,0,-1), but against all logic, pure green gives (-1,0,-1), cyan also gives (-1,0,0), and yellow gives (76,-1,0).
Obviously these numbers are wrong, given that two different colors can result in the same byte formation. Shouldn't a fully red color be (127,0,0)?
Here is the code I used for initialization, size 3 because I am only reading one pixel.
pboid = glGenBuffersARB(); //Initialize buffer for pbo
glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, pboid); //bind buffer
glBufferDataARB(GL_PIXEL_PACK_BUFFER_EXT, 3, GL_DYNAMIC_READ); //create a pbo with 3 slots
glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0); //unbind buffer
And here is the code I used for reading the pixels
glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, pboid); //Bind the pbo
glReadPixels((int)lastMousePosition.x,(int)lastMousePosition.y,1,1, GL_RGB, GL_UNSIGNED_BYTE, 0); //Read 1 pixel
ByteBuffer colorBuffer = glMapBufferARB(GL_PIXEL_PACK_BUFFER_EXT, GL_READ_ONLY_ARB); //Map the buffer so we can read it
for(int x = 0; x < colorBuffer.limit(); x++)
{
    System.out.println("color byte: " + colorBuffer.get(x)); //Print out the color byte
}
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_EXT); //Unmap the buffer so it can be used again
glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0); //Unbind the pbo
If I am wrong in any assumptions I have made, please correct me. I am planning perhaps to use this system to tell which gui element is being clicked by rendering each of them to an fbo with a unique color and testing which pixel color was clicked on. Thanks in advance for anyone who can help!
At long last I have finally found the issue!
Firstly, using Byte.toUnsignedInt(byte), you can turn the bytes the PBO gives you back into the traditional 0-255 RGB range (Java bytes are signed, so a stored 255 comes back as -1).
Secondly, and this was the primary issue: when OpenGL asks for pixel coordinates to fill a PBO, they are relative to the bottom-left corner of the window, while GLFW reports cursor coordinates relative to the top-left corner. That meant color picking in the vertical middle of the screen was accurate, but everywhere else I was reading the vertically mirrored part of the screen. To fix this, simply subtract the mouse click's y coordinate from the window height.
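Roughly, the two fixes look like this (windowHeight and lastMousePosition are placeholders for whatever your own code tracks; this assumes LWJGL-style bindings like the snippets above):

int readX = (int) lastMousePosition.x;
int readY = windowHeight - (int) lastMousePosition.y; // flip: GLFW y is top-left based, glReadPixels wants bottom-left

glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, pboid);
glReadPixels(readX, readY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, 0);
ByteBuffer colorBuffer = glMapBufferARB(GL_PIXEL_PACK_BUFFER_EXT, GL_READ_ONLY_ARB);
int r = Byte.toUnsignedInt(colorBuffer.get(0)); // -1 becomes 255, and so on
int g = Byte.toUnsignedInt(colorBuffer.get(1));
int b = Byte.toUnsignedInt(colorBuffer.get(2));
System.out.println("picked color: " + r + ", " + g + ", " + b);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_EXT);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0);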
Thanks for the help and ideas!
There are a couple of possibilities, but I don't have my OpenGL system set up to test, so just try these anyhow. Also, I don't know Java too well (C, C++ etc. is my domain).
EDIT
1) You have asked for GL_UNSIGNED_BYTE data from glReadPixels(), but you are printing it out in signed format. GL_UNSIGNED_BYTE has values 0-255, so negative values are not possible; Java's byte type is signed, though, so a stored 255 prints as -1. Format your printout as unsigned and see where that leads. (From your code I can see this is now fixed.)
2) As derhass pointed out in his comments, you should not be using the ARB (Architecture Review Board) extension versions of the OpenGL buffer functions, since these have been part of OpenGL core for quite a long time now. See https://www.khronos.org/opengl/wiki/History_of_OpenGL for the version history. From this you can see that buffer objects (glBindBuffer, for example) were promoted to core in OpenGL 1.5 back in 2003. It may or may not impact your particular problem, but replace glXXXXXARB() with glXXXXX() throughout, and make sure your OpenGL libraries are recent (v4 or later).
3) Also credit to derhass: reading your GitHub code, your getMousePosition() via glfwGetCursorPos returns window coordinates (0,0 is the top left of your window), so you need to convert to viewport coordinates (0,0 is the bottom left) before reading the framebuffer. Your code on GitHub does not seem to make that conversion.
4) Also credit to derhass: you don't need to use a PBO at all for basic color picking. glReadPixels() reads from the framebuffer by default, so you can safely dispense with the PBO and get color, depth and stencil data directly from the framebuffer (you need to enable the depth and stencil buffers). See the sketch after this list.
5) If you are selecting in a 3D scene, you will also need to convert (unproject) the viewport coordinates and depth back to world coordinates to identify which object you have clicked on. See https://en.wikibooks.org/wiki/OpenGL_Programming/Object_selection for some ideas on selection.
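Putting 3) and 4) together, a minimal sketch without a PBO might look like this (assuming LWJGL-style bindings and a GLFW window handle called window; those names are placeholders):

IntBuffer wBuf = BufferUtils.createIntBuffer(1);
IntBuffer hBuf = BufferUtils.createIntBuffer(1);
glfwGetWindowSize(window, wBuf, hBuf);

DoubleBuffer xBuf = BufferUtils.createDoubleBuffer(1);
DoubleBuffer yBuf = BufferUtils.createDoubleBuffer(1);
glfwGetCursorPos(window, xBuf, yBuf);

int x = (int) xBuf.get(0);
int y = hBuf.get(0) - 1 - (int) yBuf.get(0); // convert top-left GLFW coordinates to bottom-left viewport coordinates

ByteBuffer pixel = BufferUtils.createByteBuffer(3);
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel); // reads straight from the framebuffer, no PBO bound

int r = Byte.toUnsignedInt(pixel.get(0));
int g = Byte.toUnsignedInt(pixel.get(1));
int b = Byte.toUnsignedInt(pixel.get(2));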
I hope all this helps a bit, although it feels like a learning experience for both of us.

3 grayscale images to 1 colour image in Java

I'm looking to create a Java application that will take 3 grayscale images (each representing red, green and blue) and then merge/flatten them to create one colour image. I was wondering if anyone knew of any existing algorithms or methods which I might be able to use?
I understand there is a program called ImageJ, which I have used before, and it does exactly what I'm looking to do; you choose 3 images and an RGB image can be created from them. I know this is done in Java by using a lookup table, but this is something I've never encountered before so I wouldn't even know where to begin.
If anyone has any ideas of the best way to approach it, existing algorithms, how I might be able to make my own, etc., that would be great. I'm not looking for anyone to code it for me, just to guide me in the right direction; my theory of iterating over every pixel of each R, G and B grayscale image might not work?
Thanks for the help in advance
Working in the sRGB colourspace, it is easy to implement a method that does this.
Consider the following method:
private static BufferedImage createColorFromGrayscale(BufferedImage red, BufferedImage green, BufferedImage blue){
    BufferedImage base = new BufferedImage(red.getWidth(), red.getHeight(), BufferedImage.TYPE_INT_ARGB);
    for(int x = 0; x < red.getWidth(); x++){
        for(int y = 0; y < red.getHeight(); y++){
            int rgb = (red.getRGB(x, y) & 0x00FF0000) | (green.getRGB(x, y) & 0x0000FF00) | (blue.getRGB(x, y) & 0x000000FF);
            base.setRGB(x, y, (rgb | 0xFF000000));
        }
    }
    return base;
}
Creating a new base image, we build a 4-byte ARGB integer for each pixel by using bitwise ANDs to mask out the red channel of the first image, the green channel of the second and the blue channel of the third, ORing them together, and assigning the result to the base image. Iterating through the whole image with the for loops, we set each pixel of the resultant base image from the three channels respectively.
This method assumes that all three images are equal in size. If the images are not equal in size, you must handle that separately (e.g. by stretching the images before input, or by modifying the method to accept images of different sizes).
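For completeness, this is roughly how the method above might be called (the file names are placeholders; javax.imageio.ImageIO and java.io.File are needed, inside a method that throws IOException):

BufferedImage red = ImageIO.read(new File("red.png"));
BufferedImage green = ImageIO.read(new File("green.png"));
BufferedImage blue = ImageIO.read(new File("blue.png"));
BufferedImage colour = createColorFromGrayscale(red, green, blue);
ImageIO.write(colour, "png", new File("combined.png"));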
P.S.: It might be more memory-efficient to use one of the BufferedImage instances directly as the base image when dealing with large images...

java color image processing

I am trying to get the number of different colors inside an image in Java, but I don't know if there is a library for this purpose or not. The project is about finding the different colors in one image and then printing out the names of those colors. Any ideas? Please help me if you have any answer.
You can turn an image into a BufferedImage and call getRGB(int x, int y) to get the RGB value of each pixel. Then you can use one of the many color-name tables on the web, such as this one or this one, to map each RGB value to a color name. Just find the named color which is closest in distance to each RGB value in the image.
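As a rough sketch of the distance idea (the tiny palette here is only an illustration; a real mapping would use one of those full color-name tables):

import java.awt.Color;
import java.util.LinkedHashMap;
import java.util.Map;

// Return the name of the palette color closest to the given RGB value.
static String nearestColorName(int rgb) {
    Map<String, Color> palette = new LinkedHashMap<>();
    palette.put("red", Color.RED);
    palette.put("green", Color.GREEN);
    palette.put("blue", Color.BLUE);
    palette.put("white", Color.WHITE);
    palette.put("black", Color.BLACK);

    Color c = new Color(rgb);
    String best = null;
    long bestDist = Long.MAX_VALUE;
    for (Map.Entry<String, Color> e : palette.entrySet()) {
        long dr = c.getRed() - e.getValue().getRed();
        long dg = c.getGreen() - e.getValue().getGreen();
        long db = c.getBlue() - e.getValue().getBlue();
        long dist = dr * dr + dg * dg + db * db; // squared Euclidean distance in RGB space
        if (dist < bestDist) {
            bestDist = dist;
            best = e.getKey();
        }
    }
    return best;
}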
Java has the Java Advanced Imaging (JAI) library which is built to allow stuff like this.

pixel difference

I'm a beginner in Java programming. I have to submit a client-server project and I am stuck on pixel comparison. According to the code, it accepts a BufferedImage and compares pixels. How do I store the pixel difference in the 2nd image itself and return it?
Take a look at BufferedImage's getRGB(int x, int y) method. This will provide an approximate RGB value for the given (x, y) location as an int, which can then be compared to the corresponding location in the other image.
If you wish to perform a more detailed comparison you'll need to iterate over each image band separately, comparing the samples for that band with the corresponding band for the other image. (For example, an RGBA encoded image has four individual bands to compare, whereas a greyscale image has just one.)
Obviously you could start by comparing the image dimensions to ensure they are equal before proceeding to the more detailed comparison.
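A minimal sketch of that idea, assuming both images are the same size and you want the per-channel absolute difference written back into the second image (the method and variable names are just placeholders):

import java.awt.image.BufferedImage;

static BufferedImage storeDifference(BufferedImage img1, BufferedImage img2) {
    for (int y = 0; y < img1.getHeight(); y++) {
        for (int x = 0; x < img1.getWidth(); x++) {
            int p1 = img1.getRGB(x, y);
            int p2 = img2.getRGB(x, y);
            // absolute difference per channel
            int dr = Math.abs(((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF));
            int dg = Math.abs(((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF));
            int db = Math.abs((p1 & 0xFF) - (p2 & 0xFF));
            img2.setRGB(x, y, (0xFF << 24) | (dr << 16) | (dg << 8) | db); // store the difference in the 2nd image
        }
    }
    return img2;
}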
Also, you should not expect people to paste detailed code solutions; that's not the way Stack Overflow works. People will be far more willing to help with specific problems, so you should try coding the solution yourself and post a code snippet if you get stuck.

How to specify behavior of Java BufferedImage resize: need min for pixel rows instead of averaging

I would like to resize a Java BufferedImage, making it smaller vertically but without using any type of averaging, so that if a pixel-row is "blank" (white) in the source image, there will be a white pixel-row in the corresponding position of the destination image: the "min" operation. The default algorithms (specified in getScaledInstance) do not allow me a fine-grained enough control. I would like to implement the following logic:
for each pixel row in the w-pixels wide destination image, d = pixel[w]
find the corresponding j pixel rows of the source image, s[][] = pixel[j][w]
write the new line of pixels, so that d[i] = min(s[j][i]) over all j, i
I have been reading up on RescaleOp, but have not figured out how to implement this functionality -- it is admittedly a weird type of scaling. Can anyone provide me pointers on how to do this? In the worst case, I figure I can just reserve the destination BufferedImage and copy the pixels following the pseudocode, but I was wondering if there is a better way.
Note that RescaleOp only rescales pixel values (brightness and contrast); it does not change the image geometry. The operations that do geometric scaling, such as AffineTransformOp or drawing through Graphics2D, accept RenderingHints, and there is a hint called KEY_INTERPOLATION that decides how colors are computed when scaling an image.
If you use the value VALUE_INTERPOLATION_NEAREST_NEIGHBOR for KEY_INTERPOLATION, Java will reuse the original colors rather than using some type of algorithm to recalculate new ones.
So, instead of white lines turning to gray or some mix of color, you'll get either white lines or you won't get any lines at all. It all depends on the scaling factor and whether it's an even or odd row. For example, if you are scaling by half, then each 1-pixel horizontal line has at least a 50% chance of appearing in the new image. However, if the white lines were two pixels in height, you'd have a 100% chance of the white line appearing.
This is probably the closest you're going to get besides writing your own scaling method. Unfortunately, I don't see any other hints that might help further.
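For reference, here is a sketch of scaling with that hint via Graphics2D (halving the height is just an example factor, and the method name is a placeholder):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

static BufferedImage scaleNearest(BufferedImage src) {
    int w = src.getWidth();
    int h = src.getHeight() / 2; // shrink vertically by half
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = dst.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
    g.drawImage(src, 0, 0, w, h, null); // resample with nearest-neighbour interpolation
    g.dispose();
    return dst;
}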
To implement your own scaling method, you could create a new class that implements the BufferedImageOp interface, and implement the filter() method. Use getRGB() and setRGB() on the BufferedImage object to get the pixels from the original image and set the pixels on the new image.
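If you go the custom route, a plain method is enough to prototype the logic before wrapping it in a BufferedImageOp. This is only a sketch of your pseudocode, assuming an integer shrink factor and a per-channel minimum over the source rows that map onto each destination row:

import java.awt.image.BufferedImage;

static BufferedImage shrinkRowsByMin(BufferedImage src, int factor) {
    int w = src.getWidth();
    int dstH = src.getHeight() / factor;
    BufferedImage dst = new BufferedImage(w, dstH, BufferedImage.TYPE_INT_RGB);
    for (int dy = 0; dy < dstH; dy++) {
        for (int x = 0; x < w; x++) {
            int minR = 255, minG = 255, minB = 255;
            for (int j = 0; j < factor; j++) { // the source rows that collapse onto row dy
                int rgb = src.getRGB(x, dy * factor + j);
                minR = Math.min(minR, (rgb >> 16) & 0xFF);
                minG = Math.min(minG, (rgb >> 8) & 0xFF);
                minB = Math.min(minB, rgb & 0xFF);
            }
            dst.setRGB(x, dy, (minR << 16) | (minG << 8) | minB);
        }
    }
    return dst;
}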
