I can successfully read and write pixel values to an image file, and the resulting image looks correct.
I read the values using getRGB(), bit-shift them into separate red, green and blue arrays, and then set them back into another BufferedImage object using the setRGB() method.
Now I am trying to alter pixel values, say the very first pixel of the red array. I print out the first 5 pixels of the red array and the first value is changed as expected before invoking setRGB(), but when I read that image back in, the first value has reverted to its original value.
Does setRGB() only change the values in memory, without actually writing the altered values out?
EDIT - THIS IS A SAMPLE REPRESENTATION OF MY CODE (this works perfectly, in that I get an image back out)
//READ IN IMAGE
BufferedImage imgBuf = ImageIO.read(new File("test.jpg"));
int w = imgBuf.getWidth();
int h = imgBuf.getHeight();
int[] RGBarray = imgBuf.getRGB(0, 0, w, h, null, 0, w);
//BIT SHIFT VALUES INTO ARRAYS
int[][] alphaPixels = new int[h][w];
int[][] redPixels = new int[h][w];
int[][] greenPixels = new int[h][w];
int[][] bluePixels = new int[h][w];
int g = 0;
for(int row=0; row<h; row++)
{
for(int col=0; col<w; col++)
{
alphaPixels[row][col] = ((RGBarray[g]>>24)&0xff);
redPixels[row][col] = ((RGBarray[g]>>16)&0xff);
greenPixels[row][col] = ((RGBarray[g]>>8)&0xff);
bluePixels[row][col] = (RGBarray[g]&0xff);
g++;
}
}
//BIT SHIFT VALUES BACK TO INTEGERS
for(int row=0; row<h; row++)
{
for(int col=0; col<w; col++)
{
int rgb = (alphaPixels[row][col] & 0xff) << 24 | (redPixels[row][col] & 0xff) << 16 | (greenPixels[row][col] & 0xff) << 8 | (bluePixels[row][col] & 0xff);
imgBuf.setRGB(col, row, rgb);
}
}
//WRITE IMAGE BACK OUT
ImageIO.write(imgBuf, "jpeg", new File("new-test.jpg"));
Write where? If you change the RGB value of the BufferedImage's raster, then yes, the in-memory value is written to and changed. If you mean, does it change on disk? No, not unless you write the image to disk yourself, often with ImageIO.write(...). Changes to the in-memory representation of disk data will not mathemagically change the disk representation on their own; instead you have to explicitly do this with your code. I think that you may be missing this last important step.
Edit
You state in comment:
Currently I can write to an image created on disk with a new name. So if that works, then surely changing a few values should have the same effect? (Using setRGB())
I'm still not clear on this. Say for instance:
If you have an image on disk, say imageA.jpg,
and say you read this into a BufferedImage via ImageIO.read(...), say into the bufferedImageA variable,
and then you change the data raster via setRGB(...)
and then write your changed BufferedImage to disk with ImageIO.write(...), say to a new file, imageB.jpg,
Then if you read in imageB.jpg, it should show the changes made.
But if you re-read in the unchanged imageA.jpg file, it will remain unchanged.
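Put concretely, the read-modify-write cycle above can be sketched like this. PNG is used here deliberately because it is lossless; JPEG compression can alter individual pixel values on write, so an exact round trip is not guaranteed with "jpeg". The file names are placeholders:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class RoundTrip {
    // Writes the image to disk and reads the new file back in,
    // so a setRGB() change actually survives.
    static int writeAndReRead(BufferedImage img, File file) throws Exception {
        ImageIO.write(img, "png", file);           // the explicit write-to-disk step
        BufferedImage reRead = ImageIO.read(file); // re-reading the NEW file
        return reRead.getRGB(0, 0);
    }

    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);
        img.setRGB(0, 0, 0xFFFF0000);              // change the in-memory raster only
        File out = File.createTempFile("imageB", ".png");
        out.deleteOnExit();
        System.out.println(writeAndReRead(img, out) == 0xFFFF0000); // prints true
    }
}
```

If you skipped the ImageIO.write(...) call and re-read the original file instead, you would get the old pixel back, which matches the behaviour described in the question.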
Related
I have a 2D array which contains values from 0 to 1.0, and my goal is to create an image where each element corresponds to a colour between white and black: the higher the value, the whiter the pixel, and vice versa.
So far I've come up with the following Java code:
BufferedImage colSTI = new BufferedImage(fg.getLengthInFrames(), 32, BufferedImage.TYPE_BYTE_GRAY);
for(int i =0; i < 32; i++) {
for(int j =0; j < fg.getLengthInFrames(); j++) {
colSTI.setRGB(j, i, 0); // wrong
}
}
Obviously this won't work as it is just a dummy loop, but say I had an array element arr[0][5] = 0.85. How would I convert this into an RGB value equal to the corresponding colour?
Use the TYPE_INT_ARGB image type (a colour image handles gray scale with no problem). Use
colSTI.setRGB(j, i, new Color(gray,gray,gray).getRGB());
to set the integer colour value for the BufferedImage; here gray is your shade of gray in the range 0..255. If the image is huge, you can implement some kind of gray-scale-to-Color caching to avoid creating and garbage-collecting lots of Color objects.
This approach supports only 256 levels of gray. You may need a more complex approach if more levels are required. If you are representing some measured physical value in your image, I would suggest using colour as well to represent the different levels. Such false-colour images are common in science because they make it easier to distinguish more levels.
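A minimal sketch of that mapping from a 0.0-1.0 value to a packed gray value for setRGB(); the array name arr and its dimensions are assumptions for illustration:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class GrayMap {
    // Maps a value in [0.0, 1.0] to a packed ARGB gray: 1.0 -> white, 0.0 -> black.
    static int toGrayRgb(double value) {
        int gray = (int) Math.round(value * 255); // quantise to 256 gray levels
        return new Color(gray, gray, gray).getRGB();
    }

    public static void main(String[] args) {
        double[][] arr = new double[32][10];      // hypothetical data array
        arr[0][5] = 0.85;
        BufferedImage img = new BufferedImage(10, 32, BufferedImage.TYPE_INT_ARGB);
        for (int i = 0; i < 32; i++) {
            for (int j = 0; j < 10; j++) {
                img.setRGB(j, i, toGrayRgb(arr[i][j]));
            }
        }
        // 0.85 * 255 rounds to 217, so the pixel is the gray (217, 217, 217)
        System.out.println(Integer.toHexString(img.getRGB(5, 0))); // ffd9d9d9
    }
}
```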
I know this is an expensive operation, and I have already tried the robot.getPixelColor() function, but it is slow: it can only sample about 5 times per second.
What I'm saying is that 5 times per second is too few for what I actually want to do, but 20 would be enough. Can you suggest some optimisations to my code to get this result?
My code is:
while (true) {
Color color = robot.getPixelColor(x, y);
int red = color.getRed();
int green = color.getGreen();
int blue = color.getBlue();
// do a few other operations in constant time
}
I don't know if this helps, but x and y don't change inside the while loop, so it's always the same pixel coordinates.
Thanks in advance!
EDIT: The pixel colour will be taken from a game running at the same time as the Java program, so it will keep changing. The only thing that stays the same is the coordinates.
I'm assuming the colour is represented as a 32-bit int encoded as ARGB; Robot.getPixelColor() returns a java.awt.Color, so you can get that int with getRGB(). Then, instead of calling a method per channel, you can do bit masking to extract the colours, which may end up being faster because you avoid the per-call overhead. I'd recommend doing something like this:
int color = robot.getPixelColor(x, y).getRGB();
int redBitMask = 0x00FF0000;
int greenBitMask = 0x0000FF00;
int blueBitMask = 0x000000FF;
int extractedRed = (color & redBitMask) >> 16;
int extractedGreen = (color & greenBitMask) >> 8;
int extractedBlue = (color & blueBitMask);
Bit shifting and bitwise operations tend to be very fast.
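As a quick sanity check that the masks and shifts above agree with the Color getters, here is a small standalone sketch (the Color value stands in for a robot.getPixelColor(x, y) result):

```java
import java.awt.Color;

public class ChannelMasks {
    // Extracts each 8-bit channel from a packed ARGB int with mask + shift.
    static int red(int argb)   { return (argb & 0x00FF0000) >> 16; }
    static int green(int argb) { return (argb & 0x0000FF00) >> 8; }
    static int blue(int argb)  { return argb & 0x000000FF; }

    public static void main(String[] args) {
        Color color = new Color(12, 34, 56); // stand-in for robot.getPixelColor(x, y)
        int packed = color.getRGB();         // 32-bit ARGB int
        System.out.println(red(packed) == color.getRed());     // true
        System.out.println(green(packed) == color.getGreen()); // true
        System.out.println(blue(packed) == color.getBlue());   // true
    }
}
```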
[edit] Reformatting into question and answer format following fadden's suggestion.
In ExtractMpegFramesTest_egl14.java.txt, method saveFrame() contains a loop that reorders RGBA into ARGB for Bitmap PNG compression (see the quotes from that file below). How can this be optimised?
// glReadPixels gives us a ByteBuffer filled with what is essentially big-endian RGBA
// data (i.e. a byte of red, followed by a byte of green...). We need an int[] filled
// with little-endian ARGB data to feed to Bitmap.
//
...
// So... we set the ByteBuffer to little-endian, which should turn the bulk IntBuffer
// get() into a straight memcpy on most Android devices. Our ints will hold ABGR data.
// Swapping B and R gives us ARGB. We need about 30ms for the bulk get(), and another
// 270ms for the color swap.
...
for (int i = 0; i < pixelCount; i++) {
int c = colors[i];
colors[i] = (c & 0xff00ff00) | ((c & 0x00ff0000) >> 16) | ((c & 0x000000ff) << 16);
}
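To see what that line computes on a single pixel, here is the same R/B swap pulled out into a standalone sketch (the sample value is made up):

```java
public class ChannelSwap {
    // Swaps the R and B bytes of a 32-bit pixel, turning ABGR into ARGB
    // (or back again); the A and G bytes pass through untouched.
    static int swapRB(int c) {
        return (c & 0xff00ff00) | ((c & 0x00ff0000) >> 16) | ((c & 0x000000ff) << 16);
    }

    public static void main(String[] args) {
        // ABGR pixel: A=0xFF, B=0x11, G=0x22, R=0x33
        int abgr = 0xFF112233;
        System.out.printf("%08X%n", swapRB(abgr)); // FF332211: A=FF, R=33, G=22, B=11
    }
}
```

Applying the swap twice returns the original value, which is why the same expression converts in either direction.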
It turns out there's an even faster approach.
Using the suggestion in @elmiguelao's answer, I modified the fragment shader to do the pixel swap. This allowed me to remove the swap code from saveFrame(). Since I no longer needed a temporary copy of the pixels in memory, I eliminated the int[] buffer entirely, switching from this:
int[] colors = [... copy from mPixelBuf, swap ...]
Bitmap.createBitmap(colors, mWidth, mHeight, Bitmap.Config.ARGB_8888);
to this:
Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(mPixelBuf);
As soon as I did that, all of my colors were wrong.
It turns out that Bitmap#copyPixelsFromBuffer() wants the pixels in RGBA order, not ARGB order. The values coming out of glReadPixels() are already in the right format. So by doing it this way I avoid the swap, avoid an unnecessary copy, and don't need to tweak the fragment shader at all.
[edit] Reformatting into question and answer format following fadden's suggestion.
I wanted to suggest that this conversion can happen in the FragmentShader by changing the line
gl_FragColor = texture2D(sTexture, vTextureCoord);
into
gl_FragColor = texture2D(sTexture, vTextureCoord).argb;
which is an efficient shortcut for reordering the shader's output channels on the GPU. It works in other ways too: .abgr or even .bggr, etc.
I am trying to access the pixels of an image using the getRGB() method. The image I use for this purpose is an 8-bit image, i.e. each pixel is represented by 8 bits, hence the possible values are 0 to 255.
The image I used was an 8-bit PNG, hence the type TYPE_BYTE_INDEXED.
if (type == BufferedImage.TYPE_BYTE_INDEXED) {
System.out.println("type.byte.indexed");
System.out.print(h+" "+w);
sourceImage.getRGB(0, 0, w, h, rgbs, 0, w); //rgbs is integer array
for (i = 0; i <10; i++) {
System.out.print(" "+rgbs[i]);
}
System.out.println("rgbs len: " + rgbs.length);
}
The output of the for loop is something like:
-12048344 -12174804 -12048344 -12174804 -12174804 .......
I obtain the r, g, b components from it and store them in arrays:
Color c=new Color(rgbs[i]);
r=c.getRed();
g=c.getGreen();
b=c.getBlue();
Now how do I combine these values again so that I can use the setRGB() method? For a 24-bit image we can use
int rgb=65536*pixel[i]+256*pixel[i+1]+pixel[i+2];
The documentation clearly states that the returned values are in ARGB-form:
Returns an array of integer pixels in the default RGB color model (TYPE_INT_ARGB) and default sRGB color space
You can access the underlying buffer (that contains indexed pixels) with
byte[] data=((DataBufferByte)bufferedImage.getRaster().getDataBuffer()).getData(0);
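A sketch tying the two directions together: decoding the first printed value from the question and packing the components back with shifts, which is equivalent to the 65536/256 multiplications but also carries the alpha byte that the ARGB form includes:

```java
import java.awt.Color;

public class PackUnpack {
    // Packs a, r, g, b (each 0-255) back into the TYPE_INT_ARGB layout
    // that setRGB() expects.
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int argb = -12048344;          // first value from the question's output
        Color c = new Color(argb);     // 0xFF482828 when viewed as unsigned hex
        int r = c.getRed(), g = c.getGreen(), b = c.getBlue();
        System.out.println(r + " " + g + " " + b);      // 72 40 40
        // Repacking with full alpha reproduces the original value
        System.out.println(pack(255, r, g, b) == argb); // true
    }
}
```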
I am using Java with swing in FSE mode. I want to load a completely black-and-white image into binary format (a 2d array preferably) and use it for mask-based per-pixel collision detection. I don't even know where to start here, I've been researching for the past hour and haven't found anything relevant.
Just read it into a BufferedImage using ImageIO#read() and get the individual pixels with BufferedImage#getRGB(). A value of 0xFFFFFFFF is white and everything else is colour. Assuming that you want to represent white as byte 0 and colour (black) as byte 1, here's a kickoff example:
BufferedImage image = ImageIO.read(new File("/some.jpg"));
byte[][] pixels = new byte[image.getWidth()][];
for (int x = 0; x < image.getWidth(); x++) {
pixels[x] = new byte[image.getHeight()];
for (int y = 0; y < image.getHeight(); y++) {
pixels[x][y] = (byte) (image.getRGB(x, y) == 0xFFFFFFFF ? 0 : 1);
}
}
See also:
The Java Tutorials - 2D Graphics - Working with images
If you're reading the image from a URL, it will already be in a binary format. Just download the data and ignore the fact that it's an image; the code involved in downloading it won't care, after all. Assuming you want to write it to a file or something similar, just open the URLConnection and a FileOutputStream, and repeatedly read from the web input stream, writing the data you've read to the output stream.
You can also use ImageIO if you want the decoded image rather than the raw bytes.
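A hedged sketch of that download loop. The URL in the comment is a placeholder, so the demo runs the same copy helper on an in-memory stream instead of the network:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class DownloadImage {
    // Streams every byte from in to out; works the same whether the source
    // is a URLConnection's input stream or any other InputStream.
    static void copy(InputStream in, OutputStream out) throws Exception {
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
    }

    public static void main(String[] args) throws Exception {
        // Real usage would look like (hypothetical URL):
        //   URL url = new URL("https://example.com/some.png");
        //   try (InputStream in = url.openConnection().getInputStream();
        //        OutputStream out = new FileOutputStream("some.png")) {
        //       copy(in, out);
        //   }
        // Demo with an in-memory stream standing in for the connection:
        byte[] data = {(byte) 0x89, 'P', 'N', 'G'};
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(data), out);
        System.out.println(out.size()); // 4
    }
}
```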