I have to add a constant value to all pixels in my image, for both grayscale and color images, but I don't know how to do that. I read the image into a BufferedImage, and I'm trying to get a 2D array of pixels.
I found BufferedImage.getRGB(), but it returns strange values (negative and very large). How can I add a value to every pixel of my BufferedImage?
You can use:
byte[] pixels = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();
To get a byte[] of all the pixels in the image, and then loop over that byte[], adding your constant to each element.
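For example, a minimal sketch, assuming an 8-bit image type (e.g. TYPE_BYTE_GRAY or TYPE_3BYTE_BGR) and a brightness offset that should be clamped to the 0-255 range:
byte[] pixels = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();
int constant = 50; // your constant value
for (int i = 0; i < pixels.length; i++) {
    int value = (pixels[i] & 0xFF) + constant;              // treat the byte as unsigned, then add
    pixels[i] = (byte) Math.max(0, Math.min(255, value));   // clamp to 0-255 and write back
}
Because the array returned by getData() is the image's backing buffer, writing to it modifies the image directly.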
If you want the bytes converted to a two-dimensional array, I found an example that does just that (Get Two Dimensional Pixel Array).
In summary, the code looks like this:
private static int[][] convertToArrayLocation(BufferedImage inputImage) {
    // Get the pixel values as a single byte array from the BufferedImage
    final byte[] pixels = ((DataBufferByte) inputImage.getRaster().getDataBuffer()).getData();
    final int width = inputImage.getWidth();   // image width
    final int height = inputImage.getHeight(); // image height

    int[][] result = new int[height][width];   // initialize the array with height and width

    // Copy the pixel values into the two-dimensional array
    for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel++) {
        int argb = pixels[pixel];
        if (argb < 0) { // bytes are signed, so map negative values back into the 0-255 range
            argb += 256;
        }
        result[row][col] = argb;
        col++;
        if (col == width) {
            col = 0;
            row++;
        }
    }
    return result; // return the result as a two-dimensional array
}
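For example, it could be used like this (the file name is just a placeholder). Note that the row/column mapping above assumes one byte per pixel (e.g. TYPE_BYTE_GRAY); multi-byte types such as TYPE_3BYTE_BGR store several bytes per pixel.
BufferedImage input = ImageIO.read(new File("input.png")); // hypothetical file name
int[][] grayValues = convertToArrayLocation(input);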
To add a constant value to all pixels, you can use RescaleOp. Your constant will be the offset for each channel. Leave scale at 1.0 and hints may be null.
// A positive offset makes the image brighter, a negative offset makes it darker
int offset = 100; // ...or whatever your constant value is
BufferedImage brighter = new RescaleOp(1, offset, null)
.filter(image, null);
To change the current image, instead of creating a new one, you may use:
new RescaleOp(1, offset, null)
.filter(image, image);
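Putting it together, a minimal end-to-end sketch (the file names here are placeholders) could look like:
BufferedImage image = ImageIO.read(new File("input.jpg"));             // hypothetical input
BufferedImage brighter = new RescaleOp(1f, 100f, null).filter(image, null);
ImageIO.write(brighter, "png", new File("brighter.png"));              // hypothetical output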
Related
I have an array of double values (they could be float values as well). The values are in the range 0-255. The array has shape [128][128][3], i.e. it is an RGB image. Now I want to save this array as an image (PNG or JPG). How can this be done in Java?
OK, I finally figured out how image handling works in Java.
After converting all the values from the float array (which also has shape [128][128][3]) to integers, you have an array int[][][] nwimage = new int[128][128][3].
Now create a BufferedImage with the 2D shape of nwimage, which is 128x128:
BufferedImage bfImage = new BufferedImage(128, 128, BufferedImage.TYPE_INT_RGB);
Now call setRGB for each index of bfImage, as below:
for (int i = 0; i < 128; i++) {
    for (int j = 0; j < 128; j++) {
        Color myRGB = new Color(nwimage[i][j][0], nwimage[i][j][1], nwimage[i][j][2]);
        int rgb = myRGB.getRGB();
        // Note: setRGB takes (x, y); if nwimage is indexed [row][column], pass (j, i) to avoid transposing the image
        bfImage.setRGB(i, j, rgb);
    }
}
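Finally, to actually save bfImage to disk as PNG or JPEG (the file name here is just an example), ImageIO can write it out:
ImageIO.write(bfImage, "png", new File("output.png")); // use "jpg" for JPEG output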
I'm trying to get a byte array of pixels. I'm using ARGB_8888 for the decodeByteArray function. getPixels() or copyPixelsToBuffer() returns an array in R G B A form. Is it possible to get only R G B from them, without creating a new array and copying bytes that I don't need? I know there is RGB_565, but it is not optimal for my case, where I need a byte per color.
Thanks.
Use color = bitmap.getPixel(x, y) to obtain the color integer at the specified location. Then use the red(color), green(color) and blue(color) methods from the Color class, which return each color component as a value in the [0..255] range.
As for the alpha channel, you can premultiply its ratio (alpha / 255) into each of the other color components.
Here is an example implementation:
int width = bitmap.getWidth();
int height = bitmap.getHeight();
ByteBuffer b = ByteBuffer.allocate(width * height * 3); // 3 bytes (R, G, B) per pixel
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int index = (y * width + x) * 3;
        int color = bitmap.getPixel(x, y);
        float alpha = (float) Color.alpha(color) / 255; // alpha as a 0..1 ratio
        b.put(index, (byte) Math.round(alpha * Color.red(color)));
        b.put(index + 1, (byte) Math.round(alpha * Color.green(color)));
        b.put(index + 2, (byte) Math.round(alpha * Color.blue(color)));
    }
}
byte[] pixelArray = b.array();
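To read a pixel back out of pixelArray later, the same indexing applies; for a given x and y (a small sketch, where & 0xFF undoes Java's signed bytes):
int i = (y * width + x) * 3;
int red   = pixelArray[i] & 0xFF;
int green = pixelArray[i + 1] & 0xFF;
int blue  = pixelArray[i + 2] & 0xFF;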
So after hours of searching I am ready to pull my hair out on this one.
I am doing some research in computer vision and am working with grayscale images. I need to end up with an "image" (a two-dimensional double array) of Sobel-filtered double values. My Sobel converter is set up to take a two-dimensional int array (int[][]) and go from there.
I am reading in a BufferedImage and I gather the grayscale int values via a method that I am 99% sure works perfectly (I can present it if need be).
Next I am attempting to convert this matrix of int values to a BufferedImage with the method below:
private BufferedImage getBIFromIntArr(int[][] matrix) {
    BufferedImage img = new BufferedImage(matrix.length * 4, matrix[0].length, BufferedImage.TYPE_INT_ARGB);

    // Gather the pixels in the form of [alpha, r, g, b];
    // multiply the size of the array by 4 for the model
    int[] pixels = new int[matrix.length * 4 * matrix[0].length];
    int index = 0;
    for (int i = 0; i < matrix.length; i++) {
        for (int j = 0; j < matrix[0].length; j++) {
            int pixel = matrix[i][j];
            pixels[index] = pixel;
            index++;
            for (int k = 0; k < 3; k++) {
                pixels[index] = 0;
                index++;
            }
        }
    }

    // Get the raster
    WritableRaster raster = img.getRaster();

    // Output the number of pixels and a sample of the array
    System.out.println(pixels.length);
    for (int i = 0; i < pixels.length; i++) {
        System.out.print(pixels[i] + " ");
    }

    // Set the pixels of the raster
    raster.setPixels(0, 0, matrix.length, matrix[0].length, pixels);

    // Paint the image via an external routine to check that it works (it does not)
    p.panel.setNewImage(img);
    return img;
}
Here is my understanding: the ARGB type consists of 4 values, Alpha, Red, Green, and Blue. I am guessing that setting the alpha values in the new BufferedImage to the grayscale int values (the matrix values passed in) will reproduce the image. Please correct me if I am wrong. So, as you can see, I create an array of pixels that stores the int values like this, [intValue, 0, 0, 0], repeatedly, to try to stay with the 4-value model.
Then I create a writable raster and set the gathered pixels in it. The only thing is that I get nothing in the BufferedImage. There are no errors with the code above, and I'm sure my indices are correct.
What am I doing wrong? I'm sure it is obvious, but any help is appreciated because I can't see it. Perhaps my assumption about the model is wrong?
Thanks,
Chronic
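For reference, a minimal sketch (not the asker's code) of passing band-interleaved samples to Raster.setPixels for a TYPE_INT_ARGB image, assuming matrix is indexed [row][column]; for this type the band order is R, G, B, A, and alpha must be 255 for the pixels to be visible:
int height = matrix.length;
int width = matrix[0].length;
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int[] samples = new int[width * height * 4]; // 4 samples (R, G, B, A) per pixel
int idx = 0;
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int v = matrix[row][col]; // grayscale value 0-255
        samples[idx++] = v;       // red
        samples[idx++] = v;       // green
        samples[idx++] = v;       // blue
        samples[idx++] = 255;     // alpha (opaque)
    }
}
img.getRaster().setPixels(0, 0, width, height, samples);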
The R, G, and B values are stored as ints in the range 0-255.
I already have the RGB values of every pixel of the picture, and I want to display the picture based on the R, G, B values I already know.
BufferedImage imgnew = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        // I can access the RGB of every pixel via R[x][y], G[x][y], B[x][y]
        // How do I calculate the packed rgb value for each pixel?
        imgnew.setRGB(x, y, rgb);
    }
}
JFrame frame = new JFrame();
JLabel labelnew = new JLabel(new ImageIcon(imgnew));
frame.getContentPane().add(labelnew, BorderLayout.CENTER);
frame.pack();
frame.setVisible(true);
My question is how to calculate the right packed pixel value for every pixel. Since the RGB values are stored as ints, should I convert them to bytes? If so, how do I do that; if not, is there another way to calculate the pixel value?
I know someone uses
int rgb = 0xff000000 | ((R[x][y] & 0xff) << 16) | ((G[x][y] & 0xff) << 8) | (B[x][y] & 0xff); // the way I calculate the pixel is wrong, which leads to the wrong colors in the picture
to calculate rgb, but there R[x][y], G and B are stored as bytes.
The BufferedImage class returns one pixel, or a list of pixels, from its getRGB() method. Note that you don't get it as a two-dimensional array like int[width][height]; for example, if you request the pixels from 0,0 to 10,20, you get a 200-element int[] array.
Then you need to break each int value up into the 4 bytes that represent (a, r, g, b) for that pixel, which you can do with the ByteBuffer class.
Here is a simple example:
int imgWidth = 1920, imgHeight = 1080;
int[] row = new int[imgWidth]; // for storing one line of pixels
for (int i = 0; i < imgHeight; i++) {
    row = img.getRGB(0, i, imgWidth, 1, null, 0, imgWidth); // get the pixels of the current row
    for (int k = 0; k < row.length; k++) {
        byte[] argb = ByteBuffer.allocate(4).putInt(row[k]).array(); // break the int (color) up into 4 bytes (a, r, g, b)
        // do some business with the pixel...
    }
    // setting the processed pixels
    ////////////////////////////////////////// UPDATED!
    // Prepare each pixel using the ByteBuffer class: build an int (pixel) from a 4-byte array
    int rgb = ByteBuffer.wrap(new byte[]{(byte) 0xff, (byte) (R[x][y] & 0xff), (byte) (G[x][y] & 0xff), (byte) (B[x][y] & 0xff)}).getInt();
    imgnew.setRGB(x, y, rgb); // it is better to buffer some pixels and then set them on the image, instead of setting them one by one
    //////////////////////////////////////////
    // img.setRGB(0, i, imgWidth, 1, row, 0, imgWidth)
}
Also check this example.
I want to extract the pixel values of a JPEG image using Java, and I need to store them in an array (bufferedArray) for further manipulation. How can I extract the pixel values from a JPEG image?
Have a look at BufferedImage.getRGB().
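For instance, a minimal sketch that grabs all pixels as packed ARGB ints in one call (the file name is a placeholder):
BufferedImage image = ImageIO.read(new File("photo.jpg")); // hypothetical file name
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = image.getRGB(0, 0, width, height, null, 0, width); // one packed ARGB int per pixel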
Here is a stripped-down instructional example of how to pull apart an image to do a conditional check/modify on the pixels. Add error/exception handling as necessary.
public static BufferedImage exampleForSO(BufferedImage image) {
    BufferedImage imageIn = image;
    BufferedImage imageOut =
        new BufferedImage(imageIn.getWidth(), imageIn.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    int width = imageIn.getWidth();
    int height = imageIn.getHeight();
    int[] imageInPixels = imageIn.getRGB(0, 0, width, height, null, 0, width);
    int[] imageOutPixels = new int[imageInPixels.length];
    for (int i = 0; i < imageInPixels.length; i++) {
        int inR = (imageInPixels[i] & 0x00FF0000) >> 16;
        int inG = (imageInPixels[i] & 0x0000FF00) >> 8;
        int inB = (imageInPixels[i] & 0x000000FF);
        if (conditionChecker_inRinGinB) {
            // modify, e.g. imageOutPixels[i] = <new packed ARGB value>;
        } else {
            imageOutPixels[i] = imageInPixels[i]; // don't modify; copy through unchanged
        }
    }
    imageOut.setRGB(0, 0, width, height, imageOutPixels, 0, width);
    return imageOut;
}
The easiest way to get a JPEG into a java-readable object is the following:
BufferedImage image = ImageIO.read(new File("MyJPEG.jpg"));
BufferedImage provides methods for getting RGB values at exact pixel locations in the image (X-Y integer coordinates), so it'd be up to you to figure out how you want to store that in a single-dimensional array, but that's the gist of it.
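A common convention is to store pixel (x, y) at index y * width + x; for example, a small sketch:
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = new int[width * height];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        pixels[y * width + x] = image.getRGB(x, y); // packed ARGB value for this pixel
    }
}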
There is a way of taking a BufferedImage and converting it into an integer array, where each integer in the array represents the RGB value of a pixel in the image.
int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
The interesting thing is, when an element in the integer array is edited, the corresponding pixel in the image is updated as well.
To find a pixel in the array from a pair of x and y coordinates, you would use a method like this:
public void setPixel(int x, int y, int rgb) {
    pixels[y * image.getWidth() + x] = rgb;
}
Even with the multiplication and addition of coordinates, it is still faster than using the setRGB() method in the BufferedImage class.
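A matching read helper (a hypothetical counterpart using the same indexing) would look like:
public int getPixel(int x, int y) {
    return pixels[y * image.getWidth() + x]; // packed RGB value at (x, y)
}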
EDIT:
Also keep in mind that the image's type needs to be TYPE_INT_RGB, and it isn't by default. It can be converted by creating a new image of the same dimensions, of type TYPE_INT_RGB, and then using the new image's graphics object to draw the original image onto it.
public BufferedImage toIntRGB(BufferedImage image) {
    if (image.getType() == BufferedImage.TYPE_INT_RGB)
        return image;
    BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
    newImage.getGraphics().drawImage(image, 0, 0, null);
    return newImage;
}
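For example, usage could look like this (the file name is a placeholder):
BufferedImage source = ImageIO.read(new File("input.png")); // hypothetical file name
BufferedImage rgbImage = toIntRGB(source);                  // guarantees TYPE_INT_RGB
int[] pixels = ((DataBufferInt) rgbImage.getRaster().getDataBuffer()).getData();
pixels[0] = 0xFF0000; // red; writing to the array changes the top-left pixel directly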