Hey, I'm using J2ME to read an image, and I want to apply some processing to it, like darkening and lightening. I've already read the image as an input stream:
InputStream iStrm = getClass().getResourceAsStream("/earth.PNG");
ByteArrayOutputStream bStrm = new ByteArrayOutputStream();
int ch;
while ((ch = iStrm.read()) != -1) {
    bStrm.write(ch);
}
byte[] imageData = bStrm.toByteArray();
Image im = Image.createImage(imageData, 0, imageData.length);
How can I get the RGB values, or how can I add some value to the array of pixels (imageData[]) so the image becomes lighter or darker?
Is there header data included in the input stream I read that causes an error when I add values to it?
I think you should be able to do the following. (And to your last question: yes, imageData holds the encoded PNG file, headers and compressed data included, so adding values to it directly will corrupt it; decode it to pixels first, as below.)
int width = im.getWidth();
int height = im.getHeight();
int[] rgbData = new int[width * height]; // one packed ARGB int per pixel
im.getRGB(rgbData, 0, width, 0, 0, width, height);
// The data is stored in each integer as 0xAARRGGBB,
// so the high-order bits of each integer are the alpha channel.
Now, if you want to put them into three arrays, one for each channel, you could do the following:
int[][] red = new int[width][height];
int[][] green = new int[width][height];
int[][] blue = new int[width][height];
for (int i = 0; i < width; i++)
{
    for (int j = 0; j < height; j++)
    {
        int pixel = rgbData[j * width + i]; // row j, column i in row-major order
        red[i][j]   = (pixel >> 16) & 0xFF;
        green[i][j] = (pixel >> 8) & 0xFF;
        blue[i][j]  = pixel & 0xFF;
    }
}
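To actually lighten or darken the image, you can add an offset to each channel with clamping, then rebuild a drawable image. A minimal sketch, assuming MIDP 2.0 for Image.createRGBImage, with a hypothetical brightness offset in the range -255..255:
int brightness = 40; // positive lightens, negative darkens (hypothetical value)
for (int p = 0; p < rgbData.length; p++) {
    int pixel = rgbData[p];
    int a = (pixel >> 24) & 0xFF;
    int r = ((pixel >> 16) & 0xFF) + brightness;
    int g = ((pixel >> 8) & 0xFF) + brightness;
    int b = (pixel & 0xFF) + brightness;
    // clamp each channel to 0..255 so values don't wrap around
    if (r < 0) r = 0; else if (r > 255) r = 255;
    if (g < 0) g = 0; else if (g > 255) g = 255;
    if (b < 0) b = 0; else if (b > 255) b = 255;
    rgbData[p] = (a << 24) | (r << 16) | (g << 8) | b;
}
Image adjusted = Image.createRGBImage(rgbData, width, height, true);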
I want to save a video of what I am showing with OpenGL using JOGL. To do this, I am writing my frames to pictures as follows, and once I have saved all the frames I'll use ffmpeg. I know this is not the best approach, but I still haven't worked out how to accelerate it with tex2dimage and PBOs. Any help in that direction would be very useful.
Anyway, my problem is that if I run the OpenGL class directly it works, but if I call this class from another class, glReadPixels throws an error: it always returns more data than the memory I allocated for my buffer pixelsRGB. Does anyone know why?
As an example: width = 1042, height = 998. Allocated = 3,119,748; glReadPixels returned = 3,121,742.
public void display(GLAutoDrawable drawable) {
//Draw things.....
//bla bla bla
t++; //This is a time variable for the animation (it says to me the frame).
//Save frame
int width = drawable.getSurfaceWidth();
int height = drawable.getSurfaceHeight();
ByteBuffer pixelsRGB = Buffers.newDirectByteBuffer(width * height * 3);
gl.glReadPixels(0, 0, width,height, gl.GL_RGB, gl.GL_UNSIGNED_BYTE, pixelsRGB);
BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int[] pixels = new int[width * height];
int firstByte = width * height * 3;
int sourceIndex;
int targetIndex = 0;
int rowBytesNumber = width * 3;
for (int row = 0; row < height; row++) {
firstByte -= rowBytesNumber;
sourceIndex = firstByte;
for (int col = 0; col < width; col++) {
int iR = pixelsRGB.get(sourceIndex++);
int iG = pixelsRGB.get(sourceIndex++);
int iB = pixelsRGB.get(sourceIndex++);
pixels[targetIndex++] = 0xFF000000
| ((iR & 0x000000FF) << 16)
| ((iG & 0x000000FF) << 8)
| (iB & 0x000000FF);
}
}
bufferedImage.setRGB(0, 0, width, height, pixels, 0, width);
File a = new File(t+".png");
ImageIO.write(bufferedImage, "PNG", a);
}
NOTE: With pleluron's answer below it now works. The working code is:
public void display(GLAutoDrawable drawable) {
//Draw things.....
//bla bla bla
t++; //This is a time variable for the animation (it says to me the frame).
//Save frame
int width = drawable.getSurfaceWidth();
int height = drawable.getSurfaceHeight();
ByteBuffer pixelsRGB = Buffers.newDirectByteBuffer(width * height * 4);
gl.glReadPixels(0, 0, width,height, gl.GL_RGBA, gl.GL_UNSIGNED_BYTE, pixelsRGB);
BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int[] pixels = new int[width * height];
int firstByte = width * height * 4;
int sourceIndex;
int targetIndex = 0;
int rowBytesNumber = width * 4;
for (int row = 0; row < height; row++) {
firstByte -= rowBytesNumber;
sourceIndex = firstByte;
for (int col = 0; col < width; col++) {
int iR = pixelsRGB.get(sourceIndex++);
int iG = pixelsRGB.get(sourceIndex++);
int iB = pixelsRGB.get(sourceIndex++);
sourceIndex++;
pixels[targetIndex++] = 0xFF000000
| ((iR & 0x000000FF) << 16)
| ((iG & 0x000000FF) << 8)
| (iB & 0x000000FF);
}
}
bufferedImage.setRGB(0, 0, width, height, pixels, 0, width);
File a = new File(t+".png");
ImageIO.write(bufferedImage, "PNG", a);
}
The default value of GL_PACK_ALIGNMENT, set with glPixelStorei, is 4. It means that each row of pixelsRGB should start at an address that is a multiple of 4, and the width of your buffer (1042) times the number of bytes in a pixel (3) isn't a multiple of 4. Adding a little padding so that each row starts at a multiple of 4 makes the total byte size of the returned data larger than what you expected.
To fix it, set GL_PACK_ALIGNMENT to 1. You could also read the pixels with GL_RGBA and use a larger buffer, since the data is most likely stored that way both on the GPU and in BufferedImage.
Edit: BufferedImage doesn't have a convenient 'setRGBA', too bad.
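For the GL_PACK_ALIGNMENT route, a minimal sketch (assuming the same gl object as in the question) is to set the pack alignment once before reading:
// Tell GL that rows in the read-back buffer are tightly packed (no 4-byte padding)
gl.glPixelStorei(gl.GL_PACK_ALIGNMENT, 1);
gl.glReadPixels(0, 0, width, height, gl.GL_RGB, gl.GL_UNSIGNED_BYTE, pixelsRGB);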
I've been working on an extension of a current application to stream webcam data to an Android device. I can obtain the raw image data as an RGB byte array; the color space is sRGB. I need to send that array over the network to an Android client, which reconstructs it into a Bitmap to display on screen. My problem is that the color data is skewed. The arrays have the same hash code before and after being sent, so I'm positive this isn't a data-loss problem. I've attached a sample image of how the color looks: skin tones and darker colors reconstruct okay, but lighter colors end up with a lot of yellow/red artifacts.
Server (Windows 10) code:
while(socket.isConnected()) {
byte[] bufferArray = new byte[width * height * 3];
ByteBuffer buff = cam.getImageBytes();
for(int i = 0; i < bufferArray.length; i++) {
bufferArray[i] = buff.get();
}
out.write(bufferArray);
out.flush();
}
Client (Android) code:
while(socket.isConnected()) {
int[] colors = new int[width * height];
byte[] pixels = new byte[(width * height) * 3];
int bytesRead = 0;
for(int i = 0; i < (width * height * 3); i++) {
int temp = in.read();
if(temp == -1) {
Log.d("WARNING", "Problem reading");
break;
}
else {
pixels[i] = (byte) temp;
bytesRead++;
}
}
int colorIndex = 0;
for(int i = 0; i < pixels.length; i += 3 ) {
int r = pixels[i];
int g = pixels[i + 1];
int b = pixels[i + 2];
colors[colorIndex] = Color.rgb( r, g, b);
colorIndex++;
}
Bitmap image = Bitmap.createBitmap(colors, width, height, Bitmap.Config.ARGB_8888);
publishProgress(image);
}
The cam.getImageBytes() call is from an external library, but I have tested it and it works properly. Reconstructing the raw data into a BufferedImage works perfectly, using this code:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
image.getRaster().setPixels(0,0,width,height, pixels);
But, of course, BufferedImages are not supported on Android.
I'm about to tear my hair out with this one, I've tried everything I can think of, so any and all insight would be extremely helpful!
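One thing worth checking (my own suggestion, not something confirmed in the thread): Java bytes are signed, so in the client's reconstruction loop any channel value above 127 comes out of pixels[i] as a negative int, and Color.rgb then shifts those negative values into the wrong bit positions. That would skew exactly the lighter colors. A minimal sketch of the masked version of that loop:
int colorIndex = 0;
for (int i = 0; i < pixels.length; i += 3) {
    // mask off the sign extension so each channel is back in 0..255
    int r = pixels[i] & 0xFF;
    int g = pixels[i + 1] & 0xFF;
    int b = pixels[i + 2] & 0xFF;
    colors[colorIndex++] = Color.rgb(r, g, b);
}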
I'm trying to implement a DCT on an image. So far I have only been able to read an image, grayscale it, turn it into a byte array, and output the byte array as an image; this works fine. However, in order to work on the image I need to convert the byte array to an int array and then back again, and this is where the problem comes in.
This first part reads the image and converts the image to grayscale.
BufferedImage image = ImageIO.read(new File("C:\\Users\\A00226084\\Desktop\\image.jpg"));
int width = image.getWidth();
int height = image.getHeight();
for(int i=0; i<height; i++){
for(int j=0; j<width; j++){
Color c = new Color(image.getRGB(j, i));
int red = (int)(c.getRed() * 0.299);
int green = (int)(c.getGreen() * 0.587);
int blue = (int)(c.getBlue() *0.114);
Color newColor = new Color(red+green+blue,
red+green+blue,red+green+blue);
image.setRGB(j,i,newColor.getRGB());
}
}
This part converts the image to a byte array.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(image, "jpg", baos);
byte[] pixels = baos.toByteArray();
This part then converts the byte array to an int array. I have done the wrapping manually because nothing else worked.
for(int i = 0; i < pixels.length; i ++){
if(pixels[i] < 0){
byteConverted[i] = 127 + (128 - (pixels[i] * -1));
}else{
byteConverted[i] = pixels[i];
}
}
This part converts the int array back to a byte array. Somewhere between the byte-to-int and int-to-byte conversions everything goes wrong. I have written both the before and after byte arrays to a file and they are identical, so I don't know why I get 'image == null' instead of the image.
for(int i = 0; i < pixels.length; i ++){
if(byteConverted[i] > 127){
pixels[i] = (byte) ( -128 + (byteConverted[i] - 127));
}else{
pixels[i] = (byte) byteConverted[i];
}
}
write2("final byte array.txt", pixels);
ByteArrayInputStream bais = new ByteArrayInputStream(pixels);
try {
BufferedImage img = ImageIO.read(bais);
System.out.println("Image Out");
System.out.println(pixels.length);
ImageIO.write(img, "jpg", new File("C:\\Users\\A00226084\\Desktop\\newImage.jpg"));
} catch (IOException e) {
e.printStackTrace();
}
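Two observations that may help (hedged, since I can't run the original code). First, the byte array produced by ImageIO.write holds the JPEG-encoded file, headers plus compressed data, not raw pixel samples, so a DCT over it would not be operating on the image plane; pixel values are better fetched via getRGB or the Raster. Second, the manual wrapping above is almost the standard unsigned-byte idiom, but it is off by one ((byte) -1 maps to 254 rather than 255); the usual form is:
// Signed byte -> unsigned int: mask off the sign extension
for (int i = 0; i < pixels.length; i++) {
    byteConverted[i] = pixels[i] & 0xFF; // e.g. (byte) -1 -> 255
}
// Unsigned int -> signed byte: a plain cast
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = (byte) byteConverted[i]; // 255 -> (byte) -1
}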
I want to copy a gray image using BufferedImage, from getRGB() to an int[][] and then back via setRGB(). The problem is that the output image's file size differs from the original's: the original image is 176 KB, whereas the output image is 154 KB. When you look at the two images, anyone would say they are the same, but in terms of the binary bits there is a difference, and I would like to know what it is.
Maybe some of you will say it doesn't matter as long as the image looks the same. In fact, in the noise project I'm working on, this is a huge problem, and I suspect it is the reason for my trouble.
I just want to know if there is a method other than BufferedImage to produce the int[][] and then create the output.
This is the code that I'm using:
public int[][] Read_Image(BufferedImage image)
{
    width = image.getWidth();
    height = image.getHeight();
    int[][] result = new int[height][width];
    for (int row = 0; row < height; row++)
        for (int col = 0; col < width; col++)
            result[row][col] = image.getRGB(col, row); // getRGB takes (x, y) = (col, row)
    return result;
}
public BufferedImage Create_Gray_Image(int[][] pixels)
{
    BufferedImage Ima = new BufferedImage(512, 512, BufferedImage.TYPE_BYTE_GRAY);
    for (int x = 0; x < 512; x++)
    {
        for (int y = 0; y < 512; y++)
        {
            int rgb = pixels[y][x]; // pixels is indexed [row][col]
            int r = (rgb >> 16) & 0xFF;
            int g = (rgb >> 8) & 0xFF;
            int b = rgb & 0xFF;
            int grayLevel = (r + g + b) / 3;
            int gray = (grayLevel << 16) + (grayLevel << 8) + grayLevel;
            Ima.setRGB(x, y, gray); // write the computed gray, not the raw pixel
        }
    }
    return Ima;
}
public void Write_Image(int [][] pixels) throws IOException
{
File outputfile;
outputfile = new File("Y0111.png");
BufferedImage BI = this.Create_Gray_Image(pixels);
ImageIO.write(BI, "png", outputfile);
System.out.println("We finished writing the file");
}
See the figure: file size = 176 KB (the original image) vs. file size = 154 KB (the output image).
The difference in size is not a problem; it's almost certainly due to different compression/encoding.
A BufferedImage is in fact backed by a 1D array of size width * height * channels. getRGB is not the easiest/fastest way to manipulate a BufferedImage. You can use the Raster instead (faster than getRGB, not the fastest option, but it takes care of the encoding for you). For a gray-level image:
int[][] myarray = new int[myimage.getHeight()][myimage.getWidth()];
for (int y = 0; y < myimage.getHeight(); y++)
    for (int x = 0; x < myimage.getWidth(); x++)
        myarray[y][x] = myimage.getRaster().getSample(x, y, 0);
The opposite way:
for (int y = 0; y < myimage.getHeight(); y++)
    for (int x = 0; x < myimage.getWidth(); x++)
        myimage.getRaster().setSample(x, y, 0, myarray[y][x]);
The fastest way to do it is to use the DataBuffer, but then you have to handle the image encoding.
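For completeness, a minimal sketch of the DataBuffer approach, assuming a TYPE_BYTE_GRAY image (one byte per pixel, row-major); for other image types the buffer layout differs:
// Grab the raw byte samples backing the image (java.awt.image.DataBufferByte)
byte[] data = ((DataBufferByte) myimage.getRaster().getDataBuffer()).getData();
int w = myimage.getWidth();
int[][] myarray = new int[myimage.getHeight()][w];
for (int y = 0; y < myimage.getHeight(); y++)
    for (int x = 0; x < w; x++)
        myarray[y][x] = data[y * w + x] & 0xFF; // mask the sign bit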
I'm new to image processing in Java. I'm trying to compare two images with the code below and getting the message following the code. Any help is greatly appreciated. Thanks.
BufferedImage imgOrig = ImageIO.read(new URL(imgOrigUrl));
BufferedImage imgComp = ImageIO.read(new URL(imgCompUrl));
byte[] pixelsOrig = ((DataBufferByte) imgOrig.getRaster().getDataBuffer()).getData();
byte[] pixelsComp = ((DataBufferByte) imgComp.getRaster().getDataBuffer()).getData();
//System.out.println("Number of pixels orig:"+pixelsOrig.length);
//System.out.println("Number of pixels comp:"+pixelsComp.length);
ColorModel cmImgOrig = imgOrig.getColorModel();
ColorModel cmImgComp = imgComp.getColorModel();
int sum1 = 0;
int sum2 = 0;
for(int i:pixelsOrig){
System.out.println(cmImgOrig.getGreen(i)); //ERROR OCCURS HERE
//System.out.println(i);
}
ERROR:
Testcase: testCompareImages(com.myapp.img.compare.service.CompareServiceTest): Caused an ERROR
More than one component per pixel
java.lang.IllegalArgumentException: More than one component per pixel
at java.awt.image.ComponentColorModel.getRGBComponent(ComponentColorModel.java:594)
at java.awt.image.ComponentColorModel.getGreen(ComponentColorModel.java:675)
at com.scottmacri.img.compare.service.CompareService.compareImages(CompareService.java:42)
at com.scottmacri.img.compare.service.CompareServiceTest.testCompareImages(CompareServiceTest.java:45)
Like @Nathan Villaescusa said, the method you are using expects a single channel. Do you need the byte array or just the color channels? If you only need the color components you can do the following:
BufferedImage imgOrig = ImageIO.read(new URL(imgOrigUrl));
BufferedImage imgComp = ImageIO.read(new URL(imgCompUrl));
for (int y = 0; y < imgOrig.getHeight(); y++)
{
for (int x = 0; x < imgOrig.getWidth(); x++)
{
System.out.println(imgOrig.getRGB(x, y) >> 8 & 0xff);
}
}
where the int returned by getRGB(x, y) can be shifted to get the RGB and alpha components like so:
int a = rgb >> 24 & 0xff;
int r = rgb >> 16 & 0xff;
int g = rgb >> 8 & 0xff;
int b = rgb & 0xff;
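From there, a per-pixel comparison is just accumulating differences. A hedged sketch, assuming both images have the same dimensions:
// Sum of absolute per-channel differences (0 means the images are identical)
long diff = 0;
for (int y = 0; y < imgOrig.getHeight(); y++) {
    for (int x = 0; x < imgOrig.getWidth(); x++) {
        int p1 = imgOrig.getRGB(x, y);
        int p2 = imgComp.getRGB(x, y);
        diff += Math.abs((p1 >> 16 & 0xff) - (p2 >> 16 & 0xff))  // red
              + Math.abs((p1 >> 8 & 0xff) - (p2 >> 8 & 0xff))    // green
              + Math.abs((p1 & 0xff) - (p2 & 0xff));             // blue
    }
}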
It looks like that error is being thrown because your ColorSpace has more than one component, yet you are only passing in a single value to check.
You want to use the getGreen() method of ComponentColorModel that accepts an Object, not the one that accepts an int. I think the one that accepts an int is for use with grayscale.
According to this answer, here is how to get pixel data using this method:
Raster r = imgOrig.getData();
SampleModel sm = r.getSampleModel();
int width = sm.getWidth();
int height = sm.getHeight();
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
Object pixel = sm.getPixel(x, y, (int[]) null, r.getDataBuffer());
System.out.println(cmImgOrig.getGreen(pixel));
}
}