Array RGB manipulation - Java

Let's say I have an int[][] arrayA and an int[][] arrayB. At any given coordinate in these arrays lies an RGB value. What I want to do is merge the RGB values from arrayA and arrayB into a new array, newArray, using a weighted average method.
So what I'm doing is extracting the red, green, and blue values from each RGB value like this:
int curColA = RGB; //suppose RGB is just the RGB value at any given point
int curRedA = (curColA >> 16) & 0xFF;
int curGreenA = (curColA >> 8) & 0xFF;
int curBlueA = curColA & 0xFF;
I do the same for arrayB, and now I want to merge them. Here's where I'm having trouble. Do I just do newRed=(curRedA+curRedB)/2 or is there some other way to do this?
arrayA values: { { 0, 0x44, 0x5500, 0x660000 } };
arrayB values: { { 2, 4, 6, 8 } };
newArray expected values: { { 0, 0x44, 6, 0x660000 } };

A weighted average is usually done something like:
newRed = (curRedA * redWeight + curRedB * (1 - redWeight));
...where redWeight is in the range [0, 1], and represents the weight (or bias) towards red values in the 'A' array. It also represents the inverse of the bias toward the red values in the 'B' array.
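A minimal sketch of this blend applied to whole packed 0xRRGGBB ints, using a single weight for all three channels (the class and method names are illustrative, not from the question):

```java
public class RgbBlend {
    // Hypothetical helper: blend two packed 0xRRGGBB ints channel by channel.
    // weight is the bias toward rgbA, in the range [0, 1].
    public static int blend(int rgbA, int rgbB, double weight) {
        int r = (int) (((rgbA >> 16) & 0xFF) * weight + ((rgbB >> 16) & 0xFF) * (1 - weight));
        int g = (int) (((rgbA >> 8) & 0xFF) * weight + ((rgbB >> 8) & 0xFF) * (1 - weight));
        int b = (int) ((rgbA & 0xFF) * weight + (rgbB & 0xFF) * (1 - weight));
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // With weight 0.5 this reduces to the plain average (curRedA + curRedB) / 2.
        System.out.println(Integer.toHexString(blend(0x405060, 0x201000, 0.5)));
    }
}
```

With weight 0.5 this is exactly the `(curRedA + curRedB) / 2` form asked about; other weights bias the result toward one array or the other.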

Related

Change bufferedImage pixels in java

I have a small problem.
I have a BufferedImage whose pixels have an RGB value.
This code is used to create an RGB value from the R, G, B, A values:
private int getRGB(int r, int g, int b, int a) {
    return (a << 24) | (r << 16) | (g << 8) | b;
}
The first question is, how can I get the R, G, B, A values from a pixel, because the pixel's RGB will return the number that is generated by the code above.
And if I have those values, can I put 2 pixels on top of each other so it calculates the new values, or do I have to do that manually?
By automatic calculation I mean something like Java 2D: when you put 2 transparent things on top of each other, it merges the colors.
Thank you very much.
The first question is, how can I get the R, G, B, A values from a pixel, because the pixel's RGB will return the number that is generated by the code above.
You can reverse the process as follows:
int a = (argb >> 24) & 0xFF;
int r = (argb >> 16) & 0xFF;
int g = (argb >> 8) & 0xFF;
int b = (argb >> 0) & 0xFF;
And if I have those values, can I put 2 pixels on top of each other so it calculates the new values, or do I have to do that manually? By automatic calculation I mean something like Java 2D: when you put 2 transparent things on top of each other, it merges the colors.
You will need to do this manually (or by invoking a utility method per pixel), since you need to decide how to composite the colors; e.g., you could use the maximum, sum, or mean pixel value.
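For example, a per-channel mean is one way to composite two ARGB pixels manually (compositeMean is a made-up helper name, not a library method):

```java
public class PixelComposite {
    // Hypothetical utility: composite two ARGB pixels by taking the
    // per-channel mean. Max, min, or clamped sum would work the same way.
    public static int compositeMean(int argb1, int argb2) {
        int a = (((argb1 >>> 24) & 0xFF) + ((argb2 >>> 24) & 0xFF)) / 2;
        int r = (((argb1 >> 16) & 0xFF) + ((argb2 >> 16) & 0xFF)) / 2;
        int g = (((argb1 >> 8) & 0xFF) + ((argb2 >> 8) & 0xFF)) / 2;
        int b = ((argb1 & 0xFF) + (argb2 & 0xFF)) / 2;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Opaque white averaged with transparent black -> half-opaque mid gray
        System.out.println(Integer.toHexString(compositeMean(0xFFFFFFFF, 0x00000000)));
    }
}
```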

image.getRaster().getDataBuffer() returns array of negative values

This answer suggests that it's over 10 times faster to loop over the pixel array instead of using BufferedImage.getRGB. Such a difference is too important to be ignored in my computer vision program. For that reason, I rewrote my IntegralImage method to calculate the integral image using the pixel array:
/* Generate an integral image. Every pixel in such an image contains the sum
of the colors of all the pixels before it, plus itself.
*/
public static double[][][] integralImage(BufferedImage image) {
    //Cache width and height in variables
    int w = image.getWidth();
    int h = image.getHeight();
    //Create the 2D array as large as the image is
    //Notice that I use [Y, X] coordinates to comply with the formula
    double integral_image[][][] = new double[h][w][3];
    //Variables for the image pixel array looping
    final int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    //final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    //If the image has alpha, there will be 4 elements per pixel
    final boolean hasAlpha = image.getAlphaRaster() != null;
    final int pixel_size = hasAlpha ? 4 : 3;
    //If there's alpha it's the first of 4 values, so we skip it
    final int pixel_offset = hasAlpha ? 1 : 0;
    //Coordinates, will be iterated too
    //It's faster than calculating them using % and multiplication
    int x = 0;
    int y = 0;
    int pixel = 0;
    //Tmp storage for color
    int[] color = new int[3];
    //Loop through pixel array
    for (int i = 0, l = pixels.length; i < l; i += pixel_size) {
        //Prepare all the colors in advance
        color[2] = ((int) pixels[pixel + pixel_offset] & 0xff); // blue
        color[1] = ((int) pixels[pixel + pixel_offset + 1] & 0xff); // green
        color[0] = ((int) pixels[pixel + pixel_offset + 2] & 0xff); // red
        //For every color, calculate the integrals
        for (int j = 0; j < 3; j++) {
            //Calculate the integral image field
            double A = (x > 0 && y > 0) ? integral_image[y - 1][x - 1][j] : 0;
            double B = (x > 0) ? integral_image[y][x - 1][j] : 0;
            double C = (y > 0) ? integral_image[y - 1][x][j] : 0;
            integral_image[y][x][j] = -A + B + C + color[j];
        }
        //Iterate coordinates
        x++;
        if (x >= w) {
            x = 0;
            y++;
        }
    }
    //Return the array
    return integral_image;
}
The problem is that if I use this debug output in the for loop:
if (x == 0) {
    System.out.println("rgb[" + pixels[pixel + pixel_offset + 2] + ", " + pixels[pixel + pixel_offset + 1] + ", " + pixels[pixel + pixel_offset] + "]");
    System.out.println("rgb[" + color[0] + ", " + color[1] + ", " + color[2] + "]");
}
This is what I get:
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
...
So how should I properly retrieve pixel array for BufferedImage images?
A bug in the code above, that is easily missed, is that the for loop doesn't loop as you'd expect. The for loop updates i, while the loop body uses pixel for its array indexing, and pixel is never incremented. Thus, you will only ever read the same three array elements: the samples of the very first pixel.
Apart from that:
The "problem" with the negative pixel values is most likely that the code assumes a BufferedImage that stores its pixels in "pixel interleaved" form; however, they are stored "pixel packed". That is, all samples (R, G, B and A) for one pixel are stored in a single element, an int. This will be the case for all BufferedImage.TYPE_INT_* types (while the BufferedImage.TYPE_nBYTE_* types are stored interleaved).
It's completely normal to have negative values in the raster, this will happen for any pixel that is less than 50% transparent (more than or equal to 50% opaque), because of how the 4 samples are packed into the int, and because int is a signed type in Java.
In this case, use:
int[] color = new int[3];

for (int i = 0; i < pixels.length; i++) {
    // Assuming TYPE_INT_RGB, TYPE_INT_ARGB or TYPE_INT_ARGB_PRE
    // For TYPE_INT_BGR, you need to reverse the colors.
    // You seem to ignore alpha, is that correct?
    color[0] = ((pixels[i] >> 16) & 0xff); // red
    color[1] = ((pixels[i] >> 8) & 0xff); // green
    color[2] = (pixels[i] & 0xff); // blue
    // The rest of the computations...
}
Another possibility, is that you have created a custom type image (BufferedImage.TYPE_CUSTOM) that really uses a 32 bit unsigned int per sample. This is possible, however, int is still a signed entity in Java, so you need to mask off the sign bit. To complicate this a little, in Java -1 & 0xFFFFFFFF == -1 because any computation on an int will still be an int, unless you explicitly say otherwise (doing the same on a byte or short value would have "scaled up" to int). To get a positive value, you need to use a long value like this: -1 & 0xFFFFFFFFL (which is 4294967295).
In this case, use:
long[] color = new long[3];

for (int i = 0; i < pixels.length; i += pixel_size) {
    // Somehow assuming BGR order in input, and RGB output (color)
    // Still ignoring alpha
    color[0] = (pixels[i + pixel_offset + 2] & 0xFFFFFFFFL); // red
    color[1] = (pixels[i + pixel_offset + 1] & 0xFFFFFFFFL); // green
    color[2] = (pixels[i + pixel_offset] & 0xFFFFFFFFL); // blue
    // The rest of the computations...
}
I don't know what type of image you have, so I can't say for sure which one is the problem, but it's one of those. :-)
PS: BufferedImage.getAlphaRaster() is possibly an expensive, and also inaccurate, way to tell if the image has alpha. It's better to just use image.getColorModel().hasAlpha(). See also hasAlpha vs getAlphaRaster.

Can someone explain this code step by step?

I sort of understand what it's doing, but what is the logic behind the steps in the code provided below? It's a way of loading a texture in LWJGL. But what is happening in the for loop? Wouldn't you just multiply x and y to get the location of a pixel? Any explanation of what's going on from the for loop to the end of the code would be helpful, as the comments are very vague once they reach the for loop. I don't understand the weird symbols used when putting pixel info into the buffers.
public class TextureLoader {
    private static final int BYTES_PER_PIXEL = 4; //3 for RGB, 4 for RGBA

    public static int loadTexture(BufferedImage image) {
        int[] pixels = new int[image.getWidth() * image.getHeight()];
        image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth());

        ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * BYTES_PER_PIXEL); //4 for RGBA, 3 for RGB

        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int pixel = pixels[y * image.getWidth() + x];
                buffer.put((byte) ((pixel >> 16) & 0xFF)); // Red component
                buffer.put((byte) ((pixel >> 8) & 0xFF));  // Green component
                buffer.put((byte) (pixel & 0xFF));         // Blue component
                buffer.put((byte) ((pixel >> 24) & 0xFF)); // Alpha component. Only for RGBA
            }
        }

        buffer.flip(); //FOR THE LOVE OF GOD DO NOT FORGET THIS

        // You now have a ByteBuffer filled with the color data of each pixel.
        // Now just create a texture ID and bind it. Then you can load it using
        // whatever OpenGL method you want, for example:
        int textureID = glGenTextures(); //Generate texture ID
        glBindTexture(GL_TEXTURE_2D, textureID); //Bind texture ID

        //Setup wrap mode
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);

        //Setup texture scaling filtering
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        //Send texel data to OpenGL
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, image.getWidth(), image.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        //Return the texture ID so we can bind it later again
        return textureID;
    }

    public static BufferedImage loadImage(String loc) {
        try {
            return ImageIO.read(DefenseStep.class.getResource(loc));
        } catch (IOException e) {
            //Error Handling Here
        }
        return null;
    }
}
All it's doing is loading the colors from the image into a buffer pixel-by-pixel.
This code is doing it using the bitwise operators from Java. See this Java trail.
When you see >>, this means "shift the binary of this number to the right," and when you see num >> n, this means "shift the binary of num's value n bits to the right." For example:
System.out.println(4 >> 2); // Prints "1"
This prints 1 because 4 in binary is 0100, and when shifted right by 2 bits, you get 0001, which is 1 in decimal.
Now, that being said, colors in an image are represented using ARGB. This means that each 32-bit pixel has 8 bits dedicated to each of A, R, G, and B (alpha, red, green, and blue), so its hexadecimal form looks like this:
0xAARRGGBB
Where each letter is a hexadecimal digit.
The code you've posted is using binary logic to retrieve each group of AA, RR, etc. Each group is exactly one byte, or 8 bits, so that's where the 8, 16, and 24 come from. & does a bitwise logical AND on the two numbers, where only bit positions that are 1 in both numbers remain 1, and every other position becomes 0.
For a concrete example, let's retrieve the RR byte from purple in ARGB.
In ARGB, purple is A=255, R=127, G=0, and B=127, so our hexadecimal version of this is:
0xAARRGGBB
0xFF7F007F
Looking at the hexadecimal value, we see that RR is the third byte from the end. To get RR when the ARGB value is in a variable, let's start by putting this into int pixel:
int pixel = 0xFF7F007F;
Note the similarity to your code. Each int in the pixel matrix is an ARGB color.
Next, we'll shift the number right by 2 bytes so RR is the lowest byte, which gives us:
0x0000AARR
0x0000FF7F
This is done with this code:
int intermediate = pixel >>> 16;
The 16 here comes from the fact that we want to shift right by 2 bytes, and each byte contains 8 bits. The shift operator expects bits, so we have to give it 16 instead of 2. (The unsigned shift >>> is used here so the vacated high bits fill with zeros; the signed >> would fill them with ones for a negative pixel, but the mask applied in the next step discards those bits either way.)
Next, we want to get rid of AA, but keep RR. To do this, we use what's called a bitmask along with the & operator. Bitmasks are used to single out the particular bits of a binary number. Here, we want 0xFF. This is exactly eight 1's in binary. (Each F in hexadecimal is 1111 in binary.)
Bear with me, because this will look ugly. When we do int red = intermediate & 0xFF, what it's doing (in binary) is:
  0000 0000 0000 0000 1111 1111 0111 1111 (0x0000FF7F)
& 0000 0000 0000 0000 0000 0000 1111 1111 (0x000000FF)
= 0000 0000 0000 0000 0000 0000 0111 1111 (0x0000007F)
Remember that & means that the resulting bit is only 1 if both input bits are 1.
So we get that red = 0x7F or red = 127, which is exactly what we had above.
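Putting the shift and mask together, all four channels of this example color can be pulled out in one pass (a self-contained sketch; >>> is used so the sign bit of the negative int doesn't leak in before the mask):

```java
public class ChannelDemo {
    public static void main(String[] args) {
        int pixel = 0xFF7F007F; // purple: A=255, R=127, G=0, B=127
        // >>> (unsigned shift) fills the vacated bits with zeros; with the
        // 0xFF mask applied afterwards, >> would give the same final result.
        int a = (pixel >>> 24) & 0xFF;
        int r = (pixel >>> 16) & 0xFF;
        int g = (pixel >>> 8) & 0xFF;
        int b = pixel & 0xFF;
        System.out.println(a + " " + r + " " + g + " " + b); // 255 127 0 127
    }
}
```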
Edit:
Why is he looping through the image's pixels starting from y, then x, instead of x then y? And when he creates the variable pixel, why would he multiply y by width and add x? Shouldn't it just be x * y to get the pixel?
Let's use a simple 3x3 image to demonstrate. In a 3x3 image, you have 9 pixels, which means the pixels array has 9 elements. These elements are created by getRGB in a row-by-row order with respect to the image, so the pixel/index relationship looks like this:
0 1 2
3 4 5
6 7 8
The positions correspond to the index used to get that pixel. So to get the top-left pixel of the image, (0, 0), I use pixel[0]. To get the center pixel, (1, 1), I use pixel[4]. To get the pixel under the center pixel, (1, 2), I use pixel[7].
Notice that this produces a 1:1 mapping for image coordinate to index, like so:
Coord. -> Index
---------------
(0, 0) -> 0
(1, 0) -> 1
(2, 0) -> 2
(0, 1) -> 3
(1, 1) -> 4
(2, 1) -> 5
(0, 2) -> 6
(1, 2) -> 7
(2, 2) -> 8
The coordinates are (x, y) pairs, so we need to figure out a mathematical way to turn x and y pairs into an index.
I could get into some fun math, but I won't do that for sake of simplicity. Let's just start with your proposal, using x * y to get the index. If we do that, we get:
Coord. -> Index
-------------------
(0, 0) -> 0 * 0 = 0
(1, 0) -> 1 * 0 = 0
(2, 0) -> 2 * 0 = 0
(0, 1) -> 0 * 1 = 0
(1, 1) -> 1 * 1 = 1
(2, 1) -> 2 * 1 = 2
(0, 2) -> 0 * 2 = 0
(1, 2) -> 1 * 2 = 2
(2, 2) -> 2 * 2 = 4
This isn't the mapping we have above, so using x * y won't work. Since we can't change how getRGB orders the pixels, we need it to match the mapping above.
Let's try his solution. His equation is x + y * w, where w is the width, in this case, 3:
Coord. -> Index
-----------------------
(0, 0) -> 0 + 0 * 3 = 0
(1, 0) -> 1 + 0 * 3 = 1
(2, 0) -> 2 + 0 * 3 = 2
(0, 1) -> 0 + 1 * 3 = 3
(1, 1) -> 1 + 1 * 3 = 4
(2, 1) -> 2 + 1 * 3 = 5
(0, 2) -> 0 + 2 * 3 = 6
(1, 2) -> 1 + 2 * 3 = 7
(2, 2) -> 2 + 2 * 3 = 8
See how the mappings line up to those above? This is what we wanted. Basically what y * w is doing here is skipping the first y * w pixels in the array, which is exactly the same as skipping y rows of pixels. Then by iterating through x, we're iterating through each pixel of the current row.
In case it's not clear from the explanation above, we iterate over y then x because the pixels are added row-by-row to the array in horizontal (x) order, so the inner loop should iterate over the x value so that we aren't jumping around. If we used the same y * w + x, then iterating over x then y would cause the iteration to go 0, 3, 6, 1, 4, 7, 2, 5, 8, which is undesirable since we need to add the colors to the byte buffer in the same order as the pixel array.
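The row-major mapping and loop order above can be sketched as a tiny helper (the names here are illustrative):

```java
public class IndexDemo {
    // Row-major mapping used by getRGB: index = y * width + x
    public static int index(int x, int y, int width) {
        return y * width + x;
    }

    public static void main(String[] args) {
        int w = 3;
        // Iterating y outer, x inner visits indices 0..8 in order:
        for (int y = 0; y < 3; y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < w; x++) {
                row.append(index(x, y, w)).append(' ');
            }
            System.out.println(row.toString().trim());
        }
    }
}
```

Running this prints the same 3x3 grid of indices shown above, one row per line.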
Each pixel is represented by a 32-bit integer. The leftmost eight bits of that integer are its alpha component, followed by red, followed by green, followed by blue.
(pixel >> 16) & 0xFF shifts the integer sixteen bits to the right, so the rightmost eight bits in it are now the red component. It then uses a bit mask to set all other bits to zero, so you're left with just the red component. The same logic applies for the other components.
Further reading.
The weird symbols you are referring to are the shift operators and the bitwise AND operator.
num >> n shifts num right by n bits
& 0xFF keeps only the lowest 8 bits of a given value
So in short: the for loop decomposes the pixel variable into 4 different 8-bit parts: the highest 8 bits are the alpha, the next 8 the red, the next 8 the green, and the last 8 the blue.
So this is the map of the 32bits:
AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB
Where:
A: alpha
R: red
G: green
B: blue
Well, each component of RGBA (red, green, blue, alpha) has 256 = 2^8 possible values (one byte each). Concatenating the components yields a 32-bit value, which the for loop loads into the buffer byte by byte.

Getting RGB value out of negative int value

I had to convert from RGB to HSB so as to perform histogram equalization on an image. I have converted it back into RGB and I am getting a negative value like -158435. Can anyone please help me understand how to convert this into a colour so I can set it to my pixel? Thanks
Simply make use of the bit-shifting. It works.
int rgb = 0x00F15D49;
int r = (rgb >>> 16) & 0xFF;
int g = (rgb >>> 8) & 0xFF;
int b = rgb & 0xFF;
Then use this method
Color.RGBtoHSB(int r, int g, int b, float[] hsbvals); like this:
float[] hsb = Color.RGBtoHSB(r, g, b, null);
To convert it back, simply use the other method (edited, you were right):
int rgb = Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]);
System.out.println(Integer.toHexString(rgb));
The negative value appears because you are storing colors as ARGB (Alpha Red Green Blue).
The alpha channel is then often just 100% opaque, which is 255 = 0xFF.
Therefore, when the color is converted to an 32 bit int, it appears negative.
Example: Opaque Black = ARGB(255, 0, 0, 0) = 0xFF000000 = -16777216
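This can be verified directly: the negative decimal value, the hex representation, and the recovered alpha channel all refer to the same packed int:

```java
public class ArgbSign {
    public static void main(String[] args) {
        int opaqueBlack = 0xFF000000;
        System.out.println(opaqueBlack);                      // -16777216
        System.out.println(Integer.toHexString(opaqueBlack)); // ff000000
        // The 0xFF mask still recovers each channel correctly:
        System.out.println((opaqueBlack >>> 24) & 0xFF);      // 255
    }
}
```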

Convert RGB values to Integer

So in a BufferedImage, you receive a single integer that has the RGB values represented in it. So far I use the following to get the RGB values from it:
// rgbs is an array of integers, every single integer represents the
// RGB values combined in some way
int r = (int) ((Math.pow(256,3) + rgbs[k]) / 65536);
int g = (int) (((Math.pow(256,3) + rgbs[k]) / 256 ) % 256 );
int b = (int) ((Math.pow(256,3) + rgbs[k]) % 256);
And so far, it works.
What I need to do is figure out how to get an integer so I can use BufferedImage.setRGB(), because that takes the same type of data it gave me.
I think the code is something like:
int rgb = red;
rgb = (rgb << 8) + green;
rgb = (rgb << 8) + blue;
Also, I believe you can get the individual values using:
int red = (rgb >> 16) & 0xFF;
int green = (rgb >> 8) & 0xFF;
int blue = rgb & 0xFF;
int rgb = ((r&0x0ff)<<16)|((g&0x0ff)<<8)|(b&0x0ff);
If you know that your r, g, and b values are never > 255 or < 0, you don't need the & 0x0ff.
Additionally:
int red = (rgb >> 16) & 0x0ff;
int green = (rgb >> 8) & 0x0ff;
int blue = rgb & 0x0ff;
No need for multiplying.
If r, g, b are 3 integer values from 0 to 255 for each color, then:
rgb = 65536 * r + 256 * g + b;
The single rgb value is the composite of r, g, and b combined, for a total of 16777216 possible shades.
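Since 65536 = 2^16 and 256 = 2^8, the multiplications are exactly equivalent to the bit shifts used in the other answers:

```java
public class PackDemo {
    public static void main(String[] args) {
        int r = 18, g = 52, b = 86; // 0x12, 0x34, 0x56
        int byMath = 65536 * r + 256 * g + b;       // arithmetic packing
        int byShift = (r << 16) | (g << 8) | b;     // bit-shift packing
        System.out.println(byMath == byShift);           // true
        System.out.println(Integer.toHexString(byMath)); // 123456
    }
}
```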
int rgb = new Color(r, g, b).getRGB();
To get individual colour values you can use Color like the following for pixel (x, y):
import java.awt.Color;
import java.awt.image.BufferedImage;
Color c = new Color(buffOriginalImage.getRGB(x,y));
int red = c.getRed();
int green = c.getGreen();
int blue = c.getBlue();
The above will give you the integer values of Red, Green and Blue in range of 0 to 255.
To set the values from RGB you can do so by:
Color myColour = new Color(red, green, blue);
int rgb = myColour.getRGB();
//Change the pixel at (x,y) to the rgb value
image.setRGB(x, y, rgb);
Please be advised that the above changes the value of a single pixel. So if you need to change the entire image, you may need to iterate over it using two for loops.
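As a sketch of such a whole-image pass, here is a hypothetical example that inverts every pixel (InvertDemo and invert are illustrative names, not from the answer):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class InvertDemo {
    // Illustrative whole-image pass: invert every pixel with two loops.
    public static void invert(BufferedImage image) {
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                Color c = new Color(image.getRGB(x, y));
                Color inverted = new Color(255 - c.getRed(), 255 - c.getGreen(), 255 - c.getBlue());
                image.setRGB(x, y, inverted.getRGB());
            }
        }
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, new Color(10, 20, 30).getRGB());
        invert(img);
        Color c = new Color(img.getRGB(0, 0));
        System.out.println(c.getRed() + " " + c.getGreen() + " " + c.getBlue()); // 245 235 225
    }
}
```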
