What does this Java bitshift do? - java

I found some bitshift code in the Java implementation of the Hough Transform on Rosetta Code. I understand in general what the code does, except for this part:
rgbValue = (int)(((rgbValue & 0xFF0000) >>> 16) * 0.30 + ((rgbValue & 0xFF00) >>> 8) * 0.59 + (rgbValue & 0xFF) * 0.11);
I think it takes the average of the three colour channels; that is at least what it looks like when I output the result. But how does this work? What are these magic numbers?
The method in which this line is used, pasted for convenience:
public static ArrayData getArrayDataFromImage(String filename) throws IOException
{
    BufferedImage inputImage = ImageIO.read(new File(filename));
    int width = inputImage.getWidth();
    int height = inputImage.getHeight();
    int[] rgbData = inputImage.getRGB(0, 0, width, height, null, 0, width);
    ArrayData arrayData = new ArrayData(width, height);
    // Flip y axis when reading image
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int rgbValue = rgbData[y * width + x];
            // What does this do?
            rgbValue = (int)(((rgbValue & 0xFF0000) >>> 16) * 0.30 + ((rgbValue & 0xFF00) >>> 8) * 0.59 + (rgbValue & 0xFF) * 0.11);
            arrayData.set(x, height - 1 - y, rgbValue);
        }
    }
    return arrayData;
}

This is the trick that converts a 24-bit RGB value to a grayscale value using the coefficients 0.30, 0.59, and 0.11 (note that these values add up to 1). They are the classic luma weights: green contributes the most to perceived brightness and blue the least.
The operation (rgbValue & 0xFF0000) >>> 16 cuts out bits 16..23 (the red channel) and shifts them right to positions 0..7, producing a value between 0 and 255, inclusive. Similarly, (rgbValue & 0xFF00) >>> 8 cuts out bits 8..15 (the green channel) and shifts them to positions 0..7, and rgbValue & 0xFF keeps bits 0..7 (the blue channel) as they are.
This Q&A talks about the coefficients, and discusses other alternatives.
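For a concrete illustration (this example and its pixel value are mine, not part of the original answer), here is how the masking and shifting play out for one hypothetical pixel:

int rgbValue = 0x64C832;                  // red = 100, green = 200, blue = 50
int red   = (rgbValue & 0xFF0000) >>> 16; // 0x64 = 100
int green = (rgbValue & 0xFF00) >>> 8;    // 0xC8 = 200
int blue  = rgbValue & 0xFF;              // 0x32 = 50
// Weighted sum: 100 * 0.30 + 200 * 0.59 + 50 * 0.11 = 153.5
int gray = (int) (red * 0.30 + green * 0.59 + blue * 0.11); // truncates to 153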

Related

How many iterations of the Mandelbrot set for an accurate picture at a certain zoom?

I have implemented the Mandelbrot set algorithm in Java, which I am using to make an animation of zooming into the set. My problem is that the algorithm performs very slowly, since I have set the maximum number of iterations high (1000) so that clarity is preserved when zooming in closely. However, on a more zoomed-out picture, only around 100 iterations are required for an accurate picture.
My question is: is there some function f(x) such that, for screen width x, clarity will be acceptable? (This only needs to be approximate, since the definition of "clear" isn't itself very clear, but the trend should follow the rate at which the required iteration count grows as you zoom into the set.)
Here is my current implementation of the algorithm:
private double[] target = {-1.256640565451168862869, -0.382386428889165027247};

// Returns an integer RGB value (0xRRGGBB) representing the colour which
// should be drawn at a certain position on the screen
private int getMandleRGB(int x, int y, int w, int h) {
    Complex c = new Complex();
    c.r = lerp(lerp(-2, target[0], 1-zoom), lerp(1, target[0], 1-zoom), x/(double) w);
    c.i = lerp(lerp(-1.5, target[1], 1-zoom), lerp(1.5, target[1], 1-zoom), y/(double) h);
    Complex z = new Complex();
    z.r = c.r;
    z.i = c.i;
    int i = 0;
    for (i = 0; i < 1000; i++) {
        z = Complex.add(Complex.multiply(z, z), c);
        if (z.i*z.i + z.r*z.r > 4) {
            // Escaped: blend between the two gradient colours, channel by channel
            double t = Math.log(i)/Math.log(1000d);
            return (int) lerp((gradient[0] & 0xff0000) >> 16,
                              (gradient[1] & 0xff0000) >> 16, t)*0x10000
                 + (int) lerp((gradient[0] & 0xff00) >> 8,
                              (gradient[1] & 0xff00) >> 8, t)*0x100
                 + (int) lerp((gradient[0] & 0xff),
                              (gradient[1] & 0xff), t);
        }
    }
    return 0x000000; // black
}

Color quantization diluting pure whites

I have a simple quantization function
public static int quantize(int oldpixel) {
    int r = (oldpixel >> 16) & 0xff;
    int g = (oldpixel >> 8) & 0xff;
    int b = (oldpixel >> 0) & 0xff;
    int color = 0xff << 24 | (((r / 32) * 32) & 0xff) << 16 |
                             (((g / 32) * 32) & 0xff) << 8 |
                             (((b / 32) * 32) & 0xff) << 0;
    return color;
}
What it does is reduce a colour to a lower-detail colour and then expand it again. This artificially limits the palette, and I'll use it for a dither filter. Running an image through the function produces this:
In:
Unquantized hue wheel
Out:
Quantized hue wheel
This is almost perfect as a result, except that the whites are reduced to a gray. I understand the cause is my flooring of the divided colours in the algorithm, but I do not know how to fix it. Any suggestions would be appreciated.
After dividing each component by 32, you have an integer between 0 and 7. You are trying to map this back to the range 0 to 255, so that 0 is 0 and 7 is 255.
You can do this by multiplying it by 255/7, which happens to be about 36.428.
You could use something like (int)((r / 32) * (255.0 / 7.0)), but the cast is ugly in Java. To improve that you could wrap it in a function, and have quantizeChannel(r), quantizeChannel(g) and quantizeChannel(b). Or you could swap the order and use integer arithmetic: r / 32 * 255 / 7.
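Putting that together, a corrected version of the function might look like this (a sketch based on the advice above; quantizeChannel is the helper name suggested in the answer):

public static int quantizeChannel(int channel) {
    // 0..255 -> 0..7 -> back to 0..255, so that 0 maps to 0 and 7 maps to 255
    return channel / 32 * 255 / 7;
}

public static int quantize(int oldpixel) {
    int r = (oldpixel >> 16) & 0xff;
    int g = (oldpixel >> 8) & 0xff;
    int b = oldpixel & 0xff;
    return 0xff << 24 | quantizeChannel(r) << 16 |
                        quantizeChannel(g) << 8 |
                        quantizeChannel(b);
}

With this mapping, pure white (255) quantizes to 7 and expands back to exactly 255, so the whites are no longer diluted.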

Converting grayscale image pixels to defined scale

I'm looking to use a very crude heightmap I've created in Photoshop to define a tiled isometric grid for me:
Map:
http://i.imgur.com/jKM7AgI.png
I'm aiming to loop through every pixel in the image and convert the colour of that pixel to a scale of my choosing, for example 0-100.
At the moment I'm using the following code:
try
{
    final File file = new File("D:\\clouds.png");
    final BufferedImage image = ImageIO.read(file);
    for (int x = 0; x < image.getWidth(); x++)
    {
        for (int y = 0; y < image.getHeight(); y++)
        {
            int clr = image.getRGB(x, y) / 99999;
            if (clr <= 0)
                clr = -clr;
            System.out.println(clr);
        }
    }
}
catch (IOException ex)
{
    // Deal with exception
}
This works to an extent: the black pixel at position 0 gives 167 and the white pixel at position 999 gives 0. However, when I insert certain pixels into the image I get odd results; for example, a gray pixel that's very close to white returns over 100 when I would expect it to be in single digits.
Is there an alternate solution I could use that would yield more reliable results?
Many thanks.
Since it's a grayscale map, the RGB parts will all be the same value (with range 0 - 255), so just take one out of the packed integer and find out what percent of 255 it is:
int clr = (int) ((image.getRGB(x, y) & 0xFF) / 255.0 * 100);
System.out.println(clr);
getRGB returns all channels packed into one int so you shouldn't do arithmetic with it. Maybe use the norm of the RGB-vector instead?
for (int x = 0; x < image.getWidth(); ++x) {
    for (int y = 0; y < image.getHeight(); ++y) {
        final int rgb = image.getRGB(x, y);
        final int red = (rgb & 0xFF0000) >> 16;
        final int green = (rgb & 0x00FF00) >> 8;
        final int blue = rgb & 0x0000FF;
        // Norm of RGB vector mapped to the unit interval.
        final double intensity =
            Math.sqrt(red * red + green * green + blue * blue)
                / Math.sqrt(3 * 255 * 255);
    }
}
Note that there is also the java.awt.Color class that can be instantiated with the int returned by getRGB and provides getRed, getGreen and getBlue methods if you don't want to do the bit manipulations yourself.
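For completeness, a minimal sketch of that alternative (Color(int), getRed, getGreen and getBlue are standard java.awt.Color methods):

import java.awt.Color;

// ... inside the pixel loop:
Color color = new Color(image.getRGB(x, y));
int red = color.getRed();
int green = color.getGreen();
int blue = color.getBlue();
// For a grayscale image the channels are equal, so any one of them
// can be mapped to the 0-100 scale:
int scaled = (int) (red / 255.0 * 100);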

Incorrect result of image subtraction

I wanted to subtract two images pixel by pixel to check how similar they are. The images are the same size; one is a little darker, and apart from brightness they don't differ. But I get those little dots in the result. Did I subtract the two images right? Both are bmp files.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class Main2 {
    public static void main(String[] args) throws Exception {
        int[][][] ch = new int[4][4][4];
        BufferedImage image1 = ImageIO.read(new File("1.bmp"));
        BufferedImage image2 = ImageIO.read(new File("2.bmp"));
        BufferedImage image3 = new BufferedImage(image1.getWidth(), image1.getHeight(), image1.getType());
        int color;
        for (int x = 0; x < image1.getWidth(); x++)
            for (int y = 0; y < image1.getHeight(); y++) {
                color = Math.abs(image2.getRGB(x, y) - image1.getRGB(x, y));
                image3.setRGB(x, y, color);
            }
        ImageIO.write(image3, "bmp", new File("image.bmp"));
    }
}
Image 1
Image 2
Result
The problem here is that you can't subtract the colors directly. Each pixel is represented by one int value, which consists of 4 bytes. These 4 bytes represent the color components ARGB, where
A = Alpha
R = Red
G = Green
B = Blue
(Alpha is the opacity of the pixel, and always 255 (that is, the maximum value) in BMP images).
Thus, one pixel may be represented by
(255, 0, 254, 0)
When you subtract another pixel from this one, like (255, 0, 255, 0), then the third byte will underflow: It would become -1. But since this is part of ONE integer, the resulting color will be something like
(255, 0, 254, 0) -
(255, 0, 255, 0) =
(255, 255, 255, 0)
and thus, be far from what you would expect in this case.
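You can verify the underflow directly (a small demonstration I've added; the pixel values match the example above):

int p0 = 0xFF00FE00; // (255, 0, 254, 0)
int p1 = 0xFF00FF00; // (255, 0, 255, 0)
// The borrow from the third byte propagates upward:
System.out.println(Integer.toHexString(p0 - p1)); // prints "ffffff00"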
The key point is that you have to split your color into the A,R,G and B components, and perform the computation on these components. In the most general form, it may be implemented like this:
int argb0 = image0.getRGB(x, y);
int argb1 = image1.getRGB(x, y);
int a0 = (argb0 >> 24) & 0xFF;
int r0 = (argb0 >> 16) & 0xFF;
int g0 = (argb0 >> 8) & 0xFF;
int b0 = (argb0 ) & 0xFF;
int a1 = (argb1 >> 24) & 0xFF;
int r1 = (argb1 >> 16) & 0xFF;
int g1 = (argb1 >> 8) & 0xFF;
int b1 = (argb1 ) & 0xFF;
int aDiff = Math.abs(a1 - a0);
int rDiff = Math.abs(r1 - r0);
int gDiff = Math.abs(g1 - g0);
int bDiff = Math.abs(b1 - b0);
int diff =
(aDiff << 24) | (rDiff << 16) | (gDiff << 8) | bDiff;
result.setRGB(x, y, diff);
Since these are grayscale images, the computations done here are somewhat redundant: For grayscale images, the R, G and B components are always equal. And since the opacity is always 255, it does not have to be treated explicitly here. So for your particular case, it should be sufficient to simplify this to
int argb0 = image0.getRGB(x, y);
int argb1 = image1.getRGB(x, y);
// Here the 'b' stands for 'blue' as well
// as for 'brightness' :-)
int b0 = argb0 & 0xFF;
int b1 = argb1 & 0xFF;
int bDiff = Math.abs(b1 - b0);
int diff =
(255 << 24) | (bDiff << 16) | (bDiff << 8) | bDiff;
result.setRGB(x, y, diff);
You did not "subtract one pixel from the other" correctly. getRGB returns "an integer pixel in the default RGB color model (TYPE_INT_ARGB)". What you are seeing is an "overflow" from one byte into the next, and thus from one color into the next.
Suppose you subtract the colors 804020 - 404120: the result is 3FFF00. The difference in the G component is only 1, but the borrow turns it into FF and corrupts the R byte as well.
The correct procedure is to split the return value from getRGB into separate red, green, and blue, subtract each one, make sure they fit into unsigned bytes again (I guess your Math.abs is okay) and then write out a reconstructed new RGB value.
I found this tutorial, which seems to do the same thing and may be more "correct" than your code. I assume it's possible to extract the source code.
http://tutorial.simplecv.org/en/latest/examples/image-math.html
/Fredrik Wahlgren

Not sure how to implement the following algorithm

I am trying to implement histogram equalization on a coloured image. I am not sure if I have implemented it correctly, because the screen just goes black every time I apply it to a bitmap image.
The part of my code that does the Histogram Equalization calculation:
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        A = (pixels[index] >> 24) & 0xFF;
        R = (pixels[index] >> 16) & 0xFF;
        G = (pixels[index] >> 8) & 0xFF;
        B = pixels[index] & 0xFF;
        R = Math.round(((R - cumR[minR]) / (cumR[maxR] - cumR[minR])) * 255);
        G = Math.round(((G - cumG[minG]) / (cumG[maxG] - cumG[minG])) * 255);
        B = Math.round(((B - cumB[minB]) / (cumB[maxB] - cumB[minB])) * 255);
        returnBitmap.setPixel(x, y, Color.argb(A, R, G, B));
        ++index;
    }
}
The image appears black once my code is applied. Why doesn't it display an equalized image?
You're not calculating the histograms correctly. You shouldn't have a histogram slot for each pixel; you should have one for each value [0..255]. You want to count how many pixels have a given value, not the total "value" of red.
Here's a good way to get the histogram (and cumulative histogram) for an image. It should get you started on the right path.
// generate histogram channels
// histogram arrays should be [0...255]
for (int i = 0; i < pixels.length; i++) {
    R = (pixels[i] >> 16) & 0xFF;
    G = (pixels[i] >> 8) & 0xFF;
    B = pixels[i] & 0xFF;
    histoR[R]++;
    histoG[G]++;
    histoB[B]++;
}

// generate cumulative histograms: each entry is the running total,
// so add the previous *cumulative* value, not the previous bin
cumR[0] = histoR[0];
cumG[0] = histoG[0];
cumB[0] = histoB[0];
for (int i = 1; i < histoR.length; i++) {
    cumR[i] = histoR[i] + cumR[i-1];
    cumG[i] = histoG[i] + cumG[i-1];
    cumB[i] = histoB[i] + cumB[i-1];
}
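The answer stops at the cumulative histograms. As a sketch of the remaining step (my own addition, using the standard equalization mapping newValue = cum[v] * 255 / totalPixels and the array names from the snippet above), build a lookup table per channel and remap each pixel through it:

// Build a LUT per channel: map each value v to its equalized value.
int totalPixels = pixels.length;
int[] lutR = new int[256];
int[] lutG = new int[256];
int[] lutB = new int[256];
for (int v = 0; v < 256; v++) {
    // long arithmetic avoids overflow on large images
    lutR[v] = (int) ((long) cumR[v] * 255 / totalPixels);
    lutG[v] = (int) ((long) cumG[v] * 255 / totalPixels);
    lutB[v] = (int) ((long) cumB[v] * 255 / totalPixels);
}

// Remap every pixel through the LUTs, preserving the alpha channel.
for (int i = 0; i < pixels.length; i++) {
    int a = (pixels[i] >> 24) & 0xFF;
    int r = lutR[(pixels[i] >> 16) & 0xFF];
    int g = lutG[(pixels[i] >> 8) & 0xFF];
    int b = lutB[pixels[i] & 0xFF];
    pixels[i] = (a << 24) | (r << 16) | (g << 8) | b;
}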
After some research, I was able to find a histogram equalization example for Java that uses a LUT, and it is a better option than converting to another color space such as YUV.
With minimal modification, I was able to use the following code:
Histogram Equalization for Java
