Does anyone know of a simple way of converting the packed RGB int value returned from BufferedImage.getRGB(i, j) into a greyscale value?
I was going to simply average the RGB values after breaking them up like this:
int alpha = (pixel >> 24) & 0xff;
int red = (pixel >> 16) & 0xff;
int green = (pixel >> 8) & 0xff;
int blue = (pixel) & 0xff;
and then average red, green and blue.
But I feel like for such a simple operation I must be missing something...
After a great answer to a different question, I should clear up what I want.
I want to take the RGB value returned from getRGB(i, j) and turn it into a single value in the range 0-255 representing the darkness of that pixel.
This can be accomplished by averaging, but I am looking for an off-the-shelf implementation to save me a few lines.
This tutorial shows 3 ways to do it:
By changing ColorSpace
ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
ColorConvertOp op = new ColorConvertOp(cs, null);
BufferedImage image = op.filter(bufferedImage, null);
By drawing to a grayscale BufferedImage
BufferedImage image = new BufferedImage(width, height,
BufferedImage.TYPE_BYTE_GRAY);
Graphics g = image.getGraphics();
g.drawImage(colorImage, 0, 0, null);
g.dispose();
By using GrayFilter
ImageFilter filter = new GrayFilter(true, 50);
ImageProducer producer = new FilteredImageSource(colorImage.getSource(), filter);
Image image = this.createImage(producer);
This isn't as simple as it sounds, because there is no single correct answer for how to map a colour to greyscale.
The method I'd use is to convert RGB to HSL, zero the S component, and optionally convert back to RGB, but this might not be exactly what you want. (It is equivalent to the average of the highest and lowest RGB values, so it is a little different from the average of all three.)
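A minimal sketch of that lightness calculation in Java, assuming pixel is the packed value from getRGB(i, j):
int r = (pixel >> 16) & 0xff;
int g = (pixel >> 8) & 0xff;
int b = pixel & 0xff;
// HSL lightness: average of the strongest and weakest channels
int lightness = (Math.max(r, Math.max(g, b)) + Math.min(r, Math.min(g, b))) / 2;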
Averaging sounds good, although MATLAB's rgb2gray uses a weighted sum.
Check MATLAB's rgb2gray documentation.
UPDATE
I tried implementing the MATLAB method in Java; maybe I did it wrong, but plain averaging gave better results.
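For reference, a minimal sketch of an rgb2gray-style weighted sum in Java (the 0.2989/0.5870/0.1140 coefficients are the ones MATLAB documents; pixel is assumed to be the packed value from getRGB):
int r = (pixel >> 16) & 0xff;
int g = (pixel >> 8) & 0xff;
int b = pixel & 0xff;
// Weighted sum, as in MATLAB's rgb2gray
int gray = (int) Math.round(0.2989 * r + 0.5870 * g + 0.1140 * b);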
Related
I'm trying to build one of those composite image programs where you feed it X images and it makes another photo out of those images. I've got everything from the comparison equation to getting files out of the source folder for comparison, but I can't figure out how to replace a single pixel with one of the images I already have. If this can be done with a BufferedImage/File, how? If not, is there another way I could accomplish it?
//create simple image
BufferedImage img = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
//set pixel color RED at x=10, y=10
img.setRGB(10, 10, Color.red.getRGB());
//get pixel color as RGB
int rgb = img.getRGB(50, 50);
//get components of RGB as separated R, G and B values
int red = (rgb >> 16) & 0xFF;
int green = (rgb >> 8) & 0xFF;
int blue = rgb & 0xFF;
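If the goal is to replace each source pixel with one of your small images (a photomosaic), one option is to draw each tile onto a larger output image with Graphics2D rather than calling setRGB. A rough sketch, where the tile size, the source dimensions and the chosenTile variable are all assumptions for illustration:
// Hypothetical sketch: replace source pixel (px, py) with a TILE x TILE image.
int TILE = 16; // assumed tile size
BufferedImage mosaic = new BufferedImage(srcWidth * TILE, srcHeight * TILE, BufferedImage.TYPE_INT_RGB);
Graphics2D g2 = mosaic.createGraphics();
g2.drawImage(chosenTile, px * TILE, py * TILE, TILE, TILE, null); // repeat for every source pixel
g2.dispose();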
Context:
I'm trying to create an animation in Java.
The animation simply takes an image and makes it appear from the darkest pixels to the lightest.
The Problem:
The internal algorithm defining the pixel transformations is not my issue.
I'm new to Java and computing in general. I've done a bit of research and know that there are plenty of APIs that help with image filters/transformations.
My problem is performance, and understanding it.
For the implementation I've created a method that does the following:
Receives a BufferedImage.
Gets the WritableRaster of the BufferedImage.
Using setSample and getSample, processes and changes the pixels one by one.
Returns the BufferedImage.
After that, I use a Timer to call the method.
The returned BufferedImage is attached to a JButton via setIcon after each call.
With a 500x500 image my machine takes around 3ms to process each call.
For standard 1080p images it takes around 30ms, which is about 33 frames per second.
My goal is to process/animate Full HD images at 30fps... and I will not be able to on the path I'm following, at least not on most computers.
What am I getting wrong? How can I make it faster? Would using getDataBuffer or getPixels instead of getSample improve it?
Thanks in advance! And sorry for my English.
Partial Conclusions:
Thanks to some help here, I've changed the approach. Instead of using getSample and setSample, I store the BufferedImage's ARGB pixel information in an array, process the array, and copy it all at once into the Raster of another BufferedImage.
The processing time dropped from 30ms (get/set sample) to 1ms (measured roughly, but on the same machine, environment and code).
Below is a little class I coded to implement it. The class keeps only the pixels at or below a given brightness level; the other pixels become transparent (alpha = 0).
Hope it helps whoever searches for the same solution in the future. Be aware that I'm below rookie level in Java, so the code might be poorly organized/optimized.
import java.awt.Graphics2D;
import java.awt.image.*;
/**
* @author Psyny
*/
public class ImageAppearFX {
//Essential data
BufferedImage imgProcessed;
int[] RAWoriginal;
int[] RAWprocessed;
WritableRaster rbgRasterProcessedW;
//Information about the image
int x,y;
int[] mapBrightness;
public ImageAppearFX(BufferedImage inputIMG) {
//Store Dimensions
x = inputIMG.getWidth();
y = inputIMG.getHeight();
//Convert the input image to INT_ARGB and store it.
this.imgProcessed = new BufferedImage(x, y, BufferedImage.TYPE_INT_ARGB);
Graphics2D canvas = this.imgProcessed.createGraphics();
canvas.drawImage(inputIMG, 0, 0, x, y, null);
canvas.dispose();
//Create an int array of the pixel information.
//p.s.: Notice that the image was converted to INT_ARGB
this.RAWoriginal = ((DataBufferInt) this.imgProcessed.getRaster().getDataBuffer()).getData();
//Duplicate of the original pixel array, so we can make changes based on the original image
this.RAWprocessed = this.RAWoriginal.clone();
//Get Raster. We will need the raster to write pixels on
rbgRasterProcessedW = imgProcessed.getRaster();
//Effect Information: Store brightness information
mapBrightness = new int[x*y];
int r,g,b,a,greaterColor;
// Process all pixels
for(int i=0 ; i < this.RAWoriginal.length ; i++) {
a = (this.RAWoriginal[i] >> 24) & 0xFF;
r = (this.RAWoriginal[i] >> 16) & 0xFF;
g = (this.RAWoriginal[i] >> 8) & 0xFF;
b = (this.RAWoriginal[i] ) & 0xFF;
//Find the strongest color channel (max of r, g, b)
greaterColor = r;
if( b > r ) {
if( g > b ) greaterColor = g;
else greaterColor = b;
} else if ( g > r ) {
greaterColor = g;
}
this.mapBrightness[i] = greaterColor;
}
}
//Effect: show only pixels up to a certain percentage of brightness
public BufferedImage BrightnessLimit(float percent) {
// Adjust input values
percent = percent / 100;
// Pixel Variables
int hardCap = (int)(255 * percent);
int r,g,b,a,bright;
// Process all pixels
for(int i=0 ; i < this.RAWoriginal.length ; i++) {
//Get information of a pixel of the ORIGINAL image
a = (this.RAWoriginal[i] >> 24) & 0xFF;
r = (this.RAWoriginal[i] >> 16) & 0xFF;
g = (this.RAWoriginal[i] >> 8) & 0xFF;
b = (this.RAWoriginal[i] ) & 0xFF;
//Brightness information of that same pixel
bright = this.mapBrightness[i];
//Make pixels brighter than the cap fully transparent
if( bright > hardCap ) {
a = 0;
}
this.RAWprocessed[i] = (a << 24) | (r << 16) | (g << 8) | b; //Repack ARGB components into a single int
}
//Copy the processed array into the raster of processed image
rbgRasterProcessedW.setDataElements(0, 0, x, y, RAWprocessed);
return imgProcessed;
}
//Return reference to the processed image
public BufferedImage getImage() {
return imgProcessed;
}
}
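A minimal usage sketch (sourceImage, button and the 50% threshold are assumptions, not part of the class above):
ImageAppearFX fx = new ImageAppearFX(sourceImage);
BufferedImage frame = fx.BrightnessLimit(50); // keep only pixels up to 50% brightness
button.setIcon(new javax.swing.ImageIcon(frame));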
While the time difference resulting from the change doesn't prove that the repeated searching is the bottleneck, it does strongly implicate it.
If you are willing/able to trade memory for time, I would first sort a list of all the pixel locations by brightness. Next, I would use the sorted list during the animation to look up the next pixel to copy.
An extra piece of advice: use one of Java's built-in sorting methods. It's educational to write your own, but learning how to sort doesn't seem to be your goal here. Also, if my guess about the bottleneck is wrong, you'll want to minimize the time spent pursuing this answer.
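A rough sketch of that idea, assuming a per-pixel brightness array like the mapBrightness one in the class above (boxed Integer indices just keep the comparator simple):
// Sort pixel indices from darkest to brightest using a built-in sort.
Integer[] order = new Integer[mapBrightness.length];
for (int i = 0; i < order.length; i++) order[i] = i;
java.util.Arrays.sort(order, (p, q) -> Integer.compare(mapBrightness[p], mapBrightness[q]));
// During the animation, reveal pixels in the order given by 'order'.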
I am looking to replace pixels in an image that are black to some degree (semi-black) with fully black pixels.
The method to do this is setRGB(int x, int y, int rgb). I know this. What I do not know is how to detect pixels that are semi-black.
I have tried (i is a BufferedImage):
final int rgb = i.getRGB(x, y);
if (rgb == -16777216) {
i.setRGB(x, y, -16777216);
}
To do this, but it only replaces the pixels that are pure black with pure black.
I have also tried dimming the image, but that does not work either.
Any ideas on how to test for general blackness?
My goal: the image I am reading contains thin text, and I wish to make the text bolder this way.
The integer that you receive represents the combined red, green, blue and alpha values. Essentially, you need to:
break that integer down into its component red, green, blue values
from those values, assess the overall "brightness" of the pixel
As a rough implementation, you could do something like this:
int pixVal = i.getRGB(x, y); // getRGB() as you have
int red = (pixVal >>> 16) & 0xff;
int green = (pixVal >>> 8) & 0xff;
int blue = pixVal & 0xff;
int brightness = (red + green + blue) / 3;
if (brightness < 16) {
// pixel is black
}
Now, the value 16 is a rough value: ideally, you would tailor this to the particular image.
Purists might also whinge that the perceived "brightness" of a pixel isn't literally the mean of the red/green/blue components (because the human eye is not equally sensitive to these components). But that's the rough idea to work from.
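If you do want a perceptually weighted brightness instead of the plain mean, a common sketch uses the Rec. 601 luma weights (my choice of weights, not something prescribed above):
int brightness = (int) (0.299 * red + 0.587 * green + 0.114 * blue);
if (brightness < 16) {
    // pixel is "black enough"
}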
How do I take an image file, convert it into a raster, and then access its data (RGB values) pixel by pixel?
BufferedImage img = ImageIO.read(new File("lol"));
int rgb = img.getRGB(x, y);
Color c = new Color(rgb);
Now you can use Color.getRed(), getGreen(), getBlue() and getAlpha() to get the different values
BufferedImage image = ImageIO.read(new File(myFilename));
int pixel = image.getRGB(0, 0); // Top left pixel.
// Access the color components, valued 0-255.
int alpha = (pixel >>> 24) & 0xff; // If applicable to image format.
int r = (pixel >>> 16) & 0xff;
int g = (pixel >>> 8) & 0xff;
int b = pixel & 0xff;
[Edit] Note that @Sibbo's answer is correct and conveniently uses the Color class color accessor methods; however, extracting the colors directly via bit manipulation as I have demonstrated will likely be considerably faster since it avoids the overhead of repeated constructor calls.
Use ImageIO.read to read the image file in as a BufferedImage, and then use one of the getData methods to obtain the image's Raster. And therein, you'll find methods to obtain pixel data.
Don't use the RGB values. Once you have turned the image into a Raster, use the Raster's own data-access methods instead.
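A minimal sketch of reading per-pixel samples through a Raster (the file name is hypothetical, and the band order 0/1/2 = R/G/B assumes an RGB-ordered SampleModel):
BufferedImage img = ImageIO.read(new File("input.png")); // hypothetical file
Raster raster = img.getData();
int r = raster.getSample(x, y, 0); // band 0: red
int g = raster.getSample(x, y, 1); // band 1: green
int b = raster.getSample(x, y, 2); // band 2: blue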
Use this:
int rgb = img.getRGB(x, y);
Color c = new Color(rgb);
I want to do a simple color to grayscale conversion using java.awt.image.BufferedImage. I'm a beginner in the field of image processing, so please forgive me if I've confused something.
My input image is an RGB 24-bit image (no alpha), I'd like to obtain a 8-bit grayscale BufferedImage on the output, which means I have a class like this (details omitted for clarity):
public class GrayscaleFilter {
private BufferedImage colorFrame;
private BufferedImage grayFrame =
new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
I've successfully tried out two conversion methods so far, the first being:
private BufferedImageOp grayscaleConv =
new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
protected void filter() {
grayscaleConv.filter(colorFrame, grayFrame);
}
And the second being:
protected void filter() {
WritableRaster raster = grayFrame.getRaster();
for(int x = 0; x < raster.getWidth(); x++) {
for(int y = 0; y < raster.getHeight(); y++){
int argb = colorFrame.getRGB(x,y);
int r = (argb >> 16) & 0xff;
int g = (argb >> 8) & 0xff;
int b = (argb ) & 0xff;
int l = (int) (.299 * r + .587 * g + .114 * b);
raster.setSample(x, y, 0, l);
}
}
}
The first method works much faster, but the image produced is very dark, which means I'm losing dynamic range, which is unacceptable. (There is some color conversion mapping between the grayscale and sRGB ColorModels, called tosRGB8LUT, which doesn't seem to work well for me; I'm not sure, I just suppose those values are used.) The second method works slower, but the effect is very nice.
Is there a method of combining those two, e.g. using a custom indexed ColorSpace for ColorConvertOp? If so, could you please give me an example?
Thanks in advance.
public BufferedImage getGrayScale(BufferedImage inputImage){
BufferedImage img = new BufferedImage(inputImage.getWidth(), inputImage.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
Graphics g = img.getGraphics();
g.drawImage(inputImage, 0, 0, null);
g.dispose();
return img;
}
There's an example here which differs from your first example in one small aspect, the parameters to ColorConvertOp. Try this:
protected void filter() {
BufferedImageOp grayscaleConv =
new ColorConvertOp(colorFrame.getColorModel().getColorSpace(),
grayFrame.getColorModel().getColorSpace(), null);
grayscaleConv.filter(colorFrame, grayFrame);
}
Try modifying your second approach. Instead of working on a single pixel, retrieve an array of ARGB int values, convert it, and set it back.
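A minimal sketch of that idea using the bulk getRGB overload and a bulk raster write (same fields and luminance weights as the second example above; this is an assumption about what was meant, not tested code from the answer):
protected void filter() {
    int w = colorFrame.getWidth(), h = colorFrame.getHeight();
    int[] argb = colorFrame.getRGB(0, 0, w, h, null, 0, w); // one bulk read
    int[] gray = new int[w * h];
    for (int i = 0; i < argb.length; i++) {
        int r = (argb[i] >> 16) & 0xff;
        int g = (argb[i] >> 8) & 0xff;
        int b = argb[i] & 0xff;
        gray[i] = (int) (.299 * r + .587 * g + .114 * b);
    }
    grayFrame.getRaster().setSamples(0, 0, w, h, 0, gray); // one bulk write
}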
The second method is based on the pixel's luminance and therefore gives more favorable visual results. It could be sped up a little by replacing the expensive floating-point arithmetic used to calculate l with a lookup array or hash table.
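A sketch of the lookup-table idea: precompute the weighted contribution of each possible 0-255 component value once, then stay in integer arithmetic per pixel (the scaling by 1000 and the names are mine):
// Precomputed per-channel contributions, scaled by 1000 to avoid floating point.
int[] lutR = new int[256], lutG = new int[256], lutB = new int[256];
for (int v = 0; v < 256; v++) {
    lutR[v] = 299 * v;
    lutG[v] = 587 * v;
    lutB[v] = 114 * v;
}
// Later, per pixel:
int l = (lutR[r] + lutG[g] + lutB[b]) / 1000;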
Here is a solution that has worked for me in some situations.
Take image height y, image width x, the image color depth m, and the integer bit size n. This only works if 2^n / (x*y*2^m) >= 1, i.e. an n-bit total is large enough to sum x*y samples of m-bit depth without overflowing.
Keep an n-bit integer total for each color channel as you process the initial grayscale values. Divide each total by x*y to get the average value avr[channel] of each channel. Then add (192 - avr[channel]) to each pixel for each channel.
Keep in mind that this approach probably won't have the same level of quality as standard luminance approaches, but if you're looking for a compromise between speed and quality, and don't want to deal with expensive floating point operations, it may work for you.