This method I wrote gets the mean and standard deviation of an image. I am trying to modify it to get the mean and standard deviation of the red, green and blue values of an image, but I am stumped on how to do so. I tried to use a ColorModel but was getting nowhere. Could anyone give me some advice on how to modify this code to get the mean and standard deviation of the red, green and blue values?
public static double redValues(BufferedImage image) {
    Raster raster = image.getRaster();
    double mean = meanValue(image);
    double sumOfDiff = 0.0;
    Color c = new Color(image.getRGB(0, 0));
    int red = c.getRed();
    for (int y = 0; y < image.getHeight(); ++y) {
        for (int x = 0; x < image.getWidth(); ++x) {
            double r = raster.getSample(x, y, red) - mean;
            sumOfDiff += Math.pow(r, 2);
        }
    }
    return sumOfDiff / ((image.getWidth() * image.getHeight()) - 1);
}
But this still returns a value based on the mean brightness, when I want the red values.
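For what it's worth, the third argument of `Raster.getSample` is a band index (0 = red, 1 = green, 2 = blue for a standard RGB raster), not a pixel value, so passing `c.getRed()` there reads the wrong band. A minimal sketch of per-channel statistics under that assumption (the `bandMean`/`bandStdDev` names are mine, not from the original code):

```java
import java.awt.image.BufferedImage;
import java.awt.image.Raster;

public class ChannelStats {
    // Mean of one band: 0 = red, 1 = green, 2 = blue for a standard RGB raster.
    public static double bandMean(BufferedImage image, int band) {
        Raster raster = image.getRaster();
        double sum = 0.0;
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                sum += raster.getSample(x, y, band);
            }
        }
        return sum / (image.getWidth() * image.getHeight());
    }

    // Sample standard deviation of one band around that band's own mean.
    public static double bandStdDev(BufferedImage image, int band) {
        double mean = bandMean(image, band);
        Raster raster = image.getRaster();
        double sumOfDiff = 0.0;
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                double d = raster.getSample(x, y, band) - mean;
                sumOfDiff += d * d;
            }
        }
        return Math.sqrt(sumOfDiff / (image.getWidth() * image.getHeight() - 1));
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000); // pure red
        img.setRGB(1, 0, 0x000000); // black
        System.out.println(bandMean(img, 0)); // 127.5
    }
}
```

Calling `bandMean(image, 0)` and `bandStdDev(image, 0)` then gives the red-channel mean and standard deviation, and bands 1 and 2 give green and blue.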
Let's say I have an image called testimage.jpg, and want to cross-stitch it with a DMC 208 floss (RGB value: 148,91,128)
I got these hints:
Read each pixel's RGB value
Compare it with the floss
Use bilinear interpolation for image scaling
Use Euclidean distance to get the closest color
I think I already figured out how to do step 1. Now I'm wondering how to do steps 2 and 4.
BufferedImage img = ImageIO.read(new File("testimage.jpg"));
int imgWidth = img.getWidth();
int imgHeight = img.getHeight();
Color dmc208 = new Color(148, 91, 128);
Color currentPixel = null;
for (int x = 0; x < imgWidth; x++) {
    for (int y = 0; y < imgHeight; y++) {
        currentPixel = new Color(img.getRGB(x, y));
    }
}
I know how to calculate the Euclidean distance, but how do I get the "closest" color at that distance?
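One common approach for steps 2 and 4, sketched under the assumption that the floss palette is a plain array of Colors (the second, white floss here is a hypothetical example, not part of the question): compute the squared Euclidean distance in RGB space against every candidate and keep the minimum.

```java
import java.awt.Color;

public class ClosestColor {
    // Squared Euclidean distance between two colors in RGB space.
    // The square root is unnecessary when only comparing distances.
    public static int distanceSq(Color a, Color b) {
        int dr = a.getRed() - b.getRed();
        int dg = a.getGreen() - b.getGreen();
        int db = a.getBlue() - b.getBlue();
        return dr * dr + dg * dg + db * db;
    }

    // Returns the palette entry with the smallest distance to the pixel.
    public static Color closest(Color pixel, Color[] palette) {
        Color best = palette[0];
        int bestDist = distanceSq(pixel, best);
        for (Color candidate : palette) {
            int d = distanceSq(pixel, candidate);
            if (d < bestDist) {
                bestDist = d;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Color dmc208 = new Color(148, 91, 128);       // from the question
        Color white = new Color(255, 255, 255);       // hypothetical second floss
        Color[] palette = { dmc208, white };
        System.out.println(closest(new Color(150, 90, 130), palette).equals(dmc208)); // true
    }
}
```

Inside the pixel loop from the question, `closest(currentPixel, palette)` would then pick the floss to use for that stitch.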
When I tested my code with JUnit, the following error occurred:
java.lang.IllegalArgumentException: Color parameter outside of expected range: Red Green Blue
Honestly, I don't know why. My code is not very long, so I'll post it here for better help.
BufferedImage img = ImageIO.read(f);
for (int w = 0; w < img.getWidth(); w++) {
    for (int h = 0; h < img.getHeight(); h++) {
        Color color = new Color(img.getRGB(w, h));
        float greyscale = (0.299f * color.getRed())
                + (0.587f * color.getGreen())
                + (0.144f * color.getBlue());
        Color grey = new Color(greyscale, greyscale, greyscale);
        img.setRGB(w, h, grey.getRGB());
    }
}
When I run the JUnit test, Eclipse flags the line
Color grey = new Color(greyscale, greyscale, greyscale);
So I suppose the problem might be that I work with floating-point numbers, and as you can see I recalculate the red, green and blue content of the image. Could anyone help me solve this problem?
You are calling the Color constructor with three float parameters, so the values are expected to be between 0.0 and 1.0. But color.getRed() (and getGreen(), getBlue()) can return a value up to 255, so you can get the following:
float greyscale = ((0.299f *255) + (0.587f * 255) + (0.144f * 255));
System.out.println(greyscale); //262.65
Which is far too high for 1.0f, and even higher than the 255 that the Color(int,int,int) constructor allows. So scale your factors as dasblinkenlight said, and cast the greyscale to an int, or else you will call the wrong constructor of Color.
new Color((int)greyscale,(int)greyscale,(int)greyscale);
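Putting that together, a corrected sketch of the conversion (note this also uses 0.114 for the blue weight, the conventional coefficient, where the question had 0.144; with the int cast the three-int constructor is selected and its 0-255 range is respected):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class GrayscaleFix {
    // Converts one packed RGB value to a grayscale level in 0-255.
    public static int grayLevel(int rgb) {
        Color color = new Color(rgb);
        float greyscale = 0.299f * color.getRed()
                + 0.587f * color.getGreen()
                + 0.114f * color.getBlue(); // conventional blue weight
        return (int) greyscale;             // 0-255, safe for Color(int,int,int)
    }

    public static void toGrayscale(BufferedImage img) {
        for (int w = 0; w < img.getWidth(); w++) {
            for (int h = 0; h < img.getHeight(); h++) {
                int g = grayLevel(img.getRGB(w, h));
                img.setRGB(w, h, new Color(g, g, g).getRGB());
            }
        }
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, new Color(255, 0, 0).getRGB()); // pure red
        toGrayscale(img);
        System.out.println(new Color(img.getRGB(0, 0)).getRed()); // 76
    }
}
```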
I'm trying to understand why a
bufferedImg.setRGB(x, y, color.getRGB());
sets no data (white pixels) at all if I print the value immediately before it with
System.out.println(color.getRGB());
as in following java code:
...
int height = img.getHeight();
int width = img.getWidth();
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        Color c = new Color(img.getRGB(j, i));
        int red = (int) (c.getRed() * 0.299);
        int green = (int) (c.getGreen() * 0.587);
        int blue = (int) (c.getBlue() * 0.114);
        Color newColor = new Color(red + green + blue,
                red + green + blue, red + green + blue);
        System.out.println(newColor.getRGB()); // resets data
        img.setRGB(j, i, newColor.getRGB());
    }
}
Additional info:
It's an implementation for converting RGB to grayscale.
It works perfectly fine if I remove the print/log line.
Multiple calls of println() before and/or after it show correct data.
The buffered image source is an OpenCV Mat.
I didn't find any specific reason for this on the internet.
Hoping someone can give me some insight.
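One thing worth trying, sketched below (whether it fixes the println symptom depends on the source image's type, which isn't shown): BufferedImages built from OpenCV Mats are often a byte-based or grayscale type whose setRGB goes through a color-model conversion, so writing the result into a fresh TYPE_INT_RGB image instead of modifying the source in place sidesteps that entirely.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class GrayscaleCopy {
    // Writes the grayscale result into a fresh TYPE_INT_RGB image
    // instead of mutating the (possibly non-RGB) source image.
    public static BufferedImage toGrayscale(BufferedImage img) {
        BufferedImage out = new BufferedImage(
                img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < img.getHeight(); i++) {
            for (int j = 0; j < img.getWidth(); j++) {
                Color c = new Color(img.getRGB(j, i));
                int grey = (int) (c.getRed() * 0.299
                        + c.getGreen() * 0.587
                        + c.getBlue() * 0.114);
                out.setRGB(j, i, new Color(grey, grey, grey).getRGB());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, new Color(255, 0, 0).getRGB());
        BufferedImage grey = toGrayscale(src);
        System.out.println(new Color(grey.getRGB(0, 0)).getRed()); // 76
    }
}
```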
I'm looking to use a very crude heightmap I've created in Photoshop to define a tiled isometric grid for me:
Map:
http://i.imgur.com/jKM7AgI.png
I'm aiming to loop through every pixel in the image and convert the colour of that pixel to a scale of my choosing, for example 0-100.
At the moment I'm using the following code:
try {
    final File file = new File("D:\\clouds.png");
    final BufferedImage image = ImageIO.read(file);
    for (int x = 0; x < image.getWidth(); x++) {
        for (int y = 0; y < image.getHeight(); y++) {
            int clr = image.getRGB(x, y) / 99999;
            if (clr <= 0)
                clr = -clr;
            System.out.println(clr);
        }
    }
} catch (IOException ex) {
    // Deal with exception
}
This works to an extent: the black pixel at position 0 is 167 and the white pixel at position 999 is 0. However, when I insert certain pixels into the image I get slightly odd results; for example, a gray pixel that's very close to white returns over 100, when I would expect it to be in single digits.
Is there an alternate solution I could use that would yield more reliable results?
Many thanks.
Since it's a grayscale map, the RGB parts will all be the same value (with range 0 - 255), so just take one out of the packed integer and find out what percent of 255 it is:
int clr = (int) ((image.getRGB(x, y) & 0xFF) / 255.0 * 100);
System.out.println(clr);
getRGB returns all channels packed into one int, so you shouldn't do arithmetic with it. Maybe use the norm of the RGB vector instead?
for (int x = 0; x < image.getWidth(); ++x) {
    for (int y = 0; y < image.getHeight(); ++y) {
        final int rgb = image.getRGB(x, y);
        final int red = (rgb & 0xFF0000) >> 16;
        final int green = (rgb & 0x00FF00) >> 8;
        final int blue = (rgb & 0x0000FF);
        // Norm of RGB vector mapped to the unit interval.
        final double intensity =
                Math.sqrt(red * red + green * green + blue * blue)
                        / Math.sqrt(3 * 255 * 255);
    }
}
Note that there is also the java.awt.Color class, which can be instantiated with the int returned by getRGB and provides getRed, getGreen and getBlue methods if you don't want to do the bit manipulation yourself.
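As a concrete illustration of that alternative, here is a sketch that maps a grayscale pixel onto the 0-100 scale from the question using java.awt.Color (following the first answer's convention of 0 = black, 100 = white):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class HeightFromPixel {
    // Maps a grayscale pixel to 0-100 (0 = black, 100 = white).
    // On a grayscale map all three channels are equal, so any one will do.
    public static int height(BufferedImage image, int x, int y) {
        Color c = new Color(image.getRGB(x, y));
        return (int) (c.getRed() / 255.0 * 100);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, new Color(0, 0, 0).getRGB());       // black
        img.setRGB(1, 0, new Color(255, 255, 255).getRGB()); // white
        System.out.println(height(img, 0, 0)); // 0
        System.out.println(height(img, 1, 0)); // 100
    }
}
```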
I have an image that is stored as an array of pixel values. I want to be able to apply a brightness or contrast filter to this image. Is there any simple way, or algorithm, I can use to achieve this?
Here is my code...
PlanarImage img = JAI.create("fileload", "C:\\aimages\\blue_water.jpg");
BufferedImage image = img.getAsBufferedImage();
int w = image.getWidth();
int h = image.getHeight();
int k = 0;
int[] sbins = new int[256];
int[] pixel = new int[3];
Double d = 0.0;
Double d1;
for (int x = 0; x < image.getWidth(); x++) {
    for (int y = 0; y < image.getHeight(); y++) {
        pixel = image.getRaster().getPixel(x, y, new int[3]);
        k = (int) ((0.2125 * pixel[0]) + (0.7154 * pixel[1]) + (0.072 * pixel[2]));
        sbins[k]++;
    }
}
My suggestion would be to use the built-in methods of Java to adjust the brightness and contrast, rather than trying to adjust the pixel values yourself. It seems pretty easy by doing something like this...
float brightenFactor = 1.2f;
PlanarImage img=JAI.create("fileload","C:\\aimages\\blue_water.jpg");
BufferedImage image = img.getAsBufferedImage();
RescaleOp op = new RescaleOp(brightenFactor, 0, null);
image = op.filter(image, image);
The float number is a percentage of the brightness; in my example it would increase the brightness to 120% of the existing value (i.e. 20% brighter than the original image).
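For contrast as well as brightness, the same RescaleOp takes an offset in addition to the scale factor: each channel becomes oldValue * scaleFactor + offset, clamped to 0-255. A sketch (the 1.5f scale and -40f offset are arbitrary example values, not from the question):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class ContrastSketch {
    // Applies newValue = oldValue * scale + offset (clamped to 0-255 per channel).
    // scale > 1 stretches contrast; offset shifts overall brightness.
    public static BufferedImage adjust(BufferedImage image, float scale, float offset) {
        RescaleOp op = new RescaleOp(scale, offset, null);
        return op.filter(image, image);
    }

    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        image.setRGB(0, 0, new Color(100, 100, 100).getRGB());
        adjust(image, 1.5f, -40f); // stretch contrast, then darken slightly
        System.out.println(new Color(image.getRGB(0, 0)).getRed()); // 100*1.5 - 40 = 110
    }
}
```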
See this link for a similar question...
Adjust brightness and contrast of BufferedImage in Java
See this link for an example application...
http://www.java2s.com/Code/Java/Advanced-Graphics/BrightnessIncreaseDemo.htm