When I tested my code with JUnit, the following error occurred:
java.lang.IllegalArgumentException: Color parameter outside of expected range: Red Green Blue
Honestly, I don't know why. My code is not very long, so I'll post it here:
BufferedImage img = ImageIO.read(f);
for (int w = 0; w < img.getWidth(); w++) {
    for (int h = 0; h < img.getHeight(); h++) {
        Color color = new Color(img.getRGB(w, h));
        float greyscale = ((0.299f * color.getRed()) + (0.587f * color.getGreen()) + (0.144f * color.getBlue()));
        Color grey = new Color(greyscale, greyscale, greyscale);
        img.setRGB(w, h, grey.getRGB());
    }
}
When I run the JUnit test, Eclipse flags the line
Color grey = new Color(greyscale, greyscale, greyscale);
So I suppose the problem might be that I'm working with floating-point numbers; as you can see, I recalculate the red, green and blue content of the image.
Could anyone help me solve this problem?
You are calling the Color constructor with three float parameters, so the values must be between 0.0 and 1.0.
But color.getRed() (likewise getGreen() and getBlue()) can return a value up to 255. So you can get the following:
float greyscale = ((0.299f * 255) + (0.587f * 255) + (0.144f * 255));
System.out.println(greyscale); // 262.65
That is far too high for 1.0f, and even higher than the 255 that the Color(int, int, int) constructor allows. So scale your factors as dasblinkenlight said, and cast the greyscale to an int, or else you will call the wrong constructor of Color:
new Color((int) greyscale, (int) greyscale, (int) greyscale);
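For illustration, here is a minimal sketch of the corrected loop, assuming you want the conventional luma weights (note 0.114 for blue rather than 0.144, so the factors sum to 1.0) and the int constructor:
BufferedImage img = ImageIO.read(f);
for (int w = 0; w < img.getWidth(); w++) {
    for (int h = 0; h < img.getHeight(); h++) {
        Color color = new Color(img.getRGB(w, h));
        // The weighted sum stays within 0..255, which the
        // Color(int, int, int) constructor accepts.
        int greyscale = (int) (0.299f * color.getRed()
                + 0.587f * color.getGreen()
                + 0.114f * color.getBlue());
        Color grey = new Color(greyscale, greyscale, greyscale);
        img.setRGB(w, h, grey.getRGB());
    }
}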
Related
I am using a method that takes an image and blends every pixel of it with a given color. My problem is that every time I run the method using the same image, the result is more and more saturated. Like this:
[example image]
I'm storing the image returned by the method as a different variable than the original one, and I'm passing through the original as the image parameter every time.
This is the blending method I'm using:
public static BufferedImage blendImage (BufferedImage image, Color blend) {
BufferedImage newImage = image;
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        Color c1 = new Color(image.getRGB(i, j), true);
        Color c2 = blend;
        float r1 = ((float) c1.getRed()) / 255.0F;
        float g1 = ((float) c1.getGreen()) / 255.0F;
        float b1 = ((float) c1.getBlue()) / 255.0F;
        float a1 = ((float) c1.getAlpha()) / 255.0F;
        float r2 = ((float) c2.getRed()) / 255.0F;
        float g2 = ((float) c2.getGreen()) / 255.0F;
        float b2 = ((float) c2.getBlue()) / 255.0F;
        float a2 = ((float) c2.getAlpha()) / 255.0F;
        Color c3 = new Color(r1 * r2, g1 * g2, b1 * b2, a1 * a2);
        newImage.setRGB(i, j, c3.getRGB());
    }
}
return newImage;
}
I'd like to know if there is some way of fixing this, or if anyone knows a better way to blend images.
EDIT: It turns out that the method was changing the original image. I'm not sure how, but it had something to do with the line BufferedImage newImage = image;. My solution was setting newImage to a new BufferedImage object with the same width, height, and type as the image passed in. I don't know why the original image was being modified, though.
It's more accurate to say that your image is getting darker.
Here's what's happening. For each channel, you're normalizing the values of the image and the blend color to the range 0..1, and then multiplying them together. Since both numbers have a maximum of 1, the output value can never be larger than either of them and will probably be smaller. If you repeatedly blend with some color that's not pure white (255,255,255), the image will get progressively darker, even if the blend color is a bright one.
Maybe try averaging the channel values instead of multiplying them.
Or just draw a rectangle of the blend color over the whole image with 50% opacity.
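On the EDIT: BufferedImage newImage = image; copies only the reference, so setRGB on newImage writes straight into the original image's pixels. Here is a minimal sketch combining both fixes - a fresh output image plus channel averaging (my naming, assuming a TYPE_INT_ARGB result is acceptable):
public static BufferedImage blendImageAveraged(BufferedImage image, Color blend) {
    // Allocate a fresh image so the input is never mutated.
    BufferedImage newImage = new BufferedImage(
            image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB);
    for (int i = 0; i < image.getWidth(); i++) {
        for (int j = 0; j < image.getHeight(); j++) {
            Color c1 = new Color(image.getRGB(i, j), true);
            // Average each channel instead of multiplying, so repeated
            // blending drifts toward the blend color instead of black.
            Color c3 = new Color(
                    (c1.getRed() + blend.getRed()) / 2,
                    (c1.getGreen() + blend.getGreen()) / 2,
                    (c1.getBlue() + blend.getBlue()) / 2,
                    (c1.getAlpha() + blend.getAlpha()) / 2);
            newImage.setRGB(i, j, c3.getRGB());
        }
    }
    return newImage;
}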
I am modifying the code of an image renderer that builds its output image from a Color[] array; my code simply updates that array with additional content right before saving, i.e. at the point when the original image is actually prepared (all pixel positions ready to be filled with RGB values in that Color[] array for final saving).
The reason I am doing this is to be able to insert text describing my render without needing an external graphics program for that (I want it all in one go, without another external app).
Since I have no access to the original prepared BufferedImage (but I do have access to the Color[] it is created from), I had to write my own class method that:
1. converts the original Color[] to my own temporary BufferedImage
2. updates that temporary BufferedImage with my content via Graphics2D (adding some text to the image)
3. converts the result (the temporary BufferedImage with the Graphics2D additions) back to a Color[]
4. sends that final Color[] back to the original image-rendering method, which then produces the final image that is rendered out and saved as a PNG
Now everything works just as I expected, except for one really annoying thing that I cannot get rid of: my updated image looks very bleached/pale (almost no depth or shadows present) compared to the original un-watermarked version...
To me it simply looks like something goes wrong in the image-to-Color[] conversion (using #stacker's solution from Converting Image to Color array), so the colors become pale, and I have no clue why.
Here is the main part of my code that is in question:
BufferedImage sourceImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
// Color[] to BufferedImage
for (int k = 0; k < multiArrayList.size(); k++) {
// PREPARE...
int x = (int) multiArrayList.get(k)[0];
int y = (int) multiArrayList.get(k)[1];
int w = (int) multiArrayList.get(k)[2];
int h = (int) multiArrayList.get(k)[3];
Color[] data = (Color[]) multiArrayList.get(k)[4];
int border = BORDERS[k % BORDERS.length];
for (int by = 0; by < h; by++) {
for (int bx = 0; bx < w; bx++) {
if (bx == 0 || bx == w - 1) {
if (5 * by < h || 5 * (h - by - 1) < h) {
sourceImage.setRGB(x + bx, y + by, border);
}
} else if (by == 0 || by == h - 1) {
if (5 * bx < w || 5 * (w - bx - 1) < w) {
sourceImage.setRGB(x + bx, y + by, border);
}
}
}
}
// UPDATE...
for (int j = 0, index = 0; j < h; j++) {
for (int i = 0; i < w; i++, index++) {
sourceImage.setRGB(x + i, y + j, data[index].copy().toNonLinear().toRGB());
}
}
}
Graphics2D g2d = (Graphics2D) sourceImage.getGraphics();
// paints the textual watermark
drawString(g2d, text, centerX, centerY, sourceImage.getWidth());
// when saved to png at this point ALL IS JUST FINE
ImageIO.write(sourceImage, "png", new File(imageSavePath));
g2d.dispose();
// BufferedImage to Color array
int[] dt = ((DataBufferInt) sourceImage.getRaster().getDataBuffer()).getData();
bucketFull = new Color[dt.length];
for (int i = 0; i < dt.length; i++) {
bucketFull[i] = new Color(dt[i]);
}
// update and repaint output image - THIS OUTPUT IS ALREADY BLEACHED/PALE
d.ip(0, 0, width, height, renderThreads.length + 1);
d.iu(0, 0, width, height, bucketFull);
// reset objects
g2d = null;
sourceImage = null;
bucketFull = null;
multiArrayList = new ArrayList<>();
I have tested (by saving to another .png file right after the Graphics2D additions) that before the second conversion it looks absolutely fine, 1:1 with the original image including my text.
But as I said, when it is sent on for rendering it becomes bleached/pale, and that is the problem I am trying to solve.
BTW, I first thought it might be the Graphics2D additions, so I tried it without them, but the result was the same bleached/pale version.
Although my process and code are completely different, the output image suffers in exactly the same way as in this (still unsolved) topic: BufferedImage color saturation
Here are my 2 examples - 1st ORIGINAL, 2nd UPDATED (bleached/pale)
As suspected, the problem is that you convert the color values from linear RGB to gamma-corrected/sRGB values when setting the RGB values to the BufferedImage, but the reverse transformation (back to linear RGB) is not done when you put the values back into the Color array.
Either change the line (inside the double for loop):
sourceImage.setRGB(x + i, y + j, data[index].copy().toNonLinear().toRGB());
to
sourceImage.setRGB(x + i, y + j, data[index].toRGB());
(you no longer need the copy(), as you no longer mutate the values with toNonLinear()).
This avoids the conversion altogether.
... or you could probably also change the line setting the values back, from:
bucketFull[i] = new Color(dt[i]);
to
bucketFull[i] = new Color(dt[i]).toLinear();
Arguably, this is more "correct" (as AWT treats the values as being in the sRGB color space, regardless), but I believe the first version is faster, and the difference in color is negligible. So I'd probably try the first suggested fix first, and use that unless you experience colors that are off.
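For reference, the renderer's toNonLinear()/toLinear() presumably implement the standard sRGB transfer functions, along these lines (a sketch for one channel value in 0..1, not the renderer's actual code):
// Standard sRGB encode/decode for a single channel value in 0..1.
static float linearToSrgb(float c) {
    return (c <= 0.0031308f)
            ? 12.92f * c
            : 1.055f * (float) Math.pow(c, 1.0 / 2.4) - 0.055f;
}

static float srgbToLinear(float c) {
    return (c <= 0.04045f)
            ? c / 12.92f
            : (float) Math.pow((c + 0.055f) / 1.055f, 2.4);
}
Applying the first without ever applying the second is exactly the asymmetric round trip described above: every value is pushed brighter, which reads as a bleached/pale image.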
So, this method I wrote gets the mean and standard deviation of an image. I am trying to modify it to get the mean and standard deviation of the red, green and blue values of an image, but I am stumped on how to do so. I tried to use a ColorModel but got nowhere. Could anyone give me some advice on how to modify this code to get the mean and standard deviation of the red, green and blue values?
public static double redValues(BufferedImage image){
Raster raster = image.getRaster();
double mean = meanValue(image);
double sumOfDiff = 0.0;
Color c = new Color(image.getRGB(0, 0));
int red = c.getRed();
for (int y = 0; y < image.getHeight(); ++y){
for (int x = 0; x < image.getWidth(); ++x){
double r = raster.getSample(x, y, red) - mean;
sumOfDiff += Math.pow(r, 2);
}
}
return sumOfDiff / ((image.getWidth() * image.getHeight()) - 1);
}
But this still returns a value based on the mean brightness, when what I want is the red value.
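For what it's worth, raster.getSample(x, y, red) uses a pixel's red value (0-255) as the band index, whereas in an RGB raster the bands are 0 = red, 1 = green, 2 = blue. A minimal sketch of a per-channel variance, assuming a TYPE_INT_RGB image and a mean computed for that same channel (channelVariance and channelMean are my additions; the divisor follows the original code):
public static double channelVariance(BufferedImage image, int band, double channelMean) {
    // band: 0 = red, 1 = green, 2 = blue for an RGB raster.
    Raster raster = image.getRaster();
    double sumOfDiff = 0.0;
    for (int y = 0; y < image.getHeight(); ++y) {
        for (int x = 0; x < image.getWidth(); ++x) {
            double d = raster.getSample(x, y, band) - channelMean;
            sumOfDiff += d * d;
        }
    }
    return sumOfDiff / ((image.getWidth() * image.getHeight()) - 1);
}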
I'm looking to use a very crude heightmap I've created in Photoshop to define a tiled isometric grid for me:
Map:
http://i.imgur.com/jKM7AgI.png
I'm aiming to loop through every pixel in the image and convert the colour of that pixel to a scale of my choosing, for example 0-100.
At the moment I'm using the following code:
try
{
final File file = new File("D:\\clouds.png");
final BufferedImage image = ImageIO.read(file);
for (int x = 0; x < image.getWidth(); x++)
{
for (int y = 0; y < image.getHeight(); y++)
{
int clr = image.getRGB(x, y) / 99999;
if (clr <= 0)
clr = -clr;
System.out.println(clr);
}
}
}
catch (IOException ex)
{
// Deal with exception
}
This works to an extent; the black pixel at position 0 gives 167 and the white pixel at position 999 gives 0. However, when I insert certain pixels into the image I get slightly odd results; for example, a gray pixel that's very close to white returns over 100, when I would expect it to be in single digits.
Is there an alternate solution I could use that would yield more reliable results?
Many thanks.
Since it's a grayscale map, the RGB parts will all be the same value (with range 0 - 255), so just take one out of the packed integer and find out what percent of 255 it is:
int clr = (int) ((image.getRGB(x, y) & 0xFF) / 255.0 * 100);
System.out.println(clr);
getRGB returns all channels packed into one int, so you shouldn't do arithmetic with it. Maybe use the norm of the RGB vector instead?
for (int x = 0; x < image.getWidth(); ++x) {
for (int y = 0; y < image.getHeight(); ++y) {
final int rgb = image.getRGB(x, y);
final int red = ((rgb & 0xFF0000) >> 16);
final int green = ((rgb & 0x00FF00) >> 8);
final int blue = ((rgb & 0x0000FF) >> 0);
// Norm of RGB vector mapped to the unit interval.
final double intensity =
Math.sqrt(red * red + green * green + blue * blue)
/ Math.sqrt(3 * 255 * 255);
}
}
Note that there is also the java.awt.Color class that can be instantiated with the int returned by getRGB and provides getRed, getGreen and getBlue methods if you don't want to do the bit manipulations yourself.
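To sketch that variant (assuming the 0-100 scale the question asks for):
final Color c = new Color(image.getRGB(x, y));
// In a grayscale map red, green and blue are equal, so any one channel works.
final int heightValue = (int) (c.getRed() / 255.0 * 100);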
I'm trying to implement Matlab's rgb2gray in Java according to http://www.mathworks.com/help/toolbox/images/ref/rgb2gray.html . I have the following code:
public BufferedImage convert(BufferedImage bi){
int heightLimit = bi.getHeight();
int widthLimit = bi.getWidth();
BufferedImage converted = new BufferedImage(widthLimit, heightLimit,
BufferedImage.TYPE_BYTE_GRAY);
for(int height = 0; height < heightLimit; height++){
for(int width = 0; width < widthLimit; width++){
// Remove the alpha component
Color c = new Color(bi.getRGB(width, height) & 0x00ffffff);
// Normalize
int newRed = (int) 0.2989f * c.getRed();
int newGreen = (int) 0.5870f * c.getGreen();
int newBlue = (int) 0.1140f * c.getBlue();
int roOffset = newRed + newGreen + newBlue;
converted.setRGB(width, height, roOffset);
}
}
return converted;
}
Now, I do get a grayscale image, but it is too dark compared to what I get from Matlab. AFAIK, the easiest way to turn an image to grayscale is to have a BufferedImage of type TYPE_BYTE_GRAY and then just copy over the pixels of a BufferedImage of TYPE_INT_(A)RGB. But even this method gives an image that is darker than Matlab's, though it is decently grayscale. I've also looked into using RescaleOp, but I don't see any way in RescaleOp to set the grayness per pixel.
As an added test, I printed out the image matrices produced by Java and by Matlab. In Java I get figures like 6316128 6250335 6118749 6118749 6250335 6447714, while in Matlab I get something like 116 117 119 120 119 115 (first six figures of both matrices).
How do I get an output similar to Matlab's?
In Java, operator precedence makes a type cast bind tighter than multiplication, so (int) 0.2989f evaluates to 0 before the multiply. You're turning all your constants into 0, so I don't understand how you're getting a grayscale result at all. It's easy to fix:
int newRed = (int) (0.2989f * c.getRed());
int newGreen = (int) (0.5870f * c.getGreen());
int newBlue = (int) (0.1140f * c.getBlue());
I would also replace 0.2989 with 0.2990 as it appears to be a typo in the documentation.
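One further caveat, not covered by the fix above: setRGB expects a packed sRGB int, so passing the bare 0-255 sum sets only the blue channel, which would also darken the result. A sketch of the loop body with both fixes applied (my restructuring, using the suggested 0.2990 weight, not the original answer's code):
Color c = new Color(bi.getRGB(width, height) & 0x00ffffff);
int gray = (int) (0.2990f * c.getRed()
        + 0.5870f * c.getGreen()
        + 0.1140f * c.getBlue());
// Pack the same value into R, G and B so setRGB sees a gray color.
converted.setRGB(width, height, (gray << 16) | (gray << 8) | gray);
If the output still looks darker than Matlab's, note that TYPE_BYTE_GRAY stores pixels in a linear gray color space, so setRGB applies a color-space conversion on the way in; writing the packed gray into a TYPE_INT_RGB image instead avoids that conversion.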