I have a completely desaturated BufferedImage in Java that I need to display to a JFrame many times a second. However, the image needs to be saturated with a certain hue before being rendered, and I don't know of any approach fast enough to maintain a smooth frame rate while the hue changes every frame.
I have tried generating an image with the desired hue during each rendering pass, but that gives a slow frame rate unless the image is quite small. I have also tried caching a set of pre-saturated images and choosing one each frame, but even with just 256 cached images the load times are very long.
The image is likely to be around 1000x500 pixels in size.
The code I currently have to recolor the image is the following:
private BufferedImage recolored(BufferedImage image, float hue) {
    int width = image.getWidth();
    int height = image.getHeight();
    WritableRaster raster = image.getRaster();
    BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster resRast = res.getRaster();

    for (int xx = 0; xx < width; xx++) {
        for (int yy = 0; yy < height; yy++) {
            Color color = new Color(Color.HSBtoRGB(hue, 0.7f, 0.7f));
            int[] pixels = raster.getPixel(xx, yy, (int[]) null);
            pixels[0] = color.getRed();
            pixels[1] = color.getGreen();
            pixels[2] = color.getBlue();
            resRast.setPixel(xx, yy, pixels);
        }
    }
    return res;
}
So, my question is:
What is the standard, generally accepted, or best way to display an image with a constantly changing hue?
Is it reasonable to do so with an image of my size?
Also, I doubt that it matters, but the hue changes linearly over time and wraps back to zero after overflowing. Basically it cycles through all colors and then repeats.
Although your question is not very specific, I will point out a few flaws in your code. Fixing these should give a reasonable improvement in speed.
You have a function that takes a BufferedImage and a Hue value as input.
for (int xx = 0; xx < width; xx++) {
    for (int yy = 0; yy < height; yy++) {
        Color color = new Color(Color.HSBtoRGB(hue, 0.7f, 0.7f));
        int[] pixels = raster.getPixel(xx, yy, (int[]) null);
        pixels[0] = color.getRed();
        pixels[1] = color.getGreen();
        pixels[2] = color.getBlue();
        resRast.setPixel(xx, yy, pixels);
    }
}
In this nested loop you iterate over every pixel in the image. For every coordinate you calculate an RGB color from the provided hue value.
You then read a pixel from the input image, overwrite its color channels, and write the result into the output image.
All this function does is fill an image with a certain hue.
- You read width * height pixels without ever using their information.
- You calculate an RGB colour width * height times, although it is the same for every pixel.
- You read that colour's RGB components 3 * width * height times.
- You assign those values 3 * width * height times.
All you actually need to do is calculate a single RGB tuple once, outside the loop, and then set every pixel of the output image to that value.
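A minimal sketch of that simplification (the 0.7f saturation and brightness values are taken from the question's code; the class and method names are illustrative):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class HueFill {
    // Computes the HSB-derived colour once, then lets Graphics2D fill
    // the whole image, instead of recomputing it per pixel.
    static BufferedImage recolored(int width, int height, float hue) {
        BufferedImage res = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = res.createGraphics();
        g.setColor(new Color(Color.HSBtoRGB(hue, 0.7f, 0.7f)));
        g.fillRect(0, 0, width, height);
        g.dispose();
        return res;
    }
}
```

Note that, unlike the original loop, this does not carry over the source image's alpha channel; if the transparency of the input image matters, copy its alpha values over separately.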
The border needs to be made out of the closest pixels of the given image. I saw some code online and came up with the following. What am I doing wrong? I'm new to Java, and I am not allowed to use any methods.
/**
 * TODO Method to be done. It contains some code that has to be changed
 *
 * @param enlargeFactorPercentage the border in percentage
 * @param dimAvg the radius in pixels to get the average colour
 *               of each pixel for the border
 *
 * @return a new image extended with borders
 */
public static BufferedImage addBorders(BufferedImage image, int enlargeFactorPercentage, int dimAvg) {
    // TODO method to be done
    int height = image.getHeight();
    int width = image.getWidth();
    System.out.println("Image height = " + height);
    System.out.println("Image width = " + width);

    // create new image
    BufferedImage bi = new BufferedImage(width, height, image.getType());

    // copy image
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int pixelRGB = image.getRGB(x, y);
            bi.setRGB(x, y, pixelRGB);
        }
    }

    // draw top and bottom borders
    // draw left and right borders
    // draw corners
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int pixelRGB = image.getRGB(x, y);
            for (enlargeFactorPercentage = 0; enlargeFactorPercentage < 10; enlargeFactorPercentage++) {
                bi.setRGB(width, enlargeFactorPercentage, pixelRGB * dimAvg);
                bi.setRGB(enlargeFactorPercentage, height, pixelRGB * dimAvg);
            }
        }
    }
    return bi;
}
I am not allowed to use any methods.
What does that mean? How can you write code if you can't use methods from the API?
int enlargeFactorPercentage
What is that for? To me, enlarge means to make bigger. So if you have a factor of 10 and your image is (100, 100), then the new image would be (110, 110), which means the border would be 5 pixels?
Your code is creating the BufferedImage the same size as the original image. So does that mean you make the border 5 pixels and chop off 5 pixels from the original image?
Without proper requirements we can't help.
@return a new image extended with borders
Since you also have a comment that says "extended", I'm going to assume your requirement is to return the larger image.
So the solution I would use is to:
- create the BufferedImage at the increased size
- get the Graphics2D object from the BufferedImage
- fill the entire BufferedImage with the color you want for the border, using the Graphics2D.fillRect(…) method
- paint the original image onto the enlarged BufferedImage, using the Graphics2D.drawImage(…) method
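The steps above can be sketched as follows (the border colour and the reading of the percentage as total growth, split evenly between the two sides, are assumptions for illustration; dimAvg is left out because colour averaging is not part of these steps):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Borders {
    // Creates a larger image, fills it with the border colour,
    // then paints the original image centred on top.
    static BufferedImage addBorders(BufferedImage image, int enlargeFactorPercentage) {
        int newW = image.getWidth() + image.getWidth() * enlargeFactorPercentage / 100;
        int newH = image.getHeight() + image.getHeight() * enlargeFactorPercentage / 100;
        BufferedImage bi = new BufferedImage(newW, newH, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = bi.createGraphics();
        g.setColor(Color.BLACK);              // placeholder border colour
        g.fillRect(0, 0, newW, newH);         // fill the entire image
        int offX = (newW - image.getWidth()) / 2;
        int offY = (newH - image.getHeight()) / 2;
        g.drawImage(image, offX, offY, null); // paint the original on top
        g.dispose();
        return bi;
    }
}
```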
Hello and welcome to Stack Overflow!
Not sure what you mean by "not allowed to use methods". Without methods you cannot run a program at all, because the "thing" with public static void main(String[] args) is itself a method (the main method), and you need it as the program's starting point...
But to answer your question:
You have to load your image; a possibility is to use ImageIO. Then you get a Graphics2D object from the image, and you can call drawRect() to draw a border rectangle:
BufferedImage bi = // load image
Graphics2D g = (Graphics2D) bi.getGraphics();
g.drawRect(0, 0, bi.getWidth() - 1, bi.getHeight() - 1);
This short code is just a hint. Try it out and read the documentation for BufferedImage and Graphics2D.
Edit: Please note that this is not quite correct. With the code above you draw over the outer pixel line of the image. If you don't want to cut any pixels off, you have to scale the image up and draw with bi.getWidth() + 2 and bi.getHeight() + 2: +2 because you need one extra pixel on each side of the image.
I'm attempting to convert a .csv file containing grayscale values to an image using BufferedImage.
The csv is read into pixArray[] initially, in which all values are doubles.
I am attempting to use BufferedImage to create a 100x100px output image with the code
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        image.setRGB(x, y, (int) Math.round(pixArray[y]));
    }
}

File file_out = new File("output.png");
try {
    ImageIO.write(image, "png", file_out);
} catch (IOException e) {
    e.printStackTrace();
}
but all I have as output is a 100x100 black square.
I've tried alternatives to TYPE_BYTE_GRAY with no success, as well as alternatives to the png output format, and can't find what is producing this error.
It should be
int g = (int)Math.round(pixArray[y]);
image.setRGB(x,y,new Color(g,g,g).getRGB());
What your current code is doing is setting the alpha to the pixel value but leaving the color components all zero.
Posting an alternative solution. While Jim's answer is correct and works, it is also one of the slowest* ways to put sample values into a grayscale BufferedImage.
A BufferedImage of TYPE_BYTE_GRAY doesn't need all the conversion to and from RGB colors. To put the gray values directly into the image, go through the image's raster:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
WritableRaster raster = image.getRaster();

for (int y = 0; y < height; y++) {
    int value = (int) Math.round(pixArray[y]);
    for (int x = 0; x < width; x++) {
        raster.setSample(x, y, 0, value);
    }
}
*) Slow because of creating excessive throw-away Color instances, but mostly due to color space conversion to/from sRGB color space. Probably not very noticeable in a 100x100 image, but if you try 1000x1000 or larger, you will notice.
PS: I also re-arranged the loops to loop over x in the inner loop. This is normally faster, especially when reading values, due to data locality and caching in modern CPUs. In your case, it matters mostly because you only need to compute (round, cast) the value for each row.
I am using a method that takes an image and blends every pixel of it with a given color. My problem is that every time I run the method using the same image, the result is more and more saturated. Like this:
example
I'm storing the image returned by the method in a different variable than the original one, and I'm passing the original in as the image parameter every time.
This is the blending method I'm using:
public static BufferedImage blendImage(BufferedImage image, Color blend) {
    BufferedImage newImage = image;
    for (int i = 0; i < image.getWidth(); i++) {
        for (int j = 0; j < image.getHeight(); j++) {
            Color c1 = new Color(image.getRGB(i, j), true);
            Color c2 = blend;

            float r1 = c1.getRed() / 255.0F;
            float g1 = c1.getGreen() / 255.0F;
            float b1 = c1.getBlue() / 255.0F;
            float a1 = c1.getAlpha() / 255.0F;
            float r2 = c2.getRed() / 255.0F;
            float g2 = c2.getGreen() / 255.0F;
            float b2 = c2.getBlue() / 255.0F;
            float a2 = c2.getAlpha() / 255.0F;

            Color c3 = new Color(r1 * r2, g1 * g2, b1 * b2, a1 * a2);
            newImage.setRGB(i, j, c3.getRGB());
        }
    }
    return newImage;
}
I'd like to know if there is some way of fixing this, or if anyone knows a better way to blend images.
EDIT: It turns out that the method was changing the original image. I'm not sure how but it had something to do with the line BufferedImage newImage = image;. My solution was setting newImage to a new BufferedImage object, and making it the same width, height, and type as the image passed through. I don't know why the original image was being modified though.
It's more accurate to say that your image is getting darker.
Here's what's happening. For each channel, you're normalizing the values of the image and the blend color to the range 0..1, and then multiplying them together. Since both numbers have a maximum of 1, the output value can never be larger than either of them and will probably be smaller. If you repeatedly blend with some color that's not pure white (255,255,255), the image will get progressively darker, even if the blend color is a bright one.
Maybe try averaging the channel values instead of multiplying them.
Or just draw a rectangle of the blend color over the whole image with 50% opacity.
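The rectangle approach can be sketched like this (a fresh output image is created first, which also sidesteps the shared-reference problem described in the question's edit; the 50% opacity matches the suggestion above, and the class and method names are illustrative):

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Tint {
    // Copies the original into a new image, then paints the blend
    // colour over it at 50% opacity via an alpha composite.
    static BufferedImage tint(BufferedImage image, Color blend) {
        BufferedImage out = new BufferedImage(image.getWidth(), image.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(image, 0, 0, null);
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g.setColor(blend);
        g.fillRect(0, 0, out.getWidth(), out.getHeight());
        g.dispose();
        return out;
    }
}
```

Because the input image is only read, never written, calling this repeatedly with the same original always yields the same result.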
I'm working on a simple image program where the user can alter the HSB values of an image. However, when I change the HSB values of an image and convert back to RGB, it seems to lose its transparency or alpha values (it goes black where the transparency is). Here's what I have below (I've put the relevant parts together):
public static BufferedImage getEnhancedImagesHSB(BufferedImage image, float[] hsbOrg) {
    int height = image.getHeight();
    int width = image.getWidth();
    float[] hsb = new float[]{0, 0, 0, 0};
    int[] originalPixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] enhancedImagePixels = image.getRGB(0, 0, width, height, null, 0, width);

    for (int i = 0; i < originalPixels.length; i++) {
        Color c = new Color(originalPixels[i]);
        int red = c.getRed();
        int green = c.getGreen();
        int blue = c.getBlue();
        hsb = Color.RGBtoHSB(red, green, blue, hsb);
        hsb[3] = c.getAlpha() / 255f;

        hsb[0] = (float) (hsb[0] + (hsbOrg[0] / 360.0)); // hue
        hsb[1] *= (hsbOrg[1] / 100);
        if (hsb[1] > 1.0) {
            hsb[1] = 0.9f;
        }
        hsb[2] *= (hsbOrg[2] / 100);
        if (hsb[2] > 1.0) {
            hsb[2] = 0.9f;
        }

        enhancedImagePixels[i] = Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]);
    }

    BufferedImage newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    newImage.setRGB(0, 0, width, height, enhancedImagePixels, 0, width);
    return newImage;
}
According to the docs, getRGB() and setRGB() use the default RGB color model (TYPE_INT_ARGB), so the alpha values should be preserved. But changing the image's HSB values results in the new BufferedImage having black where the transparency should be. How can I edit the image's HSB values and then create a new image without losing its transparency?
Edit:
Below is an image from before and after some random hue, saturation, and brightness adjustments have been applied. As you can see, the image has lost its transparency.
Color c2 = new Color(Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]));
enhancedImagePixels[i] = new Color(c2.getRed(), c2.getGreen(), c2.getBlue(),
        c.getAlpha()).getRGB();
Which is ugly. There is no conversion for hsb[3] (alpha) in the Color API.
Using a image.getAlphaRaster() might be the solution.
Thanks to Joop Eggen for pointing me into the right direction. I wrote directly into the image raster (using setPixel()) the adjusted Hue, saturation, brightness and alpha values. Below is a great article discussing the subject matter.
Article.
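For reference, one way to carry alpha across the HSB round-trip without building extra Color objects is to mask the original alpha byte back into the result of HSBtoRGB. This is only a sketch of the idea, not the raster-based code the poster ended up with; shiftHue and its parameter are illustrative:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class HsbAlpha {
    // Shifts the hue of every pixel while preserving its alpha:
    // HSBtoRGB always returns an opaque pixel, so the original
    // alpha byte is OR-ed back in afterwards.
    static BufferedImage shiftHue(BufferedImage image, float hueShift) {
        int w = image.getWidth(), h = image.getHeight();
        int[] px = image.getRGB(0, 0, w, h, null, 0, w);
        float[] hsb = new float[3];
        for (int i = 0; i < px.length; i++) {
            int argb = px[i];
            Color.RGBtoHSB((argb >> 16) & 0xFF, (argb >> 8) & 0xFF, argb & 0xFF, hsb);
            int rgb = Color.HSBtoRGB(hsb[0] + hueShift, hsb[1], hsb[2]);
            px[i] = (argb & 0xFF000000) | (rgb & 0x00FFFFFF); // keep original alpha
        }
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        out.setRGB(0, 0, w, h, px, 0, w);
        return out;
    }
}
```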
I'm trying to capture a small section of the screen and read every pixel of it so I can compare pixels against each other. The code to get the screen image is:
Rectangle captureSize = new Rectangle(x, y, width, height);
BufferedImage image = robot.createScreenCapture(captureSize);
And, to read pixel by pixel I used
for (int y = 0; y < image.getHeight(); y = y + 1) {
    for (int x = 0; x < image.getWidth(); x = x + 1) {
        color = image.getRGB(x, y);
        // Some methods etc
    }
}
However, when I ran it I was shocked: createScreenCapture took about 40 ms, while calling getRGB for every pixel took around 350 ms, which is far too slow for a 60 fps application. By the way, my image is 800x400 pixels. I didn't try the
rgbArray = image.getRGB(startX, startY, w, h, rgbArray, offset, scansize);
method, because I don't know how efficient it is, and reordering my code would be a bit difficult. So, any help would be appreciated.
Use
rgbArray = image.getRGB(startX, startY, w, h, rgbArray, offset, scansize);
It will be much faster to read the pixel values from the array than to do the method call to get each pixel value, and the single call to getRGB to fetch the array is not slow.
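A sketch of how the bulk call is typically used (grab is an illustrative helper): fetch every pixel in one call, then index the returned array, where the pixel at (x, y) sits at y * width + x when offset is 0 and scansize equals the image width:

```java
import java.awt.image.BufferedImage;

public class BulkRead {
    // Reads the whole image into one int[] with a single getRGB call.
    static int[] grab(BufferedImage image) {
        int w = image.getWidth(), h = image.getHeight();
        return image.getRGB(0, 0, w, h, null, 0, w);
    }
}
```

After that, the inner loop becomes plain array indexing, e.g. int color = rgbArray[y * width + x];, with no per-pixel method call.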