I'm trying to write a class that seam carves images in both the x and y direction. The x direction works, and to carve in the y direction I thought I could simply rotate the image 90°, run the same code over the already rescaled image (rescaled in the x direction only), and afterwards rotate it back to its original orientation.
I found something using AffineTransform and tried it. It does produce a rotated image, but it messes up the colors and I don't know why.
This is all the code:
import java.awt.image.BufferedImage;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.io.File;
import java.io.IOException;
import javafx.scene.paint.Color;
import javax.imageio.ImageIO;
public class example {
/**
* @param args the command line arguments
*/
public static void main(String[] args) throws IOException {
// TODO code application logic here
BufferedImage imgIn = ImageIO.read(new File("landscape.jpg"));
BufferedImage imgIn2 = imgIn;
AffineTransform tx = new AffineTransform();
tx.rotate(Math.PI/2, imgIn2.getWidth() / 2, imgIn2.getHeight() / 2);//(radian,arbit_X,arbit_Y)
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
BufferedImage last = op.filter(imgIn2, null);//(source,destination)
ImageIO.write(last, "JPEG", new File("distortedColors.jpg"));
}
}
Just alter the filename in
BufferedImage imgIn = ImageIO.read(new File("landscape.jpg")); and try it.
When executed, you get four images: a heatmap, an image with the seams drawn in, a rescaled image, and a final test image. The test image checks whether the rotation worked; it should show a rotated image, but the colors are distorted...
Help would be greatly appreciated!
EDIT:
The problem is with the AffineTransformOp. You need
AffineTransformOp.TYPE_NEAREST_NEIGHBOR
instead of the TYPE_BILINEAR you have now.
The second paragraph of the documentation hints at this:
This class uses an affine transform to perform a linear mapping from
2D coordinates in the source image or Raster to 2D coordinates in the
destination image or Raster. The type of interpolation that is used is
specified through a constructor, either by a RenderingHints object or
by one of the integer interpolation types defined in this class. If a
RenderingHints object is specified in the constructor, the
interpolation hint and the rendering quality hint are used to set the
interpolation type for this operation.
The color rendering hint and
the dithering hint can be used when color conversion is required. Note
that the following constraints have to be met: The source and
destination must be different. For Raster objects, the number of bands
in the source must be equal to the number of bands in the destination.
So this works:
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
It seems a color conversion happens because null is passed as the destination to op.filter(imgIn2, null).
If you change it like this, it should work:
BufferedImage last = new BufferedImage( imgIn2.getWidth(), imgIn2.getHeight(), imgIn2.getType() );
op.filter(imgIn2, last );
Building on what bhavya said: keep it simple and use the destination image the operation itself expects:
AffineTransformOp op = new AffineTransformOp(transform, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
BufferedImage destinationImage = op.filter(bImage, op.createCompatibleDestImage(bImage, null));
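Putting the pieces together: for the seam-carving use case, a 90° rotation also has to account for the swapped width and height, or a non-square image gets clipped. This is a minimal self-contained sketch (class and file names are made up) combining the translate-then-rotate idiom with an explicit destination image, which avoids both the clipping and the color-conversion problem:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class Rotate90 {
    // Rotate an image 90 degrees clockwise without clipping: translate so
    // the rotated content lands inside a width/height-swapped destination.
    static BufferedImage rotate90(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        AffineTransform tx = new AffineTransform();
        tx.translate(h, 0);      // shift right by the new width
        tx.quadrantRotate(1);    // exact 90-degree rotation about the origin
        AffineTransformOp op =
                new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        // Explicit destination with swapped dimensions and the source's type,
        // so no implicit color conversion happens.
        BufferedImage dst = new BufferedImage(h, w, src.getType());
        return op.filter(src, dst);
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(2, 3, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0xFF0000); // mark the top-left pixel red
        BufferedImage dst = rotate90(src);
        System.out.println(dst.getWidth() + "x" + dst.getHeight()); // 3x2
        // the top-left source pixel ends up at the top-right of the destination
        System.out.println(Integer.toHexString(dst.getRGB(2, 0) & 0xFFFFFF));
    }
}
```

Rotating back after carving is the same operation with `quadrantRotate(3)` and the translation adjusted accordingly.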
Related
I am trying to segment a simple image using the watershed function provided by BoofCV in Java, so I have written (copied, edited and adjusted) the following code:
package alltestshere;
import boofcv.alg.filter.binary.BinaryImageOps;
import boofcv.alg.filter.binary.Contour;
import boofcv.alg.filter.binary.GThresholdImageOps;
import boofcv.gui.ListDisplayPanel;
import boofcv.gui.binary.VisualizeBinaryData;
import boofcv.gui.image.ShowImages;
import boofcv.io.UtilIO;
import boofcv.io.image.ConvertBufferedImage;
import boofcv.io.image.UtilImageIO;
import boofcv.struct.ConnectRule;
import boofcv.struct.image.GrayS32;
import boofcv.struct.image.GrayU8;
import java.awt.image.BufferedImage;
import java.util.List;
import boofcv.alg.segmentation.watershed.WatershedVincentSoille1991;
import boofcv.factory.segmentation.FactorySegmentationAlg;
import boofcv.gui.feature.VisualizeRegions;
public class examp {
public static void main( String args[] ) {
// load and convert the image into a usable format
BufferedImage image = UtilImageIO.loadImage(UtilIO.pathExample("C:\\Users\\Caterina\\Downloads\\boofcv\\data\\example\\shapes\\shapes02.png"));
// convert into a usable format
GrayU8 input = ConvertBufferedImage.convertFromSingle(image, null, GrayU8.class);
//declare some of my working data
GrayU8 binary = new GrayU8(input.width,input.height);
GrayS32 markers = new GrayS32(input.width,input.height);
// Select a global threshold using Otsu's method.
GThresholdImageOps.threshold(input, binary, GThresholdImageOps.computeOtsu(input, 0, 255),true);
//through multiple erosion you can obtain the sure foreground and use it as marker in order to segment the image
GrayU8 filtered = new GrayU8 (input.width, input.height);
GrayU8 filtered2 = new GrayU8 (input.width, input.height);
GrayU8 filtered3 = new GrayU8 (input.width, input.height);
BinaryImageOps.erode8(binary, 1, filtered);
BinaryImageOps.erode8(filtered, 1, filtered2);
BinaryImageOps.erode8(filtered2, 1, filtered3);
//count how many markers you have (one for every foreground part, +1 for the background)
int numRegions = BinaryImageOps.contour(filtered3, ConnectRule.EIGHT, markers).size()+1 ;
// Detect foreground objects using an 8-connect rule
List<Contour> contours = BinaryImageOps.contour(binary, ConnectRule.EIGHT, markers);
//Watershed function which takes the original b&w image as input and the markers
WatershedVincentSoille1991 watershed = FactorySegmentationAlg.watershed(ConnectRule.FOUR);
watershed.process(input, markers);
//get the results of the watershed as output
GrayS32 output = watershed.getOutput();
// display the results
BufferedImage visualBinary = VisualizeBinaryData.renderBinary(input, false, null);
BufferedImage visualFiltered = VisualizeBinaryData.renderBinary(filtered3, false, null);
BufferedImage visualLabel = VisualizeBinaryData.renderLabeledBG(markers , contours.size(), null);
BufferedImage outLabeled = VisualizeBinaryData.renderLabeledBG(output, numRegions, null);
ListDisplayPanel panel = new ListDisplayPanel();
panel.addImage(visualBinary, "Binary Original");
panel.addImage(visualFiltered, "Binary Filtered");
panel.addImage(visualLabel, "Markers");
panel.addImage(outLabeled, "Watershed");
ShowImages.showWindow(panel,"Watershed");
}
}
This code, however, does not work well. Instead of colouring the foreground objects with different colours and leaving the background alone, it splits the whole image into regions, where each region consists of one foreground object plus some part of the background, all painted the same colour (picture 3). So, what am I doing wrong?
I am uploading the Original Picture, the Markers Picture and the Watershed Picture.
Thanks in advance,
Katerina
You get this result because you are not processing the background as a region. The markers you provide to watershed are only the contours of your shapes. Since the background is not a region of its own, the watershed algorithm splits it equally among the regions. The split is equal because, in your binary image, every shape is the same distance from the background.
If you want the background to become its own region, provide the watershed algorithm with some background points as markers as well, for example the image corners.
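A sketch of that fix against the code above, inserted just before `watershed.process(input, markers)`. The corner seeding and the id bookkeeping are assumptions (the exact id to use depends on how your region count is laid out); `GrayS32.set` writes a marker value at a pixel:

```java
// Sketch: seed the background as its own watershed region by marking the
// four image corners with an extra region id before calling process().
// Assumes the contour step has already written ids 1..n into markers and
// that backgroundId is one past the last foreground id.
int backgroundId = numRegions; // assumption: next free region id
markers.set(0, 0, backgroundId);
markers.set(input.width - 1, 0, backgroundId);
markers.set(0, input.height - 1, backgroundId);
markers.set(input.width - 1, input.height - 1, backgroundId);
watershed.process(input, markers);
```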
I have been struggling to find an answer to this issue. I am trying to change the color of a pixel in a large BufferedImage with an imageType of TYPE_BYTE_BINARY. By default the image is created all black, which is fine, but I cannot seem to change any pixel to white.
This is the basic idea of what I want to do.
BufferedImage bi = new BufferedImage(dim[0], dim[1], BufferedImage.TYPE_BYTE_BINARY);
bi.setRGB(x, y, 255);
This seems weird to me as a TYPE_BYTE_BINARY image will not have RGB color, so I know that that is not the correct solution.
Another idea I had was to create multiple TYPE_BYTE_BINARY BufferedImages with the createGraphics() method and then combine them into one large BufferedImage, but I could not find any information about doing that with the TYPE_BYTE_BINARY imageType.
When reading up on this I came across people saying you need to use the createGraphics() method on the BufferedImage, but I don't want to do that, as it would use too much memory.
I came across this link http://docs.oracle.com/javase/7/docs/api/java/awt/image/Raster.html specifically for this method createPackedRaster()(the second one). This seems like it might be on the right track.
Are those the only options to be able to edit a TYPE_BYTE_BINARY image? Or is there another way that is similar to the way that python handles 1 bit depth images?
In python this is all that needs to be done.
im = Image.new("1", (imageDim, imageDim), "white")
picture = im.load()
picture[x, y] = 0 # 0 or 1 to change color black or white
All help or guidance is appreciated.
It all works. I am able to get a white pixel in the image.
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.awt.Color;
import java.io.File;
public class MakeImage
{
public static void main(String[] args)
{
BufferedImage im = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_BINARY);
im.setRGB(10, 10, Color.WHITE.getRGB());
try
{
ImageIO.write(im, "png", new File("image.png"));
}
catch (IOException e)
{
System.out.println("Some exception occurred: " + e);
}
}
}
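As an alternative to setRGB, the 1-bit sample can also be written directly through the image's WritableRaster, skipping the color-model round trip entirely; this is the closest Java analogue to the Python `picture[x, y] = 0` idiom. A minimal sketch (the 100x100 size is just an example):

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class BinaryPixels {
    public static void main(String[] args) {
        BufferedImage im = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_BINARY);
        WritableRaster raster = im.getRaster();
        // band 0, sample value 1 = white in the default binary color model
        raster.setSample(10, 10, 0, 1);
        System.out.println(im.getRGB(10, 10) == 0xFFFFFFFF); // white pixel
        System.out.println(im.getRGB(0, 0) == 0xFF000000);   // still black
    }
}
```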
** Important update, see below! **
I am creating a program that changes the pixels of a BufferedImage to a certain color when that pixel fulfills a set of conditions in Java. However, when I write the image to disk, the pixels that should be colored are instead black.
First I define the color, using RGB codes:
Color purple = new Color(82, 0, 99);
int PURPLE = purple.getRGB();
Then I read the image I want to alter from a File into a BufferedImage called "blank":
BufferedImage blank = ImageIO.read(new File("some path"));
Now, loop through the pixels, and when a pixel at location (x, y) matches a criteria, change its color to purple:
blank.setRGB(x, y, PURPLE);
Now, write "blank" to the disk.
File output = new File("some other path");
ImageIO.write(blank, "png", output); // try-catch blocks intentionally left out
The resulting file should be "blank" with some purple pixels, but the pixels in question are instead black. I know for a fact that the issue is with setRGB and NOT any import or export functions, because "blank" itself is a color image, and gets written to file as such. I read around and saw a lot of posts recommending that I use Graphics2D and to avoid setRGB, but with no discussion of pixel-by-pixel color changing.
I also tried direct bit manipulation, like this:
blank.setRGB(x, y, ((82 << 16) + (0 << 8) + 99));
I'm probably doing that wrong, but if I put it in correctly it wouldn't matter, because the pixels are getting set to transparent when I do this (regardless of what the numbers say, which is very strange, to say the least).
** When I try this:
blank.setRGB(x, y, Color.RED.getRGB());
My output file is grayscale, so that means setRGB is, in fact, modifying my picture in grayscale. I think this is actually a rather simple issue, but the solution eludes me.
Based on the insights in https://stackoverflow.com/a/21981173 that you found yourself ... (a few minutes after posting the question) ... it seems that it should be sufficient to simply convert the image into ARGB directly after it was loaded:
public static BufferedImage convertToARGB(BufferedImage image)
{
BufferedImage newImage = new BufferedImage(
image.getWidth(), image.getHeight(),
BufferedImage.TYPE_INT_ARGB);
Graphics2D g = newImage.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
return newImage;
}
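A quick self-contained check of that conversion, using a synthetic grayscale image in place of the loaded file (the image size and pixel position are arbitrary):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ArgbConvertDemo {
    static BufferedImage convertToARGB(BufferedImage image) {
        BufferedImage newImage = new BufferedImage(
                image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = newImage.createGraphics();
        g.drawImage(image, 0, 0, null);
        g.dispose();
        return newImage;
    }

    public static void main(String[] args) {
        // A grayscale image stands in for the file loaded in the question
        BufferedImage gray = new BufferedImage(4, 4, BufferedImage.TYPE_BYTE_GRAY);
        int purple = new Color(82, 0, 99).getRGB();

        gray.setRGB(1, 1, purple);  // silently converted to a gray value
        System.out.println(gray.getRGB(1, 1) == purple);  // false

        BufferedImage argb = convertToARGB(gray);
        argb.setRGB(1, 1, purple);  // now the color survives
        System.out.println(argb.getRGB(1, 1) == purple);  // true
    }
}
```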
The original image that was imported into Java was actually grayscale, so when Java read it into the BufferedImage, it simply imported it as a grayscale BufferedImage. By adding a very small but imperceptible colored dot in the corner of my image, I was able to get Java to output a correctly colored image.
Unfortunately, this is only a half solution, because I do not know how to fix this programmatically.
SOLUTION:
Convert the BufferedImage from grayscale to ARGB with this snippet:
BufferedImage blank2 = blank; // temporary reference to the original
// Recreate blank as an ARGB BufferedImage
blank = new BufferedImage(blank.getWidth(), blank.getHeight(), BufferedImage.TYPE_INT_ARGB);
// Convert blank2 into blank's ARGB color scheme
ColorConvertOp convert = new ColorConvertOp(null);
convert.filter(blank2, blank);
You want to add this right after blank = ImageIO.read(new File("some path")).
I am using the program ImageResizer with the XBR4x algorithm to upscale .gif images from an old 2D game from 32x32 to 48x48.
The exact procedure:
Manually rename all images to .jpeg, because the program won't open .gif
Resize the images, they are saved by the program as .bmp
Manually rename the images to .gif again.
The problem:
When I look at the images in Paint they look very good; when drawn in my RGB BufferedImage they suddenly all have a white/grey ~1px border, which is not the background color, even though the images are placed directly next to each other. As I have a whole mosaic of those images, the white borders are a no-go.
Image 32x32:
Image 48x48 after upscaling:
Ingame screenshot of 4 of those earth images with white borders:
The question:
How do those borders originate? And if not possible to answer this, are there more reliable methods of upscaling low resolution game images making them look less pixelated?
I think that is an artifact of the image-resizing algorithm; the borders are actually visible on the upscaled image before it is combined, if you look at it in XnView, for example.
The best way to fix that would be to use another tool to resize the image, one which allows the user to control such borderline effects, but if you have to use this one you could still work around the problem by constructing a 3x3 grid of the original image (which would be 96x96), scaling it up to 144x144 and then cutting out the central 48x48 piece. This would eliminate the borderline effects.
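The 3x3-grid workaround can be sketched in plain Java. Here a bicubic Graphics2D draw merely stands in for the external resizer (the class name and sizes are illustrative), but the cropping logic is the same whichever scaler produces the border:

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class BorderFreeScale {
    // Tile the source 3x3, scale the whole mosaic, then cut out the center
    // tile so any edge artifacts from the scaler land in the discarded margin.
    static BufferedImage scaleWithoutBorder(BufferedImage tile, int target) {
        int s = tile.getWidth();
        BufferedImage mosaic = new BufferedImage(3 * s, 3 * s, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = mosaic.createGraphics();
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                g.drawImage(tile, col * s, row * s, null);
        g.dispose();

        // Stand-in for the external scaling tool
        BufferedImage big = new BufferedImage(3 * target, 3 * target,
                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2 = big.createGraphics();
        g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BICUBIC);
        g2.drawImage(mosaic, 0, 0, 3 * target, 3 * target, null);
        g2.dispose();

        // Keep only the central tile
        return big.getSubimage(target, target, target, target);
    }

    public static void main(String[] args) {
        BufferedImage tile = new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);
        BufferedImage out = scaleWithoutBorder(tile, 48);
        System.out.println(out.getWidth() + "x" + out.getHeight()); // 48x48
    }
}
```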
The border is a result of a scaling procedure performed by the mentioned tool. Consider this demo, which shows tiles built from the scaled image in the question next to tiles built from an image scaled with Image.getScaledInstance().
Note that if you choose to stay with your own scaling method, check out The Perils of Image.getScaledInstance() for more optimized solutions.
import java.awt.Graphics;
import java.awt.GraphicsEnvironment;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
public class TestImageScale {
public static void main(String[] args) {
try {
BufferedImage original = ImageIO.read(new URL(
"http://i.stack.imgur.com/rY2i8.gif"));
Image scaled = original.getScaledInstance(48, 48,
Image.SCALE_AREA_AVERAGING);
BufferedImage scaledOP = ImageIO.read(new URL(
"http://i.stack.imgur.com/Argxi.png"));
BufferedImage tilesOP = buildTiles(scaledOP, 3, 3);
BufferedImage tiles = buildTiles(scaled, 3, 3);
JPanel panel = new JPanel();
panel.add(new JLabel(new ImageIcon(tilesOP)));
panel.add(new JLabel(new ImageIcon(tiles)));
JOptionPane.showMessageDialog(null, panel,
"Tiles: OP vs getScaledInstance",
JOptionPane.INFORMATION_MESSAGE);
} catch (Exception e) {
JOptionPane.showMessageDialog(null, e.getMessage(), "Failure",
JOptionPane.ERROR_MESSAGE);
e.printStackTrace();
}
}
static BufferedImage buildTiles(Image tile, int rows, int columns) {
int width = tile.getWidth(null);
int height = tile.getHeight(null);
BufferedImage dest = GraphicsEnvironment
.getLocalGraphicsEnvironment()
.getDefaultScreenDevice()
.getDefaultConfiguration()
.createCompatibleImage(width * rows, height * columns,
Transparency.TRANSLUCENT);
Graphics g = dest.getGraphics();
for (int row = 0; row < rows; row++) {
for (int col = 0; col < columns; col++) {
g.drawImage(tile, row * width, col * height, null);
}
}
g.dispose();
return dest;
}
}
Just a wild guess: Do the original images have an Alpha channel (or do you implicitly create one when resizing)? When resizing an image with alpha, the scaling process may assume the area outside the image to be transparent and the border pixels may become partially transparent, too.
I emailed Hawkynt, the developer of the tool, and it seems the error is not in the tool but in Microsoft's implementation, and he fixed it (even bigger tools like Multiple Image Resizer .NET have the problem). This is what he said about his program:
"When you entered width and/or height manually, the image got resized by the chosen algorithm where everything went fine.
Afterwards I used the resample command from GDI+ which implements a Microsoft version of the bicubic resize algorithm.
This implementation is flawed, so it produces a one-pixel border on the left and upper side for images under 300px.
I fixed it by simply making the resized image one pixel larger than wanted and shifting it left and up by one pixel, so the white border is no longer visible and the target image has the expected dimensions."
I have some java code that needs to programmatically render text onto an image. So I use BufferedImage, and write text on the Graphics object.
However, when configuring the font instance, one would specify the font size in points. When a piece of text is rendered onto an image, AWT will translate the points into pixels, based on the resolution of the Graphics object. I don't want to get myself involved in computing the pixel/point ratio, since it's really the task for the AWT. The image that is being produced is for a high resolution device (higher than any desktop monitors).
But, I don't seem to find a way to specify what the resolution of the Graphics is. It inherits it from the local graphics environment, which is beyond my control. I don't really want this code to be dependent on anything local, and I'm not even sure it's "sane", to use local graphics environment for determining the resolution of off screen rasters, who knows what people would want them for.
So, any way I can specify the resolution for an off screen image of any kind (preferably the one that can create Graphics object so I can use standard AWT rendering API)?
(update)
Here is a (rather long) sample program that renders a piece of text on an image, with a predefined font size in pixels (effectively, a target device DPI of 72). What bugs me is that I have to use the local screen DPI to compute the font size in points, even though I'm not using the screen at all, so it's irrelevant, and it plainly fails on headless systems altogether. What I would love instead is to be able to create an off-screen image (graphics, raster) with a DPI of 72, which would make points numerically equal to pixels.
Sample way to run the code:
$ java FontDisplay Monospace 150 "Cat in a bag" 1.png
This would render the message "Cat in a bag" with a font size of 150 pixels on a 150-pixel-tall image, and save the result in 1.png.
import java.awt.*;
import java.awt.image.*;
import java.awt.font.*;
import javax.imageio.*;
import javax.imageio.stream.*;
import java.io.*;
import java.util.*;
public class FontDisplay {
public static void main(String a[]) throws Exception {
// args: <font_name> <pixel_height> <text> <image_file>
// image file must have supported extension.
int height = Integer.parseInt(a[1]);
String text = a[2];
BufferedImage bi = new BufferedImage(1, 1,
BufferedImage.TYPE_INT_ARGB);
int dpi = Toolkit.getDefaultToolkit().getScreenResolution();
System.out.println("dpi : "+dpi);
float points = (float)height * 72.0F / (float)dpi;
System.out.println("points : "+points);
Map<TextAttribute, Object> m = new HashMap<>();
m.put(TextAttribute.FAMILY, a[0]);
m.put(TextAttribute.SIZE, points);
Font f = Font.getFont(m);
if (f == null) {
throw new Exception("Font "+a[0]+" not found on your system");
}
Graphics2D g = bi.createGraphics();
FontMetrics fm = g.getFontMetrics(f);
int w = fm.charsWidth(text.toCharArray(), 0, text.length());
bi = new BufferedImage(w, height, BufferedImage.TYPE_INT_ARGB);
g = bi.createGraphics();
g.setColor(Color.BLACK);
g.fillRect(0, 0, w, height);
g.setColor(Color.WHITE);
g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
RenderingHints.VALUE_TEXT_ANTIALIAS_LCD_HRGB);
g.setFont(f);
g.drawString(text, 0, fm.getMaxAscent());
String fName = a[3];
String ext = fName.substring(fName.lastIndexOf('.')+1).toLowerCase();
File file = new File(fName);
ImageWriter iw = ImageIO.getImageWritersBySuffix(ext).next();
ImageOutputStream ios = ImageIO.createImageOutputStream(file);
iw.setOutput(ios);
iw.write(bi);
ios.flush();
ios.close();
}
}
Comparing points to pixels is like comparing kilograms to newtons: the conversion depends on a third quantity (there the acceleration, here the resolution). AWT lets you elect a device (screen, printer), but in your case you definitely have to determine the ratio yourself.
You may of course use Photoshop or Gimp and create a normative image for java.
After elaborated question:
Ah, I think I see the misunderstanding. An image itself only concerns pixels, never points, mm, DPI, or the like. (Such information is sometimes attached to the image separately as metadata.)
So if you know the DPI of your device and the physical size in inches you want, the number of dots/pixels follows. The relation points = pixels * 72 / dpi may shed more light.
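That relation can be sketched directly; a point is 1/72 inch and a pixel is 1/dpi inch, so at 72 DPI points and pixels coincide (the DPI values below are just examples):

```java
public class PointsPixels {
    // points = pixels * 72 / dpi
    static float pixelsToPoints(int pixels, int dpi) {
        return pixels * 72.0f / dpi;
    }

    public static void main(String[] args) {
        System.out.println(pixelsToPoints(150, 72)); // 150.0 : at 72 DPI, points == pixels
        System.out.println(pixelsToPoints(96, 96));  // 72.0  : a 96-pixel glyph at 96 DPI is 72pt
    }
}
```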