I am doing some basic experimentation with image filtering using a convolution matrix, based on the Wikipedia page about kernels in image processing.
To compute the RGB transformations, I read the bitmap into a BufferedImage and then get the pixels with getRGB(). While testing the simplest identity filter, I noticed that for one specific picture I was getting grey instead of the original black, while for another picture the black was fine.
After more testing, I found that even without any transform, a simple BufferedImage -> int[] -> BufferedImage round trip produces the greyed result.
What am I missing? ImageMagick's identify shows that both are 8-bit, 256-color pictures without an alpha channel.
betty1.png PNG 339x600 339x600+0+0 8-bit Gray 256c 24526B 0.000u 0:00.000
betty2.jpg JPEG 603x797 603x797+0+0 8-bit Gray 256c 126773B 0.000u 0:00.001
With this picture the result is as expected.
With this one, the result is unexpectedly greyed.
Here is a simple SSCCE test class that shows the problem:
import java.awt.BorderLayout;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.WindowConstants;
/* simple test class for convolution matrix */
public class CopyPic {
public static void main(String args[]) throws FileNotFoundException, IOException {
if (args.length < 1) {
System.err.println("Usage: CopyPic <picture_file>");
System.exit(1);
}
String imgPath = args[0];
String inputName = imgPath.substring(0, imgPath.lastIndexOf("."));
File ifile = new File(imgPath);
InputStream fis_in = new FileInputStream(ifile);
BufferedImage bi_in = ImageIO.read(fis_in);
fis_in.close();
int width = bi_in.getWidth();
int height = bi_in.getHeight();
System.out.println(String.format("%s = %d x %d", imgPath, width, height));
int[] rgb_in = new int[width * height];
bi_in.getRGB(0, 0, width, height, rgb_in, 0, width);
BufferedImage bi_out = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
// for (int y = 0; y < height; y++) {
// for (int x = 0; x < width; x++) {
// bi_out.setRGB(x, y, rgb_out[y * width + x]);
// }
// }
bi_out.setRGB(0, 0, width, height, rgb_in, 0, width);
display(bi_in, bi_out);
String outputName = inputName + "-copy.png";
File ofile = new File(outputName);
OutputStream fos_out = new FileOutputStream(ofile);
ImageIO.write(bi_out, "PNG", fos_out);
fos_out.flush();
fos_out.close();
System.out.println("Wrote " + outputName);
}
// use that to have internal viewer
private static JFrame frame;
private static JLabel label1, label2;
private static void display(BufferedImage img1, BufferedImage img2) {
if (frame == null) {
frame = new JFrame();
frame.setTitle(String.format("%dx%d Original / Copy", img1.getWidth(), img1.getHeight()));
frame.setSize(img1.getWidth() + img2.getWidth(), img1.getHeight());
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
label1 = new JLabel();
label1.setIcon(new ImageIcon(img1));
frame.getContentPane().add(label1, BorderLayout.WEST);
label2 = new JLabel();
label2.setIcon(new ImageIcon(img2));
frame.getContentPane().add(label2, BorderLayout.EAST);
frame.setLocationRelativeTo(null);
frame.pack();
frame.setVisible(true);
} else {
label1.setIcon(new ImageIcon(img1));
label2.setIcon(new ImageIcon(img2));
}
}
}
When ImageIO.read creates a BufferedImage, it uses the type it thinks is best suited to the file. This type might not be what you expect. In particular, for a JPEG image it may well not be an RGB type like the TYPE_INT_RGB you create bi_out with.
This is the case for your second image and becomes evident when you print the type of that image:
System.out.println(bi_in.getType());
For that image, this prints 10 on my machine, which represents TYPE_BYTE_GRAY.
So, to fix your problem you should use:
BufferedImage bi_out = new BufferedImage(width, height, bi_in.getType());
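If you want a copy that works for any input, including types for which the BufferedImage(int, int, int) constructor would fail (getType() can return TYPE_CUSTOM), a hedged alternative sketch is to clone the source's colour model and raster instead of naming a type constant (variable names follow the test class above):
// Build an output image with the same ColorModel and sample layout as the input.
// getRGB converts the source's colour space to sRGB; writing back through the same
// colour model reverses that conversion, so the copy keeps the original appearance.
BufferedImage bi_out = new BufferedImage(
        bi_in.getColorModel(),
        bi_in.getRaster().createCompatibleWritableRaster(width, height),
        bi_in.isAlphaPremultiplied(),
        null);
bi_out.setRGB(0, 0, width, height, rgb_in, 0, width);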
Related
On the website where I took the screenshot, the numbers change continuously.
What I'm trying to do is read the text in the image (which consists of numbers, a dot and an X) using Tesseract via Tess4J in Java.
The problem is that I'm getting inconsistent results: sometimes I get letters, sometimes letters mixed with numbers.
After I blacklisted all letters except X and the special character '.', I now get 4.0 whenever the correct result isn't read from the picture.
I added the code below to grayscale the image, but I'm still getting the same inconsistent results.
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;
public class GrayScalingImage {
public static void main(String args[]) throws Exception {
try {
File inputImage = new File("image.jpg");
BufferedImage image = ImageIO.read(inputImage);
for (int i = 0; i < image.getHeight(); i++) {
for (int j = 0; j < image.getWidth(); j++) {
Color color = new Color(image.getRGB(j, i));
// standard luminance weights: Y = 0.299*R + 0.587*G + 0.114*B
int red = (int) (color.getRed() * 0.299);
int green = (int) (color.getGreen() * 0.587);
int blue = (int) (color.getBlue() * 0.114);
int gray = red + green + blue;
Color newColor = new Color(gray, gray, gray);
image.setRGB(j, i, newColor.getRGB());
}
}
File output = new File("newImage.jpg");
ImageIO.write(image, "jpg", output);
}
catch (Exception e) {
e.printStackTrace();
}
}
}
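For what it's worth, the loop above hand-codes the standard luminance weights (0.299, 0.587, 0.114). A shorter sketch with the same effect, assuming the rest of the pipeline stays unchanged, is to let Java 2D do the conversion by drawing into a TYPE_BYTE_GRAY image, and to save as PNG so that JPEG compression artifacts are not fed into Tesseract:
// Hedged sketch: Java 2D performs the colour-to-gray conversion inside drawImage().
BufferedImage src = ImageIO.read(new File("image.jpg"));
BufferedImage gray = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
Graphics2D g = gray.createGraphics();
g.drawImage(src, 0, 0, null);
g.dispose();
ImageIO.write(gray, "png", new File("newImage.png"));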
Recently I have been trying to implement an image object detection tool based on YOLO. To start with, I used the code here. Everything seems fine except that the program doesn't get past the following line of code (line 72) and never enters the loop:
if (cap.read(frame))
In other words, if a breakpoint is placed at that line, the program won't go to the next step. Any idea how to fix this?
package yoloexample;
import org.opencv.core.*;
import org.opencv.dnn.*;
import org.opencv.utils.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.WritableRaster;
import java.io.ByteArrayInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
public class Yoloexample {
private static List<String> getOutputNames(Net net) {
List<String> names = new ArrayList<>();
List<Integer> outLayers = net.getUnconnectedOutLayers().toList();
List<String> layersNames = net.getLayerNames();
outLayers.forEach((item) -> names.add(layersNames.get(item - 1))); // collect the names of the unconnected output layers of the loaded YOLO model //
System.out.println(names);
return names;
}
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
// TODO code application logic here
System.load("\\opencv\\opencv\\build\\java\\x64\\opencv_java420.dll"); // Load the openCV 4.0 dll //
String modelWeights = "g:\\yolov3.weights"; //Download and load only wights for YOLO , this is obtained from official YOLO site//
String modelConfiguration = "g:\\yolov3.cfg";//Download and load cfg file for YOLO , can be obtained from official site//
String filePath = "test.mp4"; //My video file to be analysed//
VideoCapture cap = new VideoCapture(filePath);// Load video using the videocapture method//
Mat frame = new Mat(); // define a matrix to extract and store pixel info from video//
//cap.read(frame);
JFrame jframe = new JFrame("Video"); // the lines below create a frame to display the resultant video with object detection and localization//
JLabel vidpanel = new JLabel();
jframe.setContentPane(vidpanel);
jframe.setSize(600, 600);
jframe.setVisible(true);// we instantiate the frame here//
Net net = Dnn.readNetFromDarknet(modelConfiguration, modelWeights); //OpenCV DNN supports models trained from various frameworks like Caffe and TensorFlow. It also supports various networks architectures based on YOLO//
//Thread.sleep(5000);
//Mat image = Imgcodecs.imread("D:\\yolo-object-detection\\yolo-object-detection\\images\\soccer.jpg");
Size sz = new Size(288, 288);
List<Mat> result = new ArrayList<>();
List<String> outBlobNames = getOutputNames(net);
while (true) {
if (cap.read(frame)) {
Mat blob = Dnn.blobFromImage(frame, 0.00392, sz, new Scalar(0), true, false); // We feed one frame of video into the network at a time, we have to convert the image to a blob. A blob is a pre-processed image that serves as the input.//
net.setInput(blob);
net.forward(result, outBlobNames); //Feed forward the model to get output //
// outBlobNames.forEach(System.out::println);
// result.forEach(System.out::println);
float confThreshold = 0.6f; //Insert thresholding beyond which the model will detect objects//
List<Integer> clsIds = new ArrayList<>();
List<Float> confs = new ArrayList<>();
List<Rect> rects = new ArrayList<>();
for (int i = 0; i < result.size(); ++i) {
// each row is a candidate detection, the 1st 4 numbers are
// [center_x, center_y, width, height], followed by (N-4) class probabilities
Mat level = result.get(i);
for (int j = 0; j < level.rows(); ++j) {
Mat row = level.row(j);
Mat scores = row.colRange(5, level.cols());
Core.MinMaxLocResult mm = Core.minMaxLoc(scores);
float confidence = (float) mm.maxVal;
Point classIdPoint = mm.maxLoc;
if (confidence > confThreshold) {
int centerX = (int) (row.get(0, 0)[0] * frame.cols()); //scaling for drawing the bounding boxes//
int centerY = (int) (row.get(0, 1)[0] * frame.rows());
int width = (int) (row.get(0, 2)[0] * frame.cols());
int height = (int) (row.get(0, 3)[0] * frame.rows());
int left = centerX - width / 2;
int top = centerY - height / 2;
clsIds.add((int) classIdPoint.x);
confs.add((float) confidence);
rects.add(new Rect(left, top, width, height));
}
}
}
float nmsThresh = 0.5f;
MatOfFloat confidences = new MatOfFloat(Converters.vector_float_to_Mat(confs));
Rect[] boxesArray = rects.toArray(new Rect[0]);
MatOfRect boxes = new MatOfRect(boxesArray);
MatOfInt indices = new MatOfInt();
Dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThresh, indices); //We draw the bounding boxes for objects here//
int[] ind = indices.toArray();
int j = 0;
for (int i = 0; i < ind.length; ++i) {
int idx = ind[i];
Rect box = boxesArray[idx];
Imgproc.rectangle(frame, box.tl(), box.br(), new Scalar(0, 0, 255), 2);
//i=j;
System.out.println(idx);
}
// Imgcodecs.imwrite("D://out.png", image);
//System.out.println("Image Loaded");
ImageIcon image = new ImageIcon(Mat2bufferedImage(frame)); //setting the results into a frame and initializing it //
vidpanel.setIcon(image);
vidpanel.repaint();
System.out.println(j);
System.out.println("Done");
}
}
}
private static BufferedImage Mat2bufferedImage(Mat image) { // Converts an OpenCV Mat into a BufferedImage so a frame can be shown in the JLabel //
MatOfByte bytemat = new MatOfByte();
Imgcodecs.imencode(".jpg", image, bytemat);
byte[] bytes = bytemat.toArray();
InputStream in = new ByteArrayInputStream(bytes);
BufferedImage img = null;
try {
img = ImageIO.read(in);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return img;
}
}
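One thing worth checking before anything else (a hedged diagnostic sketch, not a full fix): VideoCapture.read() keeps returning false whenever the capture was never opened, which typically means the video path is wrong or the OpenCV build cannot find its video backend (for example the opencv_videoio_ffmpeg DLL that ships next to opencv_java420.dll). Testing isOpened() right after construction tells you which case you are in:
VideoCapture cap = new VideoCapture(filePath);
if (!cap.isOpened()) {
    // Wrong path, unsupported codec, or missing video backend on the library path.
    System.err.println("Could not open video: " + filePath);
    return;
}
Mat frame = new Mat();
while (cap.read(frame)) {
    // process the frame exactly as in the loop above
}
cap.release();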
I need to rotate my BufferedImage around 3 axes (x, y and z), by angles given as 3 integers. Are there any native methods in Java? If not, how would I achieve that?
Update #1: I've done some of it with OpenCV... Will update when finished!
Update #2: Since this was just one part of my project, I realized that solving only part of the problem wouldn't be good, so I used OpenCV's getPerspectiveTransform() and warpPerspective() methods from the Imgproc class to transform the image. I basically just ported this code to Java and it works fine :)
I have also changed the thread title to make it fit the actual question/solution.
Code (I used OpenCV 3.1, since it's the latest version):
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgcodecs.Imgcodecs;
public class Main extends JFrame {
private static final long serialVersionUID = 1L;
BufferedImage transformed = null;
//These locations are just the corners of the 4 reference points. I am writing the auto recognition part right now :)
Point p4 = new Point(260, 215);
Point p1 = new Point(412, 221);
Point p2 = new Point(464, 444);
Point p3 = new Point(312, 435);
public Main() {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
File f = new File("FILEPATH ");
MatOfPoint2f corners = new MatOfPoint2f();
Mat src = Imgcodecs.imread(f.getAbsolutePath());
corners.push_back(new MatOfPoint2f(p1));
corners.push_back(new MatOfPoint2f(p2));
corners.push_back(new MatOfPoint2f(p3));
corners.push_back(new MatOfPoint2f(p4));
Point center = new Point(0, 0);
for (int i = 0; i < corners.toArray().length; i++) {
center.x += corners.toArray()[i].x;
center.y += corners.toArray()[i].y;
}
center.x /= corners.toArray().length;
center.y /= corners.toArray().length;
sortCorners(corners, center);
Mat quad = Mat.zeros(1000, 1900, CvType.CV_8U);
MatOfPoint2f quad_pts = new MatOfPoint2f();
quad_pts.push_back(new MatOfPoint2f(new Point(0, 0)));
quad_pts.push_back(new MatOfPoint2f(new Point(quad.width(), 0)));
quad_pts.push_back(new MatOfPoint2f(new Point(quad.width(), quad.height())));
quad_pts.push_back(new MatOfPoint2f(new Point(0, quad.height())));
Mat transmtx = Imgproc.getPerspectiveTransform(corners, quad_pts);
Imgproc.warpPerspective(src, quad, transmtx, quad.size());
transformed = matToBufferedImage(quad);
setSize(500, 500);
setLocationRelativeTo(null);
setDefaultCloseOperation(EXIT_ON_CLOSE);
setVisible(true);
}
public void paint(Graphics g) {
g.clearRect(0, 0, this.getWidth(), this.getHeight());
g.drawImage(transformed, 0, 22, null);
}
public MatOfPoint2f sortCorners(MatOfPoint2f corners, Point center) {
MatOfPoint2f top = new MatOfPoint2f();
MatOfPoint2f bot = new MatOfPoint2f();
for (int i = 0; i < corners.toArray().length; i++) {
if (corners.toArray()[i].y < center.y){
top.push_back(new MatOfPoint2f(corners.toArray()[i]));
}
else
bot.push_back(new MatOfPoint2f(corners.toArray()[i]));
}
Point tl = p4;
Point tr = p1;
Point bl = p2;
Point br = p3;
tl = top.toArray()[0].x > top.toArray()[1].x ? top.toArray()[1] : top.toArray()[0];
tr = top.toArray()[0].x > top.toArray()[1].x ? top.toArray()[0] : top.toArray()[1];
bl = bot.toArray()[0].x > bot.toArray()[1].x ? bot.toArray()[1] : bot.toArray()[0];
br = bot.toArray()[0].x > bot.toArray()[1].x ? bot.toArray()[0] : bot.toArray()[1];
corners.release();
corners.push_back(new MatOfPoint2f(tl));
corners.push_back(new MatOfPoint2f(tr));
corners.push_back(new MatOfPoint2f(br));
corners.push_back(new MatOfPoint2f(bl));
System.out.println(corners.toArray()[0] + ", " + corners.toArray()[1] + ", " + corners.toArray()[2] + ", " + corners.toArray()[3] + ", ");
return corners;
}
public BufferedImage matToBufferedImage(Mat image) {
Mat image_tmp = image;
MatOfByte matOfByte = new MatOfByte();
Imgcodecs.imencode(".jpg", image_tmp, matOfByte);
byte[] byteArray = matOfByte.toArray();
BufferedImage bufImage = null;
try {
InputStream in = new ByteArrayInputStream(byteArray);
bufImage = ImageIO.read(in);
} catch (Exception e) {
e.printStackTrace();
}
return bufImage;
}
}
I think that the TransformJ package does what you want, but I don't think it contains native code.
I'm trying to write strings to images, so it's harder to copy the text and run it through a translator.
My code works fine, but I always get a really long image; I would rather have a more readable box in which the string is written. My StringDivider method does add "\n", but that doesn't help when writing the string to an image.
Right now I get this output.
Any hint what I could do?
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.font.FontRenderContext;
import java.awt.geom.Rectangle2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class writeToImage {
/**
* @param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
String newString = "Mein Eindruck ist, dass die politische und öffentliche Meinung in Deutschland anfängt, die wirtschaftliche Zerstörung im Inland und in Europa zu erkennen, die auf einen eventuellen Zusammenbruch des Euro folgen würde.";
String sampleText = StringDivider(newString);
//Image file name
String fileName = "Image";
//create a File Object
File newFile = new File("./" + fileName + ".jpg");
//create the font you wish to use
Font font = new Font("Tahoma", Font.PLAIN, 15);
//create the FontRenderContext object which helps us to measure the text
FontRenderContext frc = new FontRenderContext(null, true, true);
//get the height and width of the text
Rectangle2D bounds = font.getStringBounds(sampleText, frc);
int w = (int) bounds.getWidth();
int h = (int) bounds.getHeight();
//create a BufferedImage object
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
//calling createGraphics() to get the Graphics2D
Graphics2D g = image.createGraphics();
//set color and other parameters
g.setColor(Color.WHITE);
g.fillRect(0, 0, w, h);
g.setColor(Color.BLACK);
g.setFont(font);
g.drawString(sampleText, (float) bounds.getX(), (float) -bounds.getY());
//releasing resources
g.dispose();
//creating the file
try {
ImageIO.write(image, "jpg", newFile);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
public static String StringDivider(String s){
StringBuilder sb = new StringBuilder(s);
int i = 0;
while ((i = sb.indexOf(" ", i + 30)) != -1) {
sb.replace(i, i + 1, "\n");
}
return sb.toString();
}
}
g.drawString(sampleText, (float) bounds.getX(), (float) -bounds.getY());
drawString() does not interpret "\n", so the whole string ends up on one line. Split the text and write every part to the image separately:
Rectangle2D bounds = font.getStringBounds(sampleText, frc);
int w = (int) bounds.getWidth();
int h = (int) bounds.getHeight();
String[] parts = sampleText.split("\n");
//create a BufferedImage object tall enough for every line
BufferedImage image = new BufferedImage(w, h * parts.length, BufferedImage.TYPE_INT_RGB);
Graphics2D g = image.createGraphics();
g.setFont(font);
int index = 0;
for (String part : parts) {
// drawString's y coordinate is the text baseline, so the first line starts at h, not 0
g.drawString(part, 0, h * ++index);
}
e.g. with a line height h = 5:
first part: x=0 ; y=5
second part: x=0 ; y=10
third part: x=0 ; y=15
Take a look at LineBreakMeasurer. The first code example in the Javadoc is exactly what you're looking for.
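As a rough sketch of that approach, assuming the same Tahoma font from the question and an arbitrary wrapping width of 400 px (the width is an assumption; use whatever box width you want):
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.font.FontRenderContext;
import java.awt.font.LineBreakMeasurer;
import java.awt.font.TextAttribute;
import java.awt.font.TextLayout;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.text.AttributedString;
import javax.imageio.ImageIO;
public class WrappedTextImage {
    public static void main(String[] args) throws IOException {
        String text = "Mein Eindruck ist, dass die politische und öffentliche Meinung in Deutschland anfängt, die wirtschaftliche Zerstörung im Inland und in Europa zu erkennen, die auf einen eventuellen Zusammenbruch des Euro folgen würde.";
        int wrapWidth = 400; // assumed box width in pixels
        Font font = new Font("Tahoma", Font.PLAIN, 15);
        AttributedString styled = new AttributedString(text);
        styled.addAttribute(TextAttribute.FONT, font);
        FontRenderContext frc = new FontRenderContext(null, true, true);
        LineBreakMeasurer measurer = new LineBreakMeasurer(styled.getIterator(), frc);
        // First pass: measure how tall the image has to be for all wrapped lines.
        float height = 0;
        while (measurer.getPosition() < text.length()) {
            TextLayout layout = measurer.nextLayout(wrapWidth);
            height += layout.getAscent() + layout.getDescent() + layout.getLeading();
        }
        BufferedImage image = new BufferedImage(wrapWidth, (int) Math.ceil(height), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, image.getWidth(), image.getHeight());
        g.setColor(Color.BLACK);
        // Second pass: draw each wrapped line at its own baseline.
        measurer.setPosition(0);
        float y = 0;
        while (measurer.getPosition() < text.length()) {
            TextLayout layout = measurer.nextLayout(wrapWidth);
            y += layout.getAscent();
            layout.draw(g, 0, y);
            y += layout.getDescent() + layout.getLeading();
        }
        g.dispose();
        ImageIO.write(image, "png", new File("Image.png"));
    }
}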
I wrote a program that generates a BufferedImage to be displayed on the screen and then printed. Part of the image includes grid lines that are 1 pixel wide. That is, the line is 1 pixel, with about 10 pixels between lines. Because of screen resolution, the image is displayed much bigger than that, with several pixels for each line. I'd like to draw it smaller, but when I scale the image (either by using Image.getScaledInstance or Graphics2D.scale), I lose significant amounts of detail.
I'd like to print the image as well, and am dealing with the same problem. In that case, I am using this code to set the resolution:
HashPrintRequestAttributeSet set = new HashPrintRequestAttributeSet();
PrinterResolution pr = new PrinterResolution(250, 250, ResolutionSyntax.DPI);
set.add(pr);
job.print(set);
which works to make the image smaller without losing detail. But the problem is that the image is cut off at the same boundary as if I hadn't set the resolution. I'm also confused because I expected a larger number of DPI to make a smaller image, but it's working the other way.
I'm using Java 1.6 on Windows 7 with Eclipse.
Regarding the image being cut off at the page boundary, have you checked the clip region of the graphics? Try:
System.out.println(graphics.getClipBounds());
and make sure it is correctly set.
I had the same problem. Here is my solution.
First change the resolution of the print job...
PrinterJob job = PrinterJob.getPrinterJob();
// The target resolution; the paper size below is expressed in pixels at this DPI
double dpi = 300;
// Create the paper size of our preference (1 cm = dpi / 2.54 pixels)
double cmPx300 = dpi / 2.54;
Paper paper = new Paper();
paper.setSize(21.3 * cmPx300, 29.7 * cmPx300);
paper.setImageableArea(0, 0, 21.3 * cmPx300, 29.7 * cmPx300);
PageFormat format = new PageFormat();
format.setPaper(paper);
// Assign a new print renderer and the paper size of our choice!
job.setPrintable(new PrintReport(), format);
if (job.printDialog()) {
try {
HashPrintRequestAttributeSet set = new HashPrintRequestAttributeSet();
PrinterResolution pr = new PrinterResolution((int) dpi, (int) dpi, ResolutionSyntax.DPI);
set.add(pr);
job.setJobName("Jobname");
job.print(set);
} catch (PrinterException e) {
e.printStackTrace();
}
}
Now you can draw everything you like onto the new high-resolution paper, like this:
public class PrintReport implements Printable {
@Override
public int print(Graphics g, PageFormat pf, int page) throws PrinterException {
// Convert pixels to cm to lay your page out easily on the paper
// (dpi and imgFolder are defined elsewhere in the original program)
double cmPx = dpi / 2.54;
Graphics2D g2 = (Graphics2D) g;
int totalPages = 2; // calculate the total pages you have...
if (page < totalPages) {
// Draw page header
try {
BufferedImage image = ImageIO.read(ClassLoader.getSystemResource(imgFolder + "largeImage.png"));
g2.drawImage(image.getScaledInstance((int) (4.8 * cmPx), -1, BufferedImage.SCALE_SMOOTH), (int) (cmPx),
(int) (cmPx), null);
} catch (IOException e) {
e.printStackTrace();
}
// Draw your page as you like...
// End of page
return PAGE_EXISTS;
} else {
return NO_SUCH_PAGE;
}
}
}
It sounds like your problem is that you are making the grid lines part of the BufferedImage and it doesn't look good when scaled. Why not use drawLine() to produce the grid after your image has been drawn?
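As a hedged sketch of that suggestion (the Graphics2D, the scaled image and the variable names are assumptions carried over from the question, not code from the original answer): draw the scaled image first, then overlay the grid at display scale, so the lines always stay one pixel wide.
// srcW is the unscaled image width; destW x destH is the size it is displayed at.
g2d.drawImage(scaledImage, 0, 0, destW, destH, null);
double cell = 10.0 * destW / srcW; // 10-pixel grid spacing in image space, rescaled to display space
g2d.setColor(Color.LIGHT_GRAY);
for (double x = 0; x <= destW; x += cell) {
    g2d.drawLine((int) Math.round(x), 0, (int) Math.round(x), destH);
}
for (double y = 0; y <= destH; y += cell) {
    g2d.drawLine(0, (int) Math.round(y), destW, (int) Math.round(y));
}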
Code for converting an image's dimensions using Java and printing the converted image.
Class: ConvertImageWithDimensionsAndPrint.java
package com.test.convert;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
public class ConvertImageWithDimensionsAndPrint {
private static final int IMAGE_WIDTH = 800;
private static final int IMAGE_HEIGHT = 1000;
public static void main(String[] args) {
try {
String sourceDir = "C:/Images/04-Request-Headers_1.png";
File sourceFile = new File(sourceDir);
String destinationDir = "C:/Images/ConvertedImages/";//Converted images save here
File destinationFile = new File(destinationDir);
if (!destinationFile.exists()) {
destinationFile.mkdir();
}
if (sourceFile.exists()) {
String fileName = sourceFile.getName().replace(".png", "");
BufferedImage bufferedImage = ImageIO.read(sourceFile);
int type = bufferedImage.getType() == 0 ? BufferedImage.TYPE_INT_ARGB : bufferedImage.getType();
BufferedImage resizedImage = new BufferedImage(IMAGE_WIDTH, IMAGE_HEIGHT, type);
Graphics2D graphics2d = resizedImage.createGraphics();
graphics2d.drawImage(bufferedImage, 0, 0, IMAGE_WIDTH, IMAGE_HEIGHT, null);//resize goes here
graphics2d.dispose();
ImageIO.write(resizedImage, "png", new File( destinationDir + fileName +".png" ));
int oldImageWidth = bufferedImage.getWidth();
int oldImageHeight = bufferedImage.getHeight();
System.out.println(sourceFile.getName() +" OldFile with Dimensions: "+ oldImageWidth +"x"+ oldImageHeight);
System.out.println(sourceFile.getName() +" ConvertedFile converted with Dimensions: "+ IMAGE_WIDTH +"x"+ IMAGE_HEIGHT);
//Print the image file
PrintActionListener printActionListener = new PrintActionListener(resizedImage);
printActionListener.run();
} else {
System.err.println(sourceFile.getName() + " does not exist");
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
Reference of PrintActionListener.java
package com.test.convert;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.awt.print.PageFormat;
import java.awt.print.Printable;
import java.awt.print.PrinterException;
import java.awt.print.PrinterJob;
public class PrintActionListener implements Runnable {
private BufferedImage image;
public PrintActionListener(BufferedImage image) {
this.image = image;
}
@Override
public void run() {
PrinterJob printJob = PrinterJob.getPrinterJob();
printJob.setPrintable(new ImagePrintable(printJob, image));
if (printJob.printDialog()) {
try {
printJob.print();
} catch (PrinterException prt) {
prt.printStackTrace();
}
}
}
public class ImagePrintable implements Printable {
private double x, y, width;
private int orientation;
private BufferedImage image;
public ImagePrintable(PrinterJob printJob, BufferedImage image) {
PageFormat pageFormat = printJob.defaultPage();
this.x = pageFormat.getImageableX();
this.y = pageFormat.getImageableY();
this.width = pageFormat.getImageableWidth();
this.orientation = pageFormat.getOrientation();
this.image = image;
}
#Override
public int print(Graphics g, PageFormat pageFormat, int pageIndex) throws PrinterException {
if (pageIndex == 0) {
int pWidth = 0;
int pHeight = 0;
if (orientation == PageFormat.PORTRAIT) {
pWidth = (int) Math.min(width, (double) image.getWidth());
pHeight = pWidth * image.getHeight() / image.getWidth();
} else {
pHeight = (int) Math.min(width, (double) image.getHeight());
pWidth = pHeight * image.getWidth() / image.getHeight();
}
g.drawImage(image, (int) x, (int) y, pWidth, pHeight, null);
return PAGE_EXISTS;
} else {
return NO_SUCH_PAGE;
}
}
}
}
Output:
04-Request-Headers_1.png OldFile with Dimensions: 1224x1584
04-Request-Headers_1.png ConvertedFile converted with Dimensions: 800x1000
After the image is converted, a print dialog opens for printing it. Select the printer from the Name drop-down and click OK.
You can use either of the following rendering hints to improve the quality of the scaling. I believe BICUBIC gives better results but is slower than BILINEAR.
g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
I would also not use Image.getScaledInstance() as it is very slow. I'm not sure about the printing as I'm struggling with similar issues.
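For context, those hints only affect the Graphics2D you actually scale with, so the usual pattern is to draw the source into a new BufferedImage of the target size. A minimal sketch, assuming a loaded BufferedImage src and a hypothetical target size targetW x targetH:
BufferedImage scaled = new BufferedImage(targetW, targetH,
        src.getType() == BufferedImage.TYPE_CUSTOM ? BufferedImage.TYPE_INT_ARGB : src.getType());
Graphics2D g2d = scaled.createGraphics();
g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
g2d.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
g2d.drawImage(src, 0, 0, targetW, targetH, null); // the scaling happens in this call
g2d.dispose();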