I am developing an application where I need to save thumbnails of images into a folder inside the folder that contains the images. The image folder is selected by means of a file chooser.
I am having a problem saving the thumbnails: it throws an error, a FileNotFoundException rather.
The code that I have written is:
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class ThumbnailFactory {
public ThumbnailFactory() {
}
public void run(String folder) {
savepath = folder+"\\thumbnails";
File dir = new File(folder);
for (File file : dir.listFiles()) {
createThumbnail(file);
}
}
private void createThumbnail(File file) {
try {
// BufferedImage is the best (Toolkit images are less flexible)
BufferedImage img = ImageIO.read(file);
BufferedImage thumb = createEmptyThumbnail();
// BufferedImage has a Graphics2D
Graphics2D g2d = (Graphics2D) thumb.getGraphics();
g2d.drawImage(img, 0, 0,
thumb.getWidth() - 1,
thumb.getHeight() - 1,
0, 0,
img.getWidth() - 1,
img.getHeight() - 1,
null);
g2d.dispose();
ImageIO.write(thumb, "PNG", createOutputFile(file));
} catch (Exception e) {
e.printStackTrace();
}
}
private File createOutputFile(File inputFile) throws IOException {
System.out.println(savepath+"\\"+inputFile.getName());
File f = new File(savepath+"\\"+inputFile.getName()+".png");
if(!f.exists())
{
System.out.println("Creating the file in thumbnail directory");
f.createNewFile();
}
return new File(savepath+"\\"+inputFile.getName()+".png") ;
}
private BufferedImage createEmptyThumbnail() {
return new BufferedImage(100, 200,
BufferedImage.TYPE_INT_RGB);
}
private String savepath;
}
It throws a FileNotFoundException (or rather a NullPointerException) in the createOutputFile() method, at the f.createNewFile() call.
The input file is an image in the selected folder. I have to place a thumbnail of this image inside a folder created inside the selected folder.
For example,
if the selected image folder is D:\pictures,
then I need to place a thumbnail of every picture inside D:\pictures into D:\pictures\thumbnails.
Please point out the mistake that I am making and how to correct it.
Rather than writing all of that code yourself, you might look into using the Thumbnailator library. Your entire example can be written in the following few lines, which express your intent much more clearly.
public class ThumbnailFactory {
public void run(String folder) {
Thumbnails.of(new File(folder).listFiles())
.size(100,200)
.outputFormat("png")
.asFiles(Rename.SUFFIX_HYPHEN_THUMBNAIL);
}
}
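For completeness, the direct cause of the original FileNotFoundException is most likely that the thumbnails directory is never created, so neither f.createNewFile() nor ImageIO.write() can create a file under a path that does not exist yet. A minimal sketch of that fix in the original run() method, keeping the rest of the class unchanged:
public void run(String folder) {
    savepath = folder + File.separator + "thumbnails";
    File thumbsDir = new File(savepath);
    if (!thumbsDir.exists()) {
        thumbsDir.mkdirs(); // create e.g. D:\pictures\thumbnails before any file is written into it
    }
    for (File file : new File(folder).listFiles()) {
        if (file.isFile()) { // skip subdirectories, including the new thumbnails folder itself
            createThumbnail(file);
        }
    }
}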
I wanted to play with a static final THUMBNAIL and Graphics2D... That didn't work, but this does, if you can live with the ill-proportioned results of making all images the same size, and therefore shape, regardless of their original dimensions ;-)
package forums;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.FilenameFilter;
import java.io.IOException;
public class ThumbnailFactory
{
private static final String THUMBNAILS_SUBDIR_NAME = File.separator + "thumbnails";
private final File _thumbsSubdir;
private final File _picsDir;
public ThumbnailFactory(String picsDirectoryPath) {
_picsDir = new File(picsDirectoryPath);
_thumbsSubdir = new File(thumbDirectoryPath(_picsDir));
}
private static String thumbDirectoryPath(File picsDir) {
return picsDir.getAbsolutePath()+THUMBNAILS_SUBDIR_NAME;
}
public void createThumbnails() throws IOException {
if (!_thumbsSubdir.exists()) {
_thumbsSubdir.mkdir();
}
for (File picFile : _picsDir.listFiles(
new FilenameFilter() {
public boolean accept(File f, String s) {
return s.toLowerCase().endsWith(".jpg");
}
}
)) {
if ( !createThumbnail(picFile, new File(thumbFilename(picFile))) )
break;
}
}
private String thumbFilename(File pictureFile) {
return _thumbsSubdir.getAbsolutePath()
+ File.separator
+ pictureFile.getName()
+ ".png";
}
private boolean createThumbnail(File pictureFile, File thumbFile)
throws IOException
{
boolean retval = false;
BufferedImage image = new BufferedImage(100, 200, BufferedImage.TYPE_INT_RGB);
Graphics2D g2d = (Graphics2D) image.getGraphics();
BufferedImage picture = ImageIO.read(pictureFile);
if (picture!=null) {
g2d.drawImage(
picture
, 0, 0, image.getWidth()-1, image.getHeight()-1
, 0, 0, picture.getWidth()-1, picture.getHeight()-1
, null
);
retval = ImageIO.write(image, "PNG", thumbFile);
System.out.println(thumbFile);
}
return retval;
}
public static void main(String... args) {
try {
ThumbnailFactory factory = new ThumbnailFactory("C:/Users/Administrator/Pictures");
factory.createThumbnails();
} catch (Exception e) {
e.printStackTrace();
}
}
}
I'm glad you got it sorted out ;-)
Cheers. Keith.
I am trying to read image objects from a PDF document. The image comes out as a black background with white text. How can I reverse that? The image in the PDF has a white foreground and a black background.
Here is the code; the main piece is loading the image component:
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.Image;
import com.itextpdf.text.pdf.PRStream;
import com.itextpdf.text.pdf.PdfDictionary;
import com.itextpdf.text.pdf.PdfName;
import com.itextpdf.text.pdf.parser.ImageRenderInfo;
import com.itextpdf.text.pdf.parser.PdfImageObject;
import com.itextpdf.text.pdf.parser.RenderListener;
import com.itextpdf.text.pdf.parser.TextRenderInfo;
public class MyImageRenderListener implements RenderListener {
/**
* The new document to which we've added a border rectangle.
*/
protected String path = "";
/**
* Creates a RenderListener that will look for images.
*/
public MyImageRenderListener(String path) {
this.path = path;
}
/**
* @see com.itextpdf.text.pdf.parser.RenderListener#beginTextBlock()
*/
public void beginTextBlock() {
}
/**
* @see com.itextpdf.text.pdf.parser.RenderListener#endTextBlock()
*/
public void endTextBlock() {
}
/**
* @see com.itextpdf.text.pdf.parser.RenderListener#renderImage(com.itextpdf.text.pdf.parser.ImageRenderInfo)
*/
public void renderImage(final ImageRenderInfo renderInfo) {
try {
String filename;
FileOutputStream os;
PdfImageObject image = renderInfo.getImage();
PdfImageObject tmp = null;
PdfName filter = (PdfName) image.get(PdfName.FILTER);
///
PdfDictionary imageDictionary = image.getDictionary();
// Try SMASK, SMASKINDATA
PRStream maskStream = (PRStream) imageDictionary.getAsStream(PdfName.SMASK);
// todo - required - black white - fix
PdfImageObject maskImage = new PdfImageObject(maskStream);
image = maskImage;
if (PdfName.DCTDECODE.equals(filter)) {
filename = String.format(path, renderInfo.getRef().getNumber(), "jpg");
os = new FileOutputStream(filename);
os.write(image.getImageAsBytes());
os.flush();
os.close();
} else if (PdfName.JPXDECODE.equals(filter)) {
filename = String.format(path, renderInfo.getRef().getNumber(), "jp2");
os = new FileOutputStream(filename);
os.write(image.getImageAsBytes());
os.flush();
os.close();
} else if (PdfName.JBIG2DECODE.equals(filter)) {
// ignore: filter not supported.
} else {
BufferedImage awtimage = renderInfo.getImage().getBufferedImage();
if (awtimage != null) {
filename = String.format(path, renderInfo.getRef().getNumber(), "png");
ImageIO.write(awtimage, "png", new FileOutputStream(filename));
}
}
try {
final String newfile = String.format(path, renderInfo.getRef().getNumber(), ".x.", "png");
BufferedImage bi = image.getBufferedImage();
BufferedImage newBi = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_USHORT_GRAY);
newBi.getGraphics().drawImage(bi, 0, 0, null);
ImageIO.write(newBi, "png", new FileOutputStream(newfile));
} catch(final Exception e) {
e.printStackTrace();
}
} catch (IOException e) {
e.printStackTrace();
}
}
public static Image makeBlackAndWhitePng(PdfImageObject image) throws IOException, DocumentException {
BufferedImage bi = image.getBufferedImage();
BufferedImage newBi = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_USHORT_GRAY);
newBi.getGraphics().drawImage(bi, 0, 0, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(newBi, "png", baos);
return Image.getInstance(baos.toByteArray());
}
/**
* @see com.itextpdf.text.pdf.parser.RenderListener#renderText(com.itextpdf.text.pdf.parser.TextRenderInfo)
*/
public void renderText(TextRenderInfo renderInfo) {
}
}
And here is the code around loading the pages:
public static void readImages(final PdfReader reader, final File filex) throws IOException {
for (int i = 0; i < reader.getXrefSize(); i++) {
PdfObject pdfobj = reader.getPdfObject(i);
if (pdfobj == null || !pdfobj.isStream()) {
continue;
}
PdfStream stream = (PdfStream) pdfobj;
PdfObject pdfsubtype = stream.get(PdfName.SUBTYPE);
if (pdfsubtype != null && pdfsubtype.toString().equals(PdfName.IMAGE.toString())) {
byte[] img = PdfReader.getStreamBytesRaw((PRStream) stream);
FileOutputStream out = new FileOutputStream(new File(filex.getParentFile(), String.format("%1$05d", i) + ".jpg"));
out.write(img);
out.flush();
out.close();
}
}
}
To combine my comments:
What you see is merely the soft mask of the image, which contains the transparency information: white = opaque, black = transparent. The base image actually is all black, but that black is only drawn where the mask indicates opaque, so it looks reversed.
You can use image manipulation libraries to invert a bitmap. Simply googling for "imageio invert image" returns numerous interesting matches.
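As an illustration, here is a minimal sketch of inverting a BufferedImage with plain java.awt, no extra library (the class name is just for this example): every RGB channel is flipped and the alpha channel is preserved.
import java.awt.image.BufferedImage;

public final class InvertUtil {
    private InvertUtil() {
    }

    // Returns a new image with every RGB channel inverted; the alpha channel is kept as-is.
    public static BufferedImage invert(BufferedImage src) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int argb = src.getRGB(x, y);
                int a = (argb >>> 24) & 0xFF;
                int r = 255 - ((argb >> 16) & 0xFF);
                int g = 255 - ((argb >> 8) & 0xFF);
                int b = 255 - (argb & 0xFF);
                out.setRGB(x, y, (a << 24) | (r << 16) | (g << 8) | b);
            }
        }
        return out;
    }
}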
Currently I'm trying to load a QR code into a buffer so it would display on the spot.
This is my current revision of the code.
package wallettemplate;
import com.google.zxing.*;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.common.HybridBinarizer;
import com.google.zxing.qrcode.QRCodeWriter;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import static wallettemplate.MainPage.kit;
public class BackEnd{
public static final String QR_CODE_IMAGE_PATH = "./MyQRCode.png";
public static String decodeQRCode(File qrCodeimage) throws IOException {
BufferedImage bufferedImage = ImageIO.read(qrCodeimage);
LuminanceSource source = new BufferedImageLuminanceSource(bufferedImage);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
try {
Result result = new MultiFormatReader().decode(bitmap);
return result.getText();
} catch (NotFoundException e) {
System.out.println("There is no QR code in the image");
return null;
}
}
public static void generateQRCodeImage(String text, int width, int height, String filePath) throws WriterException, IOException {
QRCodeWriter qrCodeWriter = new QRCodeWriter();
BitMatrix bitMatrix = qrCodeWriter.encode(text, BarcodeFormat.QR_CODE, width, height);
Path path = FileSystems.getDefault().getPath(filePath);
MatrixToImageWriter.writeToPath(bitMatrix, "PNG", path);
}
public static String gbal(){
String balance = kit.wallet().getBalance().toFriendlyString();
return balance;
}
public static String purchase(double amount){
String text;
try {
generateQRCodeImage("bitcoin:"+kit.wallet().currentReceiveAddress().toString()+"?amount="+String.format("%.7f",amount), 20, 20, QR_CODE_IMAGE_PATH);
text = "QR Code generated. - "+kit.wallet().currentReceiveAddress().toString();
} catch (WriterException e) {
text = "Could not generate QR Code, WriterException :: " + e.getMessage();
} catch (IOException e) {
text = "Could not generate QR Code, IOException :: " + e.getMessage();
}
return text;
}
public static String refund(){
String text;
try {
File file = new File(QR_CODE_IMAGE_PATH);
String decodedText = decodeQRCode(file);
if(decodedText == null) {
text = "No QR Code found in the image";
} else {
//text = "Decoded text = " + decodedText + " and the amount is "+value;
String[] parts = decodedText.split(":");
String[] amnt = parts[1].split("\\?");
String[] finn = amnt[1].split("=");
text = "Type: " + parts[0] + "\nAddress: " + amnt[0] + "\nAmount: " + finn[1];
String[] linkparam = {parts[0],amnt[0],finn[1]};
text = text + "\n" + linkparam[0] + " - " + linkparam[1] + " - " + linkparam[2];
}
} catch (IOException e) {
text = "Could not decode QR Code, IOException :: " + e.getMessage();
}
return text;
}
}
Above is the class that handles all the functions I need.
When I load the next content pane, it doesn't load the QR code that gets stored. But when I shut the program down, start it again, and put in a new amount, it shows the QR code from last time.
I call it with this:
btnConfirm.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
BackEnd.purchase(Double.parseDouble(lblfinal.getText()));
parentForm.showPanel(MainPage.PPROCESS);
}
});
The QR code gets generated fine, no issues.
When I read it into an image inside a JLabel, I use this code.
private void createUIComponents() {
ImageIcon imageIcon = new ImageIcon("MyQRCode.png");
Image image = imageIcon.getImage();
Image newimg = image.getScaledInstance(200, 200, java.awt.Image.SCALE_SMOOTH);
imageIcon = new ImageIcon(newimg);
lblQRCode = new JLabel(imageIcon);
}
These two segments are in different classes. How can I get it to read the CURRENT one I actually need?
The answer is as follows.
public static BufferedImage getQRCodeImage(String amount) throws WriterException{
QRCodeWriter qrCodeWriter = new QRCodeWriter();
BitMatrix bitMatrix = qrCodeWriter.encode("bitcoin:"+kit.wallet().currentReceiveAddress().toString()+"?amount="+String.format("%.7f",Double.parseDouble(amount)), BarcodeFormat.QR_CODE, 200, 200);
return MatrixToImageWriter.toBufferedImage(bitMatrix);
}
imageIcon.setImage(BackEnd.getQRCodeImage(cost));
lblQRCode.setIcon(imageIcon);
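For reference, a minimal sketch of how createUIComponents() could use that in-memory image instead of re-reading MyQRCode.png from disk; it assumes cost is a String field in the panel class holding the current amount (as in the call above), and that com.google.zxing.WriterException is imported.
private void createUIComponents() {
    try {
        // Build the QR code in memory so the label always shows the code for the
        // current amount, instead of whatever MyQRCode.png contained at startup.
        lblQRCode = new JLabel(new ImageIcon(BackEnd.getQRCodeImage(cost)));
    } catch (WriterException e) {
        lblQRCode = new JLabel("Could not generate QR code: " + e.getMessage());
    }
}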
I'm developing an Android application for face recognition, using JavaCV, which is an unofficial wrapper of OpenCV. After importing com.googlecode.javacv.cpp.opencv_contrib.FaceRecognizer,
I apply and test the following known methods:
LBPH using the createLBPHFaceRecognizer() method
FisherFaces using the createFisherFaceRecognizer() method
EigenFaces using the createEigenFaceRecognizer() method
Before I recognize the detected face, I correct the rotation of the face and crop the proper zone, taking inspiration from this method.
In general, when a face that already exists in the database passes in front of the camera, the recognition is OK. But it is not always correct. Sometimes it recognizes an unknown face (one not found in the database of trained samples) with a high probability. When the DB contains two or more faces with similar features (beard, mustache, glasses...), the recognition may badly confuse those faces!
To predict the result using the test face image, I apply the following code:
public String predict(Mat m) {
int n[] = new int[1];
double p[] = new double[1];
IplImage ipl = MatToIplImage(m,WIDTH, HEIGHT);
faceRecognizer.predict(ipl, n, p);
if (n[0]!=-1)
mProb=(int)p[0];
else
mProb=-1;
if (n[0] != -1)
return labelsFile.get(n[0]);
else
return "Unkown";
}
I can't control the threshold of the probability p, because:
A small p < 50 can come with a correct prediction.
A high p > 70 can come with a false prediction.
A middle p can come with either.
Also, I don't understand why the predict() function sometimes gives a probability greater than 100 when using LBPH, and very large values (>2000) when using Fisher and Eigen.
Can someone help in finding a solution for these bizarre problems?
Is there any suggestion to improve the robustness of the recognition, especially in the case of two different but similar-looking faces?
The following is the entire class using FaceRecognizer:
package org.opencv.javacv.facerecognition;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_contrib.*;
import java.io.File;
import java.io.FileOutputStream;
import java.io.FilenameFilter;
import java.util.ArrayList;
import org.opencv.android.Utils;
import org.opencv.core.Mat;
import com.googlecode.javacv.cpp.opencv_imgproc;
import com.googlecode.javacv.cpp.opencv_contrib.FaceRecognizer;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import com.googlecode.javacv.cpp.opencv_core.MatVector;
import android.graphics.Bitmap;
import android.os.Environment;
import android.util.Log;
import android.widget.Toast;
public class PersonRecognizer {
public final static int MAXIMG = 100;
FaceRecognizer faceRecognizer;
String mPath;
int count=0;
labels labelsFile;
static final int WIDTH= 128;
static final int HEIGHT= 128;
private int mProb=999;
PersonRecognizer(String path)
{
faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(2,8,8,8,200);
// path=Environment.getExternalStorageDirectory()+"/facerecog/faces/";
mPath=path;
labelsFile= new labels(mPath);
}
void changeRecognizer(int nRec)
{
switch(nRec) {
case 0: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(1,8,8,8,100);
break;
case 1: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createFisherFaceRecognizer();
break;
case 2: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createEigenFaceRecognizer();
break;
}
train();
}
void add(Mat m, String description) {
Bitmap bmp= Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(m,bmp);
bmp= Bitmap.createScaledBitmap(bmp, WIDTH, HEIGHT, false);
FileOutputStream f;
try {
f = new FileOutputStream(mPath+description+"-"+count+".jpg",true);
count++;
bmp.compress(Bitmap.CompressFormat.JPEG, 100, f);
f.close();
} catch (Exception e) {
Log.e("error",e.getCause()+" "+e.getMessage());
e.printStackTrace();
}
}
public boolean train() {
File root = new File(mPath);
Log.i("mPath",mPath);
FilenameFilter pngFilter = new FilenameFilter() {
public boolean accept(File dir, String name) {
return name.toLowerCase().endsWith(".jpg");
};
};
File[] imageFiles = root.listFiles(pngFilter);
MatVector images = new MatVector(imageFiles.length);
int[] labels = new int[imageFiles.length];
int counter = 0;
int label;
IplImage img=null;
IplImage grayImg;
int i1=mPath.length();
for (File image : imageFiles) {
String p = image.getAbsolutePath();
img = cvLoadImage(p);
if (img==null)
Log.e("Error","Error cVLoadImage");
Log.i("image",p);
int i2=p.lastIndexOf("-");
int i3=p.lastIndexOf(".");
int icount=Integer.parseInt(p.substring(i2+1,i3));
if (count<icount) count++;
String description=p.substring(i1,i2);
if (labelsFile.get(description)<0)
labelsFile.add(description, labelsFile.max()+1);
label = labelsFile.get(description);
grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
cvCvtColor(img, grayImg, CV_BGR2GRAY);
images.put(counter, grayImg);
labels[counter] = label;
counter++;
}
if (counter>0)
if (labelsFile.max()>1)
faceRecognizer.train(images, labels);
labelsFile.Save();
return true;
}
public boolean canPredict()
{
if (labelsFile.max()>1)
return true;
else
return false;
}
public String predict(Mat m) {
if (!canPredict())
return "";
int n[] = new int[1];
double p[] = new double[1];
IplImage ipl = MatToIplImage(m,WIDTH, HEIGHT);
// IplImage ipl = MatToIplImage(m,-1, -1);
faceRecognizer.predict(ipl, n, p);
if (n[0]!=-1)
mProb=(int)p[0];
else
mProb=-1;
// if ((n[0] != -1)&&(p[0]<95))
if (n[0] != -1)
return labelsFile.get(n[0]);
else
return "Unkown";
}
IplImage MatToIplImage(Mat m,int width,int heigth)
{
Bitmap bmp=Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(m, bmp);
return BitmapToIplImage(bmp,width, heigth);
}
IplImage BitmapToIplImage(Bitmap bmp, int width, int height) {
if ((width != -1) || (height != -1)) {
Bitmap bmp2 = Bitmap.createScaledBitmap(bmp, width, height, false);
bmp = bmp2;
}
IplImage image = IplImage.create(bmp.getWidth(), bmp.getHeight(),
IPL_DEPTH_8U, 4);
bmp.copyPixelsToBuffer(image.getByteBuffer());
IplImage grayImg = IplImage.create(image.width(), image.height(),
IPL_DEPTH_8U, 1);
cvCvtColor(image, grayImg, opencv_imgproc.CV_BGR2GRAY);
return grayImg;
}
protected void SaveBmp(Bitmap bmp,String path)
{
FileOutputStream file;
try {
file = new FileOutputStream(path , true);
bmp.compress(Bitmap.CompressFormat.JPEG,100,file);
file.close();
}
catch (Exception e) {
// TODO Auto-generated catch block
Log.e("",e.getMessage()+e.getCause());
e.printStackTrace();
}
}
public void load() {
train();
}
public int getProb() {
// TODO Auto-generated method stub
return mProb;
}
}
I think you need to implement something that is more robust to illumination changes; see: Illumination normalization in OpenCV.
Then, in order to manage similarity between images, maybe you can use something like Principal Component Analysis.
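As one concrete starting point, here is a minimal sketch of a simple normalization step (histogram equalization) using the same JavaCV static imports as the PersonRecognizer class above; the grayscale face image would be passed through this before being handed to train() and predict(). The class name is just for illustration, and this is only one piece of the normalization described in the linked answer.
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import com.googlecode.javacv.cpp.opencv_core.IplImage;

public class IlluminationUtil {
    // Returns a histogram-equalized copy of a single-channel 8-bit image,
    // which reduces the effect of global lighting changes on LBPH/Fisher/Eigen.
    public static IplImage equalize(IplImage gray) {
        IplImage normalized = IplImage.create(gray.width(), gray.height(), IPL_DEPTH_8U, 1);
        cvEqualizeHist(gray, normalized);
        return normalized;
    }
}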
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.ImageIO;
import javax.swing.Icon;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import common.ResourcesToAccess;
public class RecordingStartingThread extends Thread{
@Override
public void run(){
JFrame f = new JFrame();
ImageIcon reel = new ImageIcon("src/images/reel.GIF");
JLabel label = new JLabel(reel);
reel.setImageObserver(label);
f.getContentPane().add(label);
f.setUndecorated(true);
f.setSize(300, 300);
f.setVisible(true);
}
public static void main(String[] args) {
new RecordingStartingThread().start();
}
}
Issue: The GIF plays extremely fast.
Question: How do I make sure that the GIF plays at a normal speed?
As for the GIF playback speed - I've encountered this problem too. If I remember correctly, it was caused by the "default" (or not provided?) value for the frame rate in the GIF file. Some web browsers overrode that frame rate so that the GIF played correctly.
As a result, I created a class that converts the GIF (read GIF -> write GIF) and gives it the frame rate provided by the user. The com.madgag.gif.fmsware.AnimatedGifEncoder class comes from an external library that I link to the project via Maven: animated-gif-lib-1.0.jar
public final class GIFUtils {
private GIFUtils() {
}
public static List<BufferedImage> extractFrames(String filePath) throws IOException {
return extractFrames(new File(filePath));
}
public static List<BufferedImage> extractFrames(File file) throws IOException {
List<BufferedImage> imgs = new LinkedList<BufferedImage>();
ImageReader reader = ImageIO.getImageReadersBySuffix("GIF").next();
ImageInputStream in = null;
try {
in = ImageIO.createImageInputStream(new FileInputStream(file));
reader.setInput(in);
BufferedImage img = null;
int count = reader.getNumImages(true);
for(int i = 0; i < count; i++) {
Node tree = reader.getImageMetadata(i).getAsTree("javax_imageio_gif_image_1.0");
int x = Integer.valueOf(tree.getChildNodes().item(0).getAttributes()
.getNamedItem("imageLeftPosition").getNodeValue());
int y = Integer.valueOf(tree.getChildNodes().item(0).getAttributes()
.getNamedItem("imageTopPosition").getNodeValue());
BufferedImage image = reader.read(i);
if(img == null) {
img = new BufferedImage(image.getWidth() + x, image.getHeight() + y,
BufferedImage.TYPE_4BYTE_ABGR);
}
Graphics2D g = img.createGraphics();
ImageUtils.setBestRenderHints(g);
g.drawImage(image, x, y, null);
imgs.add(ImageUtils.copy(img));
}
}
finally {
if(in != null) {
in.close();
}
}
return imgs;
}
public static void writeGif(List<BufferedImage> images, File gifFile, int millisForFrame)
throws FileNotFoundException, IOException {
BufferedImage firstImage = images.get(0);
int type = firstImage.getType();
ImageOutputStream output = new FileImageOutputStream(gifFile);
// create a GIF sequence with the type of the first image, using the frame
// delay requested by the caller
GifSequenceWriter writer = new GifSequenceWriter(output, type, millisForFrame, false);
// write out the first image to our sequence...
writer.writeToSequence(firstImage);
for(int i = 1; i < images.size(); i++) {
BufferedImage nextImage = images.get(i);
writer.writeToSequence(nextImage);
}
writer.close();
output.close();
}
public static Image createGif(List<BufferedImage> images, int millisForFrame) {
AnimatedGifEncoder g = new AnimatedGifEncoder();
ByteArrayOutputStream out = new ByteArrayOutputStream(5 * 1024 * 1024);
g.start(out);
g.setDelay(millisForFrame);
g.setRepeat(1);
for(BufferedImage i : images) {
g.addFrame(i);
}
g.finish();
byte[] bytes = out.toByteArray();
return Toolkit.getDefaultToolkit().createImage(bytes);
}
}
And GifSequenceWriter looks like this:
public class GifSequenceWriter {
protected ImageWriter gifWriter;
protected ImageWriteParam imageWriteParam;
protected IIOMetadata imageMetaData;
public GifSequenceWriter(ImageOutputStream outputStream, int imageType, int timeBetweenFramesMS,
boolean loopContinuously) throws IIOException, IOException {
gifWriter = getWriter();
imageWriteParam = gifWriter.getDefaultWriteParam();
ImageTypeSpecifier imageTypeSpecifier = ImageTypeSpecifier.createFromBufferedImageType(imageType);
imageMetaData = gifWriter.getDefaultImageMetadata(imageTypeSpecifier, imageWriteParam);
String metaFormatName = imageMetaData.getNativeMetadataFormatName();
IIOMetadataNode root = (IIOMetadataNode) imageMetaData.getAsTree(metaFormatName);
IIOMetadataNode graphicsControlExtensionNode = getNode(root, "GraphicControlExtension");
graphicsControlExtensionNode.setAttribute("disposalMethod", "none");
graphicsControlExtensionNode.setAttribute("userInputFlag", "FALSE");
graphicsControlExtensionNode.setAttribute("transparentColorFlag", "FALSE");
graphicsControlExtensionNode.setAttribute("delayTime", Integer.toString(timeBetweenFramesMS / 10));
graphicsControlExtensionNode.setAttribute("transparentColorIndex", "0");
IIOMetadataNode commentsNode = getNode(root, "CommentExtensions");
commentsNode.setAttribute("CommentExtension", "Created by MAH");
IIOMetadataNode appEntensionsNode = getNode(root, "ApplicationExtensions");
IIOMetadataNode child = new IIOMetadataNode("ApplicationExtension");
child.setAttribute("applicationID", "NETSCAPE");
child.setAttribute("authenticationCode", "2.0");
int loop = loopContinuously ? 0 : 1;
child.setUserObject(new byte[] { 0x1, (byte) (loop & 0xFF), (byte) (loop >> 8 & 0xFF) });
appEntensionsNode.appendChild(child);
imageMetaData.setFromTree(metaFormatName, root);
gifWriter.setOutput(outputStream);
gifWriter.prepareWriteSequence(null);
}
public void writeToSequence(RenderedImage img) throws IOException {
gifWriter.writeToSequence(new IIOImage(img, null, imageMetaData), imageWriteParam);
}
public void close() throws IOException {
gifWriter.endWriteSequence();
}
private static ImageWriter getWriter() throws IIOException {
Iterator<ImageWriter> iter = ImageIO.getImageWritersBySuffix("gif");
if(!iter.hasNext()) {
throw new IIOException("No GIF Image Writers Exist");
}
return iter.next();
}
private static IIOMetadataNode getNode(IIOMetadataNode rootNode, String nodeName) {
int nNodes = rootNode.getLength();
for(int i = 0; i < nNodes; i++) {
if(rootNode.item(i).getNodeName().compareToIgnoreCase(nodeName) == 0) {
return (IIOMetadataNode) rootNode.item(i);
}
}
IIOMetadataNode node = new IIOMetadataNode(nodeName);
rootNode.appendChild(node);
return node;
}
}
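A minimal usage sketch of the two classes above (the file paths are placeholders): read the frames of the too-fast GIF and write them back with an explicit 100 ms delay per frame.
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.List;

public class FixGifSpeed {
    public static void main(String[] args) throws Exception {
        // Extract all frames, then re-encode them with the frame delay we actually want.
        List<BufferedImage> frames = GIFUtils.extractFrames("src/images/reel.GIF");
        GIFUtils.writeGif(frames, new File("src/images/reel-fixed.gif"), 100);
    }
}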
The easiest fix for this problem is to just create a .gif file which is "well-formed", i.e. contains the appropriate frame rate. This can be achieved with the help of various online converters, just google "gif speed changer".
As simple as the title states: can you use only Java commands to take a screenshot and save it? Or do I need to use an OS-specific program to take the screenshot and then grab it off the clipboard?
Believe it or not, you can actually use java.awt.Robot to "create an image containing pixels read from the screen." You can then write that image to a file on disk.
I just tried it, and the whole thing ends up like:
Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage capture = new Robot().createScreenCapture(screenRect);
ImageIO.write(capture, "bmp", new File(args[0]));
NOTE: This will only capture the primary monitor. See GraphicsConfiguration for multi-monitor support.
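For reference, here is the same snippet as a complete, compilable class (a minimal sketch; the fallback output filename is my own placeholder):
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class Screenshot {
    public static void main(String[] args) throws Exception {
        // Capture the primary screen into a BufferedImage and save it as a PNG file.
        Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage capture = new Robot().createScreenCapture(screenRect);
        ImageIO.write(capture, "png", new File(args.length > 0 ? args[0] : "screenshot.png"));
    }
}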
I never liked using Robot, so I made my own simple method for making screenshots of JFrame objects:
public static final void makeScreenshot(JFrame argFrame) {
Rectangle rec = argFrame.getBounds();
BufferedImage bufferedImage = new BufferedImage(rec.width, rec.height, BufferedImage.TYPE_INT_ARGB);
argFrame.paint(bufferedImage.getGraphics());
try {
// Create temp file
File temp = File.createTempFile("screenshot", ".png");
// Use the ImageIO API to write the bufferedImage to a temporary file
ImageIO.write(bufferedImage, "png", temp);
// Delete temp file when program exits
temp.deleteOnExit();
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
If you'd like to capture all monitors, you can use the following code:
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice[] screens = ge.getScreenDevices();
Rectangle allScreenBounds = new Rectangle();
for (GraphicsDevice screen : screens) {
Rectangle screenBounds = screen.getDefaultConfiguration().getBounds();
allScreenBounds.width += screenBounds.width;
allScreenBounds.height = Math.max(allScreenBounds.height, screenBounds.height);
}
Robot robot = new Robot();
BufferedImage screenShot = robot.createScreenCapture(allScreenBounds);
public void captureScreen(String fileName) throws Exception {
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
Rectangle screenRectangle = new Rectangle(screenSize);
Robot robot = new Robot();
BufferedImage image = robot.createScreenCapture(screenRectangle);
ImageIO.write(image, "png", new File(fileName));
}
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.*;
public class HelloWorldFrame extends JFrame implements ActionListener {
JButton b;
public HelloWorldFrame() {
this.setVisible(true);
this.setLayout(null);
b = new JButton("Click Here");
b.setBounds(380, 290, 120, 60);
b.setBackground(Color.red);
b.setVisible(true);
b.addActionListener(this);
add(b);
setSize(1000, 700);
}
public void actionPerformed(ActionEvent e)
{
if (e.getSource() == b)
{
this.dispose();
try {
Thread.sleep(1000);
Toolkit tk = Toolkit.getDefaultToolkit();
Dimension d = tk.getScreenSize();
Rectangle rec = new Rectangle(0, 0, d.width, d.height);
Robot ro = new Robot();
BufferedImage img = ro.createScreenCapture(rec);
File f = new File("myimage.jpg");//set appropriate path
ImageIO.write(img, "jpg", f);
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
}
public static void main(String[] args) {
HelloWorldFrame obj = new HelloWorldFrame();
}
}
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice[] screens = ge.getScreenDevices();
Rectangle allScreenBounds = new Rectangle();
for (GraphicsDevice screen : screens) {
Rectangle screenBounds = screen.getDefaultConfiguration().getBounds();
allScreenBounds.width += screenBounds.width;
allScreenBounds.height = Math.max(allScreenBounds.height, screenBounds.height);
allScreenBounds.x=Math.min(allScreenBounds.x, screenBounds.x);
allScreenBounds.y=Math.min(allScreenBounds.y, screenBounds.y);
}
Robot robot = new Robot();
BufferedImage bufferedImage = robot.createScreenCapture(allScreenBounds);
File file = new File("C:\\Users\\Joe\\Desktop\\scr.png");
if(!file.exists())
file.createNewFile();
FileOutputStream fos = new FileOutputStream(file);
ImageIO.write( bufferedImage, "png", fos );
bufferedImage will contain a full screenshot; this was tested with three monitors.
You can use java.awt.Robot to achieve this task.
Below is the code of the server, which saves the captured screenshot as an image in your directory.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.sql.SQLException;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.imageio.ImageIO;
public class ServerApp extends Thread
{
private ServerSocket serverSocket=null;
private static Socket server = null;
private Date date = null;
private static final String DIR_NAME = "screenshots";
public ServerApp() throws IOException, ClassNotFoundException, Exception{
serverSocket = new ServerSocket(61000);
serverSocket.setSoTimeout(180000);
}
public void run()
{
while(true)
{
try
{
server = serverSocket.accept();
date = new Date();
DateFormat dateFormat = new SimpleDateFormat("_yyMMdd_HHmmss");
String fileName = server.getInetAddress().getHostName().replace(".", "-");
System.out.println(fileName);
BufferedImage img=ImageIO.read(ImageIO.createImageInputStream(server.getInputStream()));
ImageIO.write(img, "png", new File("D:\\screenshots\\"+fileName+dateFormat.format(date)+".png"));
System.out.println("Image received!!!!");
//lblimg.setIcon(img);
}
catch(SocketTimeoutException st)
{
System.out.println("Socket timed out!"+st.toString());
//createLogFile("[stocktimeoutexception]"+stExp.getMessage());
break;
}
catch(IOException e)
{
e.printStackTrace();
break;
}
catch(Exception ex)
{
System.out.println(ex);
}
}
}
public static void main(String [] args) throws IOException, SQLException, ClassNotFoundException, Exception{
ServerApp serverApp = new ServerApp();
serverApp.createDirectory(DIR_NAME);
Thread thread = new Thread(serverApp);
thread.start();
}
private void createDirectory(String dirName) {
File newDir = new File("D:\\"+dirName);
if(!newDir.exists()){
boolean isCreated = newDir.mkdir();
}
}
}
And this is the client code, which runs on a thread and captures a screenshot of the user's screen every few minutes.
package com.viremp.client;
import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.Socket;
import java.util.Random;
import javax.imageio.ImageIO;
public class ClientApp implements Runnable {
private static long nextTime = 0;
private static ClientApp clientApp = null;
private String serverName = "192.168.100.18"; // server IP
private int portNo = 61000;
//private Socket serverSocket = null;
/**
* @param args
* @throws InterruptedException
*/
public static void main(String[] args) throws InterruptedException {
clientApp = new ClientApp();
clientApp.getNextFreq();
Thread thread = new Thread(clientApp);
thread.start();
}
private void getNextFreq() {
long currentTime = System.currentTimeMillis();
Random random = new Random();
long value = random.nextInt(180000); //1800000
nextTime = currentTime + value;
//return currentTime+value;
}
@Override
public void run() {
while(true){
if(nextTime < System.currentTimeMillis()){
System.out.println(" get screen shot ");
try {
clientApp.sendScreen();
clientApp.getNextFreq();
} catch (AWTException e) {
// TODO Auto-generated catch block
System.out.println(" err"+e);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch(Exception e){
e.printStackTrace();
}
}
//System.out.println(" statrted ....");
}
}
private void sendScreen()throws AWTException, IOException {
Socket serverSocket = new Socket(serverName, portNo);
Toolkit toolkit = Toolkit.getDefaultToolkit();
Dimension dimensions = toolkit.getScreenSize();
Robot robot = new Robot(); // Robot class
BufferedImage screenshot = robot.createScreenCapture(new Rectangle(dimensions));
ImageIO.write(screenshot,"png",serverSocket.getOutputStream());
serverSocket.close();
}
}
Toolkit returns a screen size based on the PPI scaling; as a result, the screenshot does not cover the entire screen when the display scaling in Windows is above 100%.
I propose to do this instead:
DisplayMode displayMode = GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices()[0].getDisplayMode();
Rectangle screenRectangle = new Rectangle(displayMode.getWidth(), displayMode.getHeight());
BufferedImage screenShot = new Robot().createScreenCapture(screenRectangle);
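Wrapped into a runnable example, a minimal sketch (the output filename is my own placeholder):
import java.awt.DisplayMode;
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class FullResolutionScreenshot {
    public static void main(String[] args) throws Exception {
        // DisplayMode reports the physical resolution of the device, so the capture
        // is not cropped when Windows display scaling is set above 100%.
        DisplayMode displayMode = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getScreenDevices()[0].getDisplayMode();
        Rectangle screenRectangle = new Rectangle(displayMode.getWidth(), displayMode.getHeight());
        BufferedImage screenShot = new Robot().createScreenCapture(screenRectangle);
        ImageIO.write(screenShot, "png", new File("screenshot-fullres.png"));
    }
}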