I am using a piece of code posted on Stack Overflow to write custom metadata to a PNG image and read it back. The write function seems to work fine, but when I try to read the data I had written it throws a NullPointerException. Can someone tell me what is wrong?
Here is the code for writing the metadata:
try {
    image = ImageIO.read(new FileInputStream("input.png"));
    writeCustomData(image, "software", "FRDDC");
    ImageIO.write(image, "png", new File("output.png"));
} catch (Exception e) {
    e.printStackTrace();
}
Method to write metadata
public static byte[] writeCustomData(BufferedImage buffImg, String key, String value) throws Exception {
    ImageWriter writer = ImageIO.getImageWritersByFormatName("png").next();
    ImageWriteParam writeParam = writer.getDefaultWriteParam();
    ImageTypeSpecifier typeSpecifier = ImageTypeSpecifier.createFromBufferedImageType(BufferedImage.TYPE_INT_RGB);

    // adding metadata
    javax.imageio.metadata.IIOMetadata metadata = writer.getDefaultImageMetadata(typeSpecifier, writeParam);
    IIOMetadataNode textEntry = new IIOMetadataNode("tEXtEntry");
    textEntry.setAttribute("keyword", key);
    textEntry.setAttribute("value", value);
    IIOMetadataNode text = new IIOMetadataNode("tEXt");
    text.appendChild(textEntry);
    IIOMetadataNode root = new IIOMetadataNode("javax_imageio_png_1.0");
    root.appendChild(text);
    metadata.mergeTree("javax_imageio_png_1.0", root);

    // writing the data
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    javax.imageio.stream.ImageOutputStream stream = ImageIO.createImageOutputStream(baos);
    writer.setOutput(stream);
    writer.write(metadata, new IIOImage(buffImg, null, metadata), writeParam);
    try {
        ImageIO.write(buffImg, "png", new File("new.png"));
    } catch (Exception e) {
        e.printStackTrace();
    }
    stream.close();
    return baos.toByteArray();
}
Reading metadata
try {
    image = ImageIO.read(new FileInputStream("output.png"));
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(image, "png", baos);
    byte[] b = baos.toByteArray();
    String out = readCustomData(b, "software");
} catch (Exception e) {
    e.printStackTrace();
}
Method to read metadata
public static String readCustomData(byte[] imageData, String key) throws IOException {
    ImageReader imageReader = ImageIO.getImageReadersByFormatName("png").next();
    imageReader.setInput(ImageIO.createImageInputStream(new ByteArrayInputStream(imageData)), true);

    // read metadata of first image
    javax.imageio.metadata.IIOMetadata metadata = imageReader.getImageMetadata(0);

    // this cast helps getting the contents
    // Node n = metadata.getAsTree("javax_imageio_png_1.0");
    // NodeList childNodes = n.getChildNodes();
    PNGMetadata pngmeta = (PNGMetadata) metadata;
    if (pngmeta.getStandardTextNode() == null) {
        System.out.println("not found");
    }
    NodeList childNodes = pngmeta.getStandardTextNode().getChildNodes();
    for (int i = 0; i < childNodes.getLength(); i++) {
        Node node = childNodes.item(i);
        String keyword = node.getAttributes().getNamedItem("keyword").getNodeValue();
        String value = node.getAttributes().getNamedItem("value").getNodeValue();
        if (key.equals(keyword)) {
            return value;
        }
    }
    return null;
}
Error Message
not found
java.lang.NullPointerException
at PNGMeta.readCustomData(PNGMeta.java:104)
at PNGMeta.main(PNGMeta.java:40)
BUILD SUCCESSFUL (total time: 2 seconds)
Here's the most efficient way of modifying and then reading the metadata that I know of. In contrast to the code posted by the OP, this version does not fully replace the metadata in the image, but merges the new content with any existing content.
As it uses the "standard" metadata format, it should also work for any format supported by ImageIO that allows arbitrary text comments (I only tested with PNG, though). The data actually written should match that of the native PNG metadata format in this case.
It reads all image pixel data and metadata in a single call, to avoid excess stream opening/closing, seeking and memory usage. It writes all image pixel data and metadata at once for the same reason. For lossless, single-image formats like PNG, this round trip should not lose any quality or metadata.
When reading the metadata back, only the metadata is read; the pixel data is ignored.
import java.io.File;
import java.io.IOException;
import java.util.Iterator;

import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.ImageWriter;
import javax.imageio.metadata.IIOInvalidTreeException;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.metadata.IIOMetadataFormatImpl;
import javax.imageio.metadata.IIOMetadataNode;
import javax.imageio.stream.ImageInputStream;
import javax.imageio.stream.ImageOutputStream;

import org.w3c.dom.NodeList;

public class IIOMetadataUpdater {
    public static void main(final String[] args) throws IOException {
        File in = new File(args[0]);
        File out = new File(in.getParent(), createOutputName(in));

        System.out.println("Output path: " + out.getAbsolutePath());

        try (ImageInputStream input = ImageIO.createImageInputStream(in);
             ImageOutputStream output = ImageIO.createImageOutputStream(out)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(input);
            ImageReader reader = readers.next(); // TODO: Validate that there are readers
            reader.setInput(input);

            IIOImage image = reader.readAll(0, null);
            addTextEntry(image.getMetadata(), "foo", "bar");

            ImageWriter writer = ImageIO.getImageWriter(reader); // TODO: Validate that there are writers
            writer.setOutput(output);
            writer.write(image);
        }

        try (ImageInputStream input = ImageIO.createImageInputStream(out)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(input);
            ImageReader reader = readers.next(); // TODO: Validate that there are readers
            reader.setInput(input);

            String value = getTextEntry(reader.getImageMetadata(0), "foo");
            System.out.println("value: " + value);
        }
    }

    private static String createOutputName(final File file) {
        String name = file.getName();
        int dotIndex = name.lastIndexOf('.');

        String baseName = name.substring(0, dotIndex);
        String extension = name.substring(dotIndex);

        return baseName + "_copy" + extension;
    }

    private static void addTextEntry(final IIOMetadata metadata, final String key, final String value) throws IIOInvalidTreeException {
        IIOMetadataNode textEntry = new IIOMetadataNode("TextEntry");
        textEntry.setAttribute("keyword", key);
        textEntry.setAttribute("value", value);

        IIOMetadataNode text = new IIOMetadataNode("Text");
        text.appendChild(textEntry);

        IIOMetadataNode root = new IIOMetadataNode(IIOMetadataFormatImpl.standardMetadataFormatName);
        root.appendChild(text);

        metadata.mergeTree(IIOMetadataFormatImpl.standardMetadataFormatName, root);
    }

    private static String getTextEntry(final IIOMetadata metadata, final String key) {
        IIOMetadataNode root = (IIOMetadataNode) metadata.getAsTree(IIOMetadataFormatImpl.standardMetadataFormatName);
        NodeList entries = root.getElementsByTagName("TextEntry");

        for (int i = 0; i < entries.getLength(); i++) {
            IIOMetadataNode node = (IIOMetadataNode) entries.item(i);

            if (node.getAttribute("keyword").equals(key)) {
                return node.getAttribute("value");
            }
        }

        return null;
    }
}
Expected output of the above code is:
Output path: /path/to/yourfile_copy.png
value: bar
Your output contains "not found", which is printed by
if (pngmeta.getStandardTextNode() == null) {
    System.out.println("not found");
}
Therefore
NodeList childNodes = pngmeta.getStandardTextNode().getChildNodes();
must fail with a NullPointerException. pngmeta.getStandardTextNode() returns null, so you actually call
null.getChildNodes();
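The underlying reason the text node is missing: the question's main code writes output.png with ImageIO.write(image, "png", new File("output.png")), which stores only the pixels, so the metadata tree built inside writeCustomData never reaches that file. As a minimal sketch (assuming the bundled PNG plugin and its native "javax_imageio_png_1.0" format; the helper name writePngWithText is made up for illustration), writing the tEXt entry together with the pixels could look like this:
// Sketch only: attach a tEXt entry while writing the PNG, instead of calling
// ImageIO.write(img, "png", file), which discards any separately built metadata.
static void writePngWithText(BufferedImage buffImg, File file, String key, String value) throws IOException {
    ImageWriter writer = ImageIO.getImageWritersByFormatName("png").next();
    ImageWriteParam writeParam = writer.getDefaultWriteParam();
    ImageTypeSpecifier typeSpecifier = ImageTypeSpecifier.createFromRenderedImage(buffImg);

    IIOMetadata metadata = writer.getDefaultImageMetadata(typeSpecifier, writeParam);
    IIOMetadataNode textEntry = new IIOMetadataNode("tEXtEntry");
    textEntry.setAttribute("keyword", key);
    textEntry.setAttribute("value", value);
    IIOMetadataNode text = new IIOMetadataNode("tEXt");
    text.appendChild(textEntry);
    IIOMetadataNode root = new IIOMetadataNode("javax_imageio_png_1.0");
    root.appendChild(text);
    metadata.mergeTree("javax_imageio_png_1.0", root);

    try (ImageOutputStream stream = ImageIO.createImageOutputStream(file)) {
        writer.setOutput(stream);
        // The metadata travels with the pixels in the IIOImage
        writer.write(new IIOImage(buffImg, null, metadata));
    } finally {
        writer.dispose();
    }
}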
Related
I have a JPG image and I want to convert it to a TIFF file, but when I create the output file from byteArrayOutputStream, the output file is 0 bytes long.
public static void main(String[] args) throws Exception {
    String root = "E:\\Temp\\imaging\\test\\";
    File image = new File(root + "0riginalTif-convertedToJpg.JPG");
    byte[] bytes = compressJpgToTiff(image);
    File destination = new File(root + "OriginalJpg-compressedToTiff.tiff");
    FileOutputStream fileOutputStream = new FileOutputStream(destination);
    fileOutputStream.write(bytes);
}
public static byte[] compressJpgToTiff(File imageFile) throws Exception {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(255);
    ImageOutputStream imageOutputStream = null;
    try {
        File input = new File(imageFile.getAbsolutePath());
        Iterator<ImageWriter> imageWriterIterator = ImageIO.getImageWritersByFormatName("TIF");
        ImageWriter writer = imageWriterIterator.next();
        imageOutputStream = ImageIO.createImageOutputStream(byteArrayOutputStream);
        writer.setOutput(imageOutputStream);
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionType("JPEG");
        param.setCompressionQuality(0.1f);
        BufferedImage bufferedImage = ImageIO.read(input);
        writer.write(null, new IIOImage(bufferedImage, null, null), param);
        writer.dispose();
        return byteArrayOutputStream.toByteArray();
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        if (imageOutputStream != null)
            imageOutputStream.close();
        byteArrayOutputStream.close();
    }
}
I want to reduce the size of the output TIFF as much as possible. Is there a better approach? Is it even possible to reduce the size of a TIFF image?
return byteArrayOutputStream.toByteArray(); returns nothing useful because you didn't write the data to byteArrayOutputStream. Look, you only handed the data to the writer; it is still buffered in the ImageOutputStream until that stream is flushed or closed.
About compression of the TIFF file: you have already enabled it with param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT); and the compression type/quality settings that follow it.
Your byteArrayOutputStream object is getting closed in the finally block before you convert byteArrayOutputStream to a byte array using byteArrayOutputStream.toByteArray(); that's why you are getting a content length of 0. So modify your code as below:
public static byte[] compressJpgToTiff(File imageFile) throws Exception {
//Add rest of your method code here
writer.dispose();
byte[] bytesToReturn = byteArrayOutputStream.toByteArray();
return bytesToReturn;
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
if (imageOutputStream != null)
imageOutputStream.close();
byteArrayOutputStream.close();
}
}
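For reference, here is a self-contained variant of the method that closes the ImageOutputStream (which flushes its buffered bytes into the ByteArrayOutputStream) before calling toByteArray(). This is only a sketch, assuming a TIFF ImageWriter is registered (Java 9+, or a plugin such as JAI ImageIO or TwelveMonkeys):
// Sketch: close the ImageOutputStream before draining the ByteArrayOutputStream,
// so the buffered TIFF bytes are actually flushed into it.
public static byte[] compressJpgToTiff(File imageFile) throws IOException {
    BufferedImage bufferedImage = ImageIO.read(imageFile);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();

    ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next();
    try (ImageOutputStream imageOutputStream = ImageIO.createImageOutputStream(byteArrayOutputStream)) {
        writer.setOutput(imageOutputStream);

        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionType("JPEG");   // lossy JPEG-in-TIFF, as in the question
        param.setCompressionQuality(0.1f);

        writer.write(null, new IIOImage(bufferedImage, null, null), param);
    } finally {
        writer.dispose();
    }

    // The stream has been closed (and therefore flushed) above, so the array is populated.
    return byteArrayOutputStream.toByteArray();
}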
I recently started working on a project where I need to convert a PDF file into JPEG2000 files (one .jp2 file per page).
The goal was to replace a previous PDF-to-JPEG converter method we had, in order to reduce the size of the output files.
Based on code I found on the internet, I wrote the PDF-to-JPEG2000 converter method below, and I've been changing the setEncodingRate parameter value and comparing the results.
I managed to get smaller JPEG2000 output files, but the quality is very poor compared to the JPEG ones, especially for colored text and images.
Here is what my original PDF file looks like:
When I set setEncodingRate to 0.8 it looks like this:
My output file size is 850 KB, which is even bigger than the JPEG ones (around 600 KB), and lower quality.
At a setEncodingRate of 0.1, the file size is considerably smaller, 111 KB, but basically unreadable.
So basically what I'm trying to get here is smaller output files (< 600 KB) with better quality, and I'm wondering if that is feasible with the JPEG2000 format.
public class ImageConverter {

    public void compressor(String inputFile, String outputFile) throws IOException {
        J2KImageWriteParam iwp = new J2KImageWriteParam();
        PDDocument document = PDDocument.load(new File(inputFile), MemoryUsageSetting.setupMixed(10485760L));
        PDFRenderer pdfRenderer = new PDFRenderer(document);
        int nbPages = document.getNumberOfPages();
        int pageCounter = 0;
        BufferedImage image;

        for (PDPage page : document.getPages()) {
            if (page.hasContents()) {
                image = pdfRenderer.renderImageWithDPI(pageCounter, 300, ImageType.RGB);
                if (image == null) {
                    System.out.println("If no registered ImageReader claims to be able to read the resulting stream");
                }

                Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("JPEG2000");
                String name = null;
                ImageWriter writer = null;
                while (!"com.sun.media.imageioimpl.plugins.jpeg2000.J2KImageWriter".equals(name)) {
                    writer = writers.next();
                    name = writer.getClass().getName();
                    System.out.println(name);
                }

                File f = new File(outputFile + "_" + pageCounter + ".jp2");
                long s = System.currentTimeMillis();
                ImageOutputStream ios = ImageIO.createImageOutputStream(f);
                writer.setOutput(ios);

                J2KImageWriteParam param = (J2KImageWriteParam) writer.getDefaultWriteParam();
                IIOImage ioimage = new IIOImage(image, null, null);
                param.setSOP(true);
                param.setWriteCodeStreamOnly(true);
                param.setProgressionType("layer");
                param.setLossless(true);
                param.setCompressionMode(J2KImageWriteParam.MODE_EXPLICIT);
                param.setCompressionType("JPEG2000");
                param.setCompressionQuality(0.01f);
                param.setEncodingRate(1.01);
                param.setFilter(J2KImageWriteParam.FILTER_53);
                writer.write(null, ioimage, param);
                System.out.println(System.currentTimeMillis() - s);

                writer.dispose();
                ios.flush();
                ios.close();
                image.flush();
                pageCounter++;
            }
        }
    }

    public static void main(String[] args) {
        String input = "E:/IMGTEST/mail-DOC0002.pdf";
        String output = "E:/IMGTEST/mail-DOC0002/docamail-DOC0002-";
        ImageConverter imgcv = new ImageConverter();
        try {
            imgcv.compressor(input, output);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
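One note on the parameter block above: it mixes lossless and lossy settings (setLossless(true) and FILTER_53 alongside a compression quality and an encoding rate), so the effective behavior depends on how the plugin reconciles them. As a point of comparison, an explicitly lossy configuration, sketched here under the assumption that the jai-imageio JPEG 2000 plugin providing J2KImageWriteParam is used (writer and image as in the loop above), could look like this:
// Sketch of an explicitly lossy JPEG 2000 configuration (assumes the jai-imageio
// JPEG 2000 plugin providing J2KImageWriteParam). "writer" and "image" are as above.
J2KImageWriteParam lossyParam = (J2KImageWriteParam) writer.getDefaultWriteParam();
lossyParam.setWriteCodeStreamOnly(true);
lossyParam.setLossless(false);                       // do not request lossless output
lossyParam.setFilter(J2KImageWriteParam.FILTER_97);  // 9/7 wavelet, the lossy filter
lossyParam.setEncodingRate(0.5);                     // target bits per pixel; raise for quality, lower for size
writer.write(null, new IIOImage(image, null, null), lossyParam);
Since each page is rendered at 300 DPI, the pixel count itself is also a large factor in the final file size, because the encoding rate is specified per pixel.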
Once you've loaded a document:
public static void main(String[] args) throws IOException {
PDDocument doc = PDDocument.load(new File("blah.pdf"));
How do you get the page-by-page printing color intent from the PDDocument? I read the docs and didn't see it covered.
This gets the output intents (you'll get these with high-quality PDF files) and also the ICC profiles for color spaces and images:
PDDocument doc = PDDocument.load(new File("XXXXX.pdf"));
for (PDOutputIntent oi : doc.getDocumentCatalog().getOutputIntents())
{
COSStream destOutputIntent = oi.getDestOutputIntent();
String info = oi.getOutputCondition();
if (info == null || info.isEmpty())
{
info = oi.getInfo();
}
InputStream is = destOutputIntent.createInputStream();
FileOutputStream fos = new FileOutputStream(info + ".icc");
IOUtils.copy(is, fos);
fos.close();
is.close();
}
for (int p = 0; p < doc.getNumberOfPages(); ++p)
{
PDPage page = doc.getPage(p);
for (COSName name : page.getResources().getColorSpaceNames())
{
PDColorSpace cs = page.getResources().getColorSpace(name);
if (cs instanceof PDICCBased)
{
PDICCBased iccCS = (PDICCBased) cs;
InputStream is = iccCS.getPDStream().createInputStream();
FileOutputStream fos = new FileOutputStream(System.currentTimeMillis() + ".icc");
IOUtils.copy(is, fos);
fos.close();
is.close();
}
}
for (COSName name : page.getResources().getXObjectNames())
{
PDXObject x = page.getResources().getXObject(name);
if (x instanceof PDImageXObject)
{
PDImageXObject img = (PDImageXObject) x;
if (img.getColorSpace() instanceof PDICCBased)
{
InputStream is = ((PDICCBased) img.getColorSpace()).getPDStream().createInputStream();
FileOutputStream fos = new FileOutputStream(System.currentTimeMillis() + ".icc");
IOUtils.copy(is, fos);
fos.close();
is.close();
}
}
}
}
doc.close();
What this doesn't do (but I could add some of it if needed):
colorspaces of shadings, patterns, xobject forms, appearance stream resources
recursion in colorspaces like DeviceN and Separation
recursion in patterns, xobject forms, soft masks
I read the examples on "How to create/add Intents to a PDF file", but I couldn't find an example of "How to get intents". Using the API/examples, I wrote the following (untested) code to get the COSStream object for each of the intents. See if this is useful for you.
public static void main(String[] args) throws IOException {
    PDDocument doc = PDDocument.load(new File("blah.pdf"));
    PDDocumentCatalog cat = doc.getDocumentCatalog();
    List<PDOutputIntent> list = cat.getOutputIntents();
    for (PDOutputIntent e : list) {
        p("PDOutputIntent Found:");
        p("Info=" + e.getInfo());
        p("OutputCondition=" + e.getOutputCondition());
        p("OutputConditionIdentifier=" + e.getOutputConditionIdentifier());
        p("RegistryName=" + e.getRegistryName());
        COSStream cstr = e.getDestOutputIntent();
    }
}

static void p(String s) {
    System.out.println(s);
}
Using the iText PDF library (a fork of the older 4.2.1 version) you could do something like the following, where pathToPdf is the path to the PDF file as a String:
PdfReader reader = new com.lowagie.text.pdf.PdfReader(pathToPdf);
PRStream stream = (PRStream) reader.getCatalog().getAsDict(PdfName.DESTOUTPUTPROFILE);
if (stream != null)
{
    byte[] destProfile = PdfReader.getStreamBytes(stream);
}
For extracting the profile from each page you could iterate over the pages like this:
for (int i = 1; i <= reader.getNumberOfPages(); i++)
{
    PRStream prStream = (PRStream) reader.getPageN(i).getDirectObject(PdfName.DESTOUTPUTPROFILE);
    if (prStream != null)
    {
        byte[] destProfile = PdfReader.getStreamBytes(prStream);
    }
}
I don't know whether this code will help or not. After searching the links below,
How do I add an ICC to an existing PDF document
PdfBox - PDColorSpaceFactory.createColorSpace(document, iccColorSpace) throws nullpointerexception
https://pdfbox.apache.org/docs/1.8.11/javadocs/org/apache/pdfbox/pdmodel/graphics/color/PDICCBased.html
I found some code; check whether it helps or not:
public static void main(String[] args) throws IOException {
    PDDocument doc = PDDocument.load(new File("blah.pdf"));
    PDDocumentCatalog cat = doc.getDocumentCatalog();
    List<PDOutputIntent> list = cat.getOutputIntents();

    COSArray cosArray = doc.getCOSObject();
    PDICCBased pdCS = new PDICCBased(cosArray);
    pdCS.getNumberOfComponents();
}

static void p(String s) {
    System.out.println(s);
}
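If the goal is only to inspect the color characteristics of each output intent rather than dump the raw streams to disk, a small sketch using java.awt.color.ICC_Profile (assuming the DestOutputProfile stream holds a well-formed ICC profile, and reusing doc from the snippets above) can report the component count and color space type:
// Sketch: parse each output intent's DestOutputProfile as an ICC profile
// (assumes the stream contains a valid ICC profile).
for (PDOutputIntent oi : doc.getDocumentCatalog().getOutputIntents()) {
    try (InputStream is = oi.getDestOutputIntent().createInputStream()) {
        ICC_Profile profile = ICC_Profile.getInstance(is);
        System.out.println("Output intent '" + oi.getOutputConditionIdentifier() + "': "
                + profile.getNumComponents() + " components, color space type "
                + profile.getColorSpaceType());
    }
}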
I convert my .tifs from uncompressed to PackBits compression... However, I also need to change the PlanarConfiguration from Chunky to Separate, and I do not know how to do this. It doesn't look like there's anything on the web about doing it in Java either, only people saying to set PlanarConfiguration = 2. Here's my code for the .tif conversion so far...
public boolean packBitsConversionMove(File currentDirectory, File currentFile) throws IOException {
    try {
        InputStream fis = new BufferedInputStream(new FileInputStream(currentFile));
        PNGImageReaderSpi spi = new PNGImageReaderSpi();
        ImageReader reader = spi.createReaderInstance();
        ImageInputStream iis = ImageIO.createImageInputStream(fis);
        reader.setInput(iis, true);

        BufferedImage bi = ImageIO.read(currentFile);

        TIFFImageWriterSpi tiffspi = new TIFFImageWriterSpi();
        ImageWriter writer = tiffspi.createWriterInstance();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionType("PackBits");

        File fOutputFile = new File(currentDirectory.getPath() + "\\" + currentFile.getName());
        ImageOutputStream ios = ImageIO.createImageOutputStream(fOutputFile);
        writer.setOutput(ios);
        writer.write(null, new IIOImage(bi, null, null), param);
    } catch (IOException e) {
        return false;
    }
    return true;
}
My param variable doesn't have any setPlanarConfiguration-style method, and I do not know a workaround. Thanks for any advice!
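For what it's worth, ImageWriteParam has no planar configuration setter; in ImageIO the planar configuration lives in the TIFF image metadata rather than in the write param. Below is a sketch of how one might request it through the metadata, using the javax.imageio.plugins.tiff classes available since Java 9 (the JAI plugin has equivalents under com.sun.media.imageio.plugins.tiff), with bi and fOutputFile as in the method above. Whether a given TIFF writer actually honors PlanarConfiguration = 2 is plugin-specific; several ImageIO TIFF writers only write chunky data, so treat this as the mechanism rather than a guaranteed result.
// Sketch (Java 9+ javax.imageio.plugins.tiff API): ask for PlanarConfiguration = 2
// via the image metadata. Whether the writer honors it is plugin-dependent.
ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next();
ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionType("PackBits");

IIOMetadata tiffMetadata = writer.getDefaultImageMetadata(
        ImageTypeSpecifier.createFromRenderedImage(bi), param);
TIFFDirectory dir = TIFFDirectory.createFromMetadata(tiffMetadata);
TIFFTag planarTag = BaselineTIFFTagSet.getInstance()
        .getTag(BaselineTIFFTagSet.TAG_PLANAR_CONFIGURATION);
dir.addTIFFField(new TIFFField(planarTag, BaselineTIFFTagSet.PLANAR_CONFIGURATION_PLANAR));

try (ImageOutputStream ios = ImageIO.createImageOutputStream(fOutputFile)) {
    writer.setOutput(ios);
    writer.write(null, new IIOImage(bi, null, dir.getAsMetadata()), param);
} finally {
    writer.dispose();
}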
It seems to me there are two ways to store an attachment in a NotesDocument.
Either as a RichTextField or as a "MIME Part".
If they are stored as RichText you can do stuff like:
document.getAttachment(fileName)
That does not seem to work for an attachment stored as a MIME Part. See screenshot
I have thousands of documents like this in the backend. This is NOT a UI issue where I need to use the file Download control of XPages.
Each document has only 1 attachment: an image, a JPG file. I have 3 databases for different sizes: Original, Large, and Small. Originally I created everything from documents that had the attachment stored as RichText, but my code saved them as a MIME Part. That's just what it did; not really my intent.
What happened is I lost some of my "Small" pictures, so I need to rebuild them from the Original pictures that are now stored as a MIME Part. So my ultimate goal is to get it from the NotesDocument into a Java BufferedImage.
I think I have the code to do what I want but I just "simply" can't figure out how to get the attachment off the document and then into a Java Buffered Image.
Below is some rough code I'm working with. My goal is to pass in the document with the original picture. I already have the fileName because I stored that out in metaData. But I don't know how to get that from the document itself. And I'm passing in "Small" to create the Small image.
I think I just don't know how to work with attachments stored in this manner.
Any ideas/advice would be appreciated! Thanks!!!
public Document processImage(Document inputDoc, String fileName, String size) throws IOException {
    // fileName is the name of the attachment on the document
    // The goal is to return a NEW BLANK document with the image on it
    // The calling code can then deal with keys and meta data.
    // size is "Original", "Large" or "Small"

    System.out.println("Processing Image, Size = " + size);
    //System.out.println("Filename = " + fileName);

    boolean result = false;
    Session session = Factory.getSession();
    Database db = session.getCurrentDatabase();
    session.setConvertMime(true);

    BufferedImage img;
    BufferedImage convertedImage = null; // the output image
    EmbeddedObject image = null;
    InputStream imageStream = null;

    int currentSize = 0;
    int newWidth = 0;
    String currentName = "";

    try {
        // Get the Embedded Object
        image = inputDoc.getAttachment(fileName);
        System.out.println("Input Form : " + inputDoc.getItemValueString("form"));
        if (null == image) {
            System.out.println("ALERT - IMAGE IS NULL");
        }

        currentSize = image.getFileSize();
        currentName = image.getName();

        // Get a Stream of the Image
        imageStream = image.getInputStream();
        img = ImageIO.read(imageStream); // this is the buffered image we'll work with
        imageStream.close();

        Document newDoc = db.createDocument();
        // Remember this is a BLANK document. The calling code needs to set the form

        if ("original".equalsIgnoreCase(size)) {
            this.attachImage(newDoc, img, fileName, "JPG");
            return newDoc;
        }

        if ("Large".equalsIgnoreCase(size)) {
            // Now we need to convert the LARGE image
            // We're assuming a FIXED HEIGHT of 600px
            newWidth = this.getNewWidth(img.getHeight(), img.getWidth(), 600);
            convertedImage = this.getScaledInstance(img, newWidth, 600, false);
            this.attachImage(newDoc, img, fileName, "JPG");
            return newDoc;
        }

        if ("Small".equalsIgnoreCase(size)) {
            System.out.println("converting Small");
            newWidth = this.getNewWidth(img.getHeight(), img.getWidth(), 240);
            convertedImage = this.getScaledInstance(img, newWidth, 240, false);
            this.attachImage(newDoc, img, fileName, "JPG");
            System.out.println("End Converting Small");
            return newDoc;
        }

        return newDoc;

    } catch (Exception e) {
        // HANDLE EXCEPTION HERE
        // SAMPLE WRITE TO LOG.NSF
        System.out.println("****************");
        System.out.println("EXCEPTION IN processImage()");
        System.out.println("****************");
        System.out.println("picName: " + fileName);
        e.printStackTrace();
        return null;
    } finally {
        if (null != imageStream) {
            imageStream.close();
        }
        if (null != image) {
            LibraryUtils.incinerate(image);
        }
    }
}
I believe it will be some variation of the following code snippet. You might have to change which MIMEEntity has the content; it might be in the parent or another child entity, depending on the structure.
Stream stream = session.createStream();
doc.getMIMEEntity().getFirstChildEntity().getContentAsBytes(stream);
ByteArrayInputStream bais = new ByteArrayInputStream(stream.read());
return ImageIO.read(bais);
EDIT:
session.setConvertMime(false);
Stream stream = session.createStream();
Item itm = doc.getFirstItem("ParentEntity");
MIMEEntity me = itm.getMIMEEntity();
MIMEEntity childEntity = me.getFirstChildEntity();
childEntity.getContentAsBytes(stream);
ByteArrayOutputStream bo = new ByteArrayOutputStream();
stream.getContents(bo);
byte[] mybytearray = bo.toByteArray();
ByteArrayInputStream bais = new ByteArrayInputStream(mybytearray);
return ImageIO.read(bais);
David, have a look at DominoDocument: http://public.dhe.ibm.com/software/dw/lotus/Domino-Designer/JavaDocs/XPagesExtAPI/8.5.2/com/ibm/xsp/model/domino/wrapped/DominoDocument.html
There you can wrap every Notes document.
In DominoDocument there is a class, DominoDocument.AttachmentValueHolder, through which you can access the attachments.
I have explained it at Engage. It is very powerful.
http://www.slideshare.net/flinden68/engage-use-notes-objects-in-memory-and-other-useful-java-tips-for-x-pages-development