Exception when attempting to generate variable-page PDF with iText - java

I'm trying to create an auto-filled PDF of a government payroll form, which may involve a variable number of pages. I'm currently storing the data for each page as a Map, with field names as keys and field contents as values.
At the moment, I have this code:
in = new FileInputStream(inputPDF);
PdfCopyFields adder = new PdfCopyFields(outStream);
PdfReader reader = null;
PdfStamper stamper = null;
ByteArrayOutputStream baos = null;
for (int pageNum = 0; pageNum < numPages; pageNum++) {
    reader = new PdfReader(in);
    baos = new ByteArrayOutputStream();
    stamper = new PdfStamper(reader, baos);
    AcroFields form = stamper.getAcroFields();
    Map<String, String> page = pages.get(pageNum);
    setFieldsToPage(form, pageNum);
    populatePage(form, page, pageNum);
    stamper.close();
    reader = new PdfReader(baos.toByteArray());
    adder.addDocument(reader);
}
The methods called are:
private void populatePage(AcroFields form, Map<String, String> pageMap, int pageNum) throws Exception {
    ArrayList<String> fieldNames = new ArrayList<String>();
    for (String key : pageMap.keySet()) {
        fieldNames.add(key);
    }
    for (String key : fieldNames) {
        form.setField(key + pageNum, pageMap.get(key));
    }
}
and
private void setFieldsToPage(AcroFields form, int pageNum) {
    ArrayList<String> fieldNames = new ArrayList<String>();
    Map<String, AcroFields.Item> fields = form.getFields();
    for (String fieldName : fields.keySet()) {
        fieldNames.add(fieldName);
    }
    for (String fieldName : fieldNames) {
        form.renameField(fieldName, fieldName + pageNum);
    }
}
The issue is that this throws an exception on the second iteration through the loop, at reader = new PdfReader(in). I get the following exception:
java.io.IOException: PDF header signature not found.
What am I doing wrong here, and how do I fix it?
EDIT:
Here is the full stack trace:
java.io.IOException: PDF header signature not found.
at com.lowagie.text.pdf.PRTokeniser.checkPdfHeader(Unknown Source)
at com.lowagie.text.pdf.PdfReader.readPdf(Unknown Source)
at com.lowagie.text.pdf.PdfReader.<init>(Unknown Source)
at com.lowagie.text.pdf.PdfReader.<init>(Unknown Source)
By the way, I'm sorry if the formatting is bad; this is my first time using Stack Overflow.

Your issue is that you essentially try to read the same input stream multiple times, even though it is already positioned at its end after the first read:
in = new FileInputStream(inputPDF);
[...]
for (int pageNum = 0; pageNum < numPages; pageNum++) {
    reader = new PdfReader(in);
    [...]
}
The whole stream is read in the first iteration; thus, in the second one, new PdfReader(in) essentially tries to parse an empty file, which results in your
java.io.IOException: PDF header signature not found
You can fix that by simply constructing the PdfReader with the input file path directly every time:
for (int pageNum = 0; pageNum < numPages; pageNum++) {
    reader = new PdfReader(inputPDF);
    [...]
}
Two more things, though:
You don't close your PdfReader instances after use. In the most recent iText versions, implicit closing of readers has been taken out of the code because it collides with numerous use cases. Thus, after you have finished working with a reader (which includes having closed any stamper etc. that uses that reader), you should close the reader explicitly.
In general, if you already have a PDF in your file system, opening a PdfReader for it via a FileInputStream is very wasteful in terms of resources: a reader initialized with an input stream first reads that stream completely into memory (a byte[]) and then parses the in-memory representation, whereas a reader initialized with a file path parses the on-disk representation directly.
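Putting the fix and these two points together, a minimal sketch of the loop (reusing the inputPDF path, pages list, and helper methods from the question) might look like this:
PdfCopyFields adder = new PdfCopyFields(outStream);
List<PdfReader> stampedReaders = new ArrayList<PdfReader>();
for (int pageNum = 0; pageNum < numPages; pageNum++) {
    // open the template directly from the file path, once per iteration
    PdfReader templateReader = new PdfReader(inputPDF);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    PdfStamper stamper = new PdfStamper(templateReader, baos);

    AcroFields form = stamper.getAcroFields();
    setFieldsToPage(form, pageNum);
    populatePage(form, pages.get(pageNum), pageNum);

    stamper.close();
    templateReader.close(); // done with this reader once its stamper is closed

    PdfReader stampedReader = new PdfReader(baos.toByteArray());
    stampedReaders.add(stampedReader);
    adder.addDocument(stampedReader);
}
adder.close();
for (PdfReader stampedReader : stampedReaders) {
    stampedReader.close(); // close these only after the copy itself is closed
}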

The exception tells you that the file you're reading doesn't start with %PDF-.
Write a small test that doesn't involve iText and check the first 5 bytes of the InputStream in; you'll find out what you're doing wrong (we can't tell you unless you show us those 5 bytes).
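For example, a quick standalone check along these lines (a sketch, assuming inputPDF is the same file path used in the question) would show what the stream actually starts with:
InputStream in = new FileInputStream(inputPDF);
byte[] header = new byte[5];
int read = in.read(header);
in.close();
// a well-formed PDF must start with the five bytes "%PDF-"
System.out.println("first bytes: " + new String(header, 0, Math.max(read, 0), "ISO-8859-1"));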

Related

How to replace text in a PDF with correct encoding using iText

I created a Java program for translating PDFs, using the Google API for translation. The translation comes out correctly on my Eclipse IDE console, but when I check the newly created PDF, either it is not translated and just copied as-is, or only a few words are translated, or the new PDF comes out empty and sometimes corrupted.
I suppose it has something to do with encoding and font types.
I have already gone through the iText page and all the related questions, but none worked for my case. I am trying to translate Portuguese, Spanish, Finnish, French, Hungarian, etc. into English.
Here is my code:
public static final String SRC = "5587309Finnish.pdf";
public static final String DEST = "changed.pdf";

public static void main(String[] args) throws java.io.IOException, DocumentException {
    Translate translate = TranslateOptions.getDefaultInstance().getService();
    PdfReader reader = new PdfReader(SRC);
    int pages = reader.getNumberOfPages();
    PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(DEST));
    for (int i = 1; i <= pages; i++) {
        PdfDictionary dict = reader.getPageN(i);
        PdfObject object = dict.getDirectObject(PdfName.CONTENTS);
        if (object instanceof PRStream) {
            String pageContent = PdfTextExtractor.getTextFromPage(reader, i);
            String[] word = pageContent.split(" ");
            PRStream stream = (PRStream) object;
            byte[] data = PdfReader.getStreamBytes(stream);
            String dd = new String(data, BaseFont.CP1252);
            for (int j = 0; j < word.length; j++) {
                Translation translation = translate.translate(word[j],
                        Translate.TranslateOption.sourceLanguage("fi"),
                        Translate.TranslateOption.targetLanguage("en"));
                System.out.println(word[j] + "-->>" + translation.getTranslatedText()); // here I can check the translation is correct
                dd = dd.replace(word[j], translation.getTranslatedText());
            }
            stream.setData(dd.getBytes());
        }
    }
    stamper.close();
    reader.close();
}
Please help.
According to a comment, you have improved your code and are
getting the updated dd (i.e. the content stream which I am printing) correctly with the replaced text. I don't know why I am getting a blank pdf
Thus, I assume that your (hopefully representative) test PDFs have all their fonts of interest encoded in ANSI'ish encodings, and that the text arguments of the text drawing instructions contain whole words or even phrases; otherwise the text replacement could not have worked at all.
Here, then, is an example of how one can replace text pieces with similarly long ones under such benign circumstances without breaking the content stream syntax. In this example I simply use a Map containing replacement strings; you can plug your translation in there instead.
First, a frame that loads the source, creates a stamper, iterates over the pages, and calls a helper to create a replacement content stream:
Map<String, String> replacements = new HashMap<>();
replacements.put("Förfallodatum", "Ablaufdatum");

try (   InputStream resource = SOURCE_INPUTSTREAM;
        OutputStream result = new FileOutputStream(RESULT_FILE)    ) {
    PdfReader pdfReader = new PdfReader(resource);
    PdfStamper pdfStamper = new PdfStamper(pdfReader, result);
    for (int pageNum = 1; pageNum <= pdfReader.getNumberOfPages(); pageNum++) {
        PdfDictionary page = pdfReader.getPageN(pageNum);
        byte[] pageContentInput = ContentByteUtils.getContentBytesForPage(pdfReader, pageNum);
        page.remove(PdfName.CONTENTS);
        replaceInStringArguments(pageContentInput, pdfStamper.getUnderContent(pageNum), replacements);
    }
    pdfStamper.close();
}
(EditPageContentSimple test testReplaceInStringArgumentsForklaringAvFakturan)
The method replaceInStringArguments now parses the instructions in the given content stream, isolates string arguments, and calls another helper for each string argument to do the replacement.
void replaceInStringArguments(byte[] contentBytesBefore, PdfContentByte canvas, Map<String, String> replacements) throws IOException {
    PRTokeniser tokeniser = new PRTokeniser(new RandomAccessFileOrArray(new RandomAccessSourceFactory().createSource(contentBytesBefore)));
    PdfContentParser ps = new PdfContentParser(tokeniser);
    ArrayList<PdfObject> operands = new ArrayList<PdfObject>();
    while (ps.parse(operands).size() > 0) {
        for (int i = 0; i < operands.size(); i++) {
            PdfObject pdfObject = operands.get(i);
            if (pdfObject instanceof PdfString) {
                operands.set(i, replaceInString((PdfString) pdfObject, replacements));
            } else if (pdfObject instanceof PdfArray) {
                PdfArray pdfArray = (PdfArray) pdfObject;
                for (int j = 0; j < pdfArray.size(); j++) {
                    PdfObject arrayObject = pdfArray.getPdfObject(j);
                    if (arrayObject instanceof PdfString) {
                        pdfArray.set(j, replaceInString((PdfString) arrayObject, replacements));
                    }
                }
            }
        }
        for (PdfObject object : operands) {
            object.toPdf(canvas.getPdfWriter(), canvas.getInternalBuffer());
            canvas.getInternalBuffer().append((byte) ' ');
        }
        canvas.getInternalBuffer().append((byte) '\n');
    }
}
(EditPageContentSimple helper method)
The method replaceInString in turn retrieves a single string operand (a PdfString instance), manipulates it, and returns the manipulated string version:
PdfString replaceInString(PdfString string, Map<String, String> replacements) {
    String value = PdfEncodings.convertToString(string.getBytes(), PdfObject.TEXT_PDFDOCENCODING);
    for (Map.Entry<String, String> entry : replacements.entrySet()) {
        value = value.replace(entry.getKey(), entry.getValue());
    }
    return new PdfString(PdfEncodings.convertToBytes(value, PdfObject.TEXT_PDFDOCENCODING));
}
(EditPageContentSimple helper method)
Instead of that for loop, you would call your translation routine here and translate value (see the sketch below).
As has been mentioned before, this code only works under certain benign circumstances. Don't expect it to work for arbitrary documents from the wild, in particular not for documents with other than Western European glyphs.
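Just for illustration, a variant of replaceInString that calls the Google Cloud Translation client from your own code instead of the Map lookup could look roughly like this (a sketch only; the translate instance and the fixed "fi"/"en" language codes are taken over from your question, and the caveat about benign circumstances still applies):
PdfString translateString(PdfString string, Translate translate) {
    String value = PdfEncodings.convertToString(string.getBytes(), PdfObject.TEXT_PDFDOCENCODING);
    // translate the whole string argument at once instead of word by word
    Translation translation = translate.translate(value,
            Translate.TranslateOption.sourceLanguage("fi"),
            Translate.TranslateOption.targetLanguage("en"));
    String translated = translation.getTranslatedText();
    // this only stays displayable as long as the page's fonts cover the translated glyphs
    return new PdfString(PdfEncodings.convertToBytes(translated, PdfObject.TEXT_PDFDOCENCODING));
}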

Get size (in bytes) of a specific page in a PDF using iText

I'm using iText (v 2.1.7) and I need to find the size, in bytes, of a specific page.
I've written the following code:
public static long[] getPageSizes(byte[] input) throws IOException {
    PdfReader reader;
    reader = new PdfReader(input);
    int pageCount = reader.getNumberOfPages();
    long[] pageSizes = new long[pageCount];
    for (int i = 0; i < pageCount; i++) {
        pageSizes[i] = reader.getPageContent(i + 1).length;
    }
    reader.close();
    return pageSizes;
}
But it doesn't work properly. The reader.getPageContent(i+1).length call returns very small values (usually <= 100), even for large pages of more than 1 MB, so clearly this is not the correct way to do this.
But what IS the correct way? Is there one?
Note: I've already checked this question, but the offered solution consists of writing each page of the PDF to disk and then checking the file size, which is extremely inefficient and may even be wrong, since I'm assuming this would repeat the PDF header and metadata each time. I was searching for a more "proper" solution.
Well, in the end I managed to get hold of the source code of the original program I was working with, which only accepted PDFs as input with a maximum "page size" of 1 MB. Turns out... what it actually meant by "page size" was fileSize / pageCount -_-^
For anyone who actually needs the precise size of a "standalone" page, with all content included, I've tested this solution and it seems to work well, though it probably isn't very efficient, as it writes out an entire PDF document for each page. Using a memory stream instead of a disk-based one helps, but I don't know by how much.
public static int[] getPageSizes(byte[] input) throws IOException {
    PdfReader reader;
    reader = new PdfReader(input);
    int pageCount = reader.getNumberOfPages();
    int[] pageSizes = new int[pageCount];
    for (int i = 0; i < pageCount; i++) {
        try {
            Document doc = new Document();
            ByteArrayOutputStream bous = new ByteArrayOutputStream();
            PdfCopy copy = new PdfCopy(doc, bous);
            doc.open();
            PdfImportedPage page = copy.getImportedPage(reader, i + 1);
            copy.addPage(page);
            doc.close();
            pageSizes[i] = bous.size();
        } catch (DocumentException e) {
            e.printStackTrace();
        }
    }
    reader.close();
    return pageSizes;
}
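A minimal usage sketch (the input.pdf path is just a placeholder; Files.readAllBytes requires Java 7 or later):
byte[] pdfBytes = Files.readAllBytes(Paths.get("input.pdf"));
int[] sizes = getPageSizes(pdfBytes);
for (int i = 0; i < sizes.length; i++) {
    System.out.println("page " + (i + 1) + ": " + sizes[i] + " bytes");
}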

java.net.MalformedURLException

URL stringfile = getXsl("test.xml");
File originFile = new File(stringfile.getFile());
String xml = null;
ByteArrayOutputStream pdfStream = null;
try {
    FileInputStream fis = new FileInputStream(originFile);
    int length = fis.available();
    byte[] readData = new byte[length];
    fis.read(readData);
    xml = (new String(readData)).trim();
    fis.close();
    xml = xml.substring(xml.lastIndexOf("<HttpCommandList>") + 17, xml.lastIndexOf("</HttpCommandList>"));
    String[] splitxml = xml.split("</HttpCommand>");
    for (int i = 0; i < splitxml.length; i++) {
        tmpxml = splitxml[i].trim() + "</HttpCommand>";
        System.out.println("splitxml:" + tmpxml);
        pdfStream = new ByteArrayOutputStream();
        pdf = new com.lowagie.text.Document();
        PdfWriter.getInstance(pdf, pdfStream);
        pdf.open();
        URL xslToUse = getXsl("test.xsl");
        // Here am using some utility class to transform
        // generate the XML needed by iText to generate the PDF using MessageBuffer contents
        String iTextXml = XmlUtil.transformXml(tmpxml.toString(), xslToUse).trim();
        // generate the PDF document by parsing the specified XML file
        XmlParser.parse(pdf, new ByteArrayInputStream(iTextXml.getBytes()));
    }
For the above code, during XmlParser.parse I am getting java.net.MalformedURLException: no protocol.
I am trying to generate the PDF document by parsing the specified XML file.
We would need the actual XML file to decide what is missing. I expect that there is no protocol defined, just like this:
192.168.1.2/ (no protocol)
file://192.168.1.2/ (there is one)
And URL seems to need one.
Also try:
new File("somexsl.xlt").toURI().toURL();
See here and here.
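A small standalone sketch (not tied to your getXsl helper) that shows the difference:
public static void main(String[] args) throws Exception {
    // fails: the string has no protocol part
    try {
        new URL("192.168.1.2/test.xsl");
    } catch (MalformedURLException e) {
        System.out.println(e); // java.net.MalformedURLException: no protocol: 192.168.1.2/test.xsl
    }
    // works: File.toURI().toURL() supplies the file: protocol
    URL ok = new File("test.xsl").toURI().toURL();
    System.out.println(ok); // prints something like file:/<working dir>/test.xsl
}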
It always helps to post the complete stack trace. No one knows where the exception actually occurred if you don't post the line numbers.

Splitting one PDF file into multiple files according to file size

I have been trying to split one big PDF file into multiple PDF files based on size. I was able to split it, but it only creates one single file and the rest of the file data is lost; it does not create more than one output file. Can anyone please help? Here is my code:
public static void main(String[] args) {
    try {
        PdfReader Split_PDF_By_Size = new PdfReader("C:\\Temp_Workspace\\TestZip\\input1.pdf");
        Document document = new Document();
        PdfCopy copy = new PdfCopy(document, new FileOutputStream("C:\\Temp_Workspace\\TestZip\\File1.pdf"));
        document.open();
        int number_of_pages = Split_PDF_By_Size.getNumberOfPages();
        int pagenumber = 1; /* To generate file name dynamically */
        // int Find_PDF_Size; /* To get PDF size in bytes */
        float combinedsize = 0; /* To convert this to kilobytes and estimate new PDF size */
        for (int i = 1; i < number_of_pages; i++) {
            float Find_PDF_Size;
            if (combinedsize == 0 && i != 1) {
                document = new Document();
                pagenumber++;
                String FileName = "File" + pagenumber + ".pdf";
                copy = new PdfCopy(document, new FileOutputStream(FileName));
                document.open();
            }
            copy.addPage(copy.getImportedPage(Split_PDF_By_Size, i));
            Find_PDF_Size = copy.getCurrentDocumentSize();
            combinedsize = (float) Find_PDF_Size / 1024;
            if (combinedsize > 496 || i == number_of_pages) {
                document.close();
                combinedsize = 0;
            }
        }
        System.out.println("PDF Split By Size Completed. Number of Documents Created:" + pagenumber);
    }
    catch (Exception i)
    {
        System.out.println(i);
    }
}
}
(BTW, it would have been great if you had tagged your question with itext, too.)
PdfCopy used to close the PdfReaders it imported pages from whenever the source PdfReader for page imports switched or the PdfCopy itself was closed. This was due to the originally intended use case of creating one target PDF from multiple source PDFs, combined with the fact that many users forget to close their PdfReaders.
Thus, after you close the first target PdfCopy, the PdfReader is closed, too, and no further pages are extracted.
If I interpret the most recent checkins into the iText SVN repository correctly, this implicit closing of PdfReaders is in the process of being removed from the code. Therefore, with one of the next iText versions, your code may work as intended.
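Until then, one workaround (a sketch under the assumption that re-reading the source file once per output part is acceptable) is to open a fresh PdfReader for every part, so that the implicit close only ever hits a reader you are finished with:
// inside a method declared with throws IOException, DocumentException
String src = "C:\\Temp_Workspace\\TestZip\\input1.pdf";
PdfReader countReader = new PdfReader(src);
int totalPages = countReader.getNumberOfPages();
countReader.close();

int partNumber = 0;
int nextPage = 1;
while (nextPage <= totalPages) {
    partNumber++;
    // a fresh reader per output part, so PdfCopy closing it does no harm
    PdfReader reader = new PdfReader(src);
    Document document = new Document();
    PdfCopy copy = new PdfCopy(document, new FileOutputStream("File" + partNumber + ".pdf"));
    document.open();

    float combinedSizeKb = 0;
    while (nextPage <= totalPages && combinedSizeKb <= 496) {
        copy.addPage(copy.getImportedPage(reader, nextPage));
        combinedSizeKb = copy.getCurrentDocumentSize() / 1024f;
        nextPage++;
    }
    document.close(); // older iText versions may implicitly close 'reader' here, which is fine now
}
System.out.println("PDF Split By Size Completed. Number of Documents Created: " + partNumber);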

iText PdfTextExtractor getTextFromPage exception "Error reading string at file pointer"

I am using iText PdfTextExtractor to extract text from the PdfReader, where the PdfReader is created from a byte array,
byte[] pdfbytes = outputStream.toByteArray();
PdfReader reader = new PdfReader(pdfbytes);
int pagenumber = reader.getNumberOfPages();
PdfTextExtractor extractor = new PdfTextExtractor(reader);
for (int i = 1; i <= pagenumber; i++) {
    System.out.println("============PAGE NUMBER " + i + "=============");
    String line = extractor.getTextFromPage(i);
    System.out.println(line);
}
The first test pdf is from: http://www.gnostice.com/downloads/Gnostice_PathQuest.pdf
I can print out the first page, but get the follow exception at the second page
Exception:
Exception in thread "main" ExceptionConverter: java.io.IOException: Error reading string at file pointer 238291
at com.lowagie.text.pdf.PRTokeniser.throwError(Unknown Source)
at com.lowagie.text.pdf.PRTokeniser.nextToken(Unknown Source)
at com.lowagie.text.pdf.PdfContentParser.nextValidToken(Unknown Source)
at com.lowagie.text.pdf.PdfContentParser.readPRObject(Unknown Source)
at com.lowagie.text.pdf.PdfContentParser.parse(Unknown Source)
at com.lowagie.text.pdf.parser.PdfContentStreamProcessor.processContent(Unknown Source)
at com.lowagie.text.pdf.parser.PdfTextExtractor.getTextFromPage(Unknown Source)
at org.xxx.services.pdfparser.xxxExtensionPdfParser.main(xxxExtensionPdfParser.java:114)
where xxxExtensionPdfParser.java:114 is String line = extractor.getTextFromPage(i);
But with a second test at http://www.irs.gov/pub/irs-pdf/fw4.pdf, I can get the text content without an exception. So I think it must be a format issue in the first PDF that causes the exception.
So my question is: what is this format issue, and is there any way to avoid it? Thanks.
I am getting the same error, and upon some investigation it seems that the problem with my PDF documents is that they contain a header or footer, as opposed to the IRS document you've linked. I indexed a 900-page PDF document and about 70 of the pages fail to extract. Apparently, all these pages have copyright information in the footer. Any ideas how to resolve this issue?
EDIT:
I applied the following method to get the text out of the aforementioned PDF. Hope this works for you as well.
PdfReader pdfReader = new PdfReader(file);
PdfReaderContentParser parser = new PdfReaderContentParser(pdfReader);
for (int currentPage = 1; currentPage <= pdfReader.getNumberOfPages(); currentPage++) {
    TextExtractionStrategy strategy =
            parser.processContent(currentPage, new SimpleTextExtractionStrategy());
    String content = strategy.getResultantText(); // text of currentPage
}
byte[] pdfbytes = outputStream.toByteArray();
PdfReader reader = new PdfReader(pdfbytes);
int pagenumber = reader.getNumberOfPages();
PdfTextExtractor extractor = new PdfTextExtractor(reader);
for (int i = 1; i <= pagenumber; i++) {
    System.out.println("============PAGE NUMBER " + i + "=============");
    String line = PdfTextExtractor.getTextFromPage(reader, i);
    System.out.println(line);
}
Replace your code with this and it will work fine.
