Is there any way I can add multiple documents to an HttpServletResponse?
This is the scenario:
I am looping through a list of file names that a customer wants to print and retrieving them from an FTP location.
I am able to send each one individually and show it in a browser, but I want to show all of them at once, in one browser window.
Below is what I do to send one file to the browser.
is = ftp.retrieveFileStream(strFile);
ByteArrayOutputStream baos = convertTIFFtoPDF(is);
response.setContentType("application/pdf");
response.setContentLength(baos.size());
response.setHeader("Content-Disposition", "attachment;filename=\"importDocs.pdf\"");
ServletOutputStream out = response.getOutputStream();
baos.writeTo(out);
out.flush();
out.close();
How about concatenating all the PDF files on the server and delivering them to the browser as a single PDF?
Adding more info on request:
If you are on Linux, you can easily use gs or pdftk for this purpose; Windows ports should also be available.
You would use gs to concatenate PDF files like this:
gs -q -sPAPERSIZE=a4 -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=output.pdf file1.pdf file2.pdf file3.pdf [...] lastfile.pdf
You would use pdftk to concatenate PDF files like this:
pdftk *.pdf cat output onelargepdfile.pdf
So here are the steps I would follow, provided the server has gs or pdftk installed:
Fetch all the PDF files from the remote location (your FTP server) into a directory on the server where the application runs. Make sure each PDF file has a unique name so that concurrent requests do not overwrite each other's files.
Execute gs or pdftk using a system command execution facility (Runtime.exec) from the servlet. Make sure the executed command writes the concatenated PDF to a unique file name.
Send the generated concatenated PDF to the browser using code similar to what you already have.
Clean up the server directory.
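The pdftk step above can be sketched like this (a minimal sketch, not the original poster's code; paths and file names are illustrative, and it assumes pdftk is installed and on the server's PATH):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PdfConcat {

    // Builds the pdftk command line: pdftk in1.pdf in2.pdf ... cat output out.pdf
    static String[] buildPdftkCommand(List<String> inputPdfs, String outputPdf) {
        List<String> cmd = new ArrayList<>();
        cmd.add("pdftk");
        cmd.addAll(inputPdfs);
        cmd.add("cat");
        cmd.add("output");
        cmd.add(outputPdf);
        return cmd.toArray(new String[0]);
    }

    public static void concat(List<String> inputPdfs, String outputPdf)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(buildPdftkCommand(inputPdfs, outputPdf))
                .inheritIO()   // forward pdftk's stdout/stderr to the server log
                .start();
        int exit = p.waitFor(); // block until pdftk has finished writing the output
        if (exit != 0) {
            throw new IOException("pdftk failed with exit code " + exit);
        }
    }
}
```

After `concat(...)` returns, the merged file can be streamed to the browser exactly as in the single-file snippet above, then deleted.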
Finally, here is the code I ended up with.
for (PdfBean pdfBean : list)
{
if(!pdfBean.getFileName().isEmpty()&&!pdfBean.getFileLocation().isEmpty()){
msgDH.InsertUserHistDtls(pdfBean.getHawb(), userId, pdfBean.getCustomer(), pdfBean.getFileName(), pdfBean.getProduct(), pdfBean.getDocType(), "Print");
msgDH.InsertLogDtls(pdfBean.getFileName(), pdfBean.getProduct(), pdfBean.getCustomer(), userId);
FTPClient ftp = new FTPClient();
int tiffPages = 0;
try
{
ftp.connect(ftpserver);
ftp.login(ftpusername, ftppassword);
} catch (IOException e1)
{
e1.printStackTrace();
}
int reply;
reply = ftp.getReplyCode();
if (!FTPReply.isPositiveCompletion(reply))
{
ftp.disconnect();
System.out.println("FTP server refused connection.");
} else
{
// ftp.enterLocalPassiveMode();
ftp.enterLocalActiveMode();
ftp.setFileType(FTPClient.BINARY_FILE_TYPE);
ftp.changeWorkingDirectory(pdfBean.getFileLocation());
ftpFiles = ftp.listFiles();
}
String fileName = pdfBean.getFileLocation() + "/"
+ pdfBean.getFileName();
if ("/interface/oracle/dds/generic/hold/TIF"
.equalsIgnoreCase(pdfBean.getFileLocation()))
{
int i = 0;
String F = pdfBean.getFileName().replaceAll(".PDF",
".TIF");
for (FTPFile ftpFile : ftpFiles)
{
if (ftpFile.getName().equals(F))
{
fileName = pdfBean.getFileLocation()
.concat("/").concat(F);
i = 1;
}
}
if (i == 0)
{
F = pdfBean.getFileName().replaceAll(".PDF",
".TIFF");
for (FTPFile ftpFile : ftpFiles)
{
if (ftpFile.getName().equals(F))
{
fileName = pdfBean.getFileLocation()
.concat("/").concat(F);
i = 1;
}
}
}
InputStream is = ftp.retrieveFileStream(fileName);
System.out.println("FTP reply code: " + ftp.getReplyCode());
ra1 = new RandomAccessFileOrArray(is);
tiffPages = TiffImage.getNumberOfPages(ra1);
System.out.println("No of pages in image is : "
+ tiffPages);
for (int a = 1; a <= tiffPages; a++)
{
try
{
Image img = TiffImage.getTiffImage(ra1, a);
if (img != null)
{
if (img.getScaledWidth() > 500
|| img.getScaledHeight() > 700)
{
img.scaleToFit(800, 800);
}
doc.setPageSize(new Rectangle(img
.getScaledWidth(), img
.getScaledHeight()));
img.setAbsolutePosition(0, 0);
cb.addImage(img);
// doc.
doc.newPage();
// ++pages;
}
} catch (Throwable e)
{
System.out.println("Exception " + " page "
+ (a + 1) + " " + e.getMessage());
}
}
is.close();
ra1.close();
}
else{
InputStream pdf = ftp.retrieveFileStream(fileName);
if(pdf != null) {
PdfReader pdfRea = new PdfReader(pdf);
readers.add(pdfRea);
totalPages += pdfRea.getNumberOfPages();
}
}
ftp.logout();
ftp.disconnect();
ftp = null;
}}
PdfImportedPage page;
int currentPageNumber = 0;
int pageOfCurrentReaderPDF = 0;
Iterator<PdfReader> iteratorPDFReader = readers.iterator();
BaseFont bf = BaseFont.createFont(BaseFont.HELVETICA,
BaseFont.CP1252, BaseFont.NOT_EMBEDDED);
// Loop through the PDF files and add to the output.
while (iteratorPDFReader.hasNext()) {
PdfReader pdfReader = iteratorPDFReader.next();
// Create a new page in the target for each source page.
while (pageOfCurrentReaderPDF < pdfReader.getNumberOfPages()) {
doc.newPage();
pageOfCurrentReaderPDF++;
currentPageNumber++;
page = write.getImportedPage(pdfReader, pageOfCurrentReaderPDF);
cb.addTemplate(page, 0, 0);
// Code for pagination.
cb.beginText();
cb.setFontAndSize(bf, 9);
cb.showTextAligned(PdfContentByte.ALIGN_CENTER, ""
+ currentPageNumber + " of " + totalPages, 520, 5, 0);
cb.endText();
}
pageOfCurrentReaderPDF = 0;
}
}
doc.close();
write.flush();
write.close();
System.out.println("done printing");
}
FileInputStream fiss = new FileInputStream(temp);
bis = new BufferedInputStream(fiss);
response.reset();
for (int i = 0; i < listOfTempFiles.length; i++) {
for (int j = 0; j < listOfFAQFiles.length; j++) {
if (listOfTempFiles[i].isFile() && listOfTempFiles[i].length() > 0) {
if (listOfTempFiles[i].getName().toLowerCase().contains(".pdf")) {
if (listOfTempFiles[i].getName().substring(listOfTempFiles[i].getName().lastIndexOf("#") + 1).equals(listOfFAQFiles[j].getName())) {
try {
List<InputStream> list = new ArrayList<InputStream>();
list.add(new FileInputStream(listOfTempFiles[i]));
list.add(new FileInputStream(listOfFAQFiles[j]));
System.out.println(listOfTempFiles[i].getName() + "with FAQ: " + listOfFAQFiles[j].getName());
int iend = listOfTempFiles[i].getName().lastIndexOf("#");
if (iend != -1) {
outputFilename = listOfTempFiles[i].getName().substring(0, iend);
}
OutputStream out = new FileOutputStream(new File(finalPDFParh + "/" + outputFilename + ".pdf"));
doMerge(list, out);
boolean flag=listOfTempFiles[i].delete();
System.out.println("Flag----->"+flag);
list.clear();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (DocumentException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
public static void doMerge(List<InputStream> list, OutputStream outputStream)
throws DocumentException, IOException {
Document document = new Document();
PdfWriter writer = PdfWriter.getInstance(document, outputStream);
document.open();
PdfContentByte cb = writer.getDirectContent();
for (InputStream in : list) {
PdfReader reader = new PdfReader(in);
for (int i = 1; i <= reader.getNumberOfPages(); i++) {
document.newPage();
//import the page from source pdf
PdfImportedPage page = writer.getImportedPage(reader, i);
//add the page to the destination pdf
cb.addTemplate(page, 0, 0);
}
}
outputStream.flush();
document.close();
outputStream.close();
}
I want to delete the original file from listOfTempFiles after it is merged with the FAQ file. The doMerge method merges the PDFs added to the list. I call the delete function, but the file is not deleted. What can I do about it?
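A likely cause (especially on Windows) is that the FileInputStreams added to the list are still open when delete() is called; doMerge never closes them, and File.delete() returns false while any handle to the file remains open. Closing every stream before deleting usually fixes it. A minimal self-contained sketch of the pattern (class and variable names are illustrative, not from the original code):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class DeleteAfterMerge {

    // Close every stream that the merge consumed; on Windows an open
    // handle makes File.delete() return false.
    static void closeAll(List<InputStream> streams) throws IOException {
        for (InputStream in : streams) {
            in.close();
        }
        streams.clear();
    }

    public static void main(String[] args) throws IOException {
        File temp = File.createTempFile("merge-input", ".pdf");
        Files.write(temp.toPath(), "dummy".getBytes());

        List<InputStream> list = new ArrayList<>();
        list.add(new FileInputStream(temp));

        // ... doMerge(list, out) would read the streams here ...

        closeAll(list);                  // close BEFORE deleting the source files
        System.out.println("deleted=" + temp.delete());
    }
}
```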
Hello, I was able to convert a TIF file to JPEG with the following code, which I got from
https://stackoverflow.com/questions/15429011/how-to-convert-tiff-to-jpeg-png-in-java#=
String inPath = "./tifTest/113873996.002.tif";
String otPath = "./tifTest/113873996.002-0.jpeg";
BufferedInputStream input = null;
BufferedOutputStream output = null;
try {
input = new BufferedInputStream(new FileInputStream(inPath), DEFAULT_BUFFER_SIZE);
output = new BufferedOutputStream(new FileOutputStream(otPath), DEFAULT_BUFFER_SIZE);
byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
int length;
while ((length = input.read(buffer)) > 0) {
output.write(buffer, 0, length);
}
} catch (FileNotFoundException ex) {
Logger.getLogger(TifToJpeg.class.getName()).log(Level.SEVERE, null, ex);
} catch (IOException ex) {
Logger.getLogger(TifToJpeg.class.getName()).log(Level.SEVERE, null, ex);
} finally {
try {
output.flush();
output.close();
input.close();
} catch (IOException e) {
e.printStackTrace();
}
}
This only works with a one-page TIF file; when I use it with a multi-page TIF, it only saves the first page.
How can I modify this to save mymultipagetif.tif into:
mymultipagetif-0.jpeg
mymultipagetif-1.jpeg
mymultipagetif-2.jpeg
Thanks!
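One way to do this (not from the original post) is to use the javax.imageio ImageReader API, which ships with a TIFF plugin since Java 9: getNumImages(true) reports the page count and read(i) decodes each page, which can then be written out as a JPEG. A sketch, with illustrative names (note that TIFF pages with alpha or CMYK color may not encode directly as JPEG):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class TiffSplitter {

    // Builds "mymultipagetif-0.jpeg" style names from a base name and page index.
    static String pageFileName(String baseName, int page) {
        return baseName + "-" + page + ".jpeg";
    }

    public static void splitTiff(File tiff, File outDir, String baseName) throws Exception {
        try (ImageInputStream iis = ImageIO.createImageInputStream(tiff)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(iis);
            if (!readers.hasNext()) {
                throw new IllegalStateException(
                        "No TIFF reader available (requires Java 9+ or a TIFF ImageIO plugin)");
            }
            ImageReader reader = readers.next();
            reader.setInput(iis);
            int pages = reader.getNumImages(true); // true = allow a scan to count pages
            for (int i = 0; i < pages; i++) {
                BufferedImage page = reader.read(i);
                ImageIO.write(page, "jpeg", new File(outDir, pageFileName(baseName, i)));
            }
            reader.dispose();
        }
    }
}
```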
This takes a multi-page TIFF file (in the SeekableStream), extracts the pages listed in the pages array ("1", "3", "4", for example), and writes them as a single multi-page TIFF to the file outTiffFileName. Modify as desired.
private String _ExtractListOfPages (SeekableStream ss, String outTiffFileName, String[] pages){
// pageNums is a String array of 0-based page numbers.
try {
TIFFDirectory td = new TIFFDirectory(ss, 0);
if (debugOn) {
System.out.println("Directory has " + Integer.toString(td.getNumEntries()) + " entries");
System.out.println("Getting TIFFFields");
System.out.println("X resolution = " + Float.toString(td.getFieldAsFloat(TIFFImageDecoder.TIFF_X_RESOLUTION)));
System.out.println("Y resolution = " + Float.toString(td.getFieldAsFloat(TIFFImageDecoder.TIFF_Y_RESOLUTION)));
System.out.println("Resolution unit = " + Long.toString(td.getFieldAsLong(TIFFImageDecoder.TIFF_RESOLUTION_UNIT)));
}
ImageDecoder decodedImage = ImageCodec.createImageDecoder("tiff", ss, null);
int count = decodedImage.getNumPages();
if (debugOn) { System.out.println("Input image has " + count + " page(s)"); }
TIFFEncodeParam param = new TIFFEncodeParam();
TIFFField tf = td.getField(259); // Compression as specified in the input file
param.setCompression(tf.getAsInt(0)); // Set the compression of the output to be the same.
param.setLittleEndian(false); // Big-endian output; pass true for Intel (little-endian) byte order
param.setExtraFields(td.getFields());
FileOutputStream fOut = new FileOutputStream(outTiffFileName);
Vector<RenderedImage> vector = new Vector<RenderedImage>();
RenderedImage page0 = decodedImage.decodeAsRenderedImage(Integer.parseInt(pages[0]));
BufferedImage img0 = new BufferedImage(page0.getColorModel(), (WritableRaster)page0.getData(), false, null);
int pgNum;
// Adding the extra pages starts with the second one on the list.
for (int i = 1; i < pages.length; i++ ) {
pgNum = Integer.parseInt(pages[i]);
if (debugOn) { System.out.println ("Page number " + pgNum); }
RenderedImage page = decodedImage.decodeAsRenderedImage(pgNum);
if (debugOn) { System.out.println ("Page is " + Integer.toString(page.getWidth()) + " pixels wide and "+ Integer.toString(page.getHeight()) + " pixels high."); }
if (debugOn) { System.out.println("Adding page " + pages[i] + " to vector"); }
vector.add(page);
}
param.setExtraImages(vector.iterator());
ImageEncoder encoder = ImageCodec.createImageEncoder("tiff", fOut, param);
if (debugOn) { System.out.println("Encoding page " + pages[0]); }
encoder.encode(decodedImage.decodeAsRenderedImage(Integer.parseInt(pages[0])));
fOut.close();
} catch (Exception e) {
System.out.println(e.toString());
return("Not OK " + e.getMessage());
}
return ("OK");
}
I currently have a file download process in my Java class, listed below, that takes all the data in a SQL table and puts it in a CSV file for the user to download. However, when I download the file, the data cuts off at random points (usually around line 20, even though there are at least 100 lines of data). What is causing the cutoff? Is it session-time related, or is the code just problematic?
public String processFileDownload() {
DataBaseBean ckear = new DataBaseBean();
ckear.clearContens();
FacesContext fc = FacesContext.getCurrentInstance();
ExternalContext ec = fc.getExternalContext();
Map<String, Object> m = fc.getExternalContext().getSessionMap();
dbase = (DbaseBean) m.get("dbaseBean");
message = (MessageBean) m.get("messageBean");
dataBean = (DataBean) m.get("dataBean");
dbmsUser = (DbmsUserBean) m.get("dbmsUserBean");
FileOutputStream fos = null;
String path = fc.getExternalContext().getRealPath("/temp");
String tableName = dbmsUser.getTableName();
String fileNameBase = tableName + ".csv";
java.net.URL check = getClass().getClassLoader().getResource(
"config.properties");
File check2 = new File(check.getPath());
path = check2.getParent();
String fileName = path + "/" + dbmsUser.getUserName() + "_"
+ fileNameBase;
File f = new File(fileName);
try {
f.createNewFile();
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
dbase.connect();
dbase.setQueryType("SELECT");
dbase.executeSQL("select * from " + tableName);
if (dbase.getResultSet() == null) {
FacesContext.getCurrentInstance().addMessage("myForm3:errmess",
new FacesMessage("Table doesn't exist!"));
return "failed";
}
Result result = ResultSupport.toResult(dbase.getResultSet());
downlaodedrows = result.getRowCount();
Object[][] sData = result.getRowsByIndex();
String columnNames[] = result.getColumnNames();
StringBuffer sb = new StringBuffer();
try {
fos = new FileOutputStream(fileName);
for (int i = 0; i < columnNames.length; i++) {
sb.append(columnNames[i].toString() + ",");
}
sb.append("\n");
fos.write(sb.toString().getBytes());
for (int i = 0; i < sData.length; i++) {
sb = new StringBuffer();
for (int j = 0; j < sData[0].length; j++) {
sb.append(sData[i][j].toString() + ",");
}
sb.append("\n");
fos.write(sb.toString().getBytes());
}
fos.flush();
fos.close();
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
String mimeType = ec.getMimeType(fileName);
FileInputStream in = null;
byte b;
ec.responseReset();
ec.setResponseContentType(mimeType);
ec.setResponseContentLength((int) f.length());
ec.setResponseHeader("Content-Disposition", "attachment; filename=\""
+ fileNameBase + "\"");
try {
in = new FileInputStream(f);
OutputStream output = ec.getResponseOutputStream();
while (true) {
b = (byte) in.read();
if (b < 0)
break;
output.write(b);
}
} catch (NullPointerException e) {
fc.responseComplete();
return "SUCCESS";
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} finally {
try {
in.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
fc.responseComplete();
return "SUCCESS";
}
The problem seems to be that you are simply appending commas between values, and it is likely that one of the values you are writing contains a delimiter, line separator, or quote character, which will "break" the CSV format if not correctly escaped.
It would be far easier and faster to use a CSV library for this. uniVocity-parsers comes with pre-built routines to dump your ResultSet into properly formatted CSV. In your case, you could use the library in the following manner:
ResultSet resultSet = dbase.getResultSet();
// Configure the output format as needed before actually dumping the data:
CsvWriterSettings writerSettings = new CsvWriterSettings(); //many settings here, check the tutorials & examples.
writerSettings.getFormat().setLineSeparator("\n");
writerSettings.setHeaderWritingEnabled(true); // we want the column names to be printed out as well.
// Then create a routines object:
CsvRoutines routines = new CsvRoutines(writerSettings);
// The write() method takes care of everything. The resultSet and any other resources required are closed by the routine.
routines.write(resultSet, new File(fileName), "UTF-8");
Hope this helps.
Disclaimer: I'm the author of this library. It's open source and free (Apache 2.0 license).
I am using the code below to retrieve a file from an FTP server and display it in the browser.
boolean fileFormatType = fileName.endsWith(".PDF");
if (fileFormatType) {
if (FilePdf != null && FilePdf.length() > 0) {
is = ftp.retrieveFileStream(strFile);
bis = new BufferedInputStream(is);
response.reset();
response.setContentType("application/pdf");
response.setHeader("Content-Disposition",
"inline;filename=example.pdf");
ServletOutputStream outputStream = response
.getOutputStream();
System.out.println("here ");
byte[] buffer = new byte[1024];
int readCount;
while ((readCount = bis.read(buffer)) > 0){
outputStream.write(buffer, 0, readCount);
}
outputStream.flush();
outputStream.close();
}
} else {
is = ftp.retrieveFileStream(strFile);
ByteArrayOutputStream baos = convertTIFFtoPDF(is);
response.setContentType("application/pdf");
response.setContentLength(baos.size());
response.setHeader("Content-disposition",
"attachment;filename=\"" + "importDocs.pdf" + "\"");
ServletOutputStream out = response.getOutputStream();
baos.writeTo(out);
out.flush();
out.close();
But now I need to add multiple files from a folder (the folder could contain PDFs and TIFFs) and display them all at once in a browser. I have been trying unsuccessfully for the past three days. I could post the code I tried, but I want a fresh opinion/approach. Please help me solve this. I am using iText PDF 5.1 and the commons-io util APIs.
I also get a NegativeArraySizeException, by the way.
I am trying to merge two PDFs into one. Merging works fine, but the content overflows the PDF page, as shown in the attachment. The original PDF is as follows.
After the merge, the document comes out like this.
The Java code is as follows:
BaseFont bf = BaseFont.createFont(BaseFont.TIMES_BOLD, BaseFont.CP1252, BaseFont.EMBEDDED);
//BaseFont bf= BaseFont.createFont();
PdfContentByte cb = writer.getDirectContent(); // Holds the PDF
// data
PdfImportedPage page;
int currentPageNumber = 0;
int pageOfCurrentReaderPDF = 0;
Iterator<PdfReader> iteratorPDFReader = readers.iterator();
// Loop through the PDF files and add to the output.
while (iteratorPDFReader.hasNext()) {
PdfReader pdfReader = iteratorPDFReader.next();
// Create a new page in the target for each source page.
while (pageOfCurrentReaderPDF < pdfReader.getNumberOfPages()) {
document.newPage();
pageOfCurrentReaderPDF++;
currentPageNumber++;
page = writer.getImportedPage(pdfReader,
pageOfCurrentReaderPDF);
cb.addTemplate(page, 0, 0);
// Code for pagination.
if (paginate) {
cb.beginText();
cb.setFontAndSize(bf, 9);
cb.showTextAligned(PdfContentByte.ALIGN_CENTER, ""
+ currentPageNumber + " of " + totalPages, 520,
5, 0);
cb.endText();
}
}
pageOfCurrentReaderPDF = 0;
}
Please help.
Please download chapter 6 of my book and take a look at Table 6.1. You're making the mistake of merging two documents using PdfWriter instead of PdfCopy, as documented. Take a look at Listing 6.22 to find out how to add page numbers when using PdfCopy.
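For reference, a minimal sketch of a PdfCopy-based merge with iText 5 (file names are placeholders, and the pageLabel helper is illustrative; PdfCopy preserves each imported page's own size and rotation, which avoids the overflow seen with PdfWriter):

```java
import java.io.FileOutputStream;
import java.util.Arrays;
import java.util.List;

import com.itextpdf.text.Document;
import com.itextpdf.text.pdf.PdfCopy;
import com.itextpdf.text.pdf.PdfReader;

public class MergeWithPdfCopy {

    // Pagination label like "3 of 10"; stamping it on the merged file can be
    // done in a second pass (e.g. with PdfStamper), as the book's listing shows.
    static String pageLabel(int current, int total) {
        return current + " of " + total;
    }

    public static void merge(List<String> inputs, String output) throws Exception {
        Document document = new Document();
        PdfCopy copy = new PdfCopy(document, new FileOutputStream(output));
        document.open();
        for (String input : inputs) {
            PdfReader reader = new PdfReader(input);
            // Each imported page keeps its original media box, so nothing is clipped.
            for (int i = 1; i <= reader.getNumberOfPages(); i++) {
                copy.addPage(copy.getImportedPage(reader, i));
            }
            reader.close();
        }
        document.close(); // finishes and closes the output PDF
    }

    public static void main(String[] args) throws Exception {
        merge(Arrays.asList("doc1.pdf", "doc2.pdf"), "merged.pdf");
    }
}
```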
I used the PdfCopyFields approach. Snippet as follows:
public static boolean concatPDFFiles(List<String> listOfFiles,
String outputfilepath) throws FileNotFoundException, DocumentException {
PdfCopyFields copy = null;
try {
copy = new PdfCopyFields(new FileOutputStream(outputfilepath));
} catch (DocumentException ex) {
Logger.getLogger(MergerGoogleDocsToPDF.class.getName()).log(Level.SEVERE, null, ex);
}
try {
for (String fileName : listOfFiles) {
PdfReader reader1 = new PdfReader(fileName);
copy.addDocument(reader1);
}
} catch (IOException ex) {
Logger.getLogger(MergerGoogleDocsToPDF.class.getName()).log(Level.SEVERE, null, ex);
} finally {
copy.close();
}
if (new File(outputfilepath).exists()) {
double bytes = new File(outputfilepath).length();
//double kilobytes = (bytes / 1024);
if (bytes != 0) {
return true;
} else {
return false;
}
} else {
return false;
}
}