I am trying to count the number of attachments on a PDF to verify our attachment code. The code I have works most of the time, but it recently started failing as both the number of attachments and their sizes grew. Example: I have a PDF with 700 attachments totaling 1.6 GB, and another with 65 attachments of around 10 MB. The 65-attachment file was built up incrementally, file by file: at 64 files (about 9.8 MB) the routine counted fine; adding file 65 (about 0.5 MB) made it fail.
This is on itextpdf-5.5.9.jar under JRE 1.8.0_162.
We are still testing different combinations of file counts and sizes to see where it breaks.
private static String CountFiles() throws IOException, DocumentException {
    boolean errorFound = true;
    PdfDictionary root;
    PdfDictionary names;
    PdfDictionary embeddedFiles;
    PdfReader reader = null;
    String theResult = "unknown";
    try {
        if (!theBaseFile.toLowerCase().endsWith(".pdf"))
            theResult = "file not PDF";
        else {
            reader = new PdfReader(theBaseFile);
            root = reader.getCatalog();
            names = root.getAsDict(PdfName.NAMES);
            if (names == null)
                theResult = "0";
            else {
                embeddedFiles = names.getAsDict(PdfName.EMBEDDEDFILES);
                PdfArray namesArray = embeddedFiles.getAsArray(PdfName.NAMES);
                theResult = String.format("%d", namesArray.size() / 2);
            }
            errorFound = false;
        }
    }
    catch (Exception e) {
        theResult = "unknown";
    }
    finally {
        if (reader != null)
            reader.close();
    }
    if (errorFound)
        sendError(theResult);
    return theResult;
}
private static String AttachFileInDir() throws IOException, DocumentException {
    String theResult = "unknown";
    // Note the escaped dot: "(?i).pdf$" would also match e.g. "Xpdf".
    String outputFile = theBaseFile.replaceFirst("(?i)\\.pdf$", ".attach.pdf");
    int maxFiles = 1000;
    int fileCount = 1;
    PdfReader reader = null;
    PdfStamper stamper = null;
    try {
        if (!theBaseFile.toLowerCase().endsWith(".pdf"))
            theResult = "basefile not PDF";
        else if (theFileDir.length() == 0)
            theResult = "no attach directory";
        else if (!Files.isDirectory(Paths.get(theFileDir)))
            theResult = "invalid attach directory";
        else {
            reader = new PdfReader(theBaseFile);
            stamper = new PdfStamper(reader, new FileOutputStream(outputFile));
            stamper.getWriter().setPdfVersion(PdfWriter.VERSION_1_7);
            Path dir = FileSystems.getDefault().getPath(theFileDir);
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
                for (Path path : stream) {
                    stamper.addFileAttachment(null, null, path.toFile().toString(), path.toFile().getName());
                    if (++fileCount > maxFiles) {
                        theResult = "maxfiles exceeded";
                        break;
                    }
                }
            }
            theResult = "SUCCESS";
        }
    }
    catch (Exception e) {
        theResult = "unknown";
    }
    finally {
        if (stamper != null)
            stamper.close();
        if (reader != null)
            reader.close();
    }
    if (!"SUCCESS".equals(theResult))
        sendError(theResult);
    return theResult;
}
I expect a simple count of attachments back. What seems to be happening is that namesArray comes back null, so the result stays "unknown". I suspect namesArray is trying to hold all the files and choking on the size.
Note: the files are attached using the AttachFileInDir procedure. Dump all the files in a directory and run AttachFileInDir. And yes, the error trapping in AttachFileInDir needs work.
Any help would be appreciated, or another method is welcome.
I finally got it. It turns out each KID is a dictionary of NAMES.
Each NAMES holds 64 file references. At 65 files and up, iText builds a KIDS array of NAMES dictionaries. So 279 files = (8 * 64 + 46) / 2 name/file-spec pairs (9 KIDS array elements in total).
One thing I had to compensate for: if one deletes all the attachments from a PDF, artifacts are left behind, as opposed to a PDF that never had an attachment.
private static String CountFiles() throws IOException, DocumentException {
    boolean errorFound = true;
    int totalFiles = 0;
    PdfArray filesArray;
    PdfDictionary root;
    PdfDictionary names;
    PdfDictionary embeddedFiles;
    PdfReader reader = null;
    String theResult = "unknown";
    try {
        if (!theBaseFile.toLowerCase().endsWith(".pdf"))
            theResult = "file not PDF";
        else {
            reader = new PdfReader(theBaseFile);
            root = reader.getCatalog();
            names = root.getAsDict(PdfName.NAMES);
            if (names == null) {
                theResult = "0";
                errorFound = false;
            }
            else {
                embeddedFiles = names.getAsDict(PdfName.EMBEDDEDFILES);
                filesArray = embeddedFiles.getAsArray(PdfName.NAMES);
                if (filesArray != null)
                    totalFiles = filesArray.size();
                else {
                    // Past 64 attachments the name tree is split into KIDS,
                    // each kid carrying its own NAMES array.
                    filesArray = embeddedFiles.getAsArray(PdfName.KIDS);
                    if (filesArray != null) {
                        for (int i = 0; i < filesArray.size(); i++)
                            totalFiles += filesArray.getAsDict(i).getAsArray(PdfName.NAMES).size();
                    }
                }
                theResult = String.format("%d", totalFiles / 2);
                errorFound = false;
            }
        }
    }
    catch (Exception e) {
        theResult = "unknown: " + e.getMessage();
    }
    finally {
        if (reader != null)
            reader.close();
    }
    if (errorFound)
        sendError(theResult);
    return theResult;
}
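Note that the PDF specification allows name-tree KIDS nodes to contain further KIDS, so a fully general count has to recurse rather than loop over a single KIDS array. Since iText itself isn't needed to show the shape of that recursion, here is a dependency-free sketch that models the tree with plain maps and lists; the string keys "Names" and "Kids" are stand-ins for PdfName.NAMES and PdfName.KIDS, and the whole thing is illustrative rather than the actual iText API:

```java
import java.util.*;

public class NameTreeCount {
    // A node is a Map that may hold "Names" (a flat list where every two
    // elements are one name/file-spec pair) and/or "Kids" (child nodes),
    // mirroring the structure of a PDF name tree.
    @SuppressWarnings("unchecked")
    static int countPairs(Map<String, Object> node) {
        int total = 0;
        List<Object> names = (List<Object>) node.get("Names");
        if (names != null)
            total += names.size() / 2;          // two array elements per file
        List<Map<String, Object>> kids = (List<Map<String, Object>>) node.get("Kids");
        if (kids != null)
            for (Map<String, Object> kid : kids)
                total += countPairs(kid);       // recurse: kids may nest further
        return total;
    }

    public static void main(String[] args) {
        // Three kids holding 64, 64, and 23 pairs respectively.
        Map<String, Object> root = new HashMap<>();
        List<Map<String, Object>> kids = new ArrayList<>();
        for (int n : new int[] {64, 64, 23}) {
            Map<String, Object> kid = new HashMap<>();
            kid.put("Names", new ArrayList<Object>(Collections.nCopies(n * 2, "x")));
            kids.add(kid);
        }
        root.put("Kids", kids);
        System.out.println(countPairs(root)); // prints 151
    }
}
```

In the real method, each map lookup would be a `getAsDict`/`getAsArray` call on the `PdfDictionary`, with a null check at every step.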
Related
I'm merging multiple files that together total 19 MB.
But the result is 56 MB. How can I bring the final size closer to 19 MB?
[EDIT]
public void concatena(InputStream anterior, InputStream novo, OutputStream saida, List<String> marcadores)
        throws IOException {
    PDFMergerUtility pdfMerger = new PDFMergerUtility();
    pdfMerger.setDestinationStream(saida);
    PDDocument dest;
    PDDocument src;
    MemoryUsageSetting setupMainMemoryOnly = MemoryUsageSetting.setupMainMemoryOnly();
    if (anterior != null) {
        dest = PDDocument.load(anterior, setupMainMemoryOnly);
        src = PDDocument.load(novo, setupMainMemoryOnly);
    } else {
        dest = PDDocument.load(novo, setupMainMemoryOnly);
        src = new PDDocument();
    }
    int totalPages = dest.getNumberOfPages();
    pdfMerger.appendDocument(dest, src);
    criaMarcador(dest, totalPages, marcadores);
    saida = pdfMerger.getDestinationStream();
    dest.save(saida);
    dest.close();
    src.close();
}
Sorry, I still do not know how to use Stack Overflow very well. I'm trying to post the rest of the code, but I'm getting an error.
[Edit 2 - add criaMarcador method]
private void criaMarcador(PDDocument src, int numPaginas, List<String> marcadores) {
    if (marcadores != null && !marcadores.isEmpty()) {
        PDDocumentOutline documentOutline = src.getDocumentCatalog().getDocumentOutline();
        if (documentOutline == null) {
            documentOutline = new PDDocumentOutline();
        }
        PDPage page;
        if (src.getNumberOfPages() == numPaginas) {
            page = src.getPage(0);
        } else {
            page = src.getPage(numPaginas);
        }
        PDOutlineItem bookmark = null;
        PDOutlineItem pai = null;
        String etiquetaAnterior = null;
        for (String etiqueta : marcadores) {
            bookmark = bookmark(pai != null ? pai : documentOutline, etiqueta);
            if (bookmark == null) {
                if (etiquetaAnterior != null && !etiquetaAnterior.equals(etiqueta) && pai == null) {
                    pai = bookmark(documentOutline, etiquetaAnterior);
                }
                bookmark = new PDOutlineItem();
                bookmark.setTitle(etiqueta);
                if (marcadores.indexOf(etiqueta) == marcadores.size() - 1) {
                    bookmark.setDestination(page);
                }
                if (pai != null) {
                    pai.addLast(bookmark);
                    pai.openNode();
                } else {
                    documentOutline.addLast(bookmark);
                }
            } else {
                pai = bookmark;
            }
            etiquetaAnterior = etiqueta;
        }
        src.getDocumentCatalog().setDocumentOutline(documentOutline);
    }
}
private PDOutlineItem bookmark(PDOutlineNode outline, String etiqueta) {
    PDOutlineItem current = outline.getFirstChild();
    while (current != null) {
        if (current.getTitle().equals(etiqueta)) {
            return current;
        }
        // Propagate a match found in the children instead of discarding it.
        PDOutlineItem child = bookmark(current, etiqueta);
        if (child != null) {
            return child;
        }
        current = current.getNextSibling();
    }
    return current;
}
[Edit 3] Here is the code used for testing:
public class PDFMergeTeste {
    public static void main(String[] args) throws IOException {
        if (args.length == 1) {
            PDFMergeTeste teste = new PDFMergeTeste();
            teste.executa(args[0]);
        } else {
            System.err.println("The argument must be a directory containing .pdf files named in the Autos pattern");
        }
    }

    private void executa(String diretorioArquivos) throws IOException {
        File[] listFiles = new File(diretorioArquivos).listFiles((pathname) ->
                pathname.getName().endsWith(".pdf") || pathname.getName().endsWith(".PDF"));
        List<File> lista = Arrays.asList(listFiles);
        lista.sort(Comparator.comparing(File::lastModified));
        PDFMerge merge = new PDFMerge();
        InputStream anterior = null;
        ByteArrayOutputStream saida = new ByteArrayOutputStream();
        for (File file : lista) {
            List<String> marcadores = marcadores(file.getName());
            InputStream novo = new FileInputStream(file);
            merge.concatena(anterior, novo, saida, marcadores);
            anterior = new ByteArrayInputStream(saida.toByteArray());
        }
        try (OutputStream pdf = new FileOutputStream(pathDestFile)) {
            saida.writeTo(pdf);
        }
    }

    private List<String> marcadores(String name) {
        String semExtensao = name.substring(0, name.indexOf(".pdf"));
        return Arrays.asList(semExtensao.split("_"));
    }
}
The error is in the executa method:
InputStream anterior = null;
ByteArrayOutputStream saida = new ByteArrayOutputStream();
for (File file : lista) {
    List<String> marcadores = marcadores(file.getName());
    InputStream novo = new FileInputStream(file);
    merge.concatena(anterior, novo, saida, marcadores);
    anterior = new ByteArrayInputStream(saida.toByteArray());
}
Your ByteArrayOutputStream saida is re-used in each iteration but is not cleared in between. Thus, it contains
after processing file 1:
file 1
after processing file 2:
file 1
concatenation of file 1 and file 2
after processing file 3:
file 1
concatenation of file 1 and file 2
concatenation of file 1 and file 2 and file 3
after processing file 4:
file 1
concatenation of file 1 and file 2
concatenation of file 1 and file 2 and file 3
concatenation of file 1 and file 2 and file 3 and file 4
(Actually this only works because PDFBox tries to be nice and fixes broken input files under the hood; strictly speaking, these concatenations of files are broken, and PDFBox isn't required to be able to parse them.)
You can fix this by clearing saida at the start of each iteration:
InputStream anterior = null;
ByteArrayOutputStream saida = new ByteArrayOutputStream();
for (File file : lista) {
    saida.reset();
    List<String> marcadores = marcadores(file.getName());
    InputStream novo = new FileInputStream(file);
    merge.concatena(anterior, novo, saida, marcadores);
    anterior = new ByteArrayInputStream(saida.toByteArray());
}
With your original method the result size for your inputs is nearly 26 MB, with the fixed method it is about 5 MB, and that latter size approximately represents the sum of the sizes of the input files.
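The effect of reset() can be seen in isolation with a few plain writes; nothing PDF-specific is involved, and the one-letter strings below are just stand-ins for the merged output of each pass:

```java
import java.io.ByteArrayOutputStream;

public class ResetDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream saida = new ByteArrayOutputStream();
        // Without reset(): every pass appends, so earlier content piles up.
        for (String s : new String[] {"A", "B", "C"}) {
            byte[] b = s.getBytes();
            saida.write(b, 0, b.length);
        }
        System.out.println(saida.toString()); // prints ABC

        // With reset() at the top of each pass, the buffer starts empty again
        // and only the current pass's output survives.
        for (String s : new String[] {"A", "B", "C"}) {
            saida.reset();
            byte[] b = s.getBytes();
            saida.write(b, 0, b.length);
        }
        System.out.println(saida.toString()); // prints C
    }
}
```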
I have a servlet which receives a huge string (approximately 301,695 characters) as a POST parameter.
Every minute, a .NET application sends such a huge string to the servlet.
Initially I read the string as below:
Line 1: String str = request.getParameter("data");
But after 3-4 hours I get the following exception:
java.lang.OutOfMemoryError: Java heap space
Then I commented out Line 1. Even though my servlet code no longer reads the string, I get the same exception as mentioned above.
Please guide me: how should I deal with this issue? I have read many blogs and articles on it, increased the heap size, and tried other things, but haven't found a solution.
The original code was like below:
private String scanType = "";
private static final String path = "D:\\Mobile_scan_alerts";
private static final String stockFileName = "stock.txt";
private static final String foFileName = "fo.txt";
private static Logger logger = null;
private String currDate = "";
private DateFormat dateFormat;
private StringBuffer stockData;
private StringBuffer foData;
StringBuffer data = new StringBuffer("");

// For average time of received data
private static float sum = 0;
private static float count = 0;
private static float s_sum = 0;
private static float s_count = 0;
private static float fo_sum = 0;
private static float fo_count = 0;

private static final File dir = new File(path);
private static final File stockFile = new File(path + "\\" + stockFileName);
private static final File foFile = new File(path + "\\" + foFileName);

public void init() {
    logger = MyLogger.getScanAlertLogger();
    if (logger == null) {
        MyLogger.createLog();
        logger = MyLogger.getScanAlertLogger();
    }
}

/**
 * Processes requests for both HTTP <code>GET</code> and <code>POST</code>
 * methods.
 *
 * @param request servlet request
 * @param response servlet response
 * @throws ServletException if a servlet-specific error occurs
 * @throws IOException if an I/O error occurs
 */
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    PrintWriter out = response.getWriter();
    response.setContentType("text/plain");
    String strScan = "";
    try {
        String asof = null;
        scanType = request.getParameter("type");
        scanType = scanType == null ? "" : scanType;
        if (scanType.length() > 0) {
            if (scanType.equalsIgnoreCase("s")) {
                stockData = null;
                stockData = new StringBuffer(request.getParameter("scanData"));
                stockData = stockData == null ? new StringBuffer("") : stockData;
            } else {
                foData = null;
                foData = new StringBuffer(request.getParameter("scanData"));
                foData = foData == null ? new StringBuffer("") : foData;
            }
        }
        asof = request.getParameter("asof");
        asof = asof == null ? "" : asof.trim();

        // Date format without seconds
        DateFormat formatWithoutSec = new SimpleDateFormat("yyyy/MM/dd HH:mm");
        dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        Date tmp = new Date();
        // format: yyyy/MM/dd HH:mm:ss
        currDate = dateFormat.format(tmp);
        // format: yyyy/MM/dd HH:mm
        Date asofDate = formatWithoutSec.parse(asof);
        Date cDate = formatWithoutSec.parse(currDate);
        cDate.setSeconds(0);
        System.out.println(asofDate.toString() + " || " + cDate.toString());
        int isDataExpired = asofDate.toString().compareTo(cDate.toString());
        if (isDataExpired >= 0) {
            if (scanType != null && scanType.length() > 0) {
                checkAndCreateDir();
                strScan = scanType.equalsIgnoreCase("s") ? "Stock Data Received at " + currDate
                        : "FO Data Received at " + currDate;
                //System.out.println(strScan);
            } else {
                strScan = "JSON of scan data not received properly at " + currDate;
                //System.out.println("GSAS: received null or empty");
            }
        } else {
            strScan = "GSAS: " + scanType + ": Received Expired Data of " + asofDate.toString() + " at " + cDate.toString();
            System.out.println(strScan);
        }
        scanType = null;
    } catch (Exception ex) {
        strScan = "Mobile server issue for receiving scan data";
        System.out.println("GSAS: Exception-1: " + ex);
        logger.error("GetScanAlertServlet: processRequest(): Exception: " + ex.toString());
    } finally {
        logger.info("GetScanAlertServlet: " + strScan);
        out.println(strScan);
    }
}

private void checkAndCreateDir() {
    try {
        boolean isStock = false;
        Date ddate = new Date();
        currDate = dateFormat.format(ddate);
        sum += ddate.getSeconds();
        count++;
        logger.info("Total Average Time: " + (sum / count));
        if (scanType.equalsIgnoreCase("s")) { // For Stock
            setStockData(stockData);
            Date date1 = new Date();
            currDate = dateFormat.format(date1);
            s_sum += date1.getSeconds();
            s_count++;
            logger.info("Stock Average Time: " + (s_sum / s_count));
            //file = new File(path + "\\" + stockFileName);
            isStock = true;
        } else if (scanType.equalsIgnoreCase("fo")) { // For FO
            setFOData(foData);
            Date date2 = new Date();
            currDate = dateFormat.format(date2);
            fo_sum += date2.getSeconds();
            fo_count++;
            logger.info("FO Average Time: " + (fo_sum / fo_count));
            //file = new File(path + "\\" + foFileName);
            isStock = false;
        }
        if (!dir.exists()) { // Directory does not exist
            if (dir.mkdir()) {
                if (isStock)
                    checkAndCreateFile(stockFile);
                else
                    checkAndCreateFile(foFile);
            }
        } else { // Directory already exists
            if (isStock)
                checkAndCreateFile(stockFile);
            else
                checkAndCreateFile(foFile);
        }
    } catch (Exception e) {
        System.out.println("GSAS: Exception-2: " + e);
        logger.error("GetScanAlertServlet: checkAndCreateDir(): Exception: " + e);
    }
}

private void checkAndCreateFile(File file) {
    try {
        if (!file.exists()) { // File does not exist
            if (file.createNewFile()) {
                writeToFile(file);
            }
        } else { // File already exists
            writeToFile(file);
        }
    } catch (Exception e) {
        System.out.println("GSAS: Exception-3: " + e);
        logger.error("GetScanAlertServlet: checkAndCreateFile(): Exception: " + e.toString());
    }
}

private void writeToFile(File file) {
    FileOutputStream fop = null;
    try {
        if (scanType.equalsIgnoreCase("s")) { // For Stock
            data = getStockData();
        } else if (scanType.equalsIgnoreCase("fo")) { // For FO
            data = getFOData();
        }
        if (data != null && data.length() > 0 && file != null) {
            fop = new FileOutputStream(file);
            byte[] contentBytes = data.toString().getBytes();
            for (byte b : contentBytes) {
                fop.write(b);
            }
            //fop.write(contentBytes);
            fop.flush();
        } else {
            System.out.println("GSAS: Data is null/empty string");
            logger.info("GSAS: Data is null or empty string");
        }
        data = null;
    } catch (Exception e) {
        System.out.println("GSAS: Exception-4: " + e);
        logger.info("GetScanAlertServlet: writeToFile(): Exception: " + e.toString());
    } finally {
        try {
            if (fop != null)
                fop.close();
        } catch (IOException ex) {
            java.util.logging.Logger.getLogger(GetScanAlertServlet.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}

private String readFromFile(String fileName) {
    String fileContent = "";
    try {
        String temp = "";
        File file = new File(fileName);
        if (file.exists()) {
            FileReader fr = new FileReader(file);
            BufferedReader br = new BufferedReader(fr);
            while ((temp = br.readLine()) != null) {
                fileContent += temp;
            }
            br.close();
        } else {
            System.out.println("GSAS: File does not exist to read");
            logger.info("GetScanAlertServlet: File does not exist to read");
        }
        temp = null;
        file = null;
    } catch (Exception e) {
        System.out.println("GSAS: Exception-5: " + e);
        logger.error("GetScanAlertServlet: readFromFile(): Exception: " + e.toString());
    }
    return fileContent;
}

public StringBuffer getStockData() {
    //String temp = "";
    //StringBuffer temp = (StringBuffer) scanDataSession.getAttribute("stock");
    //if (temp != null && temp.length() > 0) {
    //    return temp;
    //}
    if (stockData != null && stockData.length() > 0) {
        return stockData;
    } else {
        stockData = null;
        stockData = new StringBuffer(readFromFile(path + "\\" + stockFileName));
        return stockData;
    }
}

public StringBuffer getFOData() {
    //String temp = "";
    //StringBuffer temp = (StringBuffer) scanDataSession.getAttribute("fo");
    //if (temp != null && temp.length() > 0) {
    //    return temp;
    //}
    if (foData != null && foData.length() > 0) {
        return foData;
    } else {
        foData = null;
        foData = new StringBuffer(readFromFile(path + "\\" + foFileName));
        return foData;
    }
}
}
Increasing the heap size is not a good solution for this problem; your upstream application should stop sending huge strings to your servlet.
Your upstream (.NET) application should consider writing the data to a file and sending just the file's location as a parameter to your servlet. Once your servlet receives that notification, it can download or read the file from the given location.
Then I commented Line: 1. Even though my servlet code does not receive the string (as commented), I get the same exception as mentioned above.
Line 1 is what reads the data; if you comment it out, you won't receive the String.
You can use the Apache commons-fileupload library's Streaming API. This way, you receive the uploaded file as a stream and can write it to a file:
ServletFileUpload upload = new ServletFileUpload();
// Parse the request
FileItemIterator iter = upload.getItemIterator(request);
while (iter.hasNext()) {
    FileItemStream item = iter.next();
    String name = item.getFieldName();
    InputStream stream = item.openStream();
    if (item.isFormField()) {
        System.out.println("Form field " + name + " with value "
                + Streams.asString(stream) + " detected.");
    } else {
        System.out.println("File field " + name + " with file name "
                + item.getName() + " detected.");
        // Process the input stream
        ...
    }
}
Now you have an InputStream, so you can write it to an output stream.
But to use this, your .NET application needs to upload the bytes to the server instead of sending the entire String as a request parameter.
http://commons.apache.org/proper/commons-fileupload/streaming.html
Please check your VM arguments and modify them appropriately if you have no control over the String being passed to the servlet. For example:
set JAVA_OPTS=-Dfile.encoding=UTF-8 -Xms128m -Xmx1024m -XX:PermSize=64m -XX:MaxPermSize=256m
Check for a complete explanation here.
We used GZip compression/decompression to lower the size of the string, and it worked effectively.
So the .NET service compresses the huge string and sends it to our servlet, and we decompress it on our server.
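For reference, the Java side of such a round trip can be sketched with the JDK's java.util.zip classes alone; the payload below is made up for the demo, and the buffer size is an arbitrary choice:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipStringDemo {
    // Compress a string to gzip bytes.
    static byte[] compress(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes("UTF-8"));
        }
        return bos.toByteArray();
    }

    // Decompress gzip bytes back to the original string.
    static String decompress(byte[] b) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(b))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1)
                out.write(buf, 0, n);
            return out.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        // Roughly the size mentioned in the question; repetitive text
        // like this compresses extremely well.
        String big = "x".repeat(300000);
        byte[] packed = compress(big);
        System.out.println(packed.length < big.length());   // true
        System.out.println(decompress(packed).equals(big)); // true: lossless round trip
    }
}
```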
I want to merge PDF files: around 20 files of 60 MB each. I am using the iText API to merge them.
The problem is that I have to complete it within 2 seconds, but my code takes 8 seconds.
Any suggestions for speeding up the merging of PDF files?
private void mergeFiles(List<String> filesToBeMerged, String mergedFilePath) throws Exception {
    Document document = null;
    PdfCopy copy = null;
    PdfReader reader = null;
    BufferedOutputStream bos = null;
    int bufferSize = 8 * 1024 * 1024;
    String pdfLocation = "C:\\application\\projectone-working\\projectone\\web\\pdf\\";
    try {
        int fileIndex = 0;
        for (String file : filesToBeMerged) {
            reader = new PdfReader(pdfLocation + "/" + file);
            reader.consolidateNamedDestinations();
            int totalPages = reader.getNumberOfPages();
            if (fileIndex == 0) {
                // Open the destination document on the first file only.
                document = new Document(reader.getPageSizeWithRotation(1));
                bos = new BufferedOutputStream(new FileOutputStream(mergedFilePath), bufferSize);
                copy = new PdfCopy(document, bos);
                document.open();
            }
            PdfImportedPage page;
            for (int currentPage = 1; currentPage <= totalPages; currentPage++) {
                page = copy.getImportedPage(reader, currentPage);
                copy.addPage(page);
            }
            PRAcroForm form = reader.getAcroForm();
            if (form != null) {
                copy.copyAcroForm(reader);
            }
            fileIndex++;
        }
        document.close();
    } finally {
        if (reader != null) {
            reader.close();
        }
        if (bos != null) {
            bos.flush();
            bos.close();
        }
        if (copy != null) {
            copy.close();
        }
    }
}
Hello, I have been writing an updater for my game.
1) It checks a .version file on Dropbox and compares it to the local .version file.
2) If any link is missing from the local version of the file, it downloads the required links one by one.
The issue I am having is that some of the users can download the zips and some cannot.
One of the users who was having the issue was using Windows XP, so some of them have old computers.
I was wondering if anyone could help me get an idea of what could be causing this.
This is the main method that is run:
public void UpdateStart() {
    System.out.println("Starting Updater..");
    if (new File(cache_dir).exists() == false) {
        System.out.print("Creating cache dir.. ");
        while (new File(cache_dir).mkdir() == false);
        System.out.println("Done");
    }
    try {
        version_live = new Version(new URL(version_file_live));
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    version_local = new Version(new File(version_file_local));
    Version updates = version_live.differences(version_local);
    System.out.println("Updated");
    int i = 1;
    try {
        byte[] b = null, data = null;
        FileOutputStream fos = null;
        BufferedWriter bw = null;
        for (String s : updates.files) {
            if (s.equals(""))
                continue;
            System.out.println("Reading file " + s);
            text = "Downloading file " + i + " of " + updates.files.size();
            b = readFile(new URL(s));
            progress_a = 0;
            progress_b = b.length;
            text = "Unzipping file " + i++ + " of " + updates.files.size();
            ZipInputStream zipStream = new ZipInputStream(new ByteArrayInputStream(b));
            File f = null, parent = null;
            ZipEntry entry = null;
            int read = 0, entry_read = 0;
            long entry_size = 0;
            progress_b = 0;
            while ((entry = zipStream.getNextEntry()) != null)
                progress_b += entry.getSize();
            zipStream = new ZipInputStream(new ByteArrayInputStream(b));
            while ((entry = zipStream.getNextEntry()) != null) {
                f = new File(cache_dir + entry.getName());
                if (entry.isDirectory())
                    continue;
                System.out.println("Making file " + f.toString());
                parent = f.getParentFile();
                if (parent != null && !parent.exists()) {
                    System.out.println("Trying to create directory " + parent.getAbsolutePath());
                    while (parent.mkdirs() == false);
                }
                entry_read = 0;
                entry_size = entry.getSize();
                data = new byte[1024];
                fos = new FileOutputStream(f);
                while (entry_read < entry_size) {
                    read = zipStream.read(data, 0, (int) Math.min(1024, entry_size - entry_read));
                    entry_read += read;
                    progress_a += read;
                    fos.write(data, 0, read);
                }
                fos.close();
            }
            bw = new BufferedWriter(new FileWriter(new File(version_file_local), true));
            bw.write(s);
            bw.newLine();
            bw.close();
        }
    } catch (Exception e) {
        this.e = e;
        e.printStackTrace();
        return;
    }
    System.out.println(version_live);
    System.out.println(version_local);
    System.out.println(updates);
}
I have been trying to fix this for the last two days and I am just stumped at this point.
All the best,
Christian
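One thing that stands out in the extraction loop above: ZipEntry.getSize() may legitimately return -1 when the size is not recorded in the local header, and zipStream.read() returns -1 at the end of an entry, so a loop conditioned on `entry_read < entry_size` can spin forever or write garbage on some zips (which might explain why only some users' downloads fail). A more defensive pattern is to read until the stream itself reports end-of-entry. The sketch below is self-contained, building a tiny zip in memory so it can run anywhere; the class and method names are made up for the demo:

```java
import java.io.*;
import java.util.zip.*;

public class ZipCopyDemo {
    // Copy one entry's bytes without trusting ZipEntry.getSize(),
    // which can return -1 for streamed zips.
    static byte[] readEntry(ZipInputStream zin) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while ((n = zin.read(buf)) != -1)   // -1 marks the end of the entry
            out.write(buf, 0, n);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Build a small zip in memory so the demo needs no files on disk.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zout = new ZipOutputStream(bos)) {
            zout.putNextEntry(new ZipEntry("a.txt"));
            zout.write("hello".getBytes());
            zout.closeEntry();
        }
        try (ZipInputStream zin = new ZipInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            ZipEntry e = zin.getNextEntry();
            System.out.println(e.getName() + ": " + new String(readEntry(zin))); // prints a.txt: hello
        }
    }
}
```

In the updater, the bytes would go to the FileOutputStream instead of a ByteArrayOutputStream, and the progress counter would be advanced by each chunk's length.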
This is a follow-up to this question, involving iText. I create a new PDF with a different rotation angle, then delete the old one and rename the new one to the name of the old one. I've determined that my problem actually happens (only when rotation == null, strangely) with the call to
outFile.renameTo(inFile)
Oddly enough, renameTo() returns true, but the file is not equal to the original; outFile will no longer open in Adobe Reader on Windows. I tried analyzing the corrupted PDF file in a desktop PDF repair program, and the results I get are:
The end-of-file marker was not found.
The ‘startxref’ keyword or the xref position was not found.
The end-of-file marker was not found.
If I leave out the calls to delete() and renameTo(), I am left with two files, neither of which is corrupt. I have also tried copying the file contents with a byte[], with the same results. I have tried outFile.renameTo(new File(inFile.toString())), since inFile is actually a subclass of File, with the same results. I have tried new FileDescriptor().sync(), with the same results. I have tried sending this broadcast between every file operation, with the same results:
PdfRotateService.appContext.sendBroadcast(new Intent(Intent.ACTION_MEDIA_MOUNTED, Uri
.parse("file://")));
I have tried sleeping the thread, with the same results. I have verified the paths are correct. No exceptions are thrown, and delete() and renameTo() return true. I have also tried keeping a reference to the FileOutputStream and manually closing it in the finally block.
I am beginning to think there is a bug in the Android OS or something (but maybe I am overlooking something simple). Please help! I want a rotated PDF with the same filename as the original.
static boolean rotatePdf(LocalFile inFile, int angle)
{
    PdfReader reader = null;
    PdfStamper stamper = null;
    LocalFile outFile = getGoodFile(inFile, ROTATE_SUFFIX);
    boolean worked = true;
    try
    {
        reader = new PdfReader(inFile.toString());
        stamper = new PdfStamper(reader, new FileOutputStream(outFile));
        int i = FIRST_PAGE;
        int l = reader.getNumberOfPages();
        for (; i <= l; ++i)
        {
            int desiredRot = angle;
            PdfDictionary pageDict = reader.getPageN(i);
            PdfNumber rotation = pageDict.getAsNumber(PdfName.ROTATE);
            if (rotation != null)
            {
                desiredRot += rotation.intValue();
                desiredRot %= 360;
            }
            // else
            //     worked = false;
            pageDict.put(PdfName.ROTATE, new PdfNumber(desiredRot));
        }
    } catch (IOException e)
    {
        worked = false;
        Log.w("Rotate", "Caught IOException in rotate");
        e.printStackTrace();
    } catch (DocumentException e)
    {
        worked = false;
        Log.w("Rotate", "Caught DocumentException in rotate");
        e.printStackTrace();
    } finally
    {
        boolean z = closeQuietly(stamper);
        boolean y = closeQuietly(reader);
        if (!(y && z))
            worked = false;
    }
    if (worked)
    {
        if (!inFile.delete())
            worked = false;
        if (!outFile.renameTo(inFile))
            worked = false;
    }
    else
    {
        outFile.delete();
    }
    return worked;
}
static boolean closeQuietly(Object resource)
{
    try
    {
        if (resource != null)
        {
            if (resource instanceof PdfReader)
                ((PdfReader) resource).close();
            else if (resource instanceof PdfStamper)
                ((PdfStamper) resource).close();
            else
                ((Closeable) resource).close();
            return true;
        }
    } catch (Exception ex)
    {
        Log.w("Exception during Resource.close()", ex);
    }
    return false;
}
public static LocalFile getGoodFile(LocalFile inFile, String suffix)
{
    @SuppressWarnings("unused")
    String outString = inFile.getParent() + DIRECTORY_SEPARATOR +
            removeExtension(inFile.getName()) + suffix + getExtension(inFile.getName());
    LocalFile outFile = new LocalFile(inFile.getParent() + DIRECTORY_SEPARATOR +
            removeExtension(inFile.getName()) + suffix + getExtension(inFile.getName()));
    int n = 1;
    while (outFile.isFile())
    {
        outFile = new LocalFile(inFile.getParent() + DIRECTORY_SEPARATOR +
                removeExtension(inFile.getName()) + suffix + n + getExtension(inFile.getName()));
        ++n;
    }
    return outFile;
}
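Since leaving out delete()/renameTo() produces two healthy files, the rename step itself is the prime suspect. One avenue worth trying, where java.nio.file is available (API level 26 and up on Android), is Files.move(): unlike File.renameTo(), it throws an exception on failure instead of returning a status flag, and it can replace the target in a single step, so a silently half-completed rename becomes a visible error. A minimal desktop sketch of the replace-in-one-step pattern, using throwaway temp files in place of the real PDFs:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("movedemo");
        Path in = dir.resolve("in.pdf");   // stands in for the original file
        Path out = dir.resolve("out.pdf"); // stands in for the rotated copy
        Files.write(in, "old".getBytes());
        Files.write(out, "new".getBytes());

        // Replace the original with the rotated copy atomically where the
        // filesystem supports it; any failure surfaces as an IOException
        // rather than a boolean that is easy to ignore.
        Files.move(out, in, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(new String(Files.readAllBytes(in))); // prints new
        System.out.println(Files.exists(out));                  // prints false
    }
}
```

This removes the separate delete() step entirely, which also closes the window where the original is gone but the rename has not happened yet.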