I am using documents4j to convert Word documents to PDF, and sometimes I get the exception below:
2016-03-28 09:29:16.982 INFO 3660 --- [pool-1-thread-2] c.d.c.msoffice.MicrosoftWordBridge : Requested conversion from C:\conversion-temp\2b33637b-b74a-4aaa-ac65-a5ebc1eb3efc\temp3 (application/msword) to C:\conversion-temp\2b33637b-b74a-4aaa-ac65-a5ebc1eb3efc\temp4 (application/pdf)
2016-03-28 09:29:17.372 ERROR 3660 --- [http-nio-8080-exec-9] c.s.c.e.mappers.ExceptionMapper : Exception while handling request
com.documents4j.throwables.FileSystemInteractionException: Could not access target file
at com.documents4j.util.Reaction$FileSystemInteractionExceptionBuilder.make(Reaction.java:180) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.util.Reaction$ExceptionalReaction.apply(Reaction.java:75) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.conversion.ExternalConverterScriptResult.resolve(ExternalConverterScriptResult.java:70) ~[documents4j-transformer-api-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.evaluateExitValue(ProcessFutureWrapper.java:48) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:36) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:11) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.job.AbstractFutureWrappingPriorityFuture.run(AbstractFutureWrappingPriorityFuture.java:78) ~[documents4j-util-conversion-1.0.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_74]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_74]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
After this exception, any further requests are rejected by the documents4j library with the exception below:
com.documents4j.throwables.ConverterAccessException: The converter seems to be shut down
at com.documents4j.util.Reaction$ConverterAccessExceptionBuilder.make(Reaction.java:117) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.util.Reaction$ExceptionalReaction.apply(Reaction.java:75) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.conversion.ExternalConverterScriptResult.resolve(ExternalConverterScriptResult.java:70) ~[documents4j-transformer-api-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.evaluateExitValue(ProcessFutureWrapper.java:48) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:36) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:11) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.job.AbstractFutureWrappingPriorityFuture.run(AbstractFutureWrappingPriorityFuture.java:78) ~[documents4j-util-conversion-1.0.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_74]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_74]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
This is how I am doing the document conversion.
I instantiate a LocalConverter instance:
LocalConverter.builder()
        .workerPool(corePoolSize, maximumPoolSize, keepAliveTime, TimeUnit.MINUTES)
        .baseFolder(baseFolder)
        .processTimeout(processTimeout, TimeUnit.SECONDS)
        .build();
corePoolSize is 5
maximumPoolSize is 10
keepAliveTime is 3 minutes
processTimeout is 20 minutes
And I am using this instance like this:
public File convertFile(MultipartFile file) throws ConversionException {
    try (InputStream docStream = file.getInputStream(); ByteArrayOutputStream pdfStream = new ByteArrayOutputStream()) {
        boolean status = iConverter.convert(docStream, false).as(DocumentType.DOC)
                .to(pdfStream, false).as(DocumentType.PDF)
                .execute();
        if (status) {
            // conversion succeeded, send the response
            File response = new File(); // a custom response type, not java.io.File
            //InputStream responseStream = new ByteArrayInputStream(pdfStream.toByteArray());
            response.setContentLength(pdfStream.size());
            //response.setInputStream(responseStream);
            response.setOutputStream(pdfStream);
            return response;
        } else {
            LOGGER.error("Failed to convert word to pdf, conversion status is {}", status);
            throw new ConversionException("failed to convert word to pdf");
        }
    } catch (FileSystemInteractionException fsie) {
        LOGGER.error("documents4j file system interaction exception", fsie);
        throw new ConversionException("File system exception", fsie);
    } catch (IOException ioe) {
        throw new ConversionException("Cannot read the input stream of file", ioe);
    }
}
The multipart file here is Spring's MultipartFile.
I checked the VBScript that documents4j uses for the conversion and found that this error occurs when the Word document is not closed properly. Below is the snippet from the VBScript that is the source of this error:
' Close the source document.
wordDocument.Close WdDoNotSaveChanges
If Err <> 0 Then
WScript.Quit -3
End If
On Error GoTo 0
I am not sure why I am getting FileSystemInteractionException.
There are two possibilities that I can think of:
I am sending multiple simultaneous requests and the file is deleted by some other thread.
I am getting the input stream from the MultipartFile object; the multipart file is temporary and, as per the documentation, the user is responsible for copying its content to persistent storage.
(Spring official docs)
How can I resolve this error, and what is its root cause?
There can be multiple reasons for this error:
com.documents4j.throwables.FileSystemInteractionException: Could not access target file
Exception documentation here
Have you tried saving the uploaded multipart file to a temporary file, then passing this temporary file to the converter? I am aware this adds unnecessary overhead. However, if this works, then we can safely assume that the input docStream is not fully populated when the IConverter instance tries to access it, and hence the error. In that case, you should ensure that the input stream is fully populated before attempting conversion, and that should resolve your issue.
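A rough sketch of what I mean, reusing the file and iConverter variables from your question (note that File below means java.io.File, not your custom response type; the checked IOExceptions still need handling as in your existing method, and DocumentType.DOC is taken from your code):

java.io.File source = java.io.File.createTempFile("upload-", ".doc");
java.io.File target = java.io.File.createTempFile("converted-", ".pdf");
try {
    // Persist the multipart upload to a file we control before conversion starts,
    // so the temporary multipart storage cannot disappear underneath documents4j.
    file.transferTo(source);
    boolean status = iConverter.convert(source).as(DocumentType.DOC)
            .to(target).as(DocumentType.PDF)
            .execute();
    // On success, read the converted bytes back from target and build your response from them.
} finally {
    source.delete();
    // Delete target as well once the response has been written out.
}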
If you get this error even for "file-based" conversion scenarios, try the following steps:
Ensure that no MS Office applications are running (for example, because you opened a Word document externally).
Ensure that there is one and only one IConverter instance running on the physical machine (and not just within the JVM).
If you are running Tomcat as a service (I'm assuming you're deploying this on Tomcat), make sure Tomcat runs not under the SYSTEM account but under a local user account.
In a web application, the IConverter instance should be created once (for example, in a singleton class) and the same instance should be returned whenever one of your business methods requests it. Also, do not shut down the converter if you anticipate simultaneous document conversion requests.
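A minimal sketch of such a singleton holder, using the pool and timeout values you listed (class name, base folder and the exact values are only illustrative, adjust them to your setup):

import com.documents4j.api.IConverter;
import com.documents4j.job.LocalConverter;
import java.io.File;
import java.util.concurrent.TimeUnit;

public final class ConverterHolder {

    // One LocalConverter shared by the whole application; built once and never
    // shut down while conversion requests can still arrive.
    private static final IConverter CONVERTER = LocalConverter.builder()
            .workerPool(5, 10, 3, TimeUnit.MINUTES)
            .baseFolder(new File("C:/conversion-temp"))
            .processTimeout(20, TimeUnit.MINUTES)
            .build();

    private ConverterHolder() {
    }

    public static IConverter instance() {
        return CONVERTER;
    }
}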
Ideally, one of these steps should solve the issue at hand; let me know in the comments if you still face it.
I am also facing the same error.
I used this PDF conversion inside a Spring Boot application and deployed it on a Windows Server.
When I run the application manually (using java -jar), it works perfectly fine.
But when I start it as a Windows service (using winsw.exe), it gives me the error:
com.documents4j.throwables.FileSystemInteractionException: Could not access target file
Related
I have encountered a strange problem when accessing an Azure Storage Blob from an Azure Function. I have a Function written in Java which is supposed to download some data from a Blob and then execute a PowerShell command. The PowerShell command can launch another Java application, which accesses the same Blob. I have this process working except for the step where the Function first downloads the Blob, which always times out while trying to get the blob size.
The weird thing is that the Java application launched by the PowerShell command uses the same code to download the Blob and can do so without any trouble at all.
Here is the relevant code snippet:
try {
    blob = new BlobClientBuilder().endpoint(connStr).buildClient();
    int dataSize = (int) blob.getProperties().getBlobSize(); // <- timeout occurs here
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream(dataSize);
    blob.download(outputStream);
    outputStream.close();
    String result = new String(outputStream.toByteArray(), "UTF-8");
    return JsonParser.parseString(result).getAsJsonObject();
}
catch (Exception e) {
    System.out.println("ERROR: " + e.getMessage());
    e.printStackTrace();
    return null;
}
Some relevant info:
The Blob is very small - only a few KB.
The same connection string is used in both cases.
The Blob is not the trigger for the function, but rather stores data for it.
EDIT
After getting better logs with a Log Analytics workspace, I found the timeout is being caused by a NoSuchMethodError.
java.lang.NoSuchMethodError: io.netty.handler.ssl.SslProvider.isAlpnSupported(Lio/netty/handler/ssl/SslProvider;)Z
I've seen this error before when I had the wrong version of netty-all-x.x.x.Final.jar. Having already fixed this in the JARs I upload with my code, I am now wondering where the Function gets libraries from other than what I include.
Following the exception mentioned in the edit led me to this thread:
https://github.com/Azure/azure-functions-java-worker/issues/381.
The issue was that the dependencies for the Function App itself were loading before my own code's dependencies, and there is a conflict between them, as mentioned here: https://github.com/Azure/azure-functions-java-worker/issues/365.
The solution was to set FUNCTIONS_WORKER_JAVA_LOAD_APP_LIBS = 1 in the Configuration settings of the Function App.
One more solution is to find the exact error by running the Azure Function in a local environment; most of the time the following error is misleading:
FailureException: ClassCastException: java.lang.NoSuchMethodError cannot be cast to java.lang.RuntimeException
Stack: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
The following link will help you run and debug an Azure Function locally:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-maven-eclipse
Most of the time the issue is also due to dependency JAR conflicts, as explained in the previous post.
For some larger data sets we use HTTP streaming of objects in our Java/Spring Boot application. For that we need to bypass Spring MVC a bit, like this:
#GetMapping("/report")
public void generateReport(HttpServletResponse response) throws TransformEntryException {
response.addHeader("Content-Disposition", "attachment; filename=report.json");
response.addHeader("Content-Type", "application/stream+json");
response.setCharacterEncoding("UTF-8");
OutputStream out = response.getOutputStream();
Long count = reportService.findReportData()
.map(entry -> transfromEntry(entry))
.map(entry -> om.writeValueAsBytes(entry))
.peek(entry -> out.write(entry))
.count();
LOGGER.info("Generated Report with {} entries.", count);
}
(...I know this code won't compile - just for illustration purposes...)
This works great so far, except when something goes wrong: let's say that after streaming 12 entries successfully, the 13th entry triggers a TransformEntryException during transfromEntry().
The stream stops there, and the client is told that the download finished successfully, even though it only received part of the file. We can log this server-side and even attach a warning or a stack trace to the downloaded file, but the client is still told the download completed successfully, while it actually got a partial or even corrupt file.
I know that the HTTP status code is sent with the headers, which are already out by then. Is there any other way to indicate a failed download to the client?
("Client" in most cases means a web browser.)
I get an exception when trying to upload a file to Amazon S3 from my Java Spring application. The method is pretty simple:
private void productionFileSaver(String keyName, File f) throws InterruptedException {
    String bucketName = "{my-bucket-name}";
    TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
    // TransferManager processes all transfers asynchronously,
    // so this call will return immediately.
    Upload upload = tm.upload(
            bucketName, keyName, new File("/mypath/myfile.png"));
    try {
        // Or you can block and wait for the upload to finish
        upload.waitForCompletion();
        System.out.println("Upload complete.");
    } catch (AmazonClientException amazonClientException) {
        System.out.println("Unable to upload file, upload was aborted.");
        amazonClientException.printStackTrace();
    }
}
It is basically the same code that Amazon provides here, and the same exception with exactly the same message ("profile file cannot be null") appears when trying this other version.
The problem is not related to the file not existing or being null (I have already checked in a thousand ways that the File argument received by the TransferManager.upload method exists before calling it).
I cannot find any info about my exception message "profile file cannot be null". The first lines of the error log are the following:
com.amazonaws.AmazonClientException: Unable to complete transfer: profile file cannot be null
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:281)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:265)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.waitForCompletion(AbstractTransfer.java:103)
at com.fullteaching.backend.file.FileController.productionFileSaver(FileController.java:371)
at com.fullteaching.backend.file.FileController.handlePictureUpload(FileController.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
My S3 policy allows getting and putting objects for all kinds of users.
What's happening?
ProfileCredentialsProvider() creates a new profile credentials provider that returns the AWS security credentials configured for the default profile.
So, if you do not have any configuration for the default profile at ~/.aws/credentials, it yields that error when you try to put an object.
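For reference, a minimal default profile in ~/.aws/credentials looks like this (placeholder values):

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY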
If you run your code on the Lambda service, it will not provide this file. In that case, you also do not need to provide credentials: just assign the right IAM role to your Lambda function, and using the default constructor should solve the issue.
You may want to change the TransferManager constructor according to your needs.
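For example, a sketch of building the TransferManager on top of the default credentials chain instead of the profile file (this assumes a reasonably recent AWS SDK for Java v1 that has TransferManagerBuilder; the region is a placeholder, use your bucket's region):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

// The default chain checks environment variables, system properties,
// ~/.aws/credentials and EC2/Lambda-provided credentials in turn.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .withRegion(Regions.EU_WEST_1) // placeholder region
        .build();

TransferManager tm = TransferManagerBuilder.standard()
        .withS3Client(s3)
        .build();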
The solution was pretty simple: I was trying to implement this communication without an AmazonS3 bean for Spring.
This link will help with the configuration:
http://codeomitted.com/upload-file-to-s3-with-spring/
My code worked fine, as below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .withRegion(clientRegion)
        .build();
I'm using Square's Tape library to queue uploads of data to the server.
The queue is stored in a file in JSON format. When the app starts, I initialize the queue and start uploading (i.e., if on Wi-Fi). However, on some users' devices I'm seeing an EOFException with a 'null' message (logged in Crashlytics).
The error occurs when creating a FileObjectQueue object from an existing file; from the debug info gathered, the actual file is ~1 MB.
Any ideas what's causing this or how to prevent it? Maybe I need to brush up on my java.io.
Edit: using Tape v1.2.1
Caused by: java.io.EOFException
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:419)
at java.io.RandomAccessFile.readInt(RandomAccessFile.java:439)
at com.squareup.tape.QueueFile.readElement(:182)
at com.squareup.tape.QueueFile.readHeader(:162)
at com.squareup.tape.QueueFile.<init>(:110)
at com.squareup.tape.FileObjectQueue.<init>(:35)
at com.myapp.queue.MyUploadTaskQueue.create(:125)
Updated - Also seeing this error since upgrading to 1.2.2
Caused by: java.io.IOException: File is corrupt; length stored in header is 0.
at com.squareup.tape.QueueFile.readHeader(:165)
at com.squareup.tape.QueueFile.<init>(:117)
at com.squareup.tape.FileObjectQueue.<init>(:35)
The EOFException shows that the end of the file has been reached, that is, there are no more bytes to read. This exception is just another way to signal that there is nothing more to read, where other methods would return a value like -1. As you can see in your stack trace, the methods throwing the exception are read methods: java.io.RandomAccessFile.readFully(RandomAccessFile.java:419) and com.squareup.tape.QueueFile.readHeader(:165). As such, it can't be "prevented" unless you don't read all the bytes (which you typically want to); just catch it, like so: catch (EOFException e) { /* ignore */ } :)
https://docs.oracle.com/javase/7/docs/api/java/io/EOFException.html
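Since in your case the EOFException surfaces while the queue file is being opened, "ignoring" it effectively means treating the file as corrupt. A rough sketch of one way to recover, assuming Tape 1.2.x's FileObjectQueue(File, Converter) constructor and a hypothetical UploadTask type (check the signature against your version; note that any tasks stored in the corrupt file are lost):

import com.squareup.tape.FileObjectQueue;
import java.io.File;
import java.io.IOException;

FileObjectQueue<UploadTask> openQueue(File queueFile, FileObjectQueue.Converter<UploadTask> converter) {
    try {
        return new FileObjectQueue<UploadTask>(queueFile, converter);
    } catch (IOException e) { // covers both the EOFException and the "File is corrupt" IOException
        queueFile.delete();   // drop the corrupt queue file; its pending uploads are lost
        try {
            return new FileObjectQueue<UploadTask>(queueFile, converter);
        } catch (IOException fatal) {
            throw new RuntimeException("Could not recreate the upload queue", fatal);
        }
    }
}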
We are converting a large PDF file using the Adobe LiveCycle ConvertPDF service.
This works fine for smaller PDF files, but fails when we attempt to convert a large PDF file (around 150 MB; don't ask).
It looks like Adobe sets a transaction timeout of around 14(?) minutes. As the processing time for our huge PDF exceeds this limit, the operation is aborted.
We tried multiple PDFs, so this is not likely to be caused by a corrupted input file.
Here's the output the exception produces:
com.adobe.livecycle.convertpdfservice.exception.ConvertPdfException: ALC-DSC-000-000: com.adobe.idp.dsc.DSCException: Internal error.
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2WithSMT(ConvertPdfServiceImpl.java:117)
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2(ConvertPdfServiceImpl.java:93)
[...]
Caused by: ALC-DSC-000-000: com.adobe.idp.dsc.DSCException: Internal error.
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl$1.doInTransaction(ConvertPdfServiceImpl.java:110)
at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean.doRequiresNew(EjbTransactionBMTAdapterBean.java:218)
[...]
Caused by: com.adobe.livecycle.convertpdfservice.exception.ConvertPdfException: Cannot convert PDF file to PostScript.
Exception: "Transaction timed out: Couldn't connect to Datamanager Service"
at com.adobe.convertpdf.ConvertPdfBmcWrapper.convertPdftoPs(ConvertPdfBmcWrapper.java:207)
at com.adobe.convertpdf.ConvertPdfServer.convertPdftoPs(ConvertPdfServer.java:121)
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2InTxn(ConvertPdfServiceImpl.java:129)
[...]
So far - seems logical.
However, I can't find where the transaction length is configured. I guess if we increased the timeout to something like 30 minutes, our problem would go away.
(Also, the problem would go away if we had a way of invoking this operation without any transactions...)
Let's say we are simply running it like this:
ServiceClientFactory factory = com.adobe.idp.dsc.clientsdk.ServiceClientFactory.createInstance(connectionProps);
ConvertPdfServiceClient convertPDFClient = new com.adobe.livecycle.convertpdfservice.client.ConvertPdfServiceClient(factory);
// ... set-up details skipped ...
com.adobe.idp.Document result_postscript = convertPDFClient.toPS2(inPdf,options);
result_postscript.copyToFile(new File("c:/Adobe/output.ps"));
However, either we are not setting up the ServiceClientFactory correctly, or maybe we are not reading the JBoss config properly; we can't find a way to make the transaction live longer. (Is the transaction time-to-live really the issue?)
In the LiveCycle Administration Console, simply go to:
Home > Services > Applications and Services > Service Management > ConvertPdfService
The service timeout can be changed there.
When testing with a PDF (generated by iText) that contains 39k pages (13 initial pages, each cloned 3000 times, size ~15 MB), the final output PostScript file was ~1.25 GB. The whole job took about 2 hours, but it worked with no problems.
(I guess this answer makes the question not-programming related, but hey.)