EOFException when reading QueueFile tape - java

I'm using Square's Tape library to queue uploads of data to the server.
The queue is stored in a file in JSON format. When the app starts I initialize the queue and start uploading (i.e. if on WiFi). However, on some users' devices I'm seeing an EOFException with a 'null' message (logged in Crashlytics).
The error occurs when creating a FileObjectQueue from an existing file - from the debug info I gather the actual file is ~1 MB.
Any ideas what's causing this or how to prevent it? Maybe I need to brush up on my java.io.
Edit: using Tape v1.2.1
Caused by: java.io.EOFException
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:419)
at java.io.RandomAccessFile.readInt(RandomAccessFile.java:439)
at com.squareup.tape.QueueFile.readElement(:182)
at com.squareup.tape.QueueFile.readHeader(:162)
at com.squareup.tape.QueueFile.<init>(:110)
at com.squareup.tape.FileObjectQueue.<init>(:35)
at com.myapp.queue.MyUploadTaskQueue.create(:125)
Updated - Also seeing this error since upgrading to 1.2.2
Caused by: java.io.IOException: File is corrupt; length stored in header is 0.
at com.squareup.tape.QueueFile.readHeader(:165)
at com.squareup.tape.QueueFile.<init>(:117)
at com.squareup.tape.FileObjectQueue.<init>(:35)

The EOFException signals that the end of the file has been reached, i.e. there are no more bytes to read. It is just another way of reporting that there is nothing more to read; other read methods signal the same condition with a return value such as -1. As you can see in your stack trace, the frames throwing it are read methods: java.io.RandomAccessFile.readFully(RandomAccessFile.java:419), invoked from QueueFile's header and element parsing. As such, it can't be "prevented" unless you stop reading before the end of the file (which you typically don't want to); just catch it, like so: catch (EOFException e) { /* handle it */ } :)
https://docs.oracle.com/javase/7/docs/api/java/io/EOFException.html
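If the tape file is truncated or its header is corrupt (as both stack traces suggest), one pragmatic recovery is to discard the damaged file and start a fresh queue, accepting the loss of whatever was queued. A minimal sketch, assuming a Tape 1.x FileObjectQueue and your own UploadTask converter; the delete-and-recreate strategy is my assumption, not part of Tape's API:

import java.io.File;
import java.io.IOException;
import com.squareup.tape.FileObjectQueue;

// Hypothetical helper: falls back to a fresh file when the tape is corrupt.
static FileObjectQueue<UploadTask> openQueue(File file,
        FileObjectQueue.Converter<UploadTask> converter) throws IOException {
    try {
        return new FileObjectQueue<UploadTask>(file, converter);
    } catch (IOException e) { // EOFException is a subclass of IOException
        // The header could not be read; the tape is unusable. Drop it and
        // start over rather than crashing on every app launch.
        file.delete();
        return new FileObjectQueue<UploadTask>(file, converter);
    }
}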

Related

Avoid logs full of java.io.IOException: Broken pipe

I am using Server-Sent Events in the browser and a Spring Boot application on the back end. When I shut down the client, I get the following exception:
14:35:09.458 [http-nio-8084-exec-25] ERROR o.a.c.c.C.[Tomcat].[localhost] - Exception Processing ErrorPage[errorCode=0, location=/error]
org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
I understand this is expected behavior; my application works fine, but I have awful logs full of these exceptions. I guess this is caused by Tomcat. Is there a way to catch these exceptions, or at least to prevent Tomcat from writing the stack trace to the log? I mean, without modifying Tomcat's code.
To keep this exception out of the logs, you can change the code that performs the push to the client. Here is my example: I listen for an API call, and when it is invoked I push to the client's socket. I think the code is self-explanatory:
@GetMapping("/api/status/{groupId}/{groupstatusId}")
@ResponseStatus(HttpStatus.NO_CONTENT)
@ExceptionHandler(IOException.class)
public void listenToNewStatus(@PathVariable Long groupId, @PathVariable String groupstatusId, IOException e) {
    Group group = groupDTOService.findById(groupId);
    if (group != null) {
        if (StringUtils.containsIgnoreCase(ExceptionUtils.getRootCauseMessage(e), "Broken pipe")) {
            logger.info("broken pipe");
        } else {
            template.convertAndSend("/topic/callstatus/" + group.getUser().getId(), groupstatusId);
        }
    }
}
In this code, to handle the broken pipe, I add the @ExceptionHandler(IOException.class) annotation and check whether the exception's root cause contains "Broken pipe": if it does, I do nothing; otherwise, I send the message to the client.
I see that this question is quite old, but in case someone is still looking for an answer, there are a few blog posts on how to mute ClientAbortException so it doesn't flood your logs.
https://tutorial-academy.com/jersey-workaround-clientabortexception-ioexception/
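Along the same lines, a global handler can silence these aborts in one place instead of per endpoint. A minimal sketch, assuming Spring MVC running on Tomcat (ClientAbortException is Tomcat-specific); note it may not intercept exceptions raised during Tomcat's own error-page processing:

import org.apache.catalina.connector.ClientAbortException;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class ClientAbortAdvice {
    @ExceptionHandler(ClientAbortException.class)
    public void handleClientAbort() {
        // The client disconnected mid-response; there is nobody left to
        // answer, so swallow the exception instead of logging a stack trace.
    }
}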

logstash unexpectedly stopped

I have set up a Logstash-with-Redis architecture to handle my logs. It is organized like this:
logstash ---> redis ---> logstash ---> elasticsearch
The problem is that after parsing nearly 1.25 million logs, a Java exception is thrown.
In my logstash.err log file, the exception appears as
Exception in thread "<file" java.lang.UnsupportedOperationException
at java.lang.Thread.stop(Thread.java:869)
at org.jruby.RubyThread.exceptionRaised(RubyThread.java:1221)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:112)
at java.lang.Thread.run(Thread.java:745)
I think this exception might be thrown because Logstash is unable to open/close a file. What can I do to rectify this error? The input configuration on my first Logstash server, which ships the logs, is:
input {
  file {
    start_position => "beginning"
    path => [
      "/var/logstash_logs/child1/nginx/*log*",
      "/var/logstash_logs/child2/nginx/*log*",
      "/var/logstash_logs/child3/nginx/*log*"
    ]
  }
}
And the output is sent like this:
output {
  redis {
    host => "X.X.X.X"
    key => "logstash"
    data_type => "list"
  }
}
There are no errors in the logs of logstash server with redis installed.
Well, one problem here is with JRuby, which is trying to call Thread.stop(Throwable obj), a deprecated method that throws UnsupportedOperationException and thereby discards the actual source of the error (the Throwable parameter).
So currently you can only guess what the actual problem is, and guessing is never good.
One idea is to set a breakpoint on RubyThread.exceptionRaised() and run it under a debugger. That should let you find out what the original Throwable is, and then you can get to the source of the problem.
You should also check whether a JRuby bug ticket exists for this, and consider updating your JRuby.
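To see why the original cause is lost: on JDK 8 the Throwable-taking overload of Thread.stop is hard-disabled and throws unconditionally, which matches the Thread.java:869 frame in the trace above. A minimal standalone demonstration (JDK 8; the overload was removed entirely in later JDKs):

// StopDemo.java - Thread.stop(Throwable) never delivers its argument.
public class StopDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
        });
        t.start();
        try {
            t.stop(new RuntimeException("the real error")); // deprecated overload
        } catch (UnsupportedOperationException e) {
            // The RuntimeException is discarded; only this generic exception
            // surfaces, hiding the real cause - exactly what happens in JRuby.
            System.out.println("stop(Throwable) rejected: " + e);
        }
    }
}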

Catch a specific Elasticsearch exception from a BulkRequest

I use Java to index some documents with a BulkRequest into Elasticsearch 1.4.2.
Some of these docs must only be written when they are not already in the index, so I set the opType to CREATE like this:
indexRequestBuilder.opType(IndexRequest.OpType.CREATE)
Now the docs which were already in the index fail in the BulkResponse.
Error message bulkItemResponse.getFailureMessage():
DocumentAlreadyExistsException[...]
I want to ignore this class of exception but retry writing the docs for all other type of exceptions.
So how can I catch just the DocumentAlreadyExistsException?
I can get the Failure with bulkItemResponse.getFailure(), but I cannot find any information about the type of the Exception beside the error message.
I could look for the exception name in the error message, but that may be rather fragile across Elasticsearch versions:
if (bulkItemResponse.getFailureMessage().startsWith("DocumentAlreadyExistsException["))
Is there a better way?
This isn't possible. The bulk request is actually executed on the server side, not the client side, and hence all the server can do is send the stack trace back, not the Exception object.
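A somewhat more robust alternative to string-matching is to check the failure's REST status: DocumentAlreadyExistsException is reported as 409 CONFLICT. A minimal sketch, assuming BulkItemResponse.Failure.getStatus() is available in your client version (the retryIds list is hypothetical):

import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.rest.RestStatus;

for (BulkItemResponse item : bulkResponse.getItems()) {
    if (item.isFailed()) {
        if (item.getFailure().getStatus() == RestStatus.CONFLICT) {
            // Document already exists - expected for CREATE ops, ignore it.
        } else {
            // Any other failure: remember the doc id so it can be retried.
            retryIds.add(item.getId());
        }
    }
}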

Java ObjectOutputStream reset error

My project consists of two parts: a server side and a client side. When I start the server side everything is OK, but when I start the client side, from time to time I get this error:
java.io.IOException: stream active
at java.io.ObjectOutputStream.reset(Unknown Source)
at client.side.TcpConnection.sendUpdatedVersion(TcpConnection.java:77)
at client.side.Main.sendCharacter(Main.java:167)
at client.side.Main.start(Main.java:121)
at client.side.Main.main(Main.java:60)
When I tried to run this project on another PC, this error occurred even more frequently. In the Java docs I found this bit:
Reset may not be called while objects are being serialized. If called
inappropriately, an IOException is thrown.
And this is the function where the error is thrown:
void sendUpdatedVersion(CharacterControlData data) {
    try {
        ServerMessage msg = new ServerMessage(SEND_MAIN_CHARACTER);
        msg.setCharacterData(data);
        oos.writeObject(msg);
        oos.reset();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I tried to put flush() but that didn't help. Any ideas? Besides, no errors on server side.
I think you're misunderstanding what reset() does. It resets the stream to disregard any object instances previously written to it. This is pretty clearly not what you want in your case, since you're sending an object to the stream and then resetting straight away, which is pointless.
It looks like all you need is a flush(); if that's insufficient then the problem is on the receiving side.
I think you are confusing close() with reset().
use
oos.close();
instead of oos.reset();
Calling reset() is a perfectly valid thing to want to do. It is possible that data, or some field within it, is reused, and without the reset the second call to sendUpdatedVersion would not actually send that part. So those who complain that the usage is invalid are not accurate. Now, as to why you are getting this error message:
What the error message is saying is that you are not at the top level of your writeObject call chain: sendUpdatedVersion must be being called from a method that was itself called from another writeObject.
I'm assuming that some object implements a custom writeObject(), and that method is calling this one.
So you have to differentiate when sendUpdatedVersion is being called at the top level of the call chain and only use reset() in those cases.
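One way to follow that advice: track whether a write is already in progress and only reset() at the top level. A sketch built on the asker's method (the writing flag is a hypothetical guard field, and this is not thread-safe as shown):

private boolean writing; // true while a writeObject call chain is active

void sendUpdatedVersion(CharacterControlData data) {
    boolean topLevel = !writing;
    writing = true;
    try {
        ServerMessage msg = new ServerMessage(SEND_MAIN_CHARACTER);
        msg.setCharacterData(data);
        oos.writeObject(msg);
        if (topLevel) {
            oos.reset(); // no enclosing writeObject is active, so reset is legal
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (topLevel) {
            writing = false;
        }
    }
}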

How to increase transaction timeout in Adobe LiveCycle server? Long service call fails with timeout exception

We are converting large PDF files using the Adobe LiveCycle ConvertPDF service.
This works fine for smaller PDF files, but fails when we attempt to convert a large one (around 150 MB - don't ask).
It looks like Adobe sets a transaction timeout of around 14(?) minutes. As the processing time for our huge PDF exceeds this limit, the operation is aborted.
We tried multiple PDFs, so this is not likely to be caused by corrupted input file.
Here's the output that exception produces:
com.adobe.livecycle.convertpdfservice.exception.ConvertPdfException: ALC-DSC-000-000: com.adobe.idp.dsc.DSCException: Internal error.
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2WithSMT(ConvertPdfServiceImpl.java:117)
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2(ConvertPdfServiceImpl.java:93)
[...]
Caused by: ALC-DSC-000-000: com.adobe.idp.dsc.DSCException: Internal error.
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl$1.doInTransaction(ConvertPdfServiceImpl.java:110)
at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean.doRequiresNew(EjbTransactionBMTAdapterBean.java:218)
[...]
Caused by: com.adobe.livecycle.convertpdfservice.exception.ConvertPdfException: Cannot convert PDF file to PostScript.
Exception: "Transaction timed out: Couldn't connect to Datamanager Service"
at com.adobe.convertpdf.ConvertPdfBmcWrapper.convertPdftoPs(ConvertPdfBmcWrapper.java:207)
at com.adobe.convertpdf.ConvertPdfServer.convertPdftoPs(ConvertPdfServer.java:121)
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2InTxn(ConvertPdfServiceImpl.java:129)
[...]
So far, seems logical.
However, I can't find where the transaction length is configured. I guess that if we increased the timeout to something like 30 minutes, our problem would go away.
(The problem would also go away if we had a way of invoking this operation without any transactions...)
Let's say we are simply running it like this:
ServiceClientFactory factory = com.adobe.idp.dsc.clientsdk.ServiceClientFactory.createInstance(connectionProps);
ConvertPdfServiceClient convertPDFClient = new com.adobe.livecycle.convertpdfservice.client.ConvertPdfServiceClient(factory);
// ... set-up details skipped ...
com.adobe.idp.Document result_postscript = convertPDFClient.toPS2(inPdf,options);
result_postscript.copyToFile(new File("c:/Adobe/output.ps"));
However, either we are not setting up the ServiceClientFactory correctly, or we are not reading the JBoss config properly; we can't find a way to make the transaction live longer. (Is the transaction time-to-live really the issue?)
In the LiveCycle Administration Console, simply go to
Home > Services > Applications and Services > Service Management > ConvertPdfService
The service timeout can be changed there.
When testing with a PDF (generated by iText) containing 39k pages (13 initial pages, each cloned 3,000 times; file size ~15 MB), the final output PostScript file was ~1.25 GB. The whole job took about 2 hours, but it worked, no problems.
(I guess this answer makes the question not-programming related, but hey.)
