I am using the jSerialComm library to communicate to and from the SerialPort. I have written a SerialDataListener to read the bytes with an overridden serialEvent method that looks like this:
@Override
public void serialEvent(SerialPortEvent event) {
    if (event.getEventType() != SerialPort.LISTENING_EVENT_DATA_AVAILABLE) return;
    int numBytesAvailable = serialPort.bytesAvailable();
    if (numBytesAvailable < 0) {
        logger.error("Port is not open; returning without any action");
        return;
    }
    byte[] newData = new byte[numBytesAvailable];
    int readData = serialPort.readBytes(newData, numBytesAvailable);
    for (int i = 0; i < numBytesAvailable; i++) {
        byte b = newData[i];
        logger.info("Starting new response");
        response = new Response();
        response.addByte(b);
    }
}
Now, if I do receive data and the subsequent code runs into a NullPointerException somehow (one example being that the Response constructor is invoked and throws an NPE), then the SerialPort has been programmed inside the library's SerialPort class to
stop listening, and
swallow the exception.
As a consequence of 1 and 2, no more data arriving on the SerialPort can be processed. There is no exposed API to check whether the listener has stopped or to restart it, and I cannot take any action such as reopening the SerialPort.
Here is that piece of code:
// Line 895 of the class SerialPort (from dependency com.fazecast:jSerialComm:1.3.11).
while (isListening && isOpened) {
    try { waitForSerialEvent(); }
    catch (NullPointerException e) { isListening = false; }
}
Here are the questions:
Why was the exception swallowed and listening stopped inside the library? Are there any design reasons?
The SerialPort class itself is final, so writing my own implementation of the class to replace the swallow is out of the question. How do I proceed? Apart from this issue, jSerialComm satisfies most of my other use cases decently well, so I am unlikely to migrate away from it anytime soon.
One way is to catch it myself and do the handling. But I do not want to do that unless the answer to Q1 is clear. I have investigated but have not found any practical reason for disabling the listening and not announcing the exception.
Why just an NPE? Other exceptions could arise too, so at the very least I will have to handle the exceptions myself. Is this approach of writing my own handlers correct then?
TIA
Rahul
1) Why was the exception swallowed and listening stopped inside the library? Are there any design reasons?
You would need to ask the author of the code.
However, it does seem to be intentional, since waitForSerialEvent is declared with throws NullPointerException.
If I were you, I would dig deeper into where the NPEs are thrown and why. Modify the code to print a stacktrace instead of just squashing the exception entirely. It could be a "hack" workaround, or there could be a legitimate reason for doing this.
If we make the assumption that the client's listener code could throw an NPE, then in my view it is a mistake for the event thread to assume that all NPEs can be squashed.
But looking at the code, I can also see places where NPEs are thrown deliberately to (apparently) signal an error; e.g. in the read methods in SerialPortInputStream. So it is not clear to me that the NPEs should be squashed at all.
2) The SerialPort class itself is final and hence writing my own implementation of the class to replace the swallow is out of the question. How do I proceed?
The code is on GitHub, so you could fork the repository, develop a patch and submit a pull request.
4) Why just an NPE? Other exceptions could arise too. So then at least, I will have to handle the exceptions myself. Is this approach of my own handlers correct then?
Good question.
But really, all of these questions are best addressed to the author of the code. He does seem to respond to questions posted as issues ... if they are pertinent.
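In the meantime, a practical workaround is to make sure no exception ever escapes your own listener into the library's event loop. Below is a self-contained sketch of that "guarded listener" pattern; the Listener interface and GuardedListenerDemo class are stand-ins for illustration, not jSerialComm types:

```java
// Sketch: wrap a listener so exceptions are logged and contained,
// keeping the dispatching loop alive. Types here are illustrative stand-ins.
interface Listener {
    void onEvent(byte[] data);
}

public class GuardedListenerDemo {
    // Returns a listener that never lets a RuntimeException escape.
    static Listener guard(Listener inner) {
        return data -> {
            try {
                inner.onEvent(data);
            } catch (RuntimeException e) {
                // Log and swallow, so the library's event loop is never stopped.
                System.out.println("listener failed: " + e);
            }
        };
    }

    public static void main(String[] args) {
        Listener flaky = data -> { throw new NullPointerException("boom"); };
        Listener safe = guard(flaky);
        safe.onEvent(new byte[] {1});      // the NPE is contained
        System.out.println("loop still alive");
    }
}
```

Applied to the code in the question, the body of serialEvent would delegate to a private method inside such a try/catch, so the library's NPE check is never triggered by your own bugs.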
I'm in the process of updating from picocli 3.9.6 to 4.2.0, and I'm running into an issue when replacing old deprecated calls with the new versions.
In my original version, I had a code block like this:
try {
    return commandLine.parseWithHandlers(
            new RunLast().useOut(ps),
            new ExceptionHandler(),
            args);
}
catch (Exception e) {
    // handle exceptions
}
The ExceptionHandler handles both parameter and execution exceptions -- both are rethrown, but parameter exceptions get the help text added to the exception text. The catch would get hit in cases where, e.g., a command was given bad args. The catch would ensure the error was printed in the UI.
I attempted to update it like this:
try {
    commandLine.setOut(pw);
    ExceptionHandler handler = new ExceptionHandler();
    commandLine.setExecutionExceptionHandler(handler);
    commandLine.setParameterExceptionHandler(handler);
    commandLine.execute(args);
    return commandLine.getExecutionResult();
}
catch (Exception e) {
    // handle exceptions
}
With this new version, exceptions are thrown as before, but they are no longer caught by the catch block after being rethrown by the ExceptionHandler. How can I catch these exceptions?
One of the changes in picocli 4.x is the new execution framework. The user manual has a section on migration that may be useful.
By design, the CommandLine::execute method never throws an exception. So there is no need to surround the call to CommandLine::execute with a try/catch block (unless you need to catch an Error or Throwable).
Instead, you can optionally specify custom exception handlers, as you already do in your example. These exception handlers are where you can show an error message to the user. (Perhaps a combination of what was in the previous ExceptionHandler and the logic that was previously in the catch block.)
The ParameterExceptionHandler is invoked when the user provided invalid input. The default handler shows an error message, may suggest alternative spellings for options or subcommands that look like a typo, and finally displays the usage help message. The Handling Errors section of the user manual has an example ShortErrorMessageHandler that may be useful when the usage help message is so long that it obscures the error message.
The ExecutionExceptionHandler is invoked when the business logic throws an exception. The default handler just rethrows the exception, which results in a stack trace being printed. The Business Logic Exceptions section of the user manual shows an alternative.
It sounds like you need a custom ExecutionExceptionHandler that prints a stack trace followed by the usage help message.
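A minimal sketch of such a handler, assuming picocli 4.x's IExecutionExceptionHandler callback (the Demo command and its failing run() are hypothetical):

```java
import picocli.CommandLine;
import picocli.CommandLine.Command;

// Hypothetical command whose business logic always fails, to exercise the handler.
@Command(name = "demo")
class Demo implements Runnable {
    public void run() {
        throw new IllegalStateException("business logic failed");
    }

    public static void main(String[] args) {
        CommandLine cmd = new CommandLine(new Demo());
        // Custom handler: print the stack trace, then the usage help.
        cmd.setExecutionExceptionHandler((ex, commandLine, parseResult) -> {
            ex.printStackTrace(commandLine.getErr());
            commandLine.usage(commandLine.getErr());
            return commandLine.getCommandSpec().exitCodeOnExecutionException();
        });
        int exitCode = cmd.execute(args); // never throws; errors go to the handler
        System.out.println("exit code: " + exitCode);
    }
}
```

Note that execute returns an exit code instead of throwing, which is why the old catch block never fires.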
I asked (and answered myself) this question a couple of days ago, and resolved the problem, but I can't quite understand why the problem was solved and was hoping to get some clarification.
Essentially, I have implemented a jax-rs-based REST service that retrieves information from a RavenDB database and returns that content in a stream. The problem that I had was an unclosed database results iterator, which caused the REST service to hang (and accept no further requests) after exactly 10 requests.
My code is, roughly, as follows:
public Response ...
{
    (...)
    StreamingOutput adminAreaStream = new StreamingOutput()
    {
        ObjectWriter ow = new ObjectMapper().writer().withDefaultPrettyPrinter();

        @Override
        public void write(OutputStream output) throws IOException, WebApplicationException
        {
            try (IDocumentSession currentSession = ServiceListener.ravenDBStore.openSession())
            {
                Writer writer = new BufferedWriter(new OutputStreamWriter(output));
                (...)
                CloseableIterator<StreamResult<AdministrativeArea>> results;
                (...)
                writer.flush();
                writer.close();
                results.close();
                currentSession.advanced().clear();
                currentSession.close();
            }
            catch (Exception e)
            {
                System.out.println("Exception: " + e.getMessage() + e.getStackTrace());
            }
        }
    };

    if (!requestIsValid)
        return Response.status(400).build();
    else
        return Response.ok(adminAreaStream).build();
}
From what I understand about the object lifecycle in Java, or more specifically about object reachability and garbage collection, even though I didn't properly close that CloseableIterator, it should go out of scope/become unreachable by the time my method finishes with either a 400 or 200 status, and therefore get garbage collected.
Just to be clear: I am certainly not suggesting that one shouldn't properly close opened connections etc. - I AM doing that now - or rely on Java's garbage collection mechanism to save us from lazy/unclean coding... I am just struggling to understand exactly how those unclosed iterators could have caused the Tomcat behaviour observed.
In fact, my assumption is that we don't even need to know the details of the iterator's implementation, because at the "galactic level" of the Java object lifecycle, implementation differences are irrelevant: "Once an object has become unreachable, it doesn't matter exactly how it was coded."
The only thing I can imagine is that Tomcat somehow, (through its container mechanism ?), slightly changes the game here, and causes things to "hang around".
Could someone please shed some light on this ?
Thanks in advance !
The CloseableIterator refers to a CloseableHttpResponse, which refers to an HTTP connection. No finalizer releases the response or the connection when the CloseableIterator is no longer reachable. You have created a connection leak. Your bug is similar to the one described here: https://phillbarber.blogspot.com/2014/02/lessons-learned-from-connection-leak-in.html
See here why finalize methods to release resources are a bad idea: https://www.baeldung.com/java-finalize
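To see why "unreachable" does not mean "released", here is a self-contained sketch with a stand-in pooled resource (FakeConnection is hypothetical; a pooled HTTP connection behaves analogously):

```java
// Demonstrates that garbage collection reclaims memory, not external resources:
// an unreachable object whose close() was never called still holds its slot.
public class LeakDemo {
    static int openCount = 0;

    // Stand-in for a pooled resource such as an HTTP connection.
    static class FakeConnection implements AutoCloseable {
        FakeConnection() { openCount++; }
        @Override public void close() { openCount--; }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            FakeConnection c = new FakeConnection(); // opened but never closed
        }
        // The ten objects are now unreachable and may be collected...
        System.gc();
        // ...but nothing ever called close(), so the "pool" is still exhausted.
        System.out.println("still open: " + openCount);

        // With try-with-resources, close() runs automatically, so the count
        // does not grow past the ten already leaked.
        try (FakeConnection c = new FakeConnection()) { /* use c */ }
        System.out.println("open after try-with-resources: " + openCount);
    }
}
```

With a pool of size 10, the eleventh request blocks forever, which matches the "hangs after exactly 10 requests" symptom.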
In the logging part of the project I am working on, I am trying to improve the error messages that are shown in log management. Logging an error message is coded like this:
String errorMessage = " Problem with server " + "\n" + t.getMessage();
_logger.fatal(errorMessage);
Where t is a Throwable object and _logger is a Logger object, which is related to the log4j framework.
What I wonder is: what changes if I use _logger.fatal(errorMessage, t); instead of _logger.fatal(errorMessage);? If there is a major difference between them, which one is better to use?
Edit: I've just realised I copied "fatal" example instead of "error". However my question is same for fatal, too.
Practically all Java logging frameworks (alas, we have plenty of those...) support passing a Throwable as the last parameter.
This will result in a stack trace being logged, which can be extremely useful in diagnosing and fixing the problem.
I'd only ever not give the exception to the logger if the cause of the exception is really well established and printing the exception is just unnecessary noise. For example here:
try {
    int port = Integer.parseInt(input);
    // do something with the port
} catch (NumberFormatException e) {
    logger.error("'{}' is not a valid port number: {}", input, e.toString());
}
Another case is when the exception is being re-thrown (and something else will eventually log it with more detail).
But not with a "Problem with server" (and at FATAL level no less). That looks like you want to get as much info as you can get.
Also note that in those cases, e.toString() is usually better than e.getMessage() because it also includes the name of the exception in addition to its message (which may be empty).
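That difference is easy to demonstrate with an exception constructed without a message:

```java
// Compares getMessage() and toString() for an exception with no message text.
public class ToStringVsGetMessage {
    public static void main(String[] args) {
        Exception e = new IllegalStateException(); // constructed without a message
        System.out.println("getMessage: " + e.getMessage()); // null -- useless in a log
        System.out.println("toString:   " + e);              // at least names the exception class
    }
}
```

With logger.fatal(errorMessage, t) you get the full stack trace as well, so neither workaround is needed.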
My project consists of 2 parts: a server side and a client side. When I start the server side everything is OK, but when I start the client side, from time to time I get this error:
java.io.IOException: stream active
at java.io.ObjectOutputStream.reset(Unknown Source)
at client.side.TcpConnection.sendUpdatedVersion(TcpConnection.java:77)
at client.side.Main.sendCharacter(Main.java:167)
at client.side.Main.start(Main.java:121)
at client.side.Main.main(Main.java:60)
When I tried to run this project on another PC, this error occurred even more frequently. In the Java docs I found this bit:
Reset may not be called while objects are being serialized. If called
inappropriately, an IOException is thrown.
And this is the function where the error is thrown:
void sendUpdatedVersion(CharacterControlData data) {
    try {
        ServerMessage msg = new ServerMessage(SEND_MAIN_CHARACTER);
        msg.setCharacterData(data);
        oos.writeObject(msg);
        oos.reset();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I tried adding flush(), but that didn't help. Any ideas? Besides, there are no errors on the server side.
I think you're misunderstanding what reset() does. It resets the stream to disregard any object instances previously written to it. This is pretty clearly not what you want in your case, since you're sending an object to the stream and then resetting straight away, which is pointless.
It looks like all you need is a flush(); if that's insufficient then the problem is on the receiving side.
I think you are confusing close() with reset().
Use
oos.close();
instead of
oos.reset();
Calling reset() is a perfectly valid thing to want to do. It is possible that 'data' is reused, or some field in data is reused, and the second time you call sendUpdatedVersion, that part is not sent. So those who complain that the use is invalid are not accurate. Now, as to why you are getting this error message:
What the error message is saying is that you are not at the top level of your writeObject call chain. sendUpdatedVersion must be being called from a method that was itself called from another writeObject.
I'm assuming that some object is implementing a custom writeObject() and that method, is calling this method.
So you have to differentiate when sendUpdatedVersion is being called at the top level of the call chain and only use reset() in those cases.
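To make concrete why reset() matters for reused objects, here is a self-contained sketch (the Payload class is a stand-in for a reused message object):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for a mutable object that is written repeatedly to the same stream.
class Payload implements Serializable {
    int value;
}

public class ResetDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(buf);

        Payload p = new Payload();
        p.value = 1;
        oos.writeObject(p);   // first write: full serialization of value == 1
        p.value = 2;
        oos.writeObject(p);   // same object: only a back-reference is written!
        oos.reset();          // forget previously written objects
        oos.writeObject(p);   // fresh serialization of value == 2
        oos.flush();

        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        Payload a = (Payload) ois.readObject();
        Payload b = (Payload) ois.readObject(); // back-reference: stale value 1
        Payload c = (Payload) ois.readObject(); // post-reset: fresh value 2
        System.out.println(a.value + " " + b.value + " " + c.value);
    }
}
```

Without the reset() call, the receiver would keep seeing the stale value, which is exactly the problem reset() exists to solve; the fix in the question is only about where in the call chain it is invoked.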
Sometimes, you just have to catch Throwable, e.g. when writing a dispatcher queue that dispatches generic items and needs to recover from any errors (said dispatcher logs all caught exceptions, but silently, and then execution is continued on other items).
One best practice I can think of is to always rethrow the exception if it's InterruptedException, because this means someone interrupted my thread and wants to kill it.
Another suggestion (that came from a comment, not an answer) is to always rethrow ThreadDeath
Any other best practices?
Probably the most important one is: never swallow a checked exception. By this I mean don't do this:
try {
...
} catch (IOException e) {
}
unless that's what you intend. Sometimes people swallow checked exceptions because they don't know what to do with them or don't want to (or can't) pollute their interface with "throws Exception" clauses.
If you don't know what to do with it, do this:
try {
...
} catch (IOException e) {
throw new RuntimeException(e);
}
The other one that springs to mind is to make sure you deal with exceptions. Reading a file should look something like this:
FileInputStream in = null;
try {
    in = new FileInputStream(new File("..."));
    // do stuff
} catch (IOException e) {
    // deal with it appropriately
} finally {
    if (in != null) try { in.close(); } catch (IOException e) { /* swallow this one */ }
}
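Since Java 7, the same pattern can be written more safely with try-with-resources, which closes the stream automatically even on failure (example.txt is a placeholder path):

```java
import java.io.FileInputStream;
import java.io.IOException;

public class ReadFileDemo {
    public static void main(String[] args) {
        // The stream is closed automatically when the try block exits,
        // so no finally block with its own nested try/catch is needed.
        try (FileInputStream in = new FileInputStream("example.txt")) {
            // do stuff with in
        } catch (IOException e) {
            // deal with it appropriately
            System.out.println("handled: " + e);
        }
    }
}
```

This eliminates the most error-prone part of the manual version: the nested try/catch inside finally.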
It depends on what you are working on.
If you are developing an API to be used by someone else, it's better to rethrow the exception, or wrap it in a custom exception of yours and throw that.
Whereas if you are developing an end-user application, you need to handle the exception and take whatever action is appropriate.
What about OutOfMemoryError (or perhaps its superclass VirtualMachineError)? I can't imagine there is much you can do after something that serious.
If you're writing a dispatcher queue, then by the time the exception comes back to you there's no point in doing anything with it other than logging it. The Swing event queue has basically that type of behavior.
Alternatively, you could provide a hook for an "uncaught exception handler," similar to ThreadGroup. Be aware that the handler could take a long time, and end up delaying your dispatcher.
As far as InterruptedException goes: the only thing that cares about that is your dispatch loop, which should be checking some external state to see if it should stop processing.
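A minimal sketch of such a dispatch loop, combining the practices above (catch Throwable and log it, but let interruption stop the loop); the queue and items are stand-ins:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Dispatcher that logs any failure from an item and keeps going,
// but stops cleanly when its thread is interrupted.
public class DispatcherDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        queue.add(() -> { throw new RuntimeException("item 1 failed"); });
        queue.add(() -> System.out.println("item 2 ran"));

        Thread dispatcher = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    queue.take().run();
                } catch (InterruptedException e) {
                    // Restore the flag so the loop condition sees it and exits.
                    Thread.currentThread().interrupt();
                } catch (Throwable t) {
                    // Log silently and continue with the next item.
                    System.out.println("logged: " + t.getMessage());
                }
            }
        });
        dispatcher.start();
        Thread.sleep(200);        // let both items dispatch
        dispatcher.interrupt();   // request shutdown
        dispatcher.join();
        System.out.println("dispatcher stopped cleanly");
    }
}
```

Note that the first item's failure does not prevent the second from running, which is the whole point of catching Throwable here.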