Tomcat and garbage collecting database connections - java

I asked (and answered myself) this question a couple of days ago, and resolved the problem, but I can't quite understand why the problem was solved and was hoping to get some clarification.
Essentially, I have implemented a jax-rs-based REST service that retrieves information from a RavenDB database and returns that content in a stream. The problem that I had was an unclosed database results iterator, which caused the REST service to hang (and accept no further requests) after exactly 10 requests.
My code is, roughly, as follows:
public Response ...
{
    (...)
    StreamingOutput adminAreaStream = new StreamingOutput()
    {
        ObjectWriter ow = new ObjectMapper().writer().withDefaultPrettyPrinter();

        @Override
        public void write(OutputStream output) throws IOException, WebApplicationException
        {
            try (IDocumentSession currentSession = ServiceListener.ravenDBStore.openSession())
            {
                Writer writer = new BufferedWriter(new OutputStreamWriter(output));
                (...)
                CloseableIterator<StreamResult<AdministrativeArea>> results;
                (...)
                writer.flush();
                writer.close();
                results.close();
                currentSession.advanced().clear();
                currentSession.close();
            }
            catch (Exception e)
            {
                System.out.println("Exception: " + e.getMessage() + e.getStackTrace());
            }
        }
    };

    if (!requestIsValid)
        return Response.status(400).build();
    else
        return Response.ok(adminAreaStream).build();
}
From what I understand about the object lifecycle in Java, or more specifically object reachability and garbage collection, even though I didn't properly close that CloseableIterator, it should go out of scope/become unreachable by the time my method finishes with either a 400 or 200 status, and therefore get garbage collected.
Just to be clear: I am certainly not suggesting that one shouldn't properly close opened connections etc. - I AM doing that now - or rely on Java's garbage collection mechanism to save us from lazy/unclean coding... I am just struggling to understand exactly how those unclosed iterators could have caused the Tomcat behaviour observed.
In fact, my assumption is that we don't even need to know the details of the iterator's implementation, because at the "galactic level" of the Java object lifecycle, implementation differences are irrelevant: "Once an object has become unreachable, it doesn't matter exactly how it was coded."
The only thing I can imagine is that Tomcat somehow (through its container mechanism?) slightly changes the game here and causes things to "hang around".
Could someone please shed some light on this ?
Thanks in advance !

The CloseableIterator refers to a CloseableHttpResponse, which refers to an HTTP connection. No finalizer releases the response or the connection when the CloseableIterator is no longer reachable, so you have created a connection leak. Your bug is similar to the one described here: https://phillbarber.blogspot.com/2014/02/lessons-learned-from-connection-leak-in.html
See here why finalize methods to release resources are a bad idea: https://www.baeldung.com/java-finalize
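The leak can be reproduced without RavenDB or Tomcat at all. Below is a minimal sketch of the mechanism, assuming a fixed pool of 10 connections (which matches the "hangs after exactly 10 requests" symptom); the `Semaphore` "pool" and the `PooledIterator` class are invented for illustration and are not the library's real types. The point: only `close()` returns a connection to the pool; the iterator becoming unreachable does nothing.

```java
import java.util.concurrent.Semaphore;

public class LeakDemo {
    // A "pool" of 10 connections, like a default HTTP connection pool.
    static final Semaphore pool = new Semaphore(10);

    // Stand-in for CloseableIterator: holds a pooled connection until closed.
    static class PooledIterator implements AutoCloseable {
        PooledIterator() {
            if (!pool.tryAcquire()) {
                throw new IllegalStateException("pool exhausted: no connection available");
            }
        }
        @Override public void close() { pool.release(); }
    }

    public static void main(String[] args) {
        // Leaky variant: after 10 unclosed iterators the 11th request fails,
        // even though the first 10 are unreachable. GC never releases them.
        // for (int i = 0; i < 11; i++) new PooledIterator(); // throws at i == 10

        // Correct variant: try-with-resources releases the connection every time.
        for (int i = 0; i < 100; i++) {
            try (PooledIterator it = new PooledIterator()) {
                // stream results ...
            }
        }
        System.out.println("100 requests served with a pool of 10");
    }
}
```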

Related

Spring Service Garbage Collection

I have a Spring Service, which calls an API. This Service creates several objects and returns these to the client (of a REST request).
Is this good practice? I observe rising memory consumption with every request. Is there no garbage collection happening?
@org.springframework.stereotype.Service("FanService")
public class Service {

    private static final Logger log = LoggerFactory.getLogger(Service.class);

    public List<String> allCLubsInLeague() {
        try {
            URI urlString = new URI("https://www.thesportsdb.com/api/v1/json/1/search_all_teams.php?l=German%20Bundesliga");
            RestTemplate restTemplate = new RestTemplate();
            TeamsList response = restTemplate.getForObject(urlString, TeamsList.class);
            List<BundesligaTeams> bundesligaTeams = response.getTeams();
            //ResponseEntity<List<BundesligaTeams>> test = t.getForEntity(urlString, BundesligaTeams.class);
            List<String> teamList = new ArrayList<>();
            bundesligaTeams.forEach(value -> teamList.add(value.getStrTeam()));
            log.info(bundesligaTeams.get(0).getStrAlternate());
            bundesligaTeams = null;
            response = null;
            urlString = null;
            restTemplate = null;
            return teamList;
        } catch (Exception e) {
            log.info(e.getMessage());
        }
        return null;
    }
}
I don't see any memory leak in this code.
Your memory rises with every request because the garbage collector collects unused objects only when it decides to do so. Your objects may get collected after 10 or 20 requests; you never know.
This happens because you still have a lot of free memory on your heap, so the garbage collector is not yet forced to clean it up. If you issue many, many requests you will soon see garbage collector activity.
If you want more detail, you can always run jvisualvm, which ships with the JDK, and observe how your heap memory increases and decreases with garbage collector activity.
If you are not coding low-latency application with zero-garbage allocation you should focus on writing readable and maintainable code first. Only then tune performance if it's not acceptable.
It's ok to create objects if you have available memory, memory allocation is cheap comparing to a GET request. See Latency Numbers Every Programmer Should Know.
There is no reason to null out a local variable unless you are trying to remove security credentials. Don't write bundesligaTeams = null; and the other statements at the end; these objects will be collected once they are unreachable.
RestTemplate should be a separate bean. Creating this object can be expensive if the underlying HTTP client creation is expensive. Consider auto-wiring the default RestTemplate provided by Spring Boot.
Cache the result of the GET request locally if the data is not changing often. A list of all the clubs in the German Bundesliga will change only once a year.
You should avoid building the String for the log.info() call if the INFO logging level is not enabled. Either use placeholder syntax or check log.isInfoEnabled() first. Check out the "What is the fastest way of (not) logging?" FAQ.

No Exception handle in jSerialComm?

I am using the jSerialComm library to communicate to and from the SerialPort. I have written a SerialDataListener to read the bytes with an overridden serialEvent method that looks like this:
@Override
public void serialEvent(SerialPortEvent event) {
    if (event.getEventType() != SerialPort.LISTENING_EVENT_DATA_AVAILABLE) return;
    int numBytesAvailable = serialPort.bytesAvailable();
    if (numBytesAvailable < 0) {
        logger.error("Port is not open.. returning without any action");
        return;
    }
    byte[] newData = new byte[numBytesAvailable];
    int readData = serialPort.readBytes(newData, numBytesAvailable);
    for (int i = 0; i < numBytesAvailable; i++) {
        byte b = newData[i];
        logger.info("Starting new response");
        response = new Response();
        response.addByte(b);
    }
}
Now, if I do receive data and the subsequent code runs into a NullPointerException somehow (one example being that the Response constructor is invoked and throws an NPE), then the SerialPort class inside the library is programmed to
1. stop listening, and
2. swallow the exception.
As a consequence of 1 and 2, no more data arriving on the SerialPort can be processed. Nor is there an exposed API to check whether the listener has stopped and to restart it, so I cannot take any action such as reopening the SerialPort.
Here is that piece of code:
// Line 895 of the SerialPort class (from dependency: com.fazecast:jSerialComm:1.3.11)
while (isListening && isOpened) { try { waitForSerialEvent(); } catch (NullPointerException e) { isListening = false; } }
Here are the questions:
Why was the exception swallowed and listening stopped inside the library? Are there any design reasons?
The SerialPort class itself is final and hence writing my own implementation of the class to replace the swallow is out of question. How do I proceed? Apart from this issue, jSerialComm appears to satisfy most other use cases decently well, so I may not migrate from it anytime soon.
One way is to catch it myself and do the handling. But I do not want to do it unless the answer for Q1 is clear. I have tried to investigate but not found any practical reasons for disabling the listening and not announcing the exception.
Why just a NPE, other exceptions could arise too. So then at least, I will have to handle the exceptions myself. Is this approach of my own handlers correct then?
TIA
Rahul
1) Why was the exception swallowed and listening stopped inside the library? Are there any design reasons?
You would need to ask the author of the code.
However, it does seem to be intentional, since the waitForSerialEvent is declared as throws NullPointerException.
If I were you, I would dig deeper into where the NPEs are thrown and why. Modify the code to print a stacktrace instead of just squashing the exception entirely. It could be a "hack" workaround, or there could be a legitimate reason for doing this.
If we make the assumption that the client's listener code could throw an NPE, then in my view it is a mistake for the event thread to assume that all NPEs can be squashed.
But looking at the code, I can also see places where NPEs are thrown deliberately to (apparently) signal an error; e.g. in the read methods in SerialPortInputStream. So it is not clear to me that the NPEs should be squashed at all.
2) The SerialPort class itself is final and hence writing my own implementation of the class to replace the swallow is out of question. How do I proceed?
The code is on GitHub, so you could fork the repository, develop a patch and submit a pull request.
4) Why just a NPE, other exceptions could arise too. So then at least, I will have to handle the exceptions myself. Is this approach of my own handlers correct then?
Good question.
But really, all of these questions are best addressed to the author of the code. He does seem to respond to questions posted as issues ... if they are pertinent.
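If you do go the route of handling exceptions yourself, as the last question considers, the usual pattern is to wrap the listener body so that nothing propagates to the library's event thread in the first place. The sketch below uses a made-up `Listener` interface rather than jSerialComm's real `SerialPortDataListener`, purely to show the wrapping idea: the wrapper catches everything, logs it, and keeps the event loop alive.

```java
public class SafeListenerDemo {
    // Stand-in for the library's data-listener callback.
    interface Listener { void onData(byte[] data); }

    // Wraps any listener so exceptions are logged, never propagated to the
    // event thread (where the library would silently stop listening).
    static Listener safe(Listener inner) {
        return data -> {
            try {
                inner.onData(data);
            } catch (RuntimeException e) {
                System.err.println("listener failed, still listening: " + e);
            }
        };
    }

    public static void main(String[] args) {
        Listener broken = data -> { throw new NullPointerException("bug in handler"); };
        Listener wrapped = safe(broken);
        wrapped.onData(new byte[] {1, 2, 3}); // exception caught, loop survives
        wrapped.onData(new byte[] {4});       // subsequent events still delivered
        System.out.println("still listening");
    }
}
```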

Java ObjectOutputStream reset error

My project consists of two parts: a server side and a client side. When I start the server side everything is OK, but when I start the client side, from time to time I get this error:
java.io.IOException: stream active
at java.io.ObjectOutputStream.reset(Unknown Source)
at client.side.TcpConnection.sendUpdatedVersion(TcpConnection.java:77)
at client.side.Main.sendCharacter(Main.java:167)
at client.side.Main.start(Main.java:121)
at client.side.Main.main(Main.java:60)
When I tried to run this project on the other pc this error occurred even more frequently. In Java docs I found this bit.
Reset may not be called while objects are being serialized. If called
inappropriately, an IOException is thrown.
And this is the function where error is thrown
void sendUpdatedVersion(CharacterControlData data) {
    try {
        ServerMessage msg = new ServerMessage(SEND_MAIN_CHARACTER);
        msg.setCharacterData(data);
        oos.writeObject(msg);
        oos.reset();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I tried adding flush(), but that didn't help. Any ideas? Also, there are no errors on the server side.
I think you're misunderstanding what reset() does. It resets the stream to disregard any object instances previously written to it. This is pretty clearly not what you want in your case, since you're sending an object to the stream and then resetting straight away, which is pointless.
It looks like all you need is a flush(); if that's insufficient then the problem is on the receiving side.
I think you are confusing close() with reset(). Use
oos.close();
instead of oos.reset();
Calling reset() is a perfectly valid thing to want to do. It is possible that data is reused, or some field in data is reused, and the second time he calls sendUpdatedVersion, that part is not sent. So those who complain that the use is invalid are not accurate. Now as to why you are getting this error message:
What the error message is saying is that you are not at the top level of your writeObject call chain. sendUpdatedVersion must be being called from a method that was itself called from another writeObject.
I'm assuming that some object implements a custom writeObject(), and that method is calling this method.
So you have to differentiate when sendUpdatedVersion is being called at the top level of the call chain and only use reset() in those cases.
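The disagreement between the answers above can be settled in a few lines: write the same instance twice with reset() in between, mutating it between the writes. With reset(), both values arrive; without it, the second write is just a back-reference carrying the stale state, which is exactly why resetting after each message is a legitimate pattern for reused game-state objects. A minimal, self-contained sketch (the Position class is invented for illustration):

```java
import java.io.*;

public class ResetDemo {
    static class Position implements Serializable {
        int x;
        Position(int x) { this.x = x; }
    }

    // Writes the same instance twice with reset() in between,
    // then reads both copies back. Returns {first.x, second.x}.
    static int[] roundTrip() throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(buf);

        Position p = new Position(1);
        oos.writeObject(p);   // sends x = 1
        oos.reset();          // forget cached instances (legal: we are at the top level)
        p.x = 2;              // mutate the same instance
        oos.writeObject(p);   // re-sent in full; without the reset() this would be
                              // a back-reference to the cached x = 1 copy
        oos.flush();

        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        Position first = (Position) ois.readObject();
        Position second = (Position) ois.readObject();
        return new int[] { first.x, second.x };
    }

    public static void main(String[] args) throws Exception {
        int[] r = roundTrip();
        System.out.println(r[0] + " " + r[1]); // prints "1 2"
    }
}
```

The "stream active" IOException appears only when reset() is called while a writeObject call chain is still in progress, which is the situation the last answer describes.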

ThreadLocal sharing data?

By all means, I know the following should not be possible, but it is occurring in one of our production environments:
SETUP
ESAPI 2.01
Main servlet filter setting and removing a current request thread local object:
try {
    ESAPI.httpUtilities().setCurrentHTTP(request, response);
    // filter logic ...
} catch (Exception e) {
    LOG.error(Logger.SECURITY_FAILURE, "Error in ESAPI "
            + "security filter: " + e.getMessage(), e);
    request.setAttribute("message", e.getMessage());
} finally {
    ESAPI.clearCurrent();
}
all requests pass through this filter, and ESAPI.currentRequest() is used throughout the system.
Path A (http://server/path_a/)
goes through until it reaches method_a; this method is not accessible from path_b.
Path B (http://server/path_b)
goes through until it reaches method_b; not accessible from path_a.
Both of these paths go through the servlet filter (mapping "/*").
One of the error mails I received suggests that path_a is throwing an error, which in turn triggers the error mail. In the mail code, the current request (via ESAPI.currentRequest()) is enumerated for request info.
PROBLEM
In the error mail, request info from path_a correlates with stacktrace info from method_b, to me this seems impossible as both run in separate threads.
QUESTION
How is this possible? I cannot re-create this locally. Are there certain precautions I have to take other than setting and clearing the ThreadLocal? Can this be a problem with the Tomcat setup? I'm lost.
PS: the code in the question has been simplified, as the code base is too large for an example.
Reading ESAPI code https://code.google.com/p/owasp-esapi-java/source/browse/trunk/src/main/java/org/owasp/esapi/reference/DefaultHTTPUtilities.java there are some questionable practices regarding thread local.
The biggest problem I'd say is it uses InheritableThreadLocal. If thread A spawns a thread B, B will inherit A's thread local value; however, when A then clears the thread local, it doesn't affect B, so B's inherited value will stay. ESAPI probably shouldn't use InheritableThreadLocal.
I can't say how this may produce the problem you see, without knowing more about threads in your app.
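The inheritance hazard described above is easy to demonstrate: a child thread snapshots the parent's InheritableThreadLocal value at thread-creation time, and the parent's later remove() (the equivalent of ESAPI.clearCurrent() in the filter's finally block) does not touch the child's copy. A minimal sketch, with the class and field names invented for illustration:

```java
public class InheritableDemo {
    static final InheritableThreadLocal<String> current = new InheritableThreadLocal<>();
    static volatile String childValue;

    public static void main(String[] args) throws InterruptedException {
        current.set("request-A");

        Thread child = new Thread(() -> {
            // Simulate work that outlives the parent's request handling.
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            childValue = current.get();
        });
        child.start();      // the child copies the parent's value here

        current.remove();   // parent "clears" its thread local, like clearCurrent()
        child.join();

        System.out.println("child still sees: " + childValue); // request-A
    }
}
```

In a container with pooled worker threads and async mail/logging threads, such a stale inherited value is one plausible way request info from path_a could surface alongside a stack trace from elsewhere.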

How one can know if the client has closed the connection

I've been playing with the new Servlet 3.0 async features with Tomcat 7.0.4. I found this Chat Application, that lets clients hang on GET request to get message updates. This is working just fine when it comes to receiving the messages.
The problem arises when the client disconnects, i.e. the user closes the browser. It seems that the server does not raise an IOException, even though the client has disconnected. The message thread (see the source code from the link above) happily keeps writing to all stored AsyncContexts' output streams.
Is this a Tomcat bug? or am I missing something here? If this is not a bug, then how I'm supposed to detect whether the client has closed the connection?
The code there at lines 44-47 is taking care of it:
} catch (IOException ex) {
    System.out.println(ex);
    queue.remove(ac);
}
And here too, at lines 75-83, using the timeout mechanism:
req.addAsyncListener(new AsyncListener() {
    public void onComplete(AsyncEvent event) throws IOException {
        queue.remove(ac);
    }
    public void onTimeout(AsyncEvent event) throws IOException {
        queue.remove(ac);
    }
});
EDIT: After getting a little more insight.
Tomcat 7.0.4 is still in beta, so you can expect such behaviour.
I tried hard but can't find the method setAsyncTimeout() in the doc, neither here, nor here. So I think they dropped it completely in the final version for some valid but undocumented reason.
The example states, "why should I use the framework instead of waiting for Servlet 3.0 Async API", which implies it was written before the final release.
So, combining all these facts, what I can say is that you are trying to work with something that is broken in a sense. That may also be the reason for the different and weird results.
