RMI/Tomcat 6 Memory Leak - java

My application uses both RMI and JDBC to talk to a remote system and a database. The database issues have been resolved, but it turns out that RMI is causing some form of memory leak that Tomcat 6 detects (I have also tried Tomcat 7 and see the same issue).
Basically, when we start the application and the user enters information into the web page, an RMI call is made to a backend system. If we then stop/start or restart the application, Tomcat Manager detects a memory leak. If we start the application and do NOT make the RMI call, we can stop, start, and restart the application all day long without issues.
Does anyone know what needs to be done to prevent RMI calls from causing memory leaks in the WebappClassLoader upon reload or stop/start while the web server is still running?

My application uses both RMI and JDBC to talk to a remote system and a database. The database issues have been resolved, but it turns out that RMI is causing some form of memory leak that Tomcat 6 detects ... Does anyone know what needs to be done to prevent RMI calls from causing memory leaks in the WebappClassLoader upon reload or stop/start while the web server is still running?
RMI calls do not cause memory leaks. I have eight Tomcats that, among other things, interact heavily via RMI and have been up for several months without any sign of a leak.

The DGCClient had not cleaned up the resources related to RMI and had to wait for its lease timeout before doing so. Because the container had tried to stop while those RMI resources were still hanging around, Tomcat Manager reported a memory leak; the condition cleared itself once the distributed garbage collector (DGC) finally collected the RMI resources.
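For what it's worth, the window in which the leak is reported can be shortened by cleaning up RMI resources explicitly when the context stops, rather than waiting for the DGC lease (governed by java.rmi.dgc.leaseValue, 10 minutes by default) to expire. Below is a minimal sketch of a ServletContextListener that does this; the class name and the idea of the webapp exporting its own remote object are hypothetical, and the same hook is a reasonable place to drop any cached client-side stubs.

import java.rmi.NoSuchObjectException;
import java.rmi.Remote;
import java.rmi.server.UnicastRemoteObject;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class RmiCleanupListener implements ServletContextListener {

    // hypothetical reference to a remote object this webapp exported
    private Remote exportedService;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // export or look up the remote object here and keep the reference
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (exportedService == null) {
            return;
        }
        try {
            // force = true unexports even if calls are in progress, so the
            // RMI runtime no longer pins classes from the WebappClassLoader
            UnicastRemoteObject.unexportObject(exportedService, true);
        } catch (NoSuchObjectException e) {
            // nothing was exported, or it is already gone - safe to ignore
        }
        // drop the reference so the DGCClient can stop renewing its lease
        exportedService = null;
    }
}

The listener would be registered with a <listener> element in web.xml; whether it makes the Tomcat Manager warning disappear entirely depends on what else still references the stubs at shutdown.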

Related

Appdynamics Agent connection causing memory leak in Java applications

We're using the AppDynamics Java agent to monitor our production applications. We have noticed slow memory growth, and the application eventually stalls. We took a heap dump on one of the JVMs and got the reports below.
Problem Suspect 1:
The thread com.singularity.ee.agent.appagent.kernel.config.xml.a# 0x1267...... ("AD Thread Config Poller") keeps local variables with a total size of 28,546.79 KB (15.89%).
Problem Suspect 2:
280,561 instances of com.singularity.ee.agent.appagent.services.transactionmonitor.com.exitcall.p, loaded by com.singularity.ee.agent.appagent.kernel.classloader.d# 0x6c000...., occupy 503,413.3 KB (28.05%). These instances are referenced from one instance of java.util.HashMap$Node[]...
We figured that these classes come from the AppDynamics APM agent, which hooks into the running JVM and sends monitored events to the controller. Reaching out to the vendor is a convoluted process, so I am wondering whether there are any workarounds, such as enabling JMX in our Java apps and having AppDynamics collect monitoring events from JMX rather than hooking directly into the applications' JVM. Thanks for your suggestions.
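Whether AppDynamics can work purely from JMX data is a question for the vendor, but for reference, exposing a JVM over remote JMX only takes a few startup flags. A minimal sketch, with a hypothetical port and security switched off purely for brevity (you would want authentication and SSL in production):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

Any JMX-capable collector (JConsole, VisualVM, or a monitoring tool that supports remote JMX) can then poll heap, thread, and GC MBeans without attaching an agent to the JVM.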

Tomcat 7 stops responding

I have a REST (Jersey) web server using Tomcat 7 + Hibernate + Spring + Ehcache (as local cache).
The server randomly stops responding. I haven't been able to reproduce the hanging behavior, so it is hard to tell exactly when the server hangs. Once it hangs, requests I send don't even reach the server (I don't see any requests arriving in the application log file).
I understand this is a very generic question, but where do I need to look to find out more?
After quite some time googling, I found out that I need to look at the catalina.out log file and examine a heap dump for possible deadlocks, JDBC connection problems, etc.
Where/how do I get a heap dump? And where do I see any logs for JDBC connections?
I am using Spring + Hibernate with a transaction manager to manage transactions. Is there any particular configuration I need to specify in the data source?
Very hard to give any definitive advice with such a generic question.
Before going for a heap dump, I would start with a thread dump using the jstack tool found in a JDK install.
This could give you an idea of what your Tomcat is doing (or not doing) when it stops responding.
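For concreteness, the stock JDK tools cover both dumps; a minimal sketch of the commands, where <pid> is Tomcat's process id obtained from jps and the file names are just examples:

jps -l
jstack -l <pid> > threads.txt
jmap -dump:live,format=b,file=heap.hprof <pid>

The thread dump (threads.txt) shows blocked threads and any detected deadlocks; the heap dump (heap.hprof) can be opened in Eclipse MAT or VisualVM to inspect JDBC connections and other suspect objects.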

current server status information for tomcat7

I am currently load testing my web application (Spring + Hibernate based) on a standalone Tomcat server (v7.0.27) on a Windows Server 2008 machine. I need to know how Tomcat behaves as bulk requests come in, e.g.
300 requests received: current heap size, whether the server is hung or unable to process further requests, size and number of objects, and so on.
Is there a way to see this already? (Info from the Manager app is insufficient: "currently active threads" and "memory occupied" are not what I need.)
P.S. The maxThreads property of the Connector element is 350.
Update: Another issue I faced while load testing - in some cases Tomcat hangs when I send 300 requests.
Any help would be highly and greatly appreciated.
You can use JConsole, which ships with the JDK.
http://docs.oracle.com/javase/6/docs/technotes/guides/management/jconsole.html
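Attaching locally is just a matter of pointing it at the Tomcat process; a minimal sketch, where <pid> comes from jps:

jps -l
jconsole <pid>

The Memory and Threads tabs then give you heap usage and active thread counts over time while the load test runs.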
If the server hangs, there might be a deadlock.
You can try attaching with JProfiler; the monitoring section will show you the current locking situation and any deadlock.
Disclaimer: My company develops JProfiler.
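If you would rather check for deadlocks without a profiler, the JDK's ThreadMXBean can report them programmatically. A minimal sketch (the helper class is hypothetical) that has to run inside the JVM being checked, e.g. called from a diagnostic servlet deployed in the same Tomcat:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public final class DeadlockCheck {

    public static String report() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long[] deadlocked = threads.findDeadlockedThreads();   // null if no deadlock
        if (deadlocked == null) {
            return "No deadlocked threads";
        }
        StringBuilder sb = new StringBuilder();
        // true, true = include locked monitors and ownable synchronizers
        for (ThreadInfo info : threads.getThreadInfo(deadlocked, true, true)) {
            sb.append(info);   // thread name, state, and the lock it is blocked on
        }
        return sb.toString();
    }

    private DeadlockCheck() { }
}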

Multiple RMI Connections from single JVM using Weblogic RMI over T3

I'm attempting to use JMeter with some custom samplers to load test a Java application that is normally accessed via WebLogic RMI over T3 from a Swing-based GUI. The intention is to load the application server and measure the response time of particular transactions by simulating many concurrent user connections/interactions (up to ~500).
I have implemented a couple of JMeter samplers that acquire an RMI connection to the server via a JNDI lookup, and that works fine. However, I've noticed that even if I acquire two contexts on different threads using different credentials, only one T3 connection is opened.
Is there a way to effectively create multiple independent connections to the app server from within one JVM, or will I be forced to run one user per JVM?
App is running in WLS 11g, currently on Hotspot 32bit but will be moving to JRockit 64bit.
Thanks.
You are running up against RMI connection pooling. There are ways to turn it down (see the RMI home page and the properties pages linked from it), but it's still an unrealistic test for other reasons, such as port exhaustion on the client host. You should really look at using as many client hosts as possible, with as many separate JVMs as possible.
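If you do end up spreading the load over several JVMs, JMeter's distributed (remote) mode is the usual way to coordinate them; a minimal sketch, where the engine addresses and test plan name are just examples:

# jmeter.properties on the controller
remote_hosts=192.168.1.10,192.168.1.11

# start jmeter-server on each engine host, then run the plan from the controller
jmeter -n -t loadtest.jmx -R 192.168.1.10,192.168.1.11

Each remote engine is its own JVM, so each contributes at least one independent T3 connection, in line with the advice above.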

How to integrate memcached in a Servlet? Tomcat & memory leaks

Searching for "memcached java" in Google, the first result is Using Memcached with Java.
The guy (who calls himself Just some Random Asshole in the Internet!) proposes a Singleton based on net.spy.memcached. It basically creates 20 threads and connections by creating 20 instances of MemcachedClient. For every request it chooses one at random.
However those threads and connections are never closed and they pile up every time I hot swap the application during development (with warnings from Tomcat 7).
SEVERE: The web application [/MyAppName] appears to have started a thread named
[...] but has failed to stop it. This is very likely to create a memory leak.
Looking at the MemcachedClient JavaDoc, I see a method called shutdown whose only description is "Shut down immediately." Shut down what? The client? The server? I suppose it's the client, since the method is on MemcachedClient, and I suppose it closes the connection and terminates the thread. EDIT: yes, it shuts down the client.
Question 1: How do I force the execution of cleanup code in Tomcat 7 before the application is hot swapped?
Question 2: Is this approach to using memcached (with cleanup code) correct, or is it better to start over in a different way?
I think creating 20 memcached clients is silly - that's like creating 20 separate copies of your DB connection pool. The idea with that client is that it multiplexes a variety of requests with asynchronous I/O.
http://code.google.com/p/spymemcached/wiki/Optimizations
As far as shutting it down, simply call:
yourClient.shutdown() to shut down immediately, or
yourClient.shutdown(3, TimeUnit.SECONDS), for example, to allow some time for a more graceful shutdown.
That could be called from your Servlet's .destroy method, or a context listener for your whole WAR.
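A minimal sketch of the context-listener variant, assuming a hypothetical MemcachedHolder singleton that hands out the pool of clients (register the listener with a <listener> element in web.xml, or @WebListener on Servlet 3.0):

import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import net.spy.memcached.MemcachedClient;

public class MemcachedShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // clients are created elsewhere by the (hypothetical) MemcachedHolder singleton
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // give each client a few seconds to flush queued operations, then close it,
        // so its I/O thread does not outlive the webapp's classloader
        for (MemcachedClient client : MemcachedHolder.getInstance().getClients()) {
            client.shutdown(3, TimeUnit.SECONDS);
        }
    }
}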
I don't know anything about memcached, but you could probably write a custom context listener and put some kind of shutdown hook in it, so that when the context shuts down you can loop through the items in your singleton and shut them down.
It turned out to be a bug in the AWS SDK for Java and not related to memcached at all. Version 1.2.2 of the AWS SDK for Java has this bug fixed.
