AppDynamics agent connection causing memory leak in Java applications - java

We're using the AppDynamics Java agent to monitor our production applications. We have noticed slow memory growth, and the application eventually stalls. We took a heap dump on one of the JVMs and got the reports below.
Problem Suspect 1:
The thread com.singularity.ee.agent.appagent.kernel.config.xml.a# 0x1267......
"AD Thread Config Poller" keeps local variables with a total size of 28,546.79 KB (15.89%)
Problem Suspect 2:
280,561 instances of com.singularity.ee.agent.appagent.services.transactionmonitor.com.exitcall.p, loaded by com.singularity.ee.agent.appagent.kernel.classloader.d# 0x6c000...., occupy 503,413.3 KB (28.05%). These instances are referenced from one instance of java.util.HashMap$Node[]...
We figured that these classes come from the AppDynamics APM agent that hooks onto the running JVM and sends monitored events to the controller. Reaching out to the vendor is a convoluted process, so I am wondering whether there is a workaround, for example enabling JMX on our Java apps and having AppDynamics collect the monitoring data from JMX rather than hooking directly into the applications' JVM. Thanks for your suggestions.
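For what it's worth, a JMX-based setup would mean the monitoring side connects to a standard JMX endpoint and polls MBeans instead of instrumenting bytecode inside the application. A minimal sketch of that pull model, assuming the target JVM was started with the usual com.sun.management.jmxremote.* flags and exposes JMX on port 9010 (the port and class name below are hypothetical):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxMemoryPoller {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; the real host/port depend on the JVM's jmxremote settings.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Read the standard platform Memory MXBean over the remote connection.
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    connection, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            System.out.println("Heap used: " + memory.getHeapMemoryUsage().getUsed() + " bytes");
        } finally {
            connector.close();
        }
    }
}

Whether AppDynamics can consume metrics this way instead of through the in-process agent is a question for the vendor's documentation; the sketch only illustrates what the JMX pull model looks like.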

Related

JProfiler-13 blocking application's JVM (Port-8080)

I have a Java application running in Kubernetes which listens on port 8080. When I connect JProfiler to the JVM and run a few requests sequentially, everything works fine. But as soon as I fire some load using JMeter, my application stops responding on port 8080 and I get request timeouts.
When JProfiler is detached from the JVM, everything starts working fine again.
I explored a lot but couldn't find any help regarding what in JProfiler is blocking my application from responding.
From the feedback you sent me by email, the overhead becomes noticeable when you switch on allocation recording. With just CPU and probe recording you don't experience any problem.
Allocation recording is an expensive operation that should only be used when you have a related problem. The added overhead can be reduced by reducing the allocation sampling rate.

How many simultaneous threads can be used to load test a web application?

I need to find my web application's performance and load test it. The application currently has a Tomcat configuration of 25 threads maximum and there are two such servers.
Does it mean that I should do load testing for 50 concurrent requests?
And what happens when there are more requests; do they go to the thread waiting queue in Tomcat?
If they go to a wait queue, can I test the application with more than 50 requests?
Tomcat can work in two modes:
BIO (blocking I/O), where one thread can serve at most one connection
NIO (non-blocking I/O), where one thread can serve many more connections
Most probably your application is using the latter; check out the Understanding the Tomcat NIO Connector and How to Configure It guide for an overview. That said, even with the BIO connector the application might still be able to operate fast enough to serve more than 50 users.
In both cases, you should treat your backend as a "black box" (imagine you don't know anything about the configuration) and focus on testing non-functional requirements.
The essential performance testing types you should be considering are:
Load Testing: check how your system behaves when the anticipated number of concurrent users is using it.
Soak Testing: the same, but with a longer test duration, e.g. overnight or over a weekend. This way you will be able to see whether there are any memory leaks, how log rotation works, whether the application cleans up after itself so it won't run out of disk space, etc.
Stress Testing: the process of identifying the boundaries of your application, i.e. start with one virtual user and increase the load until the application's response time exceeds reasonable limits or errors start occurring.
See Why ‘Normal’ Load Testing Isn’t Enough for more information.
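For completeness, JMeter or a similar tool is the usual way to generate such load, but a bare-bones Java sketch of firing a fixed number of concurrent requests looks like the following (the URL and the 50-user figure are placeholders based on the 2 x 25 thread setup described above):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 50;                    // placeholder: 2 servers x 25 Tomcat threads
        String target = "http://localhost:8080/";    // placeholder URL
        AtomicInteger errors = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                    conn.setConnectTimeout(5_000);
                    conn.setReadTimeout(30_000);
                    if (conn.getResponseCode() >= 400) {   // blocks until the response arrives
                        errors.incrementAndGet();
                    }
                    conn.disconnect();
                } catch (Exception e) {
                    errors.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.println("Errors: " + errors.get() + " out of " + concurrentUsers + " requests");
    }
}

A real test plan would also ramp the load up gradually and record response times over the run, which is what the stress and soak testing described above add on top of this.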

RMI/Tomcat 6 Memory Leak

My application uses both RMI and JDBC to talk to a remote system and a database. While the database issues have been resolved, it turns out that RMI is causing some form of memory leak detected by Tomcat 6 (I have also tried this with Tomcat 7 and we have the same issue).
Basically, when we start the application and the user enters information into the webpage, an RMI call is made to a backend system. If we stop/start or restart the application, Tomcat Manager then detects a memory leak. If we start the application and do NOT make the RMI call, we can start/stop and restart the application all day long without issues.
Does anyone know what needs to be done to prevent RMI calls from causing Memory Leaks in the WebappClassLoader upon reload or stop/start while the webserver is still running?
RMI calls do not cause memory leaks. I have eight Tomcats that interact heavily via RMI, among other things, that have been up for several months without any sign of a leak.
The DGCClient had not cleaned up the RMI-related resources and had to wait for its timeout to fire. Because the container had tried to stop while those RMI resources were still around, Tomcat Manager reported a memory leak; once the distributed garbage collector finally released them, the condition cleaned itself up and the reported leak went away.
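One common mitigation, sketched below rather than taken from the original poster's fix, is to unexport any RMI objects explicitly when the web application stops, so the DGC lease cannot keep the WebappClassLoader alive until its timeout. The listener class and field here are hypothetical:

import java.rmi.NoSuchObjectException;
import java.rmi.Remote;
import java.rmi.server.UnicastRemoteObject;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class RmiCleanupListener implements ServletContextListener {

    // Hypothetical field: whatever remote object(s) this webapp exported.
    private Remote exportedService;

    public void contextInitialized(ServletContextEvent sce) {
        // exportedService = ... export or look up the remote object here ...
    }

    public void contextDestroyed(ServletContextEvent sce) {
        if (exportedService == null) {
            return;
        }
        try {
            // Force-unexport so the DGC cannot pin classes loaded by this webapp's classloader.
            UnicastRemoteObject.unexportObject(exportedService, true);
        } catch (NoSuchObjectException e) {
            // Already unexported; nothing left to clean up.
        }
    }
}

The listener would be declared as a <listener> in web.xml (or annotated with @WebListener on Servlet 3.0 containers such as Tomcat 7). If the webapp only acts as an RMI client and exports nothing, the lingering references come from the client side of distributed GC instead; the usual knobs there are the sun.rmi.dgc.client.gcInterval and java.rmi.dgc.leaseValue system properties, with values that depend on your deployment.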

Running GAE Development Server on Google Compute Engine Instance

I'm trying to run the local dev server (Java) for Google App Engine on a Google Compute Engine instance (we're using Compute Engine instances as test servers).
When trying to start the dev server using appcfg.sh, we notice that 90% of the time the server doesn't start and hangs for 10 minutes before finally starting.
I know that the server hasn't started because this line is never printed to the console when it hangs:
Server default is running at http://localhost:8080/
Has anyone seen anything like this?
In a nutshell:
- The App Engine Java SDK uses Jetty as the servlet container for the development appserver
- Jetty relies on java.security.SecureRandom
- SecureRandom consumes entropy from /dev/random by default
- /dev/random will block when insufficient entropy is available for a read
The GCE instance, when lightly used (for example, solely as a test App Engine server), does not generate entropy quickly. Thus, repeated startups of the Java App Engine server consume entropy from /dev/random more rapidly than it is replenished, causing the blocking behavior you observed as hangs on startup.
You can confirm that the hang is due to the SecureRandom issue by increasing the logging levels of the dev appserver. You should see a message similar to "init SecureRandom" and then the blocking behavior.
Some possible ways to address this:
1) Adding the following to the dev_appserver.sh invocation will cause SecureRandom to consume the /dev/urandom entropy source rather than /dev/random:
--jvm_flag="-Djava.security.egd=file:/dev/./urandom"
2) Having a GCE instance that's more heavily utilized should cause entropy data to be collected more rapidly, which will in turn make /dev/random less susceptible to blocking on subsequent restarts of the development appserver.
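If you want to confirm the entropy theory from inside a JVM on the instance, a tiny diagnostic is to time the first use of a freshly created SecureRandom, which triggers self-seeding from the configured entropy source. This is only a sketch; the algorithm name and byte count are arbitrary:

import java.security.SecureRandom;

public class EntropyCheck {
    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // SHA1PRNG seeds itself from the JVM's configured entropy source on first use.
        SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
        random.nextBytes(new byte[16]);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("First SecureRandom use took " + elapsedMs + " ms");
    }
}

Running it with and without -Djava.security.egd=file:/dev/./urandom should make the difference obvious when /dev/random is the bottleneck.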

current server status information for tomcat7

I am currently load testing my web application (Spring + Hibernate based) on a standalone Tomcat server (v7.0.27) on a Windows Server 2008 machine. I need to know how Tomcat behaves as bulk requests come in, e.g.:
300 requests received - current heap size, whether the server is hung up or unable to process, the size and number of objects, and so forth.
Is there a way to see this already? (The info from the Manager app is insufficient; "current threads active" and memory occupied are not what I need.)
P.S. The maxThreads property for the Connector element is 350.
Update: Another issue I faced while load testing - in some cases Tomcat hangs up when I send 300 requests.
Any help would be greatly appreciated.
You can use JConsole, which ships with the JDK.
http://docs.oracle.com/javase/6/docs/technotes/guides/management/jconsole.html
If the server hangs, there might be a deadlock.
You can try to attach JProfiler; the monitoring section will show you the current locking situation and a possible deadlock.
Disclaimer: My company develops JProfiler.
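If you would rather check for a deadlock without attaching a profiler, the platform ThreadMXBean can report deadlocked threads directly. A minimal sketch that runs inside the same JVM (for example from a hypothetical diagnostic servlet or scheduled task):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Returns the IDs of threads deadlocked on monitors or ownable synchronizers, or null.
        long[] deadlocked = threads.findDeadlockedThreads();
        if (deadlocked == null) {
            System.out.println("No deadlock detected");
            return;
        }
        for (ThreadInfo info : threads.getThreadInfo(deadlocked, Integer.MAX_VALUE)) {
            System.out.println(info);   // thread name, state, lock owner, and stack trace
        }
    }
}

The same information is available from JConsole's Threads tab ("Detect Deadlock") or from a thread dump taken with jstack.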
