I'm using TomEE, PrimeFaces 5.0 and Apache Shiro.
When I start the server, it consumes 600 MB of memory.
If I open and close a certain page that contains a lot of information and is backed by a ViewScoped bean, the memory usage goes to 1.6 GiB. The same thing happens if I open other pages, even ones backed by RequestScoped beans.
I have checked that the @PreDestroy method is being called, so that isn't the problem.
Using Eclipse Memory Analyzer:
One instance of "org.apache.openejb.core.WebContext" loaded by
"org.apache.catalina.loader.StandardClassLoader # 0xa34f0cf0" occupies
1,189,717,200 (97.83%) bytes. The memory is accumulated in one
instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded
by "system class loader".
Keywords
java.util.concurrent.ConcurrentHashMap$Segment[]
org.apache.openejb.core.WebContext
org.apache.catalina.loader.StandardClassLoader # 0xa34f0cf0
And when I run shutdown.sh, I see the following in catalina.out:
org.apache.catalina.loader.WebappClassLoader
checkThreadLocalMapForLeaks SEVERE: The web application [/projeto-bim]
created a ThreadLocal with key of type
[org.apache.shiro.util.ThreadContext.InheritableThreadLocalMap] (value
[org.apache.shiro.util.ThreadContext$InheritableThreadLocalMap#5720d785])
and a value of type [java.util.HashMap] (value
[{org.apache.shiro.util.ThreadContext_SECURITY_MANAGER_KEY=org.apache.shiro.web.mgt.DefaultWebSecurityManager#2d258973,
org.apache.shiro.util.ThreadContext_SUBJECT_KEY=org.apache.shiro.web.subject.support.WebDelegatingSubject#7b62f42c}]) but failed to remove it when the web application was stopped. Threads
are going to be renewed over time to try and avoid a probable memory
leak.
I tried several things, like setting configuration in web.xml to keep only one session and configuring TomEE to save session information on disk, but nothing worked.
What should I do?
Update:
The memory usage goes to 1.6 GiB and stops there because that is my maximum heap size; the server then starts throwing OutOfMemoryError. I'll try increasing it to see how much more it uses.
OK, I have now increased the Java heap space to 3 GB, and my application uses all of it. It is clearly a memory leak: each time I open that page with a lot of information, memory usage goes up by about 300 MB and never comes back down!
What can I do?
I don't see much here that points to an actual leak.
WebContext is bound to your webapp in TomEE, so it is normal to see it while your app is running.
The warnings just mean that Apache Shiro is packaged in your webapp and uses ThreadLocals to keep the security context per thread. Since Shiro was loaded by the webapp classloader this can create leaks on undeploy, but short of patching Shiro there is not much you can do about it.
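If you want to limit the per-request risk anyway, a servlet filter can clear Shiro's ThreadLocal state after every request. This is only a hedged sketch: in a standard setup Shiro's own ShiroFilter already unbinds the subject, so treat this as a defensive fallback rather than the official fix.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.apache.shiro.util.ThreadContext;

// Hypothetical cleanup filter: map it in web.xml so that any ThreadLocal state
// Shiro bound to the pooled worker thread is dropped before the thread returns
// to Tomcat's pool.
public class ShiroThreadContextCleanupFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(req, res);
        } finally {
            // Removes the SecurityManager and Subject bound to the current thread.
            ThreadContext.remove();
        }
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}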
Related
I have a web application deployed in Tomcat 7.0.70. I simulated the following situation:
- I created a heap dump.
- Then I sent an HTTP request, and in the service's method I printed the current thread and its classloader. Then I invoked Thread.sleep(10000).
- At that same moment I clicked 'undeploy this application' on Tomcat's admin page.
- I created a new heap dump.
- After a few minutes I created another heap dump.
RESULTS
Thread dump
On the following screen you can see that after I clicked "redeploy", all threads associated with this web application were killed except the thread "http-apr-8081-exec-10". Since I set Tomcat's attribute renewThreadsWhenStoppingContext="true", you can see that after some time this thread ("http-apr-8081-exec-10") was killed and a new thread ("http-apr-8081-exec-11") was created in its place. So I didn't expect to still have the old WebappClassLoader (WCL) after heap dump 3 was created, because there are no old threads or objects left.
Heap dump 1
On the following two screens you can see that while the application was running there was only one WCL (its attribute "started" = true).
And the thread "http-apr-8081-exec-10" had contextClassLoader = URLClassLoader (because it was sitting in Tomcat's pool).
I mention only this thread because, as you will see, it is the one that handles my next HTTP request.
Sending HTTP request
Now I send the HTTP request, and in my code I get information about the current thread. You can see that my request is being handled by the thread "http-apr-8081-exec-10":
Dec 23, 2016 9:28:16 AM c.c.c.f.s.r.ReportGenerationServiceImpl INFO: request has been handled in
thread = http-apr-8081-exec-10, its contextClassLoader = WebappClassLoader
context: /hdi
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader: java.net.URLClassLoader#4162ca06
Then I click "Redeploy my web application" and I get the following message in the console:
Dec 23, 2016 9:28:27 AM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesThreads
SEVERE: The web application [/hdi] appears to have started a thread named [http-apr-8081-exec-10] but has failed to stop it. This is very likely to create a memory leak.
Heap dump 2
On the following screens you can see that there are two instances of WebappClassLoader. One of them (#1) is old (its attribute "started" = false).
WCL #2 was created after redeploying the application (its attribute "started" = true).
And the thread we are looking at has contextClassLoader = "org.apache.catalina.loader.WebappClassLoader".
Why? I expected to see contextClassLoader = "java.net.URLClassLoader" (after all, when a thread finishes its work it is returned to Tomcat's pool
and its contextClassLoader attribute is set back to the base classloader).
Heap dump 3
You can see that the thread "http-apr-8081-exec-10" is gone, but there is a thread "http-apr-8081-exec-11" and it also has contextClassLoader = "WebappClassLoader"
(why not URLClassLoader?).
In the end we have the following: the thread "http-apr-8081-exec-11" holds a reference to WebappClassLoader #1.
And obviously, when I run "Nearest GC Root" on WCL #1, I see the reference to thread 11.
Questions:
How can I force Tomcat to restore the old contextClassLoader value (URLClassLoader) after a thread finishes its work?
How can I make sure Tomcat doesn't copy the old contextClassLoader value during thread renewal?
Or do you know another way to resolve my problem?
Tomcat is usually not a good option for production environments. I used Tomcat for a few production applications and found that even when the heap size and other settings are configured properly, memory consumption goes up and up every time you reload an application. Until you restart the Tomcat service, memory is not fully reclaimed. We tried all sorts of experiments: clearing logs, redeploying all apps, regularly restarting Tomcat once a month or once a week during the least busy hours. In the end we moved our production environments to GlassFish and WebSphere.
I hope you have already gone through these pages:
Memory leak in a Java web application
Tomcat Fix Memory Leak?
https://developers.redhat.com/blog/2014/08/14/find-fix-memory-leaks-java-application/
http://www.tomcatexpert.com/blog/2010/04/06/tomcats-new-memory-leak-prevention-and-detection
If your web applications are not tightly coupled to Tomcat, you can consider another web container. We now use GlassFish on both development machines and in production, and the day we made this decision we saved a lot of time. GlassFish and similar servers take longer to start since they are not as lightweight as Tomcat, but after that life is a bit easier.
From my experience with this problem, what was preventing Tomcat from properly GC'ing older classloaders was ThreadLocals that a couple of frameworks I was using created (and did not handle properly).
Something similar to what is explained here: ThreadLocal & Memory Leak
I tried to properly clean up these ThreadLocals and my leak was reduced A LOT. It was still leaking, but I could handle ten times more redeploys than before.
I would definitely check your memory dumps for objects that could somehow be connected to ThreadLocals (they are very common, especially if you use something to control transactions or anything else that is thread-isolated).
I hope it helps!
Memory leaks on Tomcat redeploy are a very old problem.
The only real way to solve it is to restart Tomcat instead of redeploying the application. If you have several apps, you can run several Tomcat services on different ports and front them with nginx.
We have hundreds of Tomcat instances running in several environments (including production), and the only reasonable solution we've found to this issue is to stop and restart every Tomcat at a set time each day (during the night).
We've tried many tricks, but this is the lasting solution for our uptime requirements.
Check for ThreadLocal uses that prevent your ClassLoader from being garbage collected. Either remove references to your classes from the ThreadLocal values, or use https://github.com/codesinthedark/ImprovedThreadLocal instead of ThreadLocal.
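As a hedged illustration of the first option (all names here are made up), the pattern is to store only JDK types in the ThreadLocal and to clear it in a finally block in the same request that set it:

// Hypothetical holder class, for illustration only.
public final class RequestContextHolder {

    // Storing only JDK types (here a String) avoids pinning the webapp
    // classloader even if a value is accidentally left behind on a pooled thread.
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<String>();

    public static void set(String tenantId) { CURRENT_TENANT.set(tenantId); }
    public static String get()              { return CURRENT_TENANT.get(); }
    public static void clear()              { CURRENT_TENANT.remove(); }
}

The filter or interceptor that populates it would then do something like:

try {
    RequestContextHolder.set(tenantId);
    chain.doFilter(req, res);
} finally {
    RequestContextHolder.clear();   // always runs, even when an exception is thrown
}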
I have scoured the internet for over a day and still cannot find a working solution. I am using MySQL Connector/J 5.1.35 and the Tomcat JDBC connection pool with Tomcat 6.0.32, and I am not able to start/stop/reload a web application without the Tomcat JDBC pool cleaner thread failing to stop and causing memory leaks. I have tried everything I could find on the net and I still have the same issue. If I were stopping the container it wouldn't matter, since once the container stops the threads are dead. But since I am not stopping the container, just the application: BAM! - memory leak.
This is beyond frustrating and you would think that Apache would have a detailed solution for this, but they don't.
Anyone know how to resolve this once and for all?
Oh, and yes - I de-register the drivers and run AbandonedConnectionCleanupThread.shutdown() in a ServletContextListener, and I have tried deploying the driver both in $CATALINA_HOME/lib and in WEB-INF/lib, but I still have the same problem.
Update:
It looks like the connection pool may NOT be the issue. If I use DBCP, I do not get errors about memory leaks, but if I run "Find leaks" on the manager page, Tomcat definitely reports a leak in the application - just from starting it, running an operation that reads from the database, and then stopping it. If I only start and stop it, no issue is reported. I have made sure that all connections/statements/result sets are properly closed. If I switch back to Tomcat JDBC I still have the same issue; it just gets reported (as far as I can tell) during stop/restart of the application as the [Pool-Cleaner] thread not being able to be stopped, plus the manager finding leaks when I click the "Find leaks" button.
Your JDBC drivers should be in $CATALINA_HOME/lib, and your database connection pools should be configured in server.xml using <Resource> tags. The webapp's context.xml file should then link to the global data source with a <ResourceLink> tag.
This way connections are global and shared by all webapps connecting to the same database. Connections will not be released when a webapp is stopped or restarted, because they don't belong to the webapp, they belong to Tomcat.
No database connection memory leak, assuming all your webapp servlets/handlers clean up their resources correctly in finally blocks.
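For illustration, this is roughly how webapp code would obtain a connection from such a container-managed pool. The JNDI name jdbc/AppDB and the class below are made-up examples that have to match your <Resource>/<ResourceLink> entries; try-with-resources (Java 7+) plays the role of the finally blocks mentioned above.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class PingDao {

    // "jdbc/AppDB" is a hypothetical name; it must match the <Resource> in
    // server.xml and the <ResourceLink> in the webapp's context.xml.
    public boolean ping() throws NamingException, SQLException {
        DataSource ds = (DataSource)
                new InitialContext().lookup("java:comp/env/jdbc/AppDB");
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            // try-with-resources closes rs, ps and con in every case,
            // so the connection always goes back to Tomcat's shared pool.
            return rs.next();
        }
    }
}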
FYI: deregistering a driver doesn't release any resources such as open Connections, Statements, and ResultSets. It just prevents new connections from being created.
OK, I discovered that this is NOT AT ALL related to JDBC (face-palm). It turns out to be an RMI issue, so I am going to re-post this as a new question.
Thanks everyone for your time.
I am getting these messages in my Tomcat logs:
" org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
A web application registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered."
and
"org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: A web application appears to have started a thread named [pool-820-thread-1] but has failed to stop it. This is very likely to create a memory leak."
Actually I have a JDBC driver (.jar) in my Java project that I always deploy to a Tomcat server as a .war (meaning the driver is always in the war's lib directory).
After searching, I found a good start of an answer here, but unfortunately I cannot comment on Stack Overflow yet to ask for more details on the accepted answer.
Here are my questions:
- Is the answer suggesting to remove the .jar completely from the war's lib directory?
- If yes, where do I put it? I do not see how to get rid of the .jar completely and still be able to run my tests locally against the database.
Please advise.
Since Tomcat 6.0 there has been a feature to detect classloader memory leaks; read more here. The messages above are purely informational, and Tomcat has already taken sufficient measures to avoid the classloader leak by de-registering the driver.
To prevent it, as you rightly pointed out, you can either move the jar to Tomcat's lib folder, where it won't be affected by an application context reload, or explicitly call DriverManager.deregisterDriver(driver). (Read here)
To understand more about classloader leaks, see here.
To understand why it is suggested to move the jar from the application's WEB-INF/lib to Tomcat's lib, you can read more here.
Edit in response to the query in the comment
No, it is not recommended to keep the jar in multiple places, as that can lead to a ClassCastException. Each class is identified by the combination of its class name and its classloader. Look at this for a clearer understanding with an example.
The better approach, in my opinion, is to write a ServletContextListener and explicitly de-register your driver in contextDestroyed, as sketched below. This way you can keep your JDBC driver in WEB-INF/lib and need not move it to tomcat/lib.
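A hedged sketch of such a listener (the class name is my own invention; the MySQL driver is the one from the log message above), deregistering only drivers that were loaded by this webapp's classloader:

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Register this in web.xml (or annotate with @WebListener on Servlet 3.0+).
public class JdbcCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Nothing to do on startup; the driver registers itself when it is loaded.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ClassLoader webappLoader = Thread.currentThread().getContextClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            // Only deregister drivers loaded by this webapp, so drivers living in
            // tomcat/lib (and other webapps' drivers) are left alone.
            if (driver.getClass().getClassLoader() == webappLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    sce.getServletContext().log("Failed to deregister " + driver, e);
                }
            }
        }
        // For MySQL Connector/J 5.1 specifically, the call mentioned earlier in this
        // thread stops the driver's abandoned-connection cleanup thread:
        // com.mysql.jdbc.AbandonedConnectionCleanupThread.shutdown();
    }
}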
Even if you keep jars in multiple locations, then according to the classloader hierarchy that Tomcat follows, which differs from the standard Java delegation model (more info here), Tomcat will pick up the jar in your WEB-INF/lib first.
I am running a spring-mvc application. When I shut down the Tomcat server, it shows
SEVERE: The web application [/myapp] appears to have started a thread named [metrics-meter-tick-thread-1] but has failed to stop it. This is very likely to create a memory leak.
SEVERE: The web application [/myapp] appears to have started a thread named [metrics-meter-tick-thread-2] but has failed to stop it. This is very likely to create a memory leak.
and this one:
SEVERE: The web application [/myapp] appears to have started a thread named [New I/O client worker #1-3] but has failed to stop it. This is very likely to create a memory leak.
SEVERE: The web application [/anant] created a ThreadLocal with key of type [org.jboss.netty.util.CharsetUtil$2] (value [org.jboss.netty.util.CharsetUtil$2#5db3978d]) and a value of type [java.util.IdentityHashMap] (value [{UTF-8=sun.nio.cs.UTF_8$Decoder#39a2da0a}]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I don't know which jar this last one is related to; maybe Netty.
When I explored the jar dependencies I saw that there are two metrics-core jars:
metrics-core:2.2.0 (used by `datastax`)
metrics-core:3.0.1 (used by `Titan`)
I am attaching all snapshots to make it clearer. So what is the solution?
I am using
jdk1.7
cassandra-driver-core-1.0.4
titan-0.4.4
cassandra-1.2.2
tomcat-7.0.34
I know I'm a bit late with this. I had the same issue and eventually found the solution. The problem is that the metrics 2.2 JAR (which spawns these threads) uses the ManagementFactory.getPlatformMBeanServer() method, as suggested by Oracle. That class lives in java.lang.management, so it is loaded centrally by the VM and not per module. Since the Metrics package only shuts these threads down by itself on VM exit (via a shutdown hook), the classloader that loaded the package lets the MXBeans registered through it linger after the module is unloaded. What makes this even worse is that the classloader that loaded the war file also stays loaded in the VM, which (transitively) keeps every class loaded in the module, and any static state, alive.
You can call Metrics.shutdown() manually; that sometimes solves the problem. I did have some exotic problems with that solution (sometimes the threads still stayed around afterwards, but I have a very peculiar setup and didn't want to waste any more time on this issue).
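If you want to try that, the natural place in a Spring MVC webapp is a ServletContextListener. A hedged sketch, assuming metrics-core 2.x where the static shutdown() lives on com.yammer.metrics.Metrics; the listener class name is my own invention:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import com.yammer.metrics.Metrics;   // metrics-core 2.x; package name assumed

// Register in web.xml (or with @WebListener) so the metrics threads are stopped
// when the webapp is undeployed instead of only on JVM exit.
public class MetricsShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Nothing needed here.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stops the default registry's background threads
        // (metrics-meter-tick-thread-*), as suggested above.
        Metrics.shutdown();
    }
}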
Check what started the metrics-meter-tick-thread* threads - either your webapp or some library - and stop that thread before you shut down the application.
See the Tomcat wiki link, which explains how this creates a memory leak. It also explains how uncleaned ThreadLocal variables cause memory leaks.
Threads spawned by webapps
If a webapp creates a thread, by default
its context classloader is set to the one of the parent thread (the
thread that created the new thread). In a webapp, this parent thread
is one of tomcat worker threads, whose context classloader is set to
the webapp classloader when it executes webapp code.
Furthermore, the spawned thread may be executing (or blocked in) some
code that involves classes loaded by the webapp, thus preventing the
webapp classloader from being collected.
So, if the spawned thread is not properly terminated when the
application is stopped, the webapp classloader will leak because of
the strong reference held by the spawned thread.
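One way to avoid this in your own code (a hedged sketch; the names are invented) is never to start raw threads from the webapp, but to own an ExecutorService whose lifecycle is tied to the servlet context, so its threads are stopped on undeploy:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Hypothetical listener: gives the webapp a thread pool that is created on deploy
// and shut down on undeploy, so no webapp-started thread survives a stop/redeploy.
public class WorkerPoolListener implements ServletContextListener {

    public static final String ATTR = "app.workerPool";

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        sce.getServletContext().setAttribute(ATTR, pool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ExecutorService pool =
                (ExecutorService) sce.getServletContext().getAttribute(ATTR);
        if (pool != null) {
            pool.shutdown();                       // stop accepting new tasks
            try {
                if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
                    pool.shutdownNow();            // interrupt anything still running
                }
            } catch (InterruptedException e) {
                pool.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
    }
}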
I've got an application leaking Java heap at a decent rate (with 400 users, only 25% is free after 2 hours; after logoff all memory is restored), and we've identified the items causing the memory leak as Strings placed in the session that appear to be generated by Portal itself. The values are encoded Portal URIs (very long encoded strings, usually around 19 KB), and the keys seem to be seven randomly generated characters prefixed by RES# (for example, RES#NhhEY37).
We've stepped through the application using session tracing and taking heap dumps, and determined that one of these objects is created and added to the session on almost every page; in fact, it seems to happen on each page that submits data (which is most pages). So it's either 1:1 with pages in general, or 1:1 with forms.
Has anyone encountered a similar problem? We are opening a ticket with IBM, but wanted to ask this community as well. Thanks in advance!
Could it be the portlet cache? You could have servlet caching activated and declare a long portlet expiry time. Quoting from the techjournal:
Portlets can advertise their ability to be cached in the fragment cache by setting their expiry time in their portlet.xml descriptor (see Portlet descriptor example)
<!-- Expiration value is in seconds, -1 = no time limit, 0 = deactivated -->
<expiration-cache>3600</expiration-cache> <!-- 1 hour cache -->
To use the fragment caching functions, servlet caching needs to be activated in the Web Container section of the WebSphere Application Server administrative console. WebSphere Application Server also provides a cache monitor enterprise application (CacheMonitor.ear), which is very useful for visualizing the contents of the fragment cache.
Update
Do you have portlets that set EXPIRATION_CACHE? Quote:
Modifying the local cache at runtime
For standard portlets, the portlet window can modify the expiration time at runtime by setting the EXPIRATION_CACHE property in the RenderResponse, as follows:
renderResponse.setProperty(
    PortletResponse.EXPIRATION_CACHE,
    (new Integer(3000)).toString());
Note that for me the values are a bit counter-intuitive: -1 means never expire, 0 means don't cache.
The actual issue turned out to be a working feature within Portal: specifically, Portal's action protection, which prevents the same action from being submitted twice while preserving the portal's navigation. There is a cache that retains the action results of every successful action and uses them to detect and reject duplicates.
The issue for us was that we required "longer than normal" user sessions (60+ minutes), and with 1,000+ concurrent users we leaked out through this protection mechanism after just a couple of hours.
IBM recommended that we just shut off the cache entirely using the following portlet.xml configuration entry:
wps.multiple.action.execution = true
This allows double submits, which may or may not harm business functionality. However, our internal Portal framework already contained a mechanism to prevent double submits, so this was not an issue for us.
At our request, IBM came back with a patch for this issue that makes the cache customizable, that is, it lets you configure the number of action results stored in the cache for each user, so you can use Portal's mechanism again at a reduced session overhead. The portal configuration settings were:
wps.multiple.action.cache.bound.enabled = true
wps.multiple.action.cache.key.maxsize = 40
wps.multiple.action.cache.value.maxsize = 10
You'll need to contact IBM about this patch as it is not currently in a released fixpack.
Is your WebSphere Portal Server running the latest fix pack?
http://www-01.ibm.com/support/docview.wss?uid=swg24024380&rs=0&cs=utf-8&context=SSHRKX&dc=D420&loc=en_US&lang=en&cc=US
You may also be interested in the following discussion:
http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14427700&tstart=0
Update:
Just throwing some blindfolded darts:
"RES#" sounds like "resource" to me.
From the forum stack trace, "DefaultActionResultManager.storeDocument" indicates it is storing the document.
So it looks like your resources (generated portal pages) are being cached. Check whether there is a parameter that can limit the cache size for resources.
Also, in another test, set the cache expiration to 5 minutes instead of an hour.