We are getting an OOME (OutOfMemoryError) in our Java Web Start application.
When analyzing the heap in JVisualVM we see a large increase in memory usage over a short period.
The memory consumption pattern is more or less the same every time the application runs.
I have imported the heap dump to Eclipse MAT.
I can see that most of the heap is occupied by DeploymentRuleSets
The internal drs map of the DeploymentRuleSet class contains tens of thousands of entries, all referring to the same jar.
I've investigated at least 50 of these HashMap entries and they contain the same data. Screenshot attached - I have removed some sensitive company data, but I really triple-checked that the values are the same. I've pasted screenshots of just two RuleId objects, but you get the picture.
Here are my questions/doubts:
The obvious one: has anyone encountered anything like this?
Looking at the RuleId class, it implements neither equals nor hashCode. It is therefore expected that duplicate entries will be inserted into the HashMap, since the keys are compared by reference.
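Just to illustrate the mechanism, here is a minimal sketch of that effect - the RuleId stand-in and its field are hypothetical, not the real deploy.jar class:

```java
import java.util.HashMap;
import java.util.Map;

public class DuplicateKeyDemo {

    // Hypothetical stand-in for RuleId: no equals()/hashCode() overrides,
    // so HashMap falls back to identity (reference) comparison for keys.
    static final class RuleId {
        final String jarLocation;
        RuleId(String jarLocation) { this.jarLocation = jarLocation; }
    }

    public static void main(String[] args) {
        Map<RuleId, String> drs = new HashMap<>();
        for (int i = 0; i < 3; i++) {
            // Logically the same key every time, yet each put adds a new entry.
            drs.put(new RuleId("lib/app.jar"), "rule set entry");
        }
        System.out.println(drs.size()); // prints 3, not 1
    }
}
```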
However, I've analyzed heap dumps from other Java Web Start applications within our company, and none of them have the same problem: their drs maps contain only a couple of hundred entries, each for a unique jar from the classpath - exactly how I would expect it to work.
Therefore I doubt this is a "core Java bug"; it looks like a problem specific to our app.
Do you have an idea why the same jar would be inserted into the drsMap over and over again?
I've attached a remote debugger to our Java Web Start app, imported the core "deploy.jar" into the classpath, and put breakpoints in some of the DeploymentRuleSet methods. Sadly, none of them were hit even though DeploymentRuleSet keeps growing in memory. Is there any way to debug core Java code like this? It would be really helpful to see when and why those DRS entries are created.
Related
My Java app is experiencing a memory leak which I am trying to tackle. However, depending on which mode I select during the profiling run, I get opposite results. Consequently I am not sure the memory leak I targeted has been solved, especially since an "OutOfMemoryError: Java heap space" recently showed up in production.
Visual Studio Code has a guide explaining the difference between the two modes (Sampling vs. Instrumentation, also called a Tracing Profiler), and there is also this Ninjas' guide about the same topic for VisualVM. I understand that with the instrumented mode the overhead is much higher, but this mode is required (at least in the NetBeans profiler) to get the list of the objects with the highest number of "surviving generations", which is the indicator to watch in order to judge the effectiveness of counter-measures against a memory leak.
Thanks to the OOM stack trace I have a clue which method caused the memory leak and so I was able to isolate it in a unit test.
My unit test starts with @RunWith(JfxTestRunner.class) to avoid "toolkit not initialized" errors when something is sent to the JavaFX Application Thread (i.e. when something is shown to the user in the GUI app).
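For context, here is a minimal sketch of what such a test might look like - JfxTestRunner is the runner mentioned above, while the SolrJ 6+ client setup, core URL and field name are assumptions for illustration only:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.junit.Test;
import org.junit.runner.RunWith;

import java.util.ArrayList;
import java.util.List;

@RunWith(JfxTestRunner.class) // initializes the JavaFX toolkit for the test
public class IndexReadingMemoryTest {

    @Test
    public void readWholeIndex() throws Exception {
        List<String> titles = new ArrayList<>();
        // Hypothetical Solr core URL and field name.
        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/core1").build()) {
            SolrQuery query = new SolrQuery("*:*").setRows(1000);
            for (SolrDocument doc : solr.query(query).getResults()) {
                titles.add((String) doc.getFieldValue("title"));
            }
        }
        // run assertions / let the profiler observe the heap here
    }
}
```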
Now if I launch this unit test (a loop that reads the content of a Solr index and populates a List based on this content) and profile it in sampling mode, I get the following graph:
This makes me think the problem is solved since the heap used increases during the index reading and decreases afterwards.
But now if I run the very same test in instrumented mode with all classes (see "*" below), I get a very different memory shape:
This time the used heap keeps increasing over time as if there were still a memory leak.
I first used the instrumented mode in order to find the objects that kept surviving garbage collections, and then, without paying attention, I switched to sampling mode and thought the problem had been solved. Now I am quite unsure.
Consequently I wonder which mode (Sampled / Instrumented) I should base my assessment on, and why such differences appear (the memory footprint is higher in instrumented mode, and the surviving-generation counts are roughly eightfold greater in this mode).
Any help appreciated,
Kind Regards
I have created an application using Vaadin, with its corresponding UI.
I am running it on a server with a maximum heap of 250 MB. The application crashes because of heap exhaustion, since the memory is not being garbage collected.
I analyzed it with VisualVM and found a lot of instances; somehow the Vaadin ScssCache seems to be making this mess.
How can I rectify this? Is it because of browser cache settings, or should I do something with the VaadinServlet cache entry?
I really do not understand, please help. I have attached my VisualVM screenshot for reference. Thank you very much. I am using Vaadin 7.6.3.
The attached VisualVM screenshot shows that the entire scssCache retains 1,248 KB of memory, out of which 1,200 KB is used for the actual cached CSS contents. This is less than 1% of your 250 MB heap and most likely not the problem.
That 1,200 KB char[] with the compiled CSS might be the biggest individual object on the heap, but there's only one such object. You will thus have to look for something else that consumes lots of memory. I'd recommend looking at the list of classes sorted by retained size, ignoring low-level classes such as char[], java.lang.String or java.util.HashMap, and instead trying to pinpoint anything related to your own application.
I would also encourage you to verify that your application is actually running in production mode since the only code path that I can identify that does anything with scssCache is through VaadinServlet.serveOnTheFlyCompiledScss which checks whether production mode is enabled and in that case returns before touching the cache.
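As a quick check, this is a minimal sketch of enabling production mode via the Vaadin 7 servlet annotations - the servlet and UI class names here are placeholders, not from your application:

```java
import javax.servlet.annotation.WebServlet;

import com.vaadin.annotations.VaadinServletConfiguration;
import com.vaadin.server.VaadinServlet;

// Placeholder servlet declaration; the relevant part is productionMode = true,
// which makes VaadinServlet skip the on-the-fly SCSS compilation path.
@WebServlet(urlPatterns = "/*", asyncSupported = true)
@VaadinServletConfiguration(productionMode = true, ui = MyUI.class)
public class MyServlet extends VaadinServlet {
}
```

The same setting can also be supplied as the productionMode init parameter in web.xml if you configure the servlet there instead.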
An application I have uses Java agents that need large jar libraries (the biggest one is PDFBox - 11 MB all in all). They ran for 3 years without any issue with the jars in jvm/lib/ext.
During an upgrade to Domino 9.0.1FP6, the administrator forgot to reinstall the jars in jvm/lib/ext - with obvious repercussions. (It's quite an annoyance that IBM sometimes just replaces the whole JVM without being gentle to the jars.)
Upon request, I changed the code by including the jars directly into the Java Agents. Things worked well for 2-3 days, and now we're getting OutOfMemory errors.
As far as I understand it, the jars get loaded onto the Java Heap when the agents get started, but the garbage collection is working slower than the continuous loading of the jars into the heap. I couldn't find any precise documentation by IBM on this matter.
We've increased JavaMaxHeapSize in the notes.ini of the servers but that didn't bring the expected results.
I'm dismissing the possibility that I have forgotten a recycle() in my code, because it ran beforehand for three years with no memory leaks.
I have thought about running a separate agent that checks total memory usage and then calls System.gc(), but I'm not convinced, since I have no guarantee that the garbage collector will actually run.
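For reference, a minimal sketch of that watchdog idea (the 80% threshold is arbitrary, and as noted, System.gc() is only a hint, so this is not a real fix for a leak):

```java
public class MemoryWatchdog {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        long max = rt.maxMemory();
        System.out.println("Used " + (used >> 20) + " MB of " + (max >> 20) + " MB");

        if (used > max * 0.8) {   // arbitrary 80% threshold
            System.gc();          // request a collection; the JVM may ignore it,
                                  // and it cannot free objects that are still referenced
        }
    }
}
```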
Apart from the obvious move of putting back the jars in jvm/lib/ext, is there an alternative that I haven't considered?
And is there any documentation anywhere about how these classes get loaded onto the heap, and whether it's possible that the jars are erroneously not being recognized as garbage-collectible?
It's a memory leak bug - see http://www-01.ibm.com/support/docview.wss?uid=swg1LO49880 for details.
You need to go back to placing the jar files in jvm/lib/ext.
I am currently developing an application that processes several files, each containing around 75,000 records (stored in binary format). When this app is run (manually, about once a month), the files contain about 1 million records in total. The files are put in a folder, you click Process, and the app stores them in a MySQL database (table_1).
The records contain information that needs to be compared to another table (table_2) containing over 700k records.
I have gone about this a few ways:
METHOD 1: Import Now, Process Later
In this method, I would import the data into the database without any processing against the other table. However, when I wanted to run a report on the collected data, it would crash with what looks like a memory leak (about 1 GB used in total before the crash).
METHOD 2: Import Now, Use MySQL to Process
This is what I would like to do, but in practice it didn't turn out so well. Here I would write the logic for finding the correlations between table_1 and table_2 in SQL. However, the MySQL result set is massive and I couldn't get consistent output, with MySQL sometimes giving up entirely.
METHOD 3: Import Now, Process Now
I am currently trying this method, and although the memory leak is more subtle, it still only gets to about 200,000 records before crashing. I have tried numerous forced garbage collections along the way, properly disposing of objects, etc. It seems something is fighting me.
I am at my wits' end trying to solve the memory leak / app crash. I am no expert in Java and have yet to really deal with very large amounts of data in MySQL. Any guidance would be extremely helpful. I have put thought into these approaches:
Break the processing of each line into its own class, hopefully releasing any memory used for that line
Some sort of stored routine where, once a line is stored in the database, MySQL does the table_1 <=> table_2 computation and stores the result
But I would like to pose the question to the many skilled Stack Overflow members to learn properly how this should be handled.
I concur with the answers that say "use a profiler".
But I'd just like to point out a couple of misconceptions in your question:
The storage leak is not due to massive data processing. It is due to a bug. The "massiveness" simply makes the symptoms more apparent.
Running the garbage collector won't cure a storage leak. The JVM always runs a full garbage collection immediately before it decides to give up and throw an OOME.
It is difficult to give advice on what might actually be causing the storage leak without more information on what you are trying to do and how you are doing it.
The learning curve for a profiler like VisualVM is pretty small. With luck, you'll have an answer - or at least a very big clue - within an hour or so.
You properly handle this situation by either:
generating a heap dump when the app crashes and analyzing it in a good memory profiler, or
hooking the running app up to a good memory profiler and looking at the heap.
I personally prefer YourKit (yjp), but there are some decent free tools as well (e.g. jvisualvm and NetBeans).
Without knowing too much about what you're doing: if you're running out of memory, there's likely some point where you're holding everything in the JVM at once, but you should be able to do a data processing task like this without the severe memory problems you're experiencing. In the past, I've seen data processing pipelines that run out of memory because one class reads everything out of the db, wraps it all up in a nice collection, and then passes it off to another, which of course requires all of the data to be in memory simultaneously. Frameworks are good at hiding this sort of thing.
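To make that concrete, here is a rough sketch of streaming rows with plain JDBC and writing results in batches instead of materializing whole tables in collections. The connection URL, table and column names are made up, and the Integer.MIN_VALUE fetch size is the MySQL Connector/J convention for row-by-row streaming (which is why a separate connection is used for the writes):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingCompare {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost/mydb"; // placeholder connection details
        try (Connection readCon = DriverManager.getConnection(url, "user", "password");
             Connection writeCon = DriverManager.getConnection(url, "user", "password");
             Statement read = readCon.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
             PreparedStatement write = writeCon.prepareStatement(
                     "INSERT INTO results (record_id, match_flag) VALUES (?, ?)")) {

            read.setFetchSize(Integer.MIN_VALUE); // MySQL driver: stream rows, don't buffer them all
            int pending = 0;
            try (ResultSet rs = read.executeQuery("SELECT id, key_col FROM table_1")) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    // ... compare this single row against table_2 here ...
                    write.setLong(1, id);
                    write.setBoolean(2, true); // placeholder for the computed result
                    write.addBatch();
                    if (++pending % 1000 == 0) {
                        write.executeBatch(); // flush periodically so memory stays flat
                    }
                }
            }
            write.executeBatch(); // flush the remainder
        }
    }
}
```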
Heap dumps and digging with VisualVM haven't been terribly helpful for me, as the details I'm looking for are often hidden - e.g. if you've got a ton of memory filled with maps of strings, it doesn't really help to be told that Strings are the largest component of your memory usage; you really need to know who owns them.
Can you post more detail about the actual problem you're trying to solve?
We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every 3 weeks, after which a server restart is needed.
Upon analysis of the heap dumps we find the problem to be objects used in JSPs.
Can JSP objects be the real cause of appserver memory issues? How do we free up JSP objects (objects which are being instantiated using useBean or other tags)?
We have a clustered Websphere appserver with 2 nodes and an IHS.
EDIT: The findings above are based on the heap-dump and nativestderr log analysis given below using the IBM support assistant
nativestderr log analysis:
(chart: http://saregos.com/wp-content/uploads/2010/03/chart.jpg)
Heap dump analysis:
(screenshot)
Heap dump analysis showing the immediate dominators (two levels up from the Hashtable entry in the image above):
(screenshot)
The last image shows that the immediate dominators are in fact objects being used in JSPs.
EDIT2: More info available at http://saregos.com/?p=43
I'd first attach a profiling tool to tell you what these "objects" are that are taking up all the memory.
Eclipse has TPTP, or there is JProfiler or JProbe.
Any of these should show the object heap creeping up and allow you to inspect it to see what is on the heap.
Then search the code base to find who is creating these.
Maybe you have a cache or tree/map object holding elements where you have only implemented the equals() method on these objects and you also need to implement hashCode().
This would then result in the map/cache/tree getting bigger and bigger till it falls over.
This is only a guess though.
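If that guess applies, here is a minimal sketch of a consistent equals()/hashCode() pair (the class and field names are made up for illustration):

```java
import java.util.Objects;

public final class CacheKey {
    private final String name;
    private final int version;

    public CacheKey(String name, int version) {
        this.name = name;
        this.version = version;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CacheKey)) return false;
        CacheKey other = (CacheKey) o;
        return version == other.version && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        // Must be consistent with equals, or HashMap/HashSet lookups will miss
        // existing entries and the collection will keep growing.
        return Objects.hash(name, version);
    }
}
```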
JProfiler would be my first call
JavaWorld has an example screenshot of what is in memory (source: javaworld.com), and a screenshot of the object heap building up and being cleaned up, hence the sawtooth shape (source: javaworld.com).
UPDATE:
Ok, I'd look at...
http://www-01.ibm.com/support/docview.wss?uid=swg1PK38940
Heap usage increases over time, which leads to an OutOfMemory condition. Analysis of a heap dump shows that the following objects are taking up an increasing amount of space:

40,543,128 [304] 47 class com/ibm/wsspi/rasdiag/DiagnosticConfigHome
40,539,056 [56] 2 java/util/Hashtable 0xa8089170
40,539,000 [2,064] 511 array of java/util/Hashtable$Entry
6,300,888 [40] 3 java/util/Hashtable$HashtableCacheHashEntry
Triggering the garbage collection manually doesn't solve your problem - it won't free resources that are still in use.
You should use a profiling tool (like JProfiler) to find your leaks. You probably have code that stores references in lists or maps that are never released at runtime - probably static references.
If you run under the Sun Java 6 JVM, strongly consider using the jvisualvm program in the JDK to get an initial overview of what actually goes on inside the program. The snapshot comparison is really good at helping you narrow down which objects sneak in.
If the Sun 6 JVM is not an option, investigate which profiling tools you have available. Trial versions can get you really far.
It can be something as simple as gigantic character arrays underlying a substring you are collecting in a list, e.g. for housekeeping.
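As an illustration of that substring case: on older JVMs (before Java 7u6), String.substring shared the parent string's backing char[], so collecting many small substrings could pin huge arrays. A sketch, assuming one of those older JVMs (the sizes and helper method are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class SubstringRetentionDemo {
    public static void main(String[] args) {
        List<String> keys = new ArrayList<String>();
        String hugeLine = buildHugeLine(); // stand-in for a multi-megabyte line read from a file

        // On pre-7u6 JVMs this substring keeps a reference to hugeLine's entire char[],
        // so the whole huge array stays reachable through this tiny string.
        String shared = hugeLine.substring(0, 8);

        // Copying the characters detaches the small key from the huge backing array,
        // letting that array be garbage collected once hugeLine goes out of scope.
        String detached = new String(hugeLine.substring(0, 8));
        keys.add(detached);
    }

    private static String buildHugeLine() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000000; i++) {
            sb.append('x');
        }
        return sb.toString();
    }
}
```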
I suggest reading Effective Java, chapter 2. Following it, together with a profiler, will help you identify the places where your application produces memory leaks.
Freeing up memory isn't the way to solve extensive memory consumption. Extensive memory consumption may be the result of two things:
not properly written code - the solution is to write it properly, so that it does not consume more than is needed; Effective Java will help here.
the application simply needing that much memory - then you should increase the VM memory using -Xmx, -Xms, -XX:MaxHeapSize, ...
There is no specific way to free up objects allocated in JSPs, at least as far as I know. Rather than investigating such options, I'd focus on finding the actual problem in your application code and fixing it.
Some hints that might help:
Check the scope of your beans. Aren't you e.g. storing something user- or request-specific in "application" scope by mistake?
Check the web session timeout settings in your web application and your appserver settings.
You mentioned that the heap consumption grows gradually. If that is indeed so, try to see by how much the heap grows under various user scenarios: grab a heap dump, run a test, let the session data time out, grab another dump, and compare the two. That might give you some idea where the objects on the heap come from.
Check your beans for any obvious memory leaks, for sure :)
EDIT: Checking for unreleased static resources that Daniel mentions is another worthwhile thing :)
As I understand it, those top-level memory-eaters are a cache and the objects stored in it. You should probably make sure that your cache frees objects when it takes up too much memory. You may want to use weak references if you only need to cache entries for objects that are still live elsewhere.
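If the weak-reference route fits, here is a minimal sketch using java.util.WeakHashMap - the key/value types and sizes are arbitrary, and how quickly entries disappear depends on the garbage collector:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    // Entries are dropped once the key has no other strong references, so the
    // cache cannot by itself keep the heap growing indefinitely. Note that the
    // values must not strongly reference their keys, or entries never clear.
    private static final Map<Object, byte[]> CACHE =
            Collections.synchronizedMap(new WeakHashMap<Object, byte[]>());

    public static void main(String[] args) {
        Object key = new Object();
        CACHE.put(key, new byte[1024 * 1024]);
        System.out.println("before: " + CACHE.size()); // 1

        key = null;   // drop the only strong reference to the key
        System.gc();  // hint; on most JVMs the weak entry is cleared soon after
        System.out.println("after: " + CACHE.size());  // typically 0
    }
}
```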