How to recover from OutOfMemoryError in a Maven-based Struts2 application - java

I hosted my web application on a Windows server. It suffered an OutOfMemoryError once every three days. When I restart Tomcat it works fine for the next three days and then fails again. I googled for a solution; some people say to increase the PermGen space, but that only postponed the error to every six days. Now my web app throws an OutOfMemoryError once every six days. I also performed some code optimization. For example:
`String s = "example";`
This is a string literal. The garbage collector does not collect string literals, so I changed literals to objects in all places and also removed some unnecessary code. But I still suffer from the same error.
How can I prevent OutOfMemoryError permanently?
Any help will be greatly appreciated!

Obviously you have a memory leak somewhere in your application. Look for objects that are created repeatedly, such as Statement or PreparedStatement objects, and never closed (I have had this issue). Also check whether any object created in a servlet is never disposed of. Please provide the stack trace, along with the file containing the code that caused the exception, for more specific help.
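For example, a common leak is a PreparedStatement (and its ResultSet) that is only closed on the happy path, or not at all, inside a frequently called DAO method. A minimal sketch of the safer try-with-resources form (Java 7+; the class, table and column names here are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderDao {
    // Hypothetical query; the point is that the statement and result set
    // are closed automatically even if an exception is thrown.
    public int countOrders(Connection connection, String customerId) throws SQLException {
        String sql = "SELECT COUNT(*) FROM orders WHERE customer_id = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }
}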

Related

How to overcome the heapdump created by Vaadin ScssCache?

I have created an application using Vaadin with its respective UI.
I am running it on a server with a maximum heap of 250 MB. The application crashes because of the heap load, since it is not garbage collected.
I tried analyzing it with VisualVM and found a lot of instances; somehow the Vaadin ScssCache seems to be making this mess.
How can I rectify this error? Is it because of the browser cache settings, or should I do something with the vaadinservletcache entry?
I really do not understand, please help. I have attached my VisualVM screenshot for reference. Thank you very much. I am using Vaadin 7.6.3.
The attached VisualVM screenshot shows that the entire scssCache retains 1248 KB of memory, of which 1200 KB is used for the actual cached CSS contents. This is less than 1% of your 250 MB heap and most likely not the problem.
That 1200 KB char[] with the compiled CSS might be the biggest individual object on the heap, but there is only one such object. You will therefore have to look for something else that consumes lots of memory. I'd recommend looking at the list of classes sorted by their retained size, ignoring low-level classes such as char[], java.lang.String or java.util.HashMap, and instead trying to pinpoint anything related to your own application.
I would also encourage you to verify that your application is actually running in production mode, since the only code path I can identify that does anything with scssCache is VaadinServlet.serveOnTheFlyCompiledScss, which checks whether production mode is enabled and in that case returns before touching the cache.
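If it helps, in Vaadin 7 production mode is typically enabled either through the productionMode parameter in web.xml or directly on the servlet class. A minimal sketch of the annotation approach (MyUI stands for your application's existing UI class; note that in production mode the theme SCSS must be precompiled to CSS at build time):

import com.vaadin.annotations.VaadinServletConfiguration;
import com.vaadin.server.VaadinServlet;
import javax.servlet.annotation.WebServlet;

// Hypothetical servlet declaration; with productionMode = true, VaadinServlet
// skips the on-the-fly SCSS compilation path (and therefore the scssCache).
@WebServlet(urlPatterns = "/*", asyncSupported = true)
@VaadinServletConfiguration(ui = MyUI.class, productionMode = true)
public class MyVaadinServlet extends VaadinServlet {
}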

DeploymentRuleSet occupying most of the heap space

We are getting an OOME in our Java Web Start application.
When analyzing the heap size in JVisualVM we see a large increase in memory over a short period.
The following memory consumption is more or less the same every time the application is run.
I have imported the heap dump to Eclipse MAT.
I can see that most of the heap is occupied by DeploymentRuleSets.
The internal drs map of the DeploymentRuleSet class contains tens of thousands of entries, all containing the same jar.
I've investigated at least 50 of these HashMap entries. They contain the same data. Screenshot attached - I have removed some sensitive company data, but I really triple-checked that the values are the same. I've pasted screenshots of just two RuleId objects, but you get the picture.
Here are my questions/doubts:
The obvious one: has anyone encountered anything like this?
Looking at the RuleId class, it has neither equals nor hashCode implemented. It is therefore expected that duplicate entries will be inserted into the HashMap, since it compares the keys by reference (a minimal sketch below illustrates this).
However, I've analyzed other heap dumps within our company for other Java Web Start applications. None of them have this problem: the drs map contains only a couple of hundred entries, and only unique jars from the classpath, which is exactly how I would expect it to work.
Therefore I doubt that this is a core Java bug; it is more likely a problem specific to our app.
Do you have an idea why the same jar would be inserted into the drsMap over and over again?
I've attached a remote debugger to our Java Web Start app, imported the core Java deploy.jar to the classpath, and put breakpoints in some of the DeploymentRuleSet methods. Sadly none of them were hit, even though DeploymentRuleSet keeps growing in memory. Is there any way to debug core Java code like this? It would be really helpful to see when and why those DRS entries are created.
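To illustrate the equals/hashCode point above, here is a minimal, self-contained sketch (FakeRuleId and its field are made up for the example) of how a HashMap key class without equals/hashCode accumulates duplicate entries for logically identical values:

import java.util.HashMap;
import java.util.Map;

public class DuplicateKeyDemo {
    // Hypothetical stand-in for RuleId: no equals()/hashCode(), so two
    // instances with identical field values are distinct keys to a HashMap.
    static class FakeRuleId {
        final String jarLocation;
        FakeRuleId(String jarLocation) { this.jarLocation = jarLocation; }
    }

    public static void main(String[] args) {
        Map<FakeRuleId, String> drs = new HashMap<>();
        for (int i = 0; i < 10000; i++) {
            // Same logical key every time, but a new object reference each time,
            // so the map keeps growing instead of overwriting a single entry.
            drs.put(new FakeRuleId("http://example.com/app/same.jar"), "rule");
        }
        System.out.println(drs.size()); // prints 10000, not 1
    }
}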

What are the main possible causes for getting OutOfMemoryError?

Today while working I encountered this problem in one of our applications.
The scenario: we have a Load button, and clicking it loads lakhs (hundreds of thousands) of records from the database. The issue is that when reloading the same records by clicking a Refresh button, it throws an OutOfMemoryError. Can anyone briefly explain what the possible cause might be? On the first attempt it loads all the records fine, so why do we get the exception when refreshing?
Any good resource for studying this scenario would also help a lot.
Thanks in advance...
The only reason is that you are constantly creating new objects without freeing enough, or you are creating too many threads.
You can use Java VisualVM's profiler to do some memory profiling. This gives you an overview of which objects are in memory and which other objects or threads hold references to them.
The Java VisualVM should be part of Sun's JDK.
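As an illustration of the first point, a very common shape for this kind of leak is a long-lived (for example static or session-scoped) collection that grows on every Load/Refresh click and is never cleared. A hypothetical sketch, with made-up class and method names:

import java.util.ArrayList;
import java.util.List;

public class RecordCache {
    // Application-lifetime list: every refresh appends a fresh copy of all
    // records and nothing is ever removed, so the heap only grows.
    private static final List<Object> loadedRecords = new ArrayList<>();

    public static void onRefresh(List<Object> recordsFromDb) {
        loadedRecords.addAll(recordsFromDb); // leak: old records are never released
    }

    // One possible fix: replace the contents instead of accumulating them.
    public static void onRefreshFixed(List<Object> recordsFromDb) {
        loadedRecords.clear();
        loadedRecords.addAll(recordsFromDb);
    }
}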
See also:
How to identify the issue when Java OutOfMemoryError?
What is an OutOfMemoryError and how do I debug and fix it
I came to know that our application does a lot of thread processing. I tried reducing the thread stack size on the server by using the option below, and it worked.
-Xss512k (sets the Java thread stack size to this value)
Here is the resource I used to resolve this issue.
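For context, each Java thread reserves its own stack, so a large number of threads multiplied by the default stack size (often around 1 MB) can exhaust memory; -Xss512k lowers that per-thread reservation for all threads. Java also exposes a per-thread stack-size hint through one of the Thread constructors, which the JVM is free to ignore. A minimal sketch with made-up names:

public class ManyThreads {
    public static void main(String[] args) {
        Runnable work = new Runnable() {
            @Override
            public void run() {
                // placeholder workload
            }
        };
        ThreadGroup group = Thread.currentThread().getThreadGroup();
        for (int i = 0; i < 1000; i++) {
            // The fourth constructor argument is a stack-size hint in bytes
            // (~512 KB here), analogous to -Xss512k but per thread;
            // the JVM is allowed to ignore it.
            new Thread(group, work, "worker-" + i, 512 * 1024).start();
        }
    }
}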

Jade: java.lang.OutOfMemoryError: Java Heap Space

I've been using JADE (the Java Agent Development Framework) to create a network-based messaging system.
So far JADE was running without issues, but one fine day I got this message:
A JVM heap space error!
After investigating, I found that this is due to a collection variable that might be holding on to objects, occupying JVM heap space without ever flushing them out. (You can see that the exception is raised from the JADE side and not from my code.)
How can I remove this?
My code consists of a simple TickerBehaviour class as below:
import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.TickerBehaviour;
import jade.lang.acl.ACLMessage;

public class MyBehaviour extends TickerBehaviour {
    private final String username;

    public MyBehaviour(Agent agent, long periodMillis, String username) {
        super(agent, periodMillis);
        this.username = username;
    }

    @Override
    public void onTick() {
        // Runs on every tick (every second with a 1000 ms period).
        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        msg.setOntology(username);
        msg.addReceiver(new AID(username, AID.ISLOCALNAME));
        msg.setContent("<my intended message to that identifier>");
        myAgent.send(msg); // myAgent is the Agent field inherited from Behaviour
    }
}
I further checked whether my code was creating unnecessary referenced objects (by commenting out the code that generates my intended message). I went as far as removing all my functionality and just running the JADE agent, and surprisingly the JADE task itself is creating this issue.
I used VisualVM to inspect the heap and watch live object creation, to check how many referenced objects are still held in the JVM heap.
Older solutions aren't helping much either. Can anyone help me tackle this issue?
I have used the options recommended when starting the JADE container, but there are still referenced objects present that aren't being removed by GC.
System Setup:
OS: Linux 64-bit.
JVM Version: IcedTea, 1.6.0.27 , 64-bit.
JVM Options: -Xms1024m, -Xmx2048m and -XX:MaxPermSize=512M
Thank you in advance.
How can I remove this?
You seem to have investigated this as a memory leak, and concluded that the leak is in Jade.
If that is the case, then the first thing to do is to trawl the Jade mailing lists and bug tracker to see if this is a known problem, and if there is a known fix or workaround.
If that fails, you've got three choices:
Investigate further and track down the cause of the memory leak, and develop a fix for it. If the fix is general, contribute it back to the Jade team.
Report the bug on the Jade bug tracker and hope that this results in a fix ... eventually.
Bandaid. Run your application with a larger heap, and restart it whenever you get an OOME.
The other possibility is that the memory leak is in, or is caused by, your code. For instance, you say:
After investigating, I found that this is due to a collection variable that might be holding on to objects, occupying JVM heap space without ever flushing them out. (You can see that the exception is raised from the JADE side and not from my code.)
This is not watertight evidence that the problem is in Jade code. All it means is that you were executing a Jade method when the memory finally ran out. I'd advise you to download the Jade source code and investigate this (supposed) memory leak further. Figure out exactly what really causes it rather than basing your diagnosis on assumptions and faulty inferences.
Bear in mind that Jade is a stable product that many people are using successfully ... without memory leak issues.
One of the simplest things I can recommend is to use Plumbr. It is meant exactly for such cases. If Plumbr reports that the problem lies in Jade code, then you should submit a bug report to them. Otherwise it will help you spot the problem in your own application.
The problem was with another engine that was buffering objects for processing. JADE wasn't the culprit. I was using a common Esper engine and creating new objects for event processing from the data being parsed.
I'm investigating how to flush out those contents periodically without crashing the application.
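The general shape of such a fix, not specific to Esper, is to bound the buffer and hand a batch off for processing (then clear it) once a size limit is reached, rather than letting it grow indefinitely. A rough sketch with made-up names:

import java.util.ArrayList;
import java.util.List;

public class BoundedEventBuffer<T> {
    private final int maxSize;
    private final List<T> buffer = new ArrayList<>();

    public BoundedEventBuffer(int maxSize) {
        this.maxSize = maxSize;
    }

    // Add an event; once the buffer is full, hand the batch off for
    // processing and clear it so the references can be garbage collected.
    public synchronized void add(T event, BatchProcessor<T> processor) {
        buffer.add(event);
        if (buffer.size() >= maxSize) {
            processor.process(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    public interface BatchProcessor<T> {
        void process(List<T> batch);
    }
}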
Sorry for the trouble!

Java Memory Leak Due to Massive Data Processing

I am currently developing an application that processes several files, each containing around 75,000 records (stored in binary format). When the app is run (manually, about once a month), about 1 million records in total are contained in the files. The files are put in a folder, the user clicks Process, and the app stores the records in a MySQL database (table_1).
The records contain information that needs to be compared against another table (table_2) containing over 700k records.
I have gone about this a few ways:
METHOD 1: Import Now, Process Later
In this method, I would import the data into the database without any processing against the other table. However, when I wanted to run a report on the collected data, it would crash, presumably from a memory leak (1 GB used in total before the crash).
METHOD 2: Import Now, Use MySQL to Process
This is what I would like to do, but in practice it didn't turn out well. Here I would write the logic for finding the correlations between table_1 and table_2 in MySQL. However, the result set is massive and I couldn't get consistent output, with MySQL sometimes giving up.
METHOD 3: Import Now, Process Now
I am currently trying this method, and although the memory leak is subtler, it still only gets to about 200,000 records before crashing. I have tried numerous forced garbage collections along the way, destroying classes properly, etc. It seems something is fighting me.
I am at my wits' end trying to solve the memory leak and app crashes. I am no expert in Java and have not yet really dealt with very large amounts of data in MySQL. Any guidance would be extremely helpful. I have put thought into these approaches:
Breaking each line's processing into an individual class, hopefully releasing any memory used per line
Some sort of stored routine where, once a line is stored in the database, MySQL does the table_1 <=> table_2 computation and stores the result
But I would like to pose the question to the many skilled Stack Overflow members to learn how this should properly be handled.
I concur with the answers that say "use a profiler".
But I'd just like to point out a couple of misconceptions in your question:
The storage leak is not due to massive data processing. It is due to a bug. The "massiveness" simply makes the symptoms more apparent.
Running the garbage collector won't cure a storage leak. The JVM always runs a full garbage collection immediately before it decides to give up and throw an OOME.
It is difficult to give advice on what might actually be causing the storage leak without more information on what you are trying to do and how you are doing it.
The learning curve for a profiler like VisualVM is pretty small. With luck, you'll have an answer, or at least a very big clue, within an hour or so.
You handle this situation properly by either:
generating a heap dump when the app crashes and analyzing it in a good memory profiler, or
hooking up the running app to a good memory profiler and looking at the heap.
I personally prefer YJP, but there are some decent free tools as well (e.g. jvisualvm and NetBeans).
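On the first point, the usual way to get a heap dump at crash time is to start the JVM with -XX:+HeapDumpOnOutOfMemoryError (optionally -XX:HeapDumpPath=...), which writes an .hprof file when the OOME is thrown. On a HotSpot JVM a dump can also be triggered programmatically; a minimal sketch, with an arbitrary output file name:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes a heap dump to the given path on a HotSpot JVM.
    // The second argument limits the dump to live (reachable) objects.
    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("app-heap.hprof");
    }
}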
Without knowing too much about what you're doing: if you're running out of memory, there's likely some point where you're storing everything in the JVM, yet you should be able to do a data processing task like this without the severe memory problems you're experiencing. In the past, I've seen data processing pipelines that run out of memory because one class reads everything out of the DB, wraps it all up in a nice collection, and then passes it off to another class, which of course requires all of the data to be in memory simultaneously. Frameworks are good at hiding this sort of thing.
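To sketch the streaming alternative to that collect-everything pipeline: read table_1 row by row with a streaming result set and do the table_2 lookup per record, so only one record is held in memory at a time. This assumes MySQL Connector/J, which needs a separate connection for the lookups while a streaming result set is open; table and column names are made up:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StreamingProcessor {
    // Processes table_1 one row at a time instead of collecting all rows first.
    // With MySQL Connector/J, a fetch size of Integer.MIN_VALUE on a
    // TYPE_FORWARD_ONLY / CONCUR_READ_ONLY statement enables row-by-row streaming.
    public void process(Connection streamConn, Connection lookupConn) throws SQLException {
        String select = "SELECT id, payload FROM table_1";
        try (PreparedStatement ps = streamConn.prepareStatement(
                select, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            ps.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    compareWithTable2(lookupConn, rs.getLong("id"), rs.getString("payload"));
                }
            }
        }
    }

    // Hypothetical per-row comparison against table_2 (ideally backed by an index);
    // it uses a second connection because the streaming result set keeps the first busy.
    private void compareWithTable2(Connection conn, long id, String payload) throws SQLException {
        // ... look up the matching table_2 row and store the result ...
    }
}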
Heap dumps and digging with VisualVM haven't been terribly helpful for me, as the details I'm looking for are often hidden - e.g. if you've got a ton of memory filled with maps of strings, it doesn't really help to be told that Strings are the largest component of your memory usage; you sort of need to know who owns them.
Can you post more detail about the actual problem you're trying to solve?
