How to reduce JVM memory consumption at the startup of a Java application?

I'm working on a Java (Hibernate + Spring + JavaFX) application. To run it successfully, I have to set the VM option "-Xms512m"; otherwise it fails with the error below.
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [*path/*]:
Constructor threw exception; nested exception is java.lang.OutOfMemoryError: Java heap space.
I have observed that the session factory consumes 250+ MB to initialize.
Some hbm mapping files (hbm POJOs) consume 180 MB.
I also ran a NetBeans Profiler session to look for memory leaks.
Could you please suggest a few steps to reduce memory consumption at application startup?
What is the best approach to follow to reduce the VM's memory consumption?

I'm not sure of the exact reason for the insufficient heap space. To find it, you need to debug.
Steps to debug
Use the JVM arguments below to write a heap dump to a given location on an OutOfMemoryError. Note that the dump file can be heavy (up to a few GBs).
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=path_to_file
Load the dump file into any profiler and find the culprit. Below are links on using heap dumps:
heap dump using visualvm
heap dump tricks
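As a complement to the flags above, the same kind of dump can be triggered programmatically on a HotSpot JVM through the HotSpotDiagnosticMXBean. A minimal sketch (the output path is made up for illustration):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class HeapDumper {
    /** Writes a heap dump of the current JVM and returns its size in bytes. */
    static long dumpHeap(Path file, boolean liveOnly) throws IOException {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // dumpHeap refuses to overwrite: the target file must not exist yet,
        // and on recent JDKs the name must end in .hprof
        bean.dumpHeap(file.toString(), liveOnly);
        return Files.size(file);
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempDirectory("dumps").resolve("heap.hprof");
        // liveOnly=true dumps only reachable objects (the JVM runs a GC first)
        System.out.println("Wrote " + dumpHeap(file, true) + " bytes to " + file);
    }
}
```

This is handy for capturing a dump on demand without waiting for the OOM to happen.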

For the application to come up, the JVM needs a minimum amount of Java heap space for initialization; once it is up, the application may throw an OutOfMemoryError if memory runs short. There is no switch that simply reduces the memory the VM consumes: the JVM allocates memory whenever the application requests object allocation, and it is the garbage collector's duty to clean up dead objects.
The following options could help identify the leak:
1) -verbose:gc. This argument records GC occupancy behaviour. Once the system throws an OOM, you can load the logs into the Garbage Collection and Memory Visualizer tool to see the allocation pattern and the tuning recommendations the tool provides.
2) Collect a heap dump on OOM, load it into the Memory Analyzer Tool (MAT), and check the leak suspects report.

Solution:
Earlier I was mapping 3 database schemas with 3 session factories. Now I map them with a single session factory, which saved 110 MB.
To further improve performance, I integrated Ehcache.

Related

Memory Leak Suspects

In our team, we are using a service that has a spill-over problem. It is caused by long API latency, most of which is GC time. I then found that heap memory usage was very high. I took a heap dump of the service using jmap; it is about 4.4 GB. I parsed the heap dump with the Eclipse Memory Analyzer and found that 2.8 GB of it consists of unreachable objects.
Does anyone have suggestions on what I should do to debug this problem further?
Thank you.
If you have a heap dump from when it ran out of memory, I suggest using MAT to find any suspicious dominator trees, i.e. a narrow reference path to a large retained set.
It could be the same classes ending up in different class loaders, HTTP session retention (if it is a web application), or a badly behaved cache.
I suggest you start with the simple things first:
Take a quick look at which jars are being loaded, and from where.
Make sure class unloading is enabled (with CMS).
Use the Memory Analyzer tool on the heap dump to see exactly what is being retained and by whom.

WebSphere out of memory error

We use WebSphere Application Server for our application and we regularly get OutOfMemoryErrors. To debug this we added logging to check used memory at certain points; below is the observation.
The used memory does not decrease until it reaches the threshold limit. We use the following memory configuration:
InitialHeapSize="1024" maximumHeapSize="2048"
So until usage crosses 1024 MB, memory is not released. In the OOM case, memory is not released even though some threads are no longer in use.
I assumed the heap was not being released, but the Java Runtime API reports that memory is available. Ordinary Java operations such as method calls and string operations keep working, but a JNDI lookup fails with an OutOfMemoryError. As a result, the system fails because no connection is available.
Stack trace:
com.ibm.websphere.naming.CannotInstantiateObjectException: Exception occurred while the JNDI NamingManager was processing a javax.naming.Reference object. [Root exception is java.lang.OutOfMemoryError]
at com.ibm.ws.naming.util.Helpers.processSerializedObjectForLookupExt(Helpers.java:1033)
at com.ibm.ws.naming.util.Helpers.processSerializedObjectForLookup(Helpers.java:730)
Dynamo, you will have to perform a heap analysis to find out what is causing the OOM for you. There is free tooling that lets you find out what is causing the issue in the server. Maybe a rogue application is holding too much memory, or a resource is leaking memory, etc.
You can look at this for more information. Your initial and maximum heap settings are something you will want to tune: if the heap is too large, CPU usage will spike during each GC cycle; if it is too small, GC will run too frequently and add constant overhead.
https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=4544bafe-c7a2-455f-9d43-eb866ea60091
You need to generate a heap dump and a thread dump via wsadmin and analyze them for root causes.
There will be some differences depending on the platform and edition you are using, but there is built-in support for generating heap dumps:
See, for example:
http://www.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/tprf_enablingheapdump.html
Generally, you will either want to enable generation of heap dumps, force an OOM, and then use HeapAnalyzer to analyze the resulting heap dump; or you can manually generate heap dumps when large memory usage is observed.
Some caution: What may look like a memory leak may be a very large but transient memory use. A view of memory usage over time will be needed to conclude that there is an actual leak.
Regardless, the path for handling this sort of problem inevitably leads to generating a heap dump and doing analysis.

How to diagnose OutOfMemoryErrors in Tomcat 7?

I have several applications running in a Tomcat7 instance.
Once in a while, I notice that there are OutOfMemoryErrors in the log.
How can I find out which application (and ideally which class) causes them?
Update 1 (25.12.2014 11:44 MSK):
I changed something in the application (added a shutdown call to a Quartz scheduler, when the servlet context is destroyed), which may have caused memory leaks.
Now my memory consumption charts look as shown below.
Does any of them indicate memory leaks in the application?
If yes, which one?
There is good documentation about that: http://www.oracle.com/technetwork/java/javase/clopts-139448.html
Create a heap dump with the VM parameters described in the link above.
Analyze this heap dump, for example with the Memory Analyzer (https://eclipse.org/mat/).
An OOM can occur for many reasons:
1.) Memory leaks
2.) Allocation of a very large number of objects, etc.
An OOM is a common indication of a memory leak. Essentially, the error is thrown when there is insufficient space to allocate a new object.
A few of the exception messages:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
java.lang.OutOfMemoryError: request size bytes for reason. Out of swap space?
java.lang.OutOfMemoryError: (Native method)
More detailed information is available here and in the official documentation;
refer to this and this.
You need to analyze the heap dump / thread dumps, etc.
Detecting a memory leak
You can use jmap. It gives a snapshot of the Java process:
how many objects are in memory, along with their sizes.
jmap -histo <process_id>
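If attaching jmap from outside is inconvenient, a rough in-process alternative is the standard MemoryMXBean, which reports overall heap usage (though not the per-class histogram jmap gives). A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    /** Returns the current heap usage of this JVM. */
    static MemoryUsage heapUsage() {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        return bean.getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage heap = heapUsage();
        // used <= committed always holds; max may be -1 if undefined
        System.out.printf("used=%d committed=%d max=%d%n",
                heap.getUsed(), heap.getCommitted(), heap.getMax());
    }
}
```

Logging this periodically gives you the usage-over-time view needed to tell a leak from transient load.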

Exception in thread "http-8080-10" java.lang.OutOfMemoryError: Java

I have a Web application running on my 64-bit Windows Server 2003, Oracle 11G database and Apache Tomcat 6.0 Web Server.
The application is in a live environment with around 3000 users, and I encountered a Java heap OutOfMemoryError. After increasing the heap space it was resolved.
Now I am facing the same issue again; below is the error stack trace:
Exception in thread "http-8080-10" java.lang.OutOfMemoryError: Java
heap space Aug 23, 2013 8:48:00 PM com.SessionClunter
getActiveSessions Exception in thread "http-8080-11"
java.lang.OutOfMemoryError: Java heap space Exception in thread
"http-8080-4" Exception in thread "http-8080-7"
java.lang.OutOfMemoryError: Java heap space
Your problem could be caused by a few things (at a conceptual level):
You could simply have too many simultaneous users or user sessions.
You could be attempting to process too many user requests simultaneously.
You could be attempting to process requests that are too large (in some sense).
You could have a memory leak ... which could be related to some of the above issues, or could be unrelated.
There is no simple solution. (You've tried the only easy solution ... increasing the heap size ... and it hasn't worked.)
The first step in solving this is to change your JVM options so that it takes a heap dump when an OOME occurs. Then use a memory dump analyser to examine the dump and figure out which objects are using too much memory. That should give you evidence that will allow you to narrow down the possible causes ...
If you keep getting OutOfMemoryError no matter how much you increase the max heap, then your application probably has a memory leak, which you must solve by getting into the code and optimizing it. Short of that, you have no choice but to keep increasing the max heap for as long as you can.
You can look for memory leaks and optimize using completely free tools like this:
Create a heap dump of your application when it uses a lot of memory, but before it would crash, using jmap that is part of the Java installation used by your JVM container (= tomcat in your case):
# if your process id is 1234
jmap -dump:format=b,file=/var/tmp/dump.hprof 1234
Open the heap dump using the Eclipse Memory Analyzer (MAT)
MAT gives suggestions about potential memory leaks. Try to follow those.
Look at the histogram tab. It shows all the objects that were in memory at the time of the dump, grouped by their class. You can order by memory use and number of objects. When you have a memory leak, there are usually shockingly many instances of some class that clearly don't all make sense. I have often tracked down memory leaks based on that information alone.
Another useful free JVM monitoring tool is VisualVM. A non-free but very powerful tool is JProfiler.

How to debug Java OutOfMemory exceptions?

What is the best way to debug java.lang.OutOfMemoryError exceptions?
When this happens to our application, our app server (Weblogic) generates a heap dump file. Should we use the heap dump file? Should we generate a Java thread dump? What exactly is the difference?
Update: What is the best way to generate thread dumps? Is kill -3 (our app runs on Solaris) the best way to kill the app and generate a thread dump? Is there a way to generate the thread dump but not kill the app?
Analyzing and fixing out-of-memory errors in Java is very simple.
In Java the objects that occupy memory are all linked to some other objects, forming a giant tree. The idea is to find the largest branches of the tree, which will usually point to a memory leak situation (in Java, you leak memory not when you forget to delete an object, but when you forget to forget the object, i.e. you keep a reference to it somewhere).
Step 1. Enable heap dumps at run time
Run your process with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp
(It is safe to have these options always enabled. Adjust the path as needed, it must be writable by the java user)
Step 2. Reproduce the error
Let the application run until the OutOfMemoryError occurs.
The JVM will automatically write a file like java_pid12345.hprof.
Step 3. Fetch the dump
Copy java_pid12345.hprof to your PC (it will be at least as big as your maximum heap size, so can get quite big - gzip it if necessary).
Step 4. Open the dump file with IBM's Heap Analyzer or Eclipse's Memory Analyzer
The Heap Analyzer will present you with a tree of all objects that were alive at the time of the error.
Chances are it will point you directly at the problem when it opens.
Note: give HeapAnalyzer enough memory, since it needs to load your entire dump!
java -Xmx10g -jar ha456.jar
Step 5. Identify areas of largest heap use
Browse through the tree of objects and identify objects that are kept around unnecessarily.
Note it can also happen that all of the objects are necessary, which would mean you need a larger heap. Size and tune the heap appropriately.
Step 6. Fix your code
Make sure to only keep objects around that you actually need. Remove items from collections in a timely manner. Make sure to not keep references to objects that are no longer needed, only then can they be garbage-collected.
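As a hypothetical sketch of the "forget to forget" kind of leak these steps are meant to catch: a collection that only ever grows because nothing removes entries (all names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class LeakyRegistry {
    // Grows forever: entries are added but never removed, so the GC can
    // never reclaim them -- a reference is kept "somewhere" after use.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    static void handleRequest(String sessionId) {
        CACHE.put(sessionId, new byte[1024]); // per-request state, never evicted
    }

    static int retained() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest("session-" + i); // each id is distinct, so the map only grows
        }
        System.out.println("Entries still strongly reachable: " + retained());
    }
}
```

In a heap histogram this shows up as an ever-growing count of byte[] and map entries; the fix is to remove entries when done, bound the cache, or hold the values through weak references.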
I've had success using a combination of the Eclipse Memory Analyzer (MAT) and Java VisualVM to analyze heap dumps. MAT has some reports you can run that give you a general idea of where to focus your efforts within your code. VisualVM has a better interface (in my opinion) for actually inspecting the contents of the various objects you are interested in examining. It has a filter that lets you display all instances of a particular class and see where they are referenced and what they reference themselves. It has been a while since I've used either tool for this; they may have a closer feature set now. At the time, using both worked well for me.
What is the best way to debug java.lang.OutOfMemoryError exceptions?
The OutOfMemoryError describes the type of error in its detail message; you have to check that message to handle the error.
There are various root causes for out-of-memory errors. Refer to the Oracle documentation page for more details.
java.lang.OutOfMemoryError: Java heap space:
Cause: The detail message "Java heap space" indicates that an object could not be allocated in the Java heap.
java.lang.OutOfMemoryError: GC Overhead limit exceeded:
Cause: The detail message "GC overhead limit exceeded" indicates that the garbage collector is running all the time and the Java program is making very slow progress.
java.lang.OutOfMemoryError: Requested array size exceeds VM limit:
Cause: The detail message "Requested array size exceeds VM limit" indicates that the application (or APIs used by that application) attempted to allocate an array that is larger than the heap size.
java.lang.OutOfMemoryError: Metaspace:
Cause: Java class metadata (the virtual machine's internal representation of a Java class) is allocated in native memory (referred to here as metaspace).
java.lang.OutOfMemoryError: request size bytes for reason. Out of swap space?:
Cause: The detail message "request size bytes for reason. Out of swap space?" appears to be an OutOfMemoryError. However, the Java HotSpot VM reports this apparent exception when an allocation from the native heap fails and the native heap might be close to exhaustion.
java.lang.OutOfMemoryError: Compressed class space
Cause: On 64-bit platforms a pointer to class metadata can be represented by a 32-bit offset (with UseCompressedOops). This is controlled by the command line flag UseCompressedClassPointers (on by default).
If the UseCompressedClassPointers is used, the amount of space available for class metadata is fixed at the amount CompressedClassSpaceSize. If the space needed for UseCompressedClassPointers exceeds CompressedClassSpaceSize, a java.lang.OutOfMemoryError with detail Compressed class space is thrown.
Note: There is more than one kind of class metadata - klass metadata and other metadata. Only klass metadata is stored in the space bounded by CompressedClassSpaceSize. The other metadata is stored in Metaspace.
Should we use the heap dump file? Should we generate a Java thread dump? What exactly is the difference?
Yes. You can use the heap dump file to debug the issue with profiling tools like VisualVM or MAT.
You can use a thread dump to get further insight into the status of the threads.
Refer to this SE question to understand the differences:
Difference between javacore, thread dump and heap dump in Websphere
What is the best way to generate thread dumps? Is kill -3 (our app runs on Solaris) the best way to kill the app and generate a thread dump? Is there a way to generate the thread dump but not kill the app?
kill -3 <process_id> generates a thread dump, and this command does not kill the Java process.
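If you'd rather not shell out at all, a thread dump can also be captured from inside the JVM with the standard Thread.getAllStackTraces() API. A minimal sketch:

```java
import java.util.Map;

public class ThreadDump {
    /** Snapshots a stack trace for every live thread, like a lightweight kill -3. */
    static Map<Thread, StackTraceElement[]> capture() {
        return Thread.getAllStackTraces();
    }

    public static void main(String[] args) {
        for (Map.Entry<Thread, StackTraceElement[]> e : capture().entrySet()) {
            // Print each thread's name and state, then its frames, as kill -3 would
            System.out.printf("\"%s\" state=%s%n",
                    e.getKey().getName(), e.getKey().getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("\tat " + frame);
            }
        }
    }
}
```

This is useful for wiring a thread dump into a diagnostics endpoint or a periodic log, without touching the process from outside.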
It is generally very difficult to debug OutOfMemoryError problems. I'd recommend using a profiling tool. JProfiler works pretty well. I've used it in the past and it can be very helpful, but I'm sure there are others that are at least as good.
To answer your specific questions:
A heap dump is a complete view of the entire heap, i.e. all objects that have been created with new. If you're running out of memory then this will be rather large. It shows you how many of each type of object you have.
A thread dump shows you the stack for each thread, showing you where in the code each thread is at the time of the dump. Remember that any thread could have caused the JVM to run out of memory but it could be a different thread that actually throws the error. For example, thread 1 allocates a byte array that fills up all available heap space, then thread 2 tries to allocate a 1-byte array and throws an error.
You can also use jmap/jhat to attach to a running Java process. These (family of) tools are really useful if you have to debug a live running application.
You can also leave jmap running as a cron task, logging to a file that you can analyse later (something we have found useful for debugging a live memory leak):
jmap -histo:live <pid> | head -n <top N things to look for> > <output.log>
jmap can also be used to generate a heap dump with the -dump option, which can then be read with jhat.
See the following link for more details
http://www.lshift.net/blog/2006/03/08/java-memory-profiling-with-jmap-and-jhat
Here is another link to bookmark
http://java.sun.com/developer/technicalArticles/J2SE/monitoring/
It looks like IBM provides a tool for analyzing those heap dumps: http://www.alphaworks.ibm.com/tech/heaproots ; more at http://www-01.ibm.com/support/docview.wss?uid=swg21190476 .
Once you get a tool to look at the heap dump, look at the threads that were in the Running state in the thread dump. It's probably one of those that hit the error. Sometimes the dump will tell you right at the top which thread had the error.
That should point you in the right direction. Then employ standard debugging techniques (logging, a debugger, etc.) to home in on the problem. Use the Runtime class to get the current memory usage and log it as the method or process in question executes.
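A minimal sketch of that Runtime-based logging (the class name is made up for illustration):

```java
public class MemoryLogger {
    /** Returns the heap currently in use, in bytes, as the Runtime API reports it. */
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        // total = heap currently reserved from the OS; free = unused part of it
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Call this before and after the suspect method to see its footprint
        System.out.printf("used=%d MB, total=%d MB, max=%d MB%n",
                usedBytes() / (1024 * 1024),
                rt.totalMemory() / (1024 * 1024),
                rt.maxMemory() / (1024 * 1024));
    }
}
```

Note the numbers are approximate (the GC may run between calls), so treat them as a trend indicator rather than an exact measurement.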
I generally use the Eclipse Memory Analyzer. It displays the suspected culprits (the objects occupying most of the heap dump) and the call hierarchies that generate those objects. Once that mapping is there, we can go back to the code and try to understand whether there is a possible memory leak anywhere in the code path.
However, an OOM doesn't always mean there is a memory leak. It's always possible that the memory an application needs in its steady state or under load simply isn't available on the hardware/VM. For example, a 32-bit Java process (max usable memory ~4 GB) may be running on a machine with just 3 GB. In that case the application may initially run fine, but an OOM may be encountered as the memory requirement approaches 3 GB.
As mentioned by others, capturing a thread dump is not costly, but capturing a heap dump is. I have observed that while a heap dump is being captured the application (generally) freezes, and only a kill followed by a restart recovers it.
