How to find a non-heap memory leak in Java?

We have a Java web server that uses Eclipse Jetty 8.1.16. Recently we started noticing OutOfMemory errors.
We did some profiling of the number of active threads; it seems reasonable, around 100. The process has a 5 GB max heap and a 4 GB initial heap.
Process Details
Environment: Docker (Kubernetes)
java.version="1.8.0_91"
java.vm.info="mixed mode"
java.vm.name="Java HotSpot(TM) 64-Bit Server VM"
thread stack size = 1024K
ulimit for max processes per user is unlimited
Container (pod) max memory is allocated to be 8 GB
The web server receives an average of 350 requests per minute, and we run many such instances behind an ELB (Kubernetes service). After running for a few hours we notice this OOM. The issue is random and occurs during stress tests.
OOM StackTrace:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method) [na:1.8.0_91]
at java.lang.Thread.start(Thread.java:714) [na:1.8.0_91]
at org.eclipse.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPool.java:441) [jetty-util-8.1.16.v20140903.jar:8.1.16.v20140903]
at org.eclipse.jetty.util.thread.QueuedThreadPool.dispatch(QueuedThreadPool.java:366) [jetty-util-8.1.16.v20140903.jar:8.1.16.v20140903]
Since the thread count is reasonable, I suspect some kind of memory leak. But I am not able to find the off-heap memory size inside a Docker container.
Is there a way to find it?
After searching for a while, I found the Jetty bug linked below.
How can I verify whether the OOM is caused by that issue without upgrading Jetty?
Related Bug Id: https://bugs.eclipse.org/bugs/show_bug.cgi?id=477817

You could try adding -XX:+HeapDumpOnOutOfMemoryError (note the +; the -XX:- form would disable it) to your Java start parameters and look into the resulting dump.
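Since the question is specifically about off-heap usage, another option worth sketching (not part of the answer above) is HotSpot's Native Memory Tracking, which is available in Java 8. It reports the JVM's own native allocations such as thread stacks, metaspace and the code cache; the pid 1234 below is only a placeholder.
# add to the JVM start parameters (summary mode has a small overhead)
-XX:NativeMemoryTracking=summary
# then query the running process (1234 is a placeholder pid)
jcmd 1234 VM.native_memory summary
Comparing two summaries taken some time apart shows which category keeps growing.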

Related

RAM usage shows as 21 GB for a process, whereas heap usage for the same process is 4 GB. Where is the remaining memory going?

I have recently upgraded the application server of a Java application from JBoss to WildFly. The application starts fine, but RAM fills up very fast; the free command shows 21 GB used out of 23 GB.
When I look at the processes with the top command, the Java process in question is taking 20.7 GB.
But when I debug the remote Java process using jconsole, heap memory usage is ~4 GB.
If heap usage is 4 GB, how can top show the same process using 20.7 GB and total RAM usage of 21 GB? What am I missing here, and how do I resolve this memory issue?

committed heap vs max heap

Hi, the JVM configuration is:
Xmx = 4096
Xms = 1024
In my monitoring tools I always see a committed heap of 2 GB, and my server runs into OutOfMemory errors.
I don't understand why the committed heap is limited to 2 GB and doesn't grow up to the 4 GB max heap.
Why is the free heap not used while my server throws OutOfMemory exceptions?
NB:
My application server is WebSphere 8.5.
My server is 64-bit Linux.
Thanks in advance
While setting preferredHeapBase may resolve the problem, there could be a number of other reasons for OOM:
A very large memory request could not be satisfied - check the verbose GC output just before the OOM timestamp.
Inadequate user limits: an insufficient ulimit -u (NPROC) value contributes to native OutOfMemory. Check the user limits (a quick way to do so is shown below).
The application requires more than 4 GB of native memory. In that case, -Xnocompressedrefs will resolve the problem (but with a larger Java memory footprint and a performance impact).
There are other reasons, but I find these to be the top hits when diagnosing an OOM while there is plenty of Java heap space free.
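For illustration, on Linux the relevant limits for the user running the server can be checked like this (run as that user):
# per-user process/thread limit (NPROC)
ulimit -u
# all limits, including stack size and open file descriptors
ulimit -a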
Check this page - Using -Xgc:preferredHeapBase with -Xcompressedrefs. You may be hitting a native out-of-memory error.
Try setting the following flag in the JVM arguments:
-Xgc:preferredHeapBase=0x100000000
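Purely as an illustration, combined with the Xms/Xmx values from the question, the generic JVM arguments might then look roughly like this; flag availability depends on the IBM JDK version shipped with WebSphere, so treat it as a sketch:
-Xms1024m -Xmx4096m -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000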

Java running out of memory on VPS

The program has no apparent memory leaks, and while I observe it running locally on my machine it works fine. On the VPS it crashes after a few hours with a sequence of error messages as shown below.
Exception in thread "Thread-10422" java.lang.OutOfMemoryError: Java heap space
I don't understand why such an error would occur after hours rather than a few minutes if there is a memory leak. I used tools such as VisualVM to observe the behavior of the program, and memory usage stays constant throughout.
Is anyone aware of a way I can debug this and get to the bottom of it, or of how to avoid it?
Is there a tool that requires no installation and can observe the memory usage of a process over SSH?
Edit:
Not all of the exceptions come with a stack trace, which is weird. But the error happens in different threads for different classes.
at java.io.BufferedWriter.<init>(BufferedWriter.java:104)
at java.io.BufferedWriter.<init>(BufferedWriter.java:87)
at java.io.PrintStream.init(PrintStream.java:100)
at java.io.PrintStream.<init>(PrintStream.java:142)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:411)
at sun.net.www.http.HttpClient$2.run(HttpClient.java:457)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.http.HttpClient.privilegedOpenServer(HttpClient.java:454)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:521)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
at sun.net.www.http.HttpClient.New(HttpClient.java:321)
at sun.net.www.http.HttpClient.New(HttpClient.java:338)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:914)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
at java.util.HashMap.resize(HashMap.java:479)
at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:431)
at java.util.HashMap.put(HashMap.java:402)
at org.jsoup.nodes.Attributes.put(Attributes.java:58)
at org.jsoup.parser.Token$Tag.newAttribute(Token.java:65)
at org.jsoup.parser.TokeniserState$34.read(TokeniserState.java:791)
at org.jsoup.parser.Tokeniser.read(Tokeniser.java:42)
at org.jsoup.parser.TreeBuilder.runParser(TreeBuilder.java:47)
at org.jsoup.parser.TreeBuilder.parse(TreeBuilder.java:41)
at org.jsoup.parser.HtmlTreeBuilder.parse(HtmlTreeBuilder.java:37)
at org.jsoup.parser.Parser.parseInput(Parser.java:30)
at org.jsoup.helper.DataUtil.parseByteData(DataUtil.java:102)
at org.jsoup.helper.HttpConnection$Response.parse(HttpConnection.java:498)
at org.jsoup.helper.HttpConnection.get(HttpConnection.java:154)
EDIT:
After setting the maximum memory, I am getting this error:
OpenJDK 64-Bit Server VM warning: Attempt to allocate stack guard pages failed.
This clearly indicates that you are running out of heap space. You can try increasing the heap space of the Java virtual machine using the command java -Xms<initial heap size> -Xmx<maximum heap size>.
As far as I know, the default values are 32 MB initial and 128 MB maximum, so you could raise the maximum to 256 MB or 512 MB.
Have a look at this page for more information about the Java VM options: http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
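As a concrete illustration (the jar name is only a placeholder), the heap sizes can be passed explicitly on the command line:
# start with a 256 MB initial and 512 MB maximum heap
java -Xms256m -Xmx512m -jar myapp.jar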

Exception in thread "http-8080-10" java.lang.OutOfMemoryError: Java

I have a web application running on 64-bit Windows Server 2003, with an Oracle 11g database and the Apache Tomcat 6.0 web server.
The application is in a live environment with around 3000 users. I encountered a Java heap OutOfMemory error; after increasing the heap space it was resolved.
Now I am facing the same issue again. Below is the error stack trace:
Exeption in thread "http-8080-10" java.lang.OutOfMemoryError: Java heap space
Aug 23, 2013 8:48:00 PM com.SessionClunter getActiveSessions
Exeption in thread "http-8080-11" java.lang.OutOfMemoryError: Java heap space
Exeption in thread "http-8080-4"
Exeption in thread "http-8080-7" java.lang.OutOfMemoryError: Java heap space
Your problem could be caused by a few things (at a conceptual level):
You could simply have too many simultaneous users or user sessions.
You could be attempting to process too many user requests simultaneously.
You could be attempting to process requests that are too large (in some sense).
You could have a memory leak ... which could be related to some of the above issues, or could be unrelated.
There is no simple solution. (You've tried the only easy solution ... increasing the heap size ... and it hasn't worked.)
The first step in solving this is to change your JVM options to get it to take a heap dump when an OOME occurs. Then you use a memory dump analyser to examine the dump and figure out which objects are using too much memory. That should give you some evidence that will allow you to narrow down the possible causes ...
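On a HotSpot JVM this is typically done with the following options (the dump path is just an example location):
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp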
If you keep getting OutOfMemoryError no matter how much you increase the max heap, then your application probably has a memory leak, which you must solve by getting into the code and optimizing it. Short of that, you have no choice but to keep increasing the max heap for as long as you can.
You can look for memory leaks and optimize using completely free tools like this:
Create a heap dump of your application when it is using a lot of memory, but before it crashes, using jmap, which is part of the Java installation used by your JVM container (= Tomcat in your case):
# if your process id is 1234
jmap -dump:format=b,file=/var/tmp/dump.hprof 1234
Open the heap dump using the Eclipse Memory Analyzer (MAT)
MAT gives suggestions about potential memory leaks. Try to follow those.
Look at the histogram tab. It shows all the objects that were in memory at the time of the dump, grouped by their class. You can order them by memory use and by number of objects. When you have a memory leak, there are usually shockingly many instances of some class that clearly don't all make sense. I have often tracked down memory leaks based on that information alone.
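If no GUI is available (for example when working over SSH), a rough per-class histogram can also be printed directly on the command line, using the same placeholder pid as above:
# live-object histogram, biggest consumers at the top
jmap -histo:live 1234 | head -n 30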
Another useful free JVM monitoring tool is VisualVM. A non-free but very powerful tool is JProfiler.

How can I track down a non-heap JVM memory leak in JBoss AS 5.1?

After upgrading to JBoss AS 5.1, running JRE 1.6_17, CentOS 5 Linux, the JRE process runs out of memory after about 8 hours (hits 3G max on a 32-bit system). This happens on both servers in the cluster under moderate load. Java heap usage settles down, but the overall JVM footprint just continues to grow. Thread count is very stable and maxes out at 370 threads with a thread stack size set at 128K.
The footprint of the JVM reaches 3G, then it dies with:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
Internal Error (allocation.cpp:117), pid=8443, tid=1667668880
Error: ChunkPool::allocate
Current JVM memory args are:
-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ThreadStackSize=128
Given these settings, I would expect the process footprint to settle in around 1.5G. Instead, it just keeps growing until it hits 3G.
It seems none of the standard Java memory tools (Eclipse MAT, jmap, etc.) can tell me what on the native side of the JVM is eating all this memory. pmap on the PID just gives me a bunch of [ anon ] allocations, which don't really help much. As far as I can tell, this memory problem occurs even though no JNI or java.nio classes are loaded.
How can I troubleshoot the native/internal side of the JVM to find out where all the non-heap memory is going?
Thank you! I am rapidly running out of ideas and restarting the app servers every 8 hours is not going to be a very good solution.
As @Thorbjørn suggested, profile your application.
If you need more memory, you could go for a 64-bit kernel and JVM.
Attach with jvisualvm (included in the JDK) to get an idea of what is going on; jvisualvm can attach to a running process.
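If the server is headless, one way to do that (sketched here with a placeholder port and security disabled, so suitable only for a trusted test network) is to expose JMX in the JBoss start parameters and point jvisualvm at host:9010:
# placeholder port, no auth/SSL - test environments only
-Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
Depending on the network setup, -Djava.rmi.server.hostname may also need to be set.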
Walton:
I had a similar issue and posted my question/findings in https://community.jboss.org/thread/152698 .
Please try adding -Djboss.vfs.forceCopy=false to the Java startup parameters to see if it helps.
WARN: even if it cuts down the process size, you need to test more to make sure everything is all right.
