I'm getting the error "java.lang.OutOfMemoryError: PermGen space" fairly often (1-2 times per week) and I cannot pinpoint the problem.
A thread dump shows no blocked threads, and a heap dump cannot be taken when this error occurs (even with the "Heap dump on OOME" parameter enabled).
I've tried many options when restarting the JVM, but nothing helps.
Maybe somebody can tell what the problem is from these screenshots from Java VisualVM?
Tomcat 6, Java 1.6.
Overview JVM parameters:
Monitor:
Monitor (PermGen):
I am running Tomcat 6.0.32 on RHEL 5.4 with JDK 1.6.0_23. I am running more than 15 applications, all of them small. The machine has 8 GB of RAM and 12 GB of swap, and I set the heap size from 512 MB to 4 GB.
The issue is that after a few hours or days of running, Tomcat stops providing service even though the process is still up and running. Looking at the catalina.out log file, it is showing a memory leak problem.
My concern is that I need to provide a solution to that issue, or at least identify the application that is causing the memory leaks.
Could anyone explain how I can discover which application is causing the memory leak issue?
One option is to take heap dumps (see How to get a thread and heap dump of a Java process on Windows that's not running in a console) and analyze the heap dump later on.
Another option is to analyse the process directly using tools like jmap, VisualVM and similar.
You may use the combination of the jmap/jhat tools (both are unsupported as of Java 8) to gather the heap dump (using jmap) and identify the top objects in the heap (using jhat). Try to correlate these objects with the applications and identify the rogue one.
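If running jmap against the process is awkward, another route (my own sketch, not part of the original answer; the dump path below is just an example) is to trigger the dump from inside the JVM via the HotSpot diagnostic MBean:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // HotSpot-specific MBean, available on Oracle/OpenJDK HotSpot JVMs.
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
            // Second argument true = dump only live (reachable) objects.
            bean.dumpHeap("/var/tmp/dump.hprof", true);
        }
    }

The resulting .hprof file can then be opened in jhat, VisualVM or MAT just like a jmap dump.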
The program has no apparent memory leaks, and while I observe it running locally on my machine it works fine. On the VPS it crashes after a few hours with a sequence of error messages as shown below.
Exception in thread "Thread-10422" java.lang.OutOfMemoryError: Java heap space
I don't understand why such an error would occur after hours instead of a few minutes if there were a memory leak. I used tools such as VisualVM to observe the behavior of the program, and the memory usage stays constant throughout.
Is anyone aware of any way I can debug this and get to the bottom of it, or how to avoid it?
Is there a tool that requires no installation and can observe the memory usage of a process over ssh?
Edit:
Not all of the exceptions come with a stack trace, which is weird. The errors happen in different threads and for different classes.
at java.io.BufferedWriter.<init>(BufferedWriter.java:104)
at java.io.BufferedWriter.<init>(BufferedWriter.java:87)
at java.io.PrintStream.init(PrintStream.java:100)
at java.io.PrintStream.<init>(PrintStream.java:142)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:411)
at sun.net.www.http.HttpClient$2.run(HttpClient.java:457)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.http.HttpClient.privilegedOpenServer(HttpClient.java:454)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:521)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
at sun.net.www.http.HttpClient.New(HttpClient.java:321)
at sun.net.www.http.HttpClient.New(HttpClient.java:338)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:914)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
at java.util.HashMap.resize(HashMap.java:479)
at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:431)
at java.util.HashMap.put(HashMap.java:402)
at org.jsoup.nodes.Attributes.put(Attributes.java:58)
at org.jsoup.parser.Token$Tag.newAttribute(Token.java:65)
at org.jsoup.parser.TokeniserState$34.read(TokeniserState.java:791)
at org.jsoup.parser.Tokeniser.read(Tokeniser.java:42)
at org.jsoup.parser.TreeBuilder.runParser(TreeBuilder.java:47)
at org.jsoup.parser.TreeBuilder.parse(TreeBuilder.java:41)
at org.jsoup.parser.HtmlTreeBuilder.parse(HtmlTreeBuilder.java:37)
at org.jsoup.parser.Parser.parseInput(Parser.java:30)
at org.jsoup.helper.DataUtil.parseByteData(DataUtil.java:102)
at org.jsoup.helper.HttpConnection$Response.parse(HttpConnection.java:498)
at org.jsoup.helper.HttpConnection.get(HttpConnection.java:154)
EDIT:
After setting the maximum memory I am getting this error:
OpenJDK 64-Bit Server VM warning: Attempt to allocate stack guard pages failed.
This clearly indicates that you are running out of heap space. You can try increasing the heap space of the Java virtual machine with the command java -Xms<initial heap size> -Xmx<maximum heap size>.
As far as I know, the default values are 32M initial and 128M maximum, so you could set the maximum to 256M or 512M.
Have a look at this for information about the Java VM. http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
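As a quick sanity check (my addition, not part of the original answer), you can print the heap limits the JVM is actually running with:

    public class HeapLimits {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // maxMemory() reflects -Xmx; totalMemory() is the heap currently reserved.
            System.out.println("max heap   : " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("total heap : " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("free heap  : " + rt.freeMemory() / (1024 * 1024) + " MB");
        }
    }

If the max heap already matches what you passed on the command line, raising -Xmx further is unlikely to be the whole answer.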
I have a Web application running on my 64-bit Windows Server 2003, Oracle 11G database and Apache Tomcat 6.0 Web Server.
The application is in a live environment with around 3000 users, and I encountered a Java heap OutOfMemoryError. After increasing the heap space it was resolved.
Now I am facing the same issue again; below is the error output:
Exception in thread "http-8080-10" java.lang.OutOfMemoryError: Java heap space
Aug 23, 2013 8:48:00 PM com.SessionClunter getActiveSessions
Exception in thread "http-8080-11" java.lang.OutOfMemoryError: Java heap space
Exception in thread "http-8080-4" Exception in thread "http-8080-7" java.lang.OutOfMemoryError: Java heap space
Your problem could be caused by a few things (at a conceptual level):
You could simply have too many simultaneous users or user sessions.
You could be attempting to process too many user requests simultaneously.
You could be attempting to process requests that are too large (in some sense).
You could have a memory leak ... which could be related to some of the above issues, or could be unrelated.
There is no simple solution. (You've tried the only easy solution ... increasing the heap size ... and it hasn't worked.)
The first step in solving this is to change your JVM options so that it takes a heap dump when an OOME occurs. Then use a memory dump analyser to examine the dump and figure out which objects are using too much memory. That should give you some evidence that will allow you to narrow down the possible causes ...
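For reference (my sketch, not part of the original answer), the standard HotSpot flags for this are -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=<dir>, and a throwaway program like the one below can confirm that a .hprof file really gets written before you rely on the setting in production:

    import java.util.ArrayList;
    import java.util.List;

    // Run with e.g.: java -Xmx64m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp VerifyHeapDump
    public class VerifyHeapDump {
        public static void main(String[] args) {
            List<byte[]> hog = new ArrayList<byte[]>();
            while (true) {
                hog.add(new byte[1024 * 1024]); // allocate 1 MB chunks until the heap is exhausted
            }
        }
    }

When the OOME hits, the JVM should report the path of the dump it wrote; open that file in your analyser of choice.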
If you keep getting OutOfMemoryError no matter how much you increase the max heap, then your application probably has a memory leak, which you must solve by getting into the code and optimizing it. Short of that, your only option is to keep increasing the max heap for as long as you can.
You can look for memory leaks and optimize using completely free tools like this:
Create a heap dump of your application when it uses a lot of memory, but before it would crash, using jmap, which is part of the Java installation used by your JVM container (= Tomcat in your case):
# if your process id is 1234
jmap -dump:format=b,file=/var/tmp/dump.hprof 1234
Open the heap dump using the Eclipse Memory Analyzer (MAT)
MAT gives suggestions about potential memory leaks. Try to follow those.
Look at the histogram tab. It shows all the objects that were in memory at the time of the dump, grouped by their class. You can order by memory use and number of objects. When you have a memory leak, there is usually a shockingly high number of instances of some class that clearly doesn't make sense at all. I have often tracked down memory leaks based on that info alone.
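As a concrete illustration (my own hypothetical sketch, not from the original answer), the classic pattern that shows up as a huge instance count in the histogram is a static collection that only ever grows, for example a cache with no eviction:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical leak that MAT's histogram makes obvious: the static map
    // keeps every entry reachable forever, so its values pile up in the dump.
    public class ResultCache {
        private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

        public static byte[] get(String key) {
            byte[] value = CACHE.get(key);
            if (value == null) {
                value = expensiveLookup(key);
                CACHE.put(key, value); // never removed -> unbounded growth
            }
            return value;
        }

        private static byte[] expensiveLookup(String key) {
            return new byte[64 * 1024]; // stand-in for real work
        }
    }

In the histogram this shows up as an ever-growing number of byte[] (or whatever the cached type is) retained by one HashMap.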
Another useful free JVM monitoring tool is VisualVM. A non-free but very powerful tool is JProfiler.
On a Linux server, the VM arguments (Xmx = 3 GB, Xms = 3 GB) have been specified for the application. From the heap dump it can be seen that more than 2.9 GB of memory has been utilized, with 32 MB held by unreachable objects.
But the application did not throw an OOM; instead it stopped responding, so it became necessary to restart the application manually.
In the stack traces I can see many threads (96) waiting on the monitor of one specific object. Does that help? Also, most of the 2.9 GB is occupied by cache objects, which I think is normal, yet MAT shows exactly these cache objects as the leak suspects.
I'm trying to find out what made it stop responding, but I don't see anything special when looking at the heap dump and stack traces.
Your application has a memory leak. Try to find it; there are good tools like VisualVM.
Usually, before hitting the OOM, applications become VERY slow.
In your case you should profile your application, with VisualVM for example.
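If you suspect the hang itself rather than the heap (my addition, not part of the original answer), a cheap first check is the standard ThreadMXBean API, which can report deadlocked threads and dump lock information much like jstack does:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class StuckThreadCheck {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();

            // Threads caught in a monitor/ownable-synchronizer deadlock, if any.
            long[] deadlocked = mx.findDeadlockedThreads();
            if (deadlocked != null) {
                for (ThreadInfo info : mx.getThreadInfo(deadlocked, Integer.MAX_VALUE)) {
                    System.out.println("Deadlocked: " + info);
                }
            }

            // Dump all threads with lock information, similar to what jstack prints.
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.println(info);
            }
        }
    }

Many threads parked on the same monitor with no deadlock usually points to one slow or blocked owner of that lock rather than to a memory problem.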
My application is deployed in a cluster environment. Recently the server went down with the following stack trace. It doesn't seem to be coming from the code. It was running all right until recently, when this error popped up. No major changes were made to the server. Can someone advise?
java.lang.OutOfMemoryError: Java heap space
at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:44)
at java.lang.StringBuilder.<init>(StringBuilder.java:69)
at java.io.ObjectStreamClass$FieldReflectorKey.<init>(ObjectStreamClass.java:2106)
at java.io.ObjectStreamClass.getReflector(ObjectStreamClass.java:2039)
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:586)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1552)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1591)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at weblogic.utils.io.ChunkedObjectInputStream.readObject(ChunkedObjectInputStream.java:195)
at weblogic.rjvm.MsgAbbrevInputStream.readObject(MsgAbbrevInputStream.java:565)
at weblogic.utils.io.ChunkedObjectInputStream.readObject(ChunkedObjectInputStream.java:191)
at weblogic.rmi.internal.dgc.DGCServerImpl_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:589)
at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:479)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:475)
at weblogic.rmi.internal.BasicServerRef.access$300(BasicServerRef.java:59)
at weblogic.rmi.internal.BasicServerRef$BasicExecuteRequest.run(BasicServerRef.java:1016)
at weblogic.work.SelfTuningWorkManagerImpl.schedule(SelfTuningWorkManagerImpl.java:126)
at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:321)
at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:918)
at weblogic.rjvm.RJVMImpl.dispatchRequest(RJVMImpl.java:1084)
at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:1001)
at weblogic.rjvm.ConnectionManagerServer.handleRJVM(ConnectionManagerServer.java:240)
at weblogic.rjvm.ConnectionManager.dispatch(ConnectionManager.java:877)
at weblogic.rjvm.MsgAbbrevJVMConnection.dispatch(MsgAbbrevJVMConnection.java:446)
at weblogic.rjvm.t3.MuxableSocketT3.dispatch(MuxableSocketT3.java:368)
at weblogic.socket.AbstractMuxableSocket.dispatch(AbstractMuxableSocket.java:383)
at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:872)
You are running out of memory, which suggests one of the following:
you need to give your process more memory (with the -Xmx java command line option); or
you have a memory leak
Without more information, it's hard to say which is the case. The stack trace for an OutOfMemoryError is rarely useful, as it only shows the point at which heap was exhausted; it doesn't show you why your heap is being filled up.
The answer by Simon Nickerson is correct.
Just to add: your stack trace begins at weblogic.socket.SocketMuxer.readReadySocketOnce, which is the internal WebLogic class that accepts incoming requests. So this means the server does not even have enough memory to accept requests.
Are you using the JRockit JVM? If you are you can use JRockit Mission Control and monitor the Java heap usage. You can also use the JRockit Flight Recorder to record JVM events for offline analysis. There is an Oracle webcast on this here: http://www.vimeo.com/22109838. You can skip to 4:54 which is where the overview of JRockit, WLDF and JRF starts.
Keep in mind that when the heap is full it is the NEXT allocation that fails with the OutOfMemoryError, and therefore this stack trace may not indicate the cause of the failure at all. It simply indicates that when this code ran there wasn't enough heap, not that this code caused the heap to fill up.
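To make that concrete (my own illustration, not part of the original answer), in the sketch below the static list is what fills the heap, but the OutOfMemoryError may well be thrown from the harmless temporary allocation in doUnrelatedWork:

    import java.util.ArrayList;
    import java.util.List;

    // Run with a small heap, e.g. java -Xmx32m WhoGetsBlamed
    public class WhoGetsBlamed {
        private static final List<byte[]> LEAK = new ArrayList<byte[]>();

        public static void main(String[] args) {
            while (true) {
                LEAK.add(new byte[512 * 1024]); // the real culprit: keeps everything reachable
                doUnrelatedWork();              // but the OOME stack trace may point in here
            }
        }

        private static void doUnrelatedWork() {
            byte[] temp = new byte[256 * 1024]; // innocent temporary allocation
            temp[0] = 1;
        }
    }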
Edits:
Clearly the server is out of memory at the time of this specific operation. The question is: why? This stack trace doesn't tell you why; it just indicates that whatever was happening at the time could not complete because there was not enough memory available at that moment. That does not mean it is the cause of the problem.
Sure, you can add more memory, but that may not fix the problem; it may only take longer for it to appear.
Edit catalina.bat (or catalina.sh on Linux) and find the line that sets JAVA_OPTS=%JAVA_OPTS%.
Whatever your RAM is, adjust the values accordingly, but don't give the JVM more than half of it:
set JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG% -server -Xms512M -Xmx512M -XX:MaxPermSize=256M
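To see whether the new limits (including MaxPermSize) are actually being approached, a small sketch like this (my addition, not part of the original answer) prints every memory pool the JVM exposes; on a Java 6/7 HotSpot VM one of them is the permanent generation:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    // Prints current/max usage of each memory pool, e.g. "PS Perm Gen" on HotSpot.
    public class MemoryPools {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                System.out.println(pool.getName() + ": " + pool.getUsage());
            }
        }
    }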
It means that the JVM has run out of all the memory that has been allocated to it. You can change the amount of memory allocated to your JVM using the -Xms and -Xmx command line parameters. Check the root cause here.
OutOfMemoryError in Java is a subclass of java.lang.VirtualMachineError, and the JVM throws java.lang.OutOfMemoryError when it runs out of memory in the heap. An OutOfMemoryError can occur at any time, mostly when you try to create an object and there is not enough space in the heap to allocate it.
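As a small illustration of that class hierarchy (my addition, not part of the original answer): because OutOfMemoryError extends VirtualMachineError, which extends Error, a catch (Exception e) block will not catch it:

    public class CatchDemo {
        public static void main(String[] args) {
            try {
                throw new OutOfMemoryError("simulated");
            } catch (Exception e) {
                // Never reached: OutOfMemoryError is an Error, not an Exception.
                System.out.println("caught as Exception");
            } catch (Error e) {
                System.out.println("caught as Error: " + e);
                System.out.println("is VirtualMachineError? " + (e instanceof VirtualMachineError));
            }
        }
    }

This is also why trying to handle OOME in application code is rarely a substitute for fixing heap sizing or the leak itself.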