In my Tomcat application I eventually get "Out of memory" and "Cannot allocate memory" errors. I suppose it has nothing to do with the heap, as it completely fills up the system memory and I am hardly able to run bash commands.
How is this problem connected to the heap? How can I correctly set up the heap size so that the application has enough memory but does not consume too much of the system's resources?
The strange thing is that, once the problem happens, the "top" command keeps saying that Tomcat consumes only 20% of memory and there is still free memory.
Thanks.
EDIT:
Follow-up:
BufferedImage leaks - are there any alternatives?
Problems with running bash scripts may indicate I/O issues, and this might be the case if your JVM is doing full GCs all the time (which happens if your heap is almost full).
The first thing to do is to increase the heap with -Xmx. This may solve the problem; if you have a memory leak, it won't, and you will eventually get an OutOfMemoryError again.
In this case, you need to analyze memory dumps. See my answer in this thread for some instructions.
Also, it might be useful to enable garbage collection logs (using -Xloggc:/path/to/log.file -XX:+PrintGCDetails) and then analyze them with GCViewer or HPJmeter.
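For Tomcat specifically, one common place to put these flags is a bin/setenv.sh next to catalina.sh (the startup script picks it up if it exists; the log path here is an assumption, point it anywhere writable):
CATALINA_OPTS="$CATALINA_OPTS -Xloggc:/var/log/tomcat/gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
export CATALINA_OPTS
Note that these particular flags apply to Java 8 and earlier; from Java 9 on they were replaced by unified logging (-Xlog:gc*).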
You can set the JVM heap size by specifying the option
-Xmx1024m    # for 1024 MB
Refer to this for setting the option for Tomcat.
If you have 4 GB of RAM, you can allocate up to 3 GB to the heap (leaving some headroom for the OS and the JVM's own non-heap memory):
-Xmx3g
You can also change the available PermGen size using the following options:
-XX:PermSize=128m
-XX:MaxPermSize=256m
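Putting the pieces together on one command line (the jar name is a placeholder, and the PermGen flags only exist up to Java 7; PermGen was removed in Java 8 in favor of Metaspace):
java -Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m -jar myapp.jar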
Related
We are running a process with a cache that consumes a lot of memory.
But the number of objects in that cache stays stable during execution, while memory usage grows without limit.
We ran Java Flight Recorder to try to work out what is happening.
In that report, we can see that UsedHeap is about half of UsedSize, and I cannot find any explanation for that.
The JVM exits and dumps an OutOfMemory report, which you can find here:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/hs_err_pid26210.log
Here is the whole Java Flight Recorder report:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/test.7z
Does anybody know why this OutOfMemoryError is arising?
Maybe I should change the question ... and ask: why are there almost 10 GB of used memory that are not used in the heap?
The log file says this:
# Native memory allocation (mmap) failed to map 520093696 bytes
for committing reserved memory.
So what has happened is that the JVM has requested a ~500MB chunk of memory from the OS via an mmap system call and the OS has refused.
When I looked at more of the log file, it became clear that G1GC itself is requesting more memory, and it looks like it is doing so while trying to expand the heap [1].
I can think of a few possible reasons for the mmap failure (quick checks for each are sketched after the list):
The OS may be out of swap space to back the memory allocation.
Your JVM may have hit the per-process memory limit. (On UNIX / Linux this is implemented as a ulimit.)
If your JVM is running in a Docker (or similar) container, you may have exceeded the container's memory limit.
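A few quick Linux-side checks, one per possibility above (the cgroup path assumes cgroup v1 and may differ on your system):
free -h                                            # is the Swap line exhausted?
ulimit -v                                          # per-process virtual memory limit in KB ("unlimited" if none)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes    # container memory limit under cgroup v1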
This is not a "normal" OOME. It is actually a mismatch between the memory demands of the JVM and what is available from the OS.
It can be addressed at the OS level; i.e. by removing or increasing the limit, or adding more swap space (or possibly more RAM).
It could also be addressed by reducing the JVM's maximum heap size. This will stop the GC from trying to expand the heap to an unsustainable size [2]. Doing this may also result in the GC running more often, but that is better than the application dying prematurely from an avoidable OOME.
[1] Someone with more experience in G1GC diagnosis may be able to discern more from the crash dump, but it looks like normal heap expansion behavior to me. There is no obvious sign of a "huge" object being created.
[2] Working out what the sustainable size actually is would involve analyzing the memory usage for the entire system, looking at the available RAM and swap resources and the limits. That is a system administration problem, not a programming problem.
Maybe I should change the question ... and ask: why are there almost 10 GB of used memory that are not used in the heap?
What you are seeing is the difference between the memory that is currently allocated to the heap and the heap limit that you have set. The JVM doesn't actually request all of the heap memory from the OS up front. Instead, it requests more memory incrementally ... if required ... at the end of a major GC run.
So while the total heap size appears to be ~24GB, the actual memory allocated is substantially less than that.
Normally, that is fine. The GC asks the OS for more memory and adds it to the relevant pools for the memory allocators to use. But in this case, the OS cannot oblige, and G1GC pulls the plug.
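If you would rather have such a mismatch surface at startup than mid-run, you can make the JVM commit and touch the whole heap up front; a sketch, with the sizes and jar name as placeholders:
java -Xms24g -Xmx24g -XX:+AlwaysPreTouch -jar yourapp.jar
With -Xms equal to -Xmx the full heap is committed at startup, and -XX:+AlwaysPreTouch forces every page to be touched, so an allocation failure like the one above happens immediately instead of during a GC run hours later.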
I'm debugging a fairly large project I've been working on (but did not originally create) and I've noticed that sometimes it crashes with an OutOfMemoryError. The code is loading a lot of data from files so this isn't entirely surprising in general.
However, what confuses me is that I'm using VisualVM 1.3.4 to profile the program, and it behaves inconsistently. Most times I've run it, the heap gradually expands up to about 2 GB (the computer has 16 GB of RAM; it's for academic research), with the used heap spiking higher and higher underneath it. Around 2 GB, it will crash. The program isn't processing more information as time goes on, though, so it shouldn't grow the heap to 2 GB in just a few minutes.
Sometimes, though, I get a sudden crash after about 30 seconds, with a heap size of 250MB and only about 100MB in use. How am I getting a java.lang.OutOfMemoryError: Java heap space if my heap isn't full?
Edit: I'm using Eclipse to run the program, and I have the VisualVM plugin so it gets launched automatically. Also, I'm using Java 7.
Start the application with the VM argument -XX:+HeapDumpOnOutOfMemoryError.
Analyse the Heap Dump and find out what is causing the issue.
Eclipse MAT is an excellent tool for finding out such issues.
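Concretely, that looks like this (the dump path and jar name are placeholders; since you launch from Eclipse, the flags go into the Run Configuration's "VM arguments" box instead):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar yourapp.jar
The JVM then writes an .hprof file on the first OutOfMemoryError, which you can open directly in Eclipse MAT.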
You need to set up the JVM's min and max heap memory:
export JAVA_OPTS="-Xms128m -Xmx256m"    # on Windows: set JAVA_OPTS=-Xms128m -Xmx256m
Something like that, but with bigger values, like 2g, 4g, whatever.
LE: As you all know, you can't force the JVM to run the garbage collector (even though you can ask for it), but there are some ways of convincing it to get rid of some items by nulling their references. Another thing to watch is a database object that might be lazily initialised. The error can appear when you try to create an object that exceeds the max heap memory.
Another idea: some developer may have programmatically thrown the OutOfMemoryError in some method for some misguided reason. When you reach that part of the code, that's what you get (search the project).
There are at least two reasons why the application can crash with an OutOfMemoryError.
Your Java heap is just too small for the amount of data it needs to process. Then you can either increase it as Matei suggested, or analyze a heap dump as Ajay suggested.
Your application leaks memory, meaning that it leaves some unneeded data in memory after processing it. Then increasing the heap will not help in the long run, and your options are either heap dump analysis (again) or a specialised memory leak detection tool, such as Plumbr.
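If you don't want to wait for the next crash, you can also take a dump of the running process with jmap (the PID is a placeholder; jps will list Java process IDs):
jmap -dump:live,format=b,file=heap.hprof <pid>
The live option triggers a full GC first, so only reachable objects end up in the dump; if dumps keep growing between runs even after that, a leak is likely.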
Turned out the crash was caused by using the OpenJDK JRE rather than Oracle's JRE. I don't know exactly what the bug is in OpenJDK that makes it crash like this, but changing to Oracle's JRE ultimately solved the problem.
(I was using OpenJDK because I'm on a Linux computer that someone was using for open-source work before me. When I mentioned the crash to him he had the idea that that might be the cause. He was right.)
Do you have a 32-bit operating system without large memory support (PAE on Windows, hugemem kernel on Linux, ...)? If so, you may be hitting the 2 GB per-process memory limit of 32-bit systems.
As a workaround, try setting the JVM parameter -Xss192k to allow 192 KB of stack space per thread, and the parameter -Xmx1024m to use no more than 1 GB of heap.
I have a problem with a Java app. Yesterday, when I deployed it for a test run, we noticed that our machine started swapping, even though this is not a real monster of an app, if you know what I mean.
Anyway, I checked the output of top and saw that it eats around 100 MB of memory (RES in top). I tried to profile memory and check if there is a memory leak, but I couldn't find one. There was an unclosed PreparedStatement, which I fixed, but it didn't make much difference.
I tried setting the min and max heap size (some said that min heap size is not required), and it didn't make any difference.
This is how I run it now:
#!/bin/sh
$JAVA_HOME/bin/java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9025 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -XX:MaxPermSize=40m -Xmx32M \
  -cp ./jarName.jar uk.co.app.App app.properties
Here is the result of top:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16703 root 20 0 316m 109m 6048 S 0.0 20.8 0:14.80 java
What I don't understand is that I configured the max PermSize and max heap size, which add up to 72 MB. That is enough; the app runs well. So why is it still eating 109 MB of memory, and what is using it? It is a 37 MB difference, which is quite a high ratio (34%).
I don't think this is a memory leak, because the max heap size is set and there is no OutOfMemoryError or anything.
One interesting thing may be that I made a heap dump with VisualVM and then checked it with Eclipse MAT, and it said that there is a possible leak in a classloader.
This is what it says:
The classloader/component "sun.misc.Launcher$AppClassLoader @ 0x87efa40" occupies 9,807,664 (64.90%) bytes. The memory is accumulated in one instance of "short[][]" loaded by "<system class loader>".
Keywords: sun.misc.Launcher$AppClassLoader @ 0x87efa40
I cannot make much of this, but it may be useful.
Thanks for your help in advance.
EDIT
I found this one, maybe there is nothing i can do...
Tomcat memory consumption is more than heap + permgen space
Java's memory includes:
the heap space
the PermGen space
thread stack areas
shared libraries, including that of the JVM (will be shared)
the direct memory size
the memory-mapped file size (will be shared)
There are likely to be others which are for internal use.
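If you want to see how these parts add up for a concrete process, Native Memory Tracking can break them down, at the cost of starting the JVM with an extra flag (NMT only exists from Java 8 on, so it may not apply to the JVM in the question; the jar name is a placeholder):
java -XX:NativeMemoryTracking=summary -jar yourapp.jar
jcmd <pid> VM.native_memory summary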
Given that 37 MB of PC memory is worth about 20 cents, I wouldn't worry about it too much. ;)
Did you try using JConsole to profile the application? http://docs.oracle.com/javase/1.5.0/docs/guide/management/jconsole.html
Otherwise you can also use JProfiler trial version to profile the application
http://www.ej-technologies.com/products/jprofiler/overview.html
However, the first step in investigating high memory usage should be to check whether you are using collections of objects in your application (arrays, maps, sets, lists, etc.). If so, check whether they keep references to objects even when those are no longer used.
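A quick way to check that without a full profiler is a class histogram of the live heap (the PID is a placeholder; the live option runs a full GC first so dead objects don't distort the picture):
jmap -histo:live <pid> | head -20
Collections that retain objects they no longer need tend to show up as suspiciously large instance counts near the top of this list.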
Hi, we are getting an out of memory exception in one of our processes, which is running in a Unix environment. How do we identify the bug? (We observed that there is very little chance of a memory leak in our Java process.) What else do we need to analyse to find the root cause?
I would suggest using a profiler like YourKit (homepage) so that you can easily find what is allocating so much memory.
In any case, you should check which settings are specified for your JVM to understand whether you need more heap memory for your program. You can set them with the -X options (the jar name here is just a placeholder):
java -Xmx2g -Xms512m -jar yourapp.jar
This would start the JVM with a maximum heap of 2 GB and a starting size of 512 MB.
If there are no memory leaks, then the application simply needs more memory. Are you running out of heap memory, perm memory, or native memory? For heap memory and perm memory you can increase the allocation using the -Xmx or -XX:MaxPermSize arguments respectively.
But first try using a profiler to verify that your application is really not leaking any memory.
I'm using ASANT to run an XML file which points to a NARS.jar file (I do not have the project files for NARS.jar).
I'm getting "java.lang.OutOfMemoryError: Java heap space".
I used VisualVM to look at the heap while running NARS.jar, and it shows that at most 50 MB of heap space is used.
I've set the initial and max heap size to 512 MB.
Does anyone have an idea of what could be wrong?
I have 1 GB of physical memory and created a 5 GB pagefile (for test purposes).
Thanks in advance.
Your app may be trying to allocate memory that exceeds your 512 MB limit, so you can see an OutOfMemoryError even though only 50 MB is in use. To test this, I would set:
-Xms512m -Xmx1024m
And see what happens. I would also try a smaller test file, say 1 GB, and keep reducing the file size until you stop seeing the error. If that works, then the trouble is that what you're trying to do, the way you're trying to do it, takes too much memory, and it's time to look for an alternate approach.
Are you forking the process when running the NARS.jar file? Setting ANT_OPTS only affects the VM running the Ant system itself. If you use the java task to start/fork an additional VM process, the ANT_OPTS settings will not be inherited.
If this is the case, either set fork="false" in the java task (if you are not using any other options that require fork to be enabled), or set maxmemory="512m".
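For example, in the build file's java task (the jar name comes from the question; failonerror is optional):
<java jar="NARS.jar" fork="true" maxmemory="512m" failonerror="true"/>
Note that maxmemory is only honoured when the task forks, and the jar attribute itself requires fork="true"; with fork="false" the class runs inside Ant's own VM, where ANT_OPTS applies instead.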
XML files are notorious memory hogs since the DOM representation can often require ten times their size on disk.
My guess is that this is where you hit the limit. Is there a stack trace with the out of memory exception?