My question may look naive, but I do not know how to formulate it better. The problem is that I create and use large primitive-type arrays in my application, and I get errors like:
ERROR/dalvikvm-heap(1763): Out of memory on a 7907344-byte allocation.
Yes, that is a big allocation, but task management tools claim that my application is using only 30MB of memory, while other applications (not system applications, just ordinary applications I have installed) use 50MB or even 110MB (seen once), and there is still 190MB of free memory in the system. If all applications are given the same heap size at startup, how can the others grow so big?
The maximum heap size of an Android application will depend on the device it is running on. For early devices the maximum heap size was 16MB but for some later devices it can be 24MB or possibly even 32MB.
This is a property of the Dalvik VM on each device and is not something you can change (without rebuilding Android from source).
You can query the "per-application memory class" with ActivityManager.getMemoryClass(), which returns a figure closely related to the maximum heap size.
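A minimal sketch of querying it from inside an Activity (standard Android API; the "MemoryInfo" log tag is arbitrary):

ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
int memoryClass = am.getMemoryClass(); // approximate per-application heap limit, in MB
Log.d("MemoryInfo", "Memory class: " + memoryClass + " MB");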
Applications can use memory which isn't on the heap but 100+MB seems like a surprisingly large amount.
If you want to find out about analysing memory usage on Android you can't do better than this Stack Overflow answer by Dianne Hackborn, who is one of the Android developers at Google. In short it says analysing memory usage is very difficult and you should take any figures you have with a pinch of salt.
Did you try to configure your heap size? Use the following options to do this.
-Xms set initial Java heap size
-Xmx set maximum Java heap size
To obtain more help run
java -X
Example: java -Xms20m -Xmx100m -cp . MyMain
Good luck.
We all know that Java has a garbage collector by default, but at some point we still need to release some objects ourselves by clearing references to them.
You can also set or configure the heap size for the JVM.
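To verify what the JVM actually received, here is a small self-contained check using the standard Runtime API (the class name HeapInfo is just for illustration):

public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("Total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("Free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}

Running it with java -Xms20m -Xmx100m HeapInfo should show a max heap of roughly the -Xmx value.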
We have a few containers running java processes with docker. One thing we've been noticing is a huge amount of memory that is taken up just by running a simple spring-boot app without even including our own code (just to try and get some kind of memory profile independent of any issues we might introduce).
What I saw was that the memory consumed by docker/the JVM was hovering around 2.5GB. We did have a decent amount of extra dependencies included (Camel, Hibernate, some Spring Boot deps), but that wasn't what really threw me off. What threw me off was that despite docker reporting 2.5GB of memory for the app, running jconsole against it showed it consuming up to 1GB (down to ~200MB after a GC and slowly climbing). The memory footprint reported by docker remained where it was after the GC as well (2.5GB).
Furthermore, when I dumped the heap to see what kinds of objects are taking up that space, it looked like the heap was only 33MB large after I loaded the .hprof file into MAT. None of this makes much sense to me. Currently, I'm looking at the non-heap space in jconsole reported at 115MB while the heap space is at 331MB.
I've already read a ton (on SO and other sites) about the JVM memory regions and some things specifically reporting that the heap dumps might be smaller but none of them were this far off that I could tell and beyond that, many of the suggested things to watch for were that the GC is run whenever a heap dump is taken and that MAT has a setting to show or hide unreachable objects. All of this was taken into account before posting here and now I just feel like something else is at play that I can't capture myself and I haven't found online.
I fully expect that the numbers might be a little off but it seems extreme that they're off by a factor of 10 in the best case scenario and off by nearly a factor of 100 when looking at the docker-reported memory usage.
Does anyone know what I might be missing here?
EDIT: This is also an app running with Java 8, not yet running with Java 11. It's on the JIRA board to do but not yet planned for.
EDIT2: Adding screenshots. The spike in the JConsole screenshot is from running GC.
JConsole gives you the amount of committed memory: 3311616 KiB ~= 3GiB
This is how much memory your java process consumes, as seen by the OS.
It is unrelated to how much heap is currently in use to hold Java objects, also reported by JConsole as 130237 kbyte ~= 130 MiB.
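You can see the same used/committed/max distinction programmatically through the standard management API (a minimal sketch; values are in bytes):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapUsage {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("used:      " + heap.getUsed());      // live objects + garbage not yet collected
        System.out.println("committed: " + heap.getCommitted()); // what the OS has actually handed over
        System.out.println("max:       " + heap.getMax());       // the -Xmx limit
    }
}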
This is also unrelated to how many Objects are actually alive: By default MAT will remove unreachable Objects when you load the heap dump. You can enable the option by going to Preferences -> Memory Analyzer -> Keep Unreachable Objects (See the MAT documentation). So if you have a lot of short lived objects, the difference can be quite massive.
I see that it also reports a Max Heap of about 9GiB. This means you have set the -Xmx parameter to a large value.
HotSpot GCs are not very good at reclaiming unused memory. They tend to use all the space available to them (the max heap size, set by -Xmx) and then never decommit the heap, effectively keeping it reserved for the Java process instead of releasing it to the OS.
If you want to minimize the memory footprint of your process from the OS perspective, I recommend that you set a lower -Xmx, maybe -Xmx1g, so as not to allow Java to grow too much (of course, -Xmx will also need to be high enough to accommodate your application workload!).
If you really want an adaptive heap, you can also switch to G1 (-XX:+UseG1GC) and a more recent Java version, as the HotSpot team has delivered some improvements recently.
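For example (illustrative values; app.jar stands in for your own artifact):

java -Xms256m -Xmx1g -XX:+UseG1GC -jar app.jar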
OS monitoring tools will show you the amount of memory allocated by a process. So this:
means that your java process has 2.664G of memory allocated (Java heap + metaspace).
JConsole shows you the memory that your code is "consuming" (ignoring the metaspace).
I see 2 possible explanations:
You have set -Xms with a huge value
You have a lot of static code (or other content) loaded into your metaspace.
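If the metaspace turns out to be the culprit, it can be capped with a standard HotSpot flag (the 256m value and app.jar are just placeholders); note that on Java 8+ the metaspace is unbounded by default:

java -XX:MaxMetaspaceSize=256m -jar app.jar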
I'm debugging a fairly large project I've been working on (but did not originally create) and I've noticed that sometimes it crashes with an OutOfMemoryError. The code is loading a lot of data from files so this isn't entirely surprising in general.
However, what confuses me is that I'm using VisualVM 1.3.4 to profile the program, and it behaves inconsistently. Most times I've run it, the heap gradually expands to about 2GB (the computer has 16GB of RAM; it's for academic research), with the used heap spiking higher and higher underneath it. Around 2GB, it will crash. The program isn't processing more information as time goes on, though, so it shouldn't grow the heap to 2GB in just a few minutes.
Sometimes, though, I get a sudden crash after about 30 seconds, with a heap size of 250MB and only about 100MB in use. How am I getting a java.lang.OutOfMemoryError: Java heap space if my heap isn't full?
Edit: I'm using Eclipse to run the program, and I have the VisualVM plugin so it gets launched automatically. Also, I'm using Java 7.
Start the application with the VM argument -XX:+HeapDumpOnOutOfMemoryError.
Analyse the Heap Dump and find out what is causing the issue.
Eclipse MAT is an excellent tool for finding out such issues.
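For example (the dump path is arbitrary, and app.jar stands in for your application; the directory must already exist):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar app.jar

The resulting .hprof file can then be opened in MAT.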
You need to set up the JVM's min and max heap memory:
set JAVA_OPTS="-Xms128m -Xmx256m"
Something like that, but with bigger values such as 2G or 4G, whatever fits your workload.
Later edit: As you all know, you can't force the JVM to run the garbage collector (even though you can ask for it), but there are some ways of convincing it to get rid of some items by null-ing their references. Another thing to watch is database objects that might be lazily initialised. The error can appear when you try to create an object that would exceed the max heap memory.
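A minimal illustration of "asking" (largeObject is a hypothetical reference; System.gc() is only a hint the JVM may ignore):

largeObject = null; // drop the reference so the object becomes unreachable
System.gc();        // requests, but does not force, a collection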
Another idea: some careless developer may have programmatically thrown the OutOfMemoryError in some method for some obscure reason. When you reach that part of the code, that's what you get (search the project).
There can be at least 2 reasons for the application to crash with OutOfMemoryError.
Your Java heap is just too small for the amount of data it needs to process. Then you can either increase it as Matei suggested, or analyze a heap dump as Ajay suggested.
Your application leaks memory, meaning that it leaves unneeded data in memory after processing it. In that case increasing the heap will not help in the long run, and your options are either heap dump analysis (again) or a specialised memory leak detection tool such as Plumbr.
Turned out the crash was caused by using the OpenJDK JRE rather than Oracle's JRE. I don't know exactly what the bug is in OpenJDK that makes it crash like this, but changing to Oracle's JRE ultimately solved the problem.
(I was using OpenJDK because I'm on a Linux computer that someone was using for open-source work before me. When I mentioned the crash to him he had the idea that that might be the cause. He was right.)
Do you have a 32-bit operating system without large memory support (PAE on Windows, hugemem kernel on Linux...)? If yes, you may encounter the 2GB per-process memory limit on 32-bit systems.
As a workaround, try setting the JVM parameter -Xss192k to allocate 192KB of stack space per thread, and the parameter -Xmx1024m to use no more than 1GB of heap.
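For example (assuming a standalone jar; adjust the values to your workload):

java -Xss192k -Xmx1024m -jar app.jar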
I have a Java app that has a max heap of 1024M and a perm gen space of 256M.
Does it guarantee that this app will never use more than 1280M (1024+256) ?
Does the stack memory also come from the heap size above or is it extra memory consumption?
What if the Java app uses native code that consumes memory; where does that memory come from: the heap, perm gen, or additional RAM?
I am interested to know how Java uses memory.
Please comment; any links that provide a clear picture are also welcome.
Thank you.
An executing Java app uses more memory than the main heap and permgen space. For example:
There is the memory that holds the executable code of the java program and any shared libraries that are dynamically linked by the executable.
There is the memory used to represent out-of-heap data structures, buffers, etc that are created by the java executable, by its native libraries or by the application's native libraries.
There is the memory used to represent Java thread stacks.
And there's probably more.
There is no recommended way to predict the total memory usage of a Java application. Even measuring it is tricky, especially when you consider that some of that memory may be shared with other JVMs or even other non-Java applications.
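You can at least enumerate the JVM-managed pools (heap and non-heap) from inside the process using the standard management API; this is a sketch, and it still misses thread stacks and native allocations:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolInfo {
    public static void main(String[] args) {
        // One entry per pool, e.g. Eden Space, Old Gen, Code Cache, PermGen/Metaspace...
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s %-8s %8d KB used%n",
                    pool.getName(), pool.getType(), pool.getUsage().getUsed() / 1024);
        }
    }
}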
From your question I see how confused you are about memory management in Java.
Please go through this white paper for a better understanding: Memory Management in the Java HotSpot™ Virtual Machine.
I wish to fine tune my eclipse.ini file to best suit my system and development environment.
http://wiki.eclipse.org/Eclipse.ini is not very helpful.
I would like to know for example:
Given processing power X, RAM of size Y, and Java version Z, what should the values of -Xms and -Xmx be?
Generally speaking, is there a guide or tutorial out there, and if not what has practice taught you?
It's really situation dependent. However keep in mind that these are standard Java VM parameters, not eclipse specific.
In any case, here's a rundown on how to decide:
Xmx is your maximum heap size - If you're going to be using some really memory intensive plug-ins, you're going to want to increase your Xmx size to at least 1024m (-Xmx1024m) whereas if memory is not that important (say you're running vanilla eclipse) it really doesn't matter. Another time that you'd want to increase this is if you're consistently running out of memory.
Xms is your minimum heap size - Again, if you KNOW you're going to be using a ton of memory, why waste time growing the heap? You can start the heap at a specific size immediately. For example, you can set it to -Xms256m and your heap size will start at that.
If you're really looking to tweak eclipse's memory settings, you can't overlook the -XX:MaxPermSize parameter (you set it via -XX:MaxPermSize=256m) which increases the maximum permanent generation space. By default, Java's PermGenSpace is really small so you may receive errors related to this as you load more and more plug-ins into eclipse.
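Putting those together, an eclipse.ini fragment might look like this (values are illustrative, not a recommendation; -XX:MaxPermSize only applies to Java 7 and earlier, since PermGen was removed in Java 8):

-vmargs
-Xms256m
-Xmx1024m
-XX:MaxPermSize=256m

Everything after -vmargs is passed straight to the JVM, so these lines must come last in the file.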
Check this out:
What are the best jvm-settings for Eclipse
Having analyzed a light-load web application running in tomcat using JMX Console, I found that the "PS Old Gen" space grows slowly but constantly. It starts at 200MB and grows by around 80MB per hour.
CPU is not an issue, it runs at 0-1% on average, but somewhere it leaks memory, so it will become unstable some days after deployment.
How do I find out which objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
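For example (standard JDK tools; <pid> is a placeholder for the tomcat process id):

jmap -dump:live,format=b,file=heap.hprof <pid>
jhat heap.hprof

jhat then serves the parsed dump on http://localhost:7000 by default.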
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes output from Java's heap profile, hprof, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching tomcat profiled directly from the IDE (for localhost), or remote profiling using a provided JAR and some run configuration on the server.
I used it in a project for a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap if the additional "consumed space" disappears whenever a GC runs in the old generation.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC, and afterwards the used memory should drop significantly. Only if it does not go down after a GC should you investigate what is still holding onto those objects.
JRockit Mission Control can analyze memory leaks while connected to JVM. No need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and then you can explore the references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it here for free