Java : Memory utilization issue (sudden spike observed) - java

I was observing memory utilization for my application/service.
I was running the same load, and through JConsole I could see memory ranging between 1.5 and 1.7 GB (visible in the image). Suddenly I noticed that memory went high for a few seconds, even though nothing had changed in terms of use case (same load).
I need to know the reason why memory suddenly goes up. Nothing in my setup has changed that could cause it.
Is there a bug in my GC parameters?
Your thoughts are requested.
The GC parameters I am using are:
export GC1_OPTS="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=50 -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseAdaptiveGCBoundary"
export GC2_OPTS="-XX:+ExplicitGCInvokesConcurrent"

You need to know what the application is doing in that period. Your load may not have changed, but you may find that a task which normally uses a small amount of memory occasionally uses a large amount. I would use a memory profiler on your application to see how memory is being used and which code is responsible.

You are missing an important part of the question: "What is the problem?"
You see that the memory used increases. That could be a bug in your application code that is triggered somehow, but it is not a problem from the garbage collection point of view.
If you can reproduce this, I would take two heap dumps, one at 1.7 GB and a later one at 2.5 GB.
Then you can use Eclipse MAT and its delta mode to compare the dumps. You will see which extra objects you have, and then you can determine whether it is a problem or not.
With regards to your GC settings, I would get rid of "-XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled" because they usually cause long pause times that are rarely needed. Also, "-XX:+UseAdaptiveGCBoundary" raises the question of what your motivation was for using this parameter; I personally would not.
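If it helps to script the two dumps rather than taking them by hand, a heap dump can also be triggered from inside the JVM via the HotSpot diagnostic MXBean. This is a sketch assuming a HotSpot JVM; the class and file names are illustrative:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    // Writes a heap dump of live (reachable) objects, equivalent to running
    // "jmap -dump:live,format=b,file=<path> <pid>" against this process.
    public static void dump(String path) throws Exception {
        new java.io.File(path).delete(); // dumpHeap refuses to overwrite an existing file
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true); // true = include only reachable objects
    }

    public static void main(String[] args) throws Exception {
        dump("dump-baseline.hprof"); // take the first dump at ~1.7 GB,
        // then later trigger a second one during the spike for MAT's delta view
    }
}
```

Loading both .hprof files into Eclipse MAT and comparing them shows exactly which objects account for the growth.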

Related

JConsole heap dump much smaller than memory usage

We have a few containers running Java processes with Docker. One thing we've noticed is the huge amount of memory taken up just by running a simple spring-boot app, without even including our own code (just to try to get some kind of memory profile independent of any issues we might introduce).
What I saw was that the memory consumed by Docker/the JVM was hovering around 2.5 GB. We did have a decent number of extra deps included (Camel, Hibernate, some spring-boot deps), but that wasn't what really threw me off. What threw me off was that despite Docker saying the app consumed 2.5 GB of memory, running JConsole against it showed it consuming up to 1 GB (down to ~200 MB after a GC and slowly climbing). The memory footprint reported by Docker remained where it was after the GC as well (2.5 GB).
Furthermore, when I dumped the heap to see what kinds of objects were taking up that space, it looked like the heap was only 33 MB after I loaded the .hprof file into MAT. None of this makes much sense to me. Currently, JConsole reports the non-heap space at 115 MB while the heap space is at 331 MB.
I've already read a ton (on SO and other sites) about the JVM memory regions, including several things specifically reporting that heap dumps might be smaller, but none of them were this far off that I could tell. Beyond that, many of the suggested things to watch for were that a GC is run whenever a heap dump is taken, and that MAT has a setting to show or hide unreachable objects. All of this was taken into account before posting here, and now I just feel like something else is at play that I can't capture myself and haven't found online.
I fully expect that the numbers might be a little off, but it seems extreme that they're off by a factor of 10 in the best case and by nearly a factor of 100 when looking at the Docker-reported memory usage.
Does anyone know what I might be missing here?
EDIT: This is also an app running with Java 8, not yet running with Java 11. It's on the JIRA board to do but not yet planned for.
EDIT2: Adding screenshots. Spike in the JConsole screen shot is from running GC.
JConsole gives you the amount of committed memory: 3311616 KiB ~= 3 GiB.
This is how much memory your Java process consumes, as seen by the OS.
It is unrelated to how much heap is currently in use to hold Java objects, also reported by JConsole as 130237 KiB ~= 130 MiB.
This is also unrelated to how many objects are actually alive: by default MAT removes unreachable objects when you load the heap dump. You can change this under Preferences -> Memory Analyzer -> Keep Unreachable Objects (see the MAT documentation). So if you have a lot of short-lived objects, the difference can be quite massive.
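The committed-versus-used distinction above can be observed directly from inside the process via the standard memory MX bean. A minimal sketch (the numbers printed will of course vary per JVM and workload):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapStats {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // "used" is what current objects (live + not-yet-collected) occupy;
        // "committed" is what the OS sees as taken by the heap;
        // "max" corresponds to -Xmx (or the default cap).
        System.out.println("used      = " + heap.getUsed() / (1024 * 1024) + " MiB");
        System.out.println("committed = " + heap.getCommitted() / (1024 * 1024) + " MiB");
        System.out.println("max       = " + heap.getMax() / (1024 * 1024) + " MiB");
    }
}
```

A large gap between `used` and `committed` is exactly the JConsole-versus-Docker discrepancy described in the question.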
I see that it also reports a max heap of about 9 GiB, which means you have set the -Xmx parameter to a large value.
HotSpot GCs are not very good at reclaiming unused memory. They tend to use all the space available to them (the max heap size, set by -Xmx) and then never decommit the heap, effectively keeping it reserved for the Java process instead of releasing it to the OS.
If you want to minimize the memory footprint of your process from the OS perspective, I recommend setting a lower -Xmx, maybe -Xmx1g, so as to not allow Java to grow too much (of course, -Xmx will also need to be high enough to accommodate your application workload!).
If you really want an adaptive heap, you can also switch to G1 (-XX:+UseG1GC) and a more recent Java, as the HotSpot team has delivered some improvements recently.
Dave
OS monitoring tools show you the amount of memory allocated by a process. So this:
means that your Java process has 2.664 GB of memory allocated (Java heap + metaspace).
JConsole shows you the memory that your code is "consuming" (ignoring the metaspace).
I see 2 possible explanations:
You have set -Xms to a huge value
You have a lot of static code (or other content) loaded into your metaspace.
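To check how much of the footprint is metaspace rather than heap, the memory pool beans can be queried from inside the process. A sketch, assuming a Java 8+ HotSpot JVM where the class-metadata pool is named "Metaspace":

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceCheck {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // On Java 8+ HotSpot, class metadata lives in the non-heap "Metaspace" pool
            if ("Metaspace".equals(pool.getName())) {
                long usedMb = pool.getUsage().getUsed() / (1024 * 1024);
                System.out.println("Metaspace used: " + usedMb + " MiB");
            }
        }
    }
}
```

If this number is large, loaded classes (frameworks, generated proxies) rather than -Xms are the likely explanation.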

Java - Allocated space not reduced

I'm developing a Java application which sometimes does some heavy work.
When this is the case, it uses more RAM than usual, so the allocated memory space of the app is increased.
My question is: why is the allocated space not reduced once the work is finished?
Using a profiler, I can see that, for example, 70 MB is allocated but only 5 MB is used!
It looks like the allocated space can only grow, and not shrink.
Thanks
Usually the JVM is very restrictive when it comes to freeing memory it has allocated. You can configure it to free memory more aggressively, though. Try passing these settings to the JVM when you start your program:
-XX:GCTimeRatio=5
-XX:AdaptiveSizeDecrementScaleFactor=1
The JVM decides when to release memory back to the operating system. In my experience with Windows XP, this almost never happens. Occasionally I've seen memory released when the Command Prompt (or Swing window) is minimized. I believe the JVM on Linux is better at returning memory.
Generally there can be 2 reasons.
Your program may have a memory management problem. If, for example, you store objects in a collection and never remove them, they will never be garbage collected. If this is the case, you have a bug that should be found and fixed.
Or your code is OK, but the GC still does not remove objects that are no longer used. The reason is that the GC has a life of its own and makes its own decisions. If, for example, it thinks it has enough memory, it does not remove unused objects until memory usage reaches some threshold.
To recognize which case you have, try calling System.gc(), either programmatically or from your profiler (profilers usually have a button that runs GC). If the unused objects are removed after forcing a GC run, your code is OK. Otherwise, try to locate the bug; the profiler you are already using should help.
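A quick way to try this programmatically is a sketch like the following (illustrative names; note that System.gc() is only a request, and the JVM is free to ignore it):

```java
public class GcDemo {
    // Heap currently in use, in megabytes
    static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        long before = usedMb();
        byte[] big = new byte[64 * 1024 * 1024]; // hold ~64 MB
        long during = usedMb();
        big = null;   // drop the only reference, making the array collectible
        System.gc();  // a hint, not a guarantee
        long after = usedMb();
        System.out.println("before=" + before + "MB during=" + during + "MB after=" + after + "MB");
    }
}
```

If `after` drops back toward `before`, the object was collectible; if in your real application it stays near `during`, something is still holding a reference and that is the bug to find.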

How am I getting a "java.lang.OutOfMemoryError: Java heap space" if my heap isn't full?

I'm debugging a fairly large project I've been working on (but did not originally create) and I've noticed that sometimes it crashes with an OutOfMemoryError. The code is loading a lot of data from files so this isn't entirely surprising in general.
However, what confuses me is that I'm using VisualVM 1.3.4 to profile the program, and it behaves inconsistently. Most times I've run it, the heap gradually expands up to about 2 GB (the computer has 16 GB of RAM; it's for academic research), with the used heap spiking higher and higher underneath it. Around 2 GB, it will crash. The program isn't processing more information as time goes on, though, so it shouldn't grow the heap to 2 GB in just a few minutes.
Sometimes, though, I get a sudden crash after about 30 seconds, with a heap size of 250MB and only about 100MB in use. How am I getting a java.lang.OutOfMemoryError: Java heap space if my heap isn't full?
Edit: I'm using Eclipse to run the program, and I have the VisualVM plugin so it gets launched automatically. Also, I'm using Java 7.
Start the application with the VM argument -XX:+HeapDumpOnOutOfMemoryError.
Analyse the Heap Dump and find out what is causing the issue.
Eclipse MAT is an excellent tool for finding out such issues.
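As a sketch of a typical invocation (the jar name and dump path here are placeholders), the flag is usually combined with a dump location:

```
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar your-app.jar
```

The dump is only written when the error actually occurs, so the flag costs nothing in normal operation.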
You need to set up the JVM's min and max heap memory:
set JAVA_OPTS="-Xms128m -Xmx256m"
Something like that, but with bigger values like 2G, 4G, whatever.
LE: As you all know, you can't force the JVM to run the garbage collector (even though you can ask for it), but there are some ways of convincing it to get rid of some items by nulling their references. Another thing to watch is database objects that might be lazily initialised. That error can appear when you try to create an object exceeding the max heap memory.
Another idea: some developer may have programmatically thrown the OutOfMemoryError in some method, for whatever reason. When you reach that part of the code, that's what you get (search the project).
There can be at least 2 reasons for the application to crash with OutOfMemoryError.
Your Java heap is just too small for the amount of data it needs to process. Then you can either increase it as Matei suggested, or analyze a heap dump as Ajay suggested.
Your application leaks memory, meaning it leaves unneeded data in memory after processing it. Then increasing the heap will not help in the long run, and your options are either heap dump analysis (again) or a specialised memory leak detection tool, such as Plumbr.
Turned out the crash was caused by using the OpenJDK JRE rather than Oracle's JRE. I don't know exactly what the bug is in OpenJDK that makes it crash like this, but changing to Oracle's JRE ultimately solved the problem.
(I was using OpenJDK because I'm on a Linux computer that someone was using for open-source work before me. When I mentioned the crash to him he had the idea that that might be the cause. He was right.)
Do you have a 32-bit operating system without large memory support (PAE on Windows, hugemem kernel on Linux...)? If so, you may be hitting the 2 GB memory limit per process on 32-bit systems.
As a workaround, try setting the JVM parameter -Xss192k to allocate 192 KB of stack space per thread, and -Xmx1024m to use no more than 1 GB of heap.

OutOfMemoryError

I have one main class that contains 5 buttons, each linking to a program/package. Each package runs a JMF program that captures images from a webcam, and it also loads about 15 images from file.
The 1st program to load (regardless of which button I press) always runs correctly. But when I run a program after the 1st program ends, java.lang.OutOfMemoryError: Java heap space occurs.
I'm not sure if Java can't handle all of our images or if it has something to do with JMF image capture.
Maybe you should give more memory to your JVM (-Xmx512m on the command line could be a good start);
then, if it solves the problem, investigate why your program consumes so much memory.
Sun diagnostic tools like jvisualvm can be helpful.
Increase the Java maximum memory and re-run. If you still see OOMs, you may have a leak. To increase the max memory, append -Xmx<new heap size>m to your command line.
Example:
java -Xmx1024m Foo
How much memory are you giving to your JVM? You can give it more using the following: -Xmx1024m (for 1GB, adjust as necessary)
This assumes that you don't have some memory leak in your program. I don't know anything about JMF, this is just general advice for Out of Memory errors.
JVMs run with a limited amount of maximum memory available to them. This is a little counterintuitive and trips a lot of people up (I can't think of many similar environments).
You can increase the max memory the JVM takes by specifying
java -Xmx128m ...
or similar. If you know in advance that you're going to consume that amount of memory, use
java -Xms128m ...
to specify the memory that the JVM will allocate at startup. Note the -Xms vs -Xmx !
Try to check whether you still have references around which prevent the first package/program from being garbage collected.
When the launcher detects that the first program has ended, set all references to the first program (and any objects retrieved from it) to null, to allow the JVM to reclaim the memory and have it ready for the second launch.
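One way to verify that the launcher really has released its references is a WeakReference probe (a sketch with illustrative names; if the weak reference clears after a GC, the old program's state became collectible):

```java
import java.lang.ref.WeakReference;

public class LaunchCheck {
    public static void main(String[] args) throws InterruptedException {
        Object program = new byte[8 * 1024 * 1024]; // stands in for the first program's state
        WeakReference<Object> probe = new WeakReference<>(program);

        program = null; // what the launcher should do when the program ends
        for (int i = 0; i < 10 && probe.get() != null; i++) {
            System.gc();     // request a collection; retry a few times
            Thread.sleep(10);
        }
        System.out.println(probe.get() == null
                ? "first program is collectible"
                : "something still holds a reference");
    }
}
```

In the real launcher you would wrap the actual program object this way; if the probe never clears, a profiler or heap dump will show you what is still holding on to it.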
Java uses a 64 MB heap by default. An alternative to the other suggestions (increasing heap space to 512 MB or 1024 MB) is to start separate JVMs for the controller and the 5 applications. Then if one of your JMF applications crashes (due to insufficient memory), the controller and the other apps keep running.
(this will only work if the applications and the controller are completely decoupled - otherwise, just increase the heap size and dispose all media as soon as you don't need it anymore to prevent from memory leaks)

How is your JVM 6 memory setting for JBOSS AS 5?

I'm using an ICEFaces application that runs over JBOSS, my currently heapsize is set to
-Xms1024m -Xmx1024m -XX:MaxPermSize=256m
what is your recommendation to adjust memory parameters for JBOSS AS 5 (5.0.1 GA) JVM 6?
According to this article:
AS 5 is known to be greedy when it comes to PermGen. When starting, it often throws OutOfMemoryError: PermGen space.
This can be particularly annoying during development when you are frequently hot-deploying an application. In this case, JBoss QA recommends raising the permgen size and allowing class unloading and permgen sweeping:
-XX:PermSize=512m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
But this is more FYI, I'm not suggesting to apply this configuration blindly (as people wrote in comments, "if it ain't broken, don't fix it").
Regarding your heap size, always keep in mind: the bigger the heap, the longer the major GC. Now, when you say "it was definitely too small", I don't really know what this means (what errors, symptoms, etc). To my knowledge, a 1024m heap is actually pretty big for a webapp and should really be more than enough for most of them. Just beware of the major GC duration.
Heap: Start with 512 MB, set the cap to where you believe your app should never get, and not to make your server start swapping.
Permgen: That's usually stable enough, once the app reads all classes used in the app. If you have tested the app and it works with 256 MB, then leave it so.
#wds: It's definitely not a good idea to set the heap maximum as high as possible for two reasons:
Large heaps make full GC take longer. If you have PermGen scanning enabled, a large PermGen space will take longer to GC as well.
JBoss AS on Linux can leave unused I/O handles open long enough to make Linux clean them up forcibly, blocking all processes on the machine until it is complete (might take over 1 minute!). If you forget to turn off the hot deploy scanner, this will happen much more frequently.
This would happen maybe once a week in my application until I:
decreased -Xms to a point where JBoss AS startup was beginning to slow down
decreased -Xmx to a point where full GCs happened more frequently, so the Linux I/O handle clean up stopped
For developers I think it's fine to increase PermGen, but in production you probably want to use only what is necessary to avoid long GC pauses.
