Increasing heap memory utilization on Java Tomcat application - java

I have a Java application running on a Tomcat7 instance. I am using Java 8.
Now within my application there is a web service of the format http://zzz.xxx.zzz.xxx/someWebservice?somepar1=rrr&somePar2=yyy. This returns a String value of at most 10 characters.
I have now started load testing this service using JMeter. I am putting a load of 100 concurrent connections and getting a throughput of roughly 150 requests/second. The server is a 4-core, 6 GB machine and is only running Tomcat (the application instance). The database instance is running on a separate machine. The JVM is running with a minimum of 2 GB and a maximum of 4 GB of heap. Max perm size is 512 MB. Tomcat has enough threads to cater to my load (max connections/threads/executor values have been set up correctly).
I am now trying to optimize this service, and in order to do so I am analyzing memory consumption using JConsole. CPU usage is not a concern, but when I look at the heap usage, I feel something is not right. What I observe is a sawtooth-shaped graph, which I know is expected, as regular GC clears heap memory.
My concern is that this sawtooth-shaped graph has an upward trend: the troughs in the sawtooth seem to be rising over time. With this trend my server eventually reaches max heap memory in an hour or so and then stabilizes at 4 GB. I believed that if I am putting a CONSTANT load on the server, then heap utilization should follow a constant trend, i.e. a sawtooth graph with its peaks and troughs aligned. Given the upward trend, I suspect there is a memory leak: objects accumulating over time that the GC is unable to clear. I am attaching a screenshot.
[Screenshot: Heap Usage]
Questions:
1) Is this normal behavior? If yes, then why does the heap continuously increase despite no change in load? I don't believe that a load of 100 threads should saturate a 4 GB heap in roughly 30 minutes.
2) What could be the potential reasons here? Do I need to look for memory leaks? Is there any JVM analyzer apart from JConsole which can help me pinpoint the objects that the GC is unable to clear?

The sawtooth pattern most likely stems from minor collections; the dip around 14:30 is then a major collection, which you did not take into account in your reasoning.
Your load may simply be so low that it needs a long time to reach a stable state.
Your observation that the server eventually reaches max heap memory in an hour or so and then stabilizes at 4 GB supports that conclusion, provided you're not seeing any OOMEs.
But there's only so much one can deduce from such charts. If you want to know more you should enable GC logging and inspect the log outputs instead.
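For example (a minimal sketch for a HotSpot Java 8 JVM; the log path is just a placeholder, and for Tomcat these would typically go into CATALINA_OPTS in setenv.sh), GC logging can be enabled with flags along these lines:

-verbose:gc -Xloggc:/path/to/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps

Each entry in the resulting log shows whether a collection was minor or full and how much of the heap survived it, which makes it much easier to distinguish an upward-trending baseline (a possible leak) from normal tenuring into the old generation.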

Related

Java 8 JVM hangs, but does not crash/ heap dump when out of memory

When running out of memory, Java 8 running Tomcat 8 never stops after a heap dump. Instead it just hangs as it maxes out memory. The server becomes very slow and unresponsive because of extensive GC as it slowly approaches max memory. The memory graph in JConsole flatlines after hitting the max. 64-bit Linux / java version "1.8.0_102" / Tomcat 8 / JConsole.
I have set -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath. Does anyone know how to force a heap dump instead of the JVM getting into this unresponsive / very-slow-response mode?
You need to use -XX:+UseGCOverheadLimit. This tells the GC to throw an OOME (or dump the heap, if you have configured that) when the percentage of time spent garbage collecting gets too high. This should be enabled by default for a recent JVM ... but you might have disabled it.
You can adjust the "overhead" thresholds at which the collector gives up using -XX:GCTimeLimit=... and -XX:GCHeapFreeLimit=...; see https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc-ergonomics.html
The effect of "overhead" limits is that your application gets the GC failures earlier. Hopefully, this avoids the "death spiral" effect as the GC uses a larger and larger proportion of time to collect smaller and smaller amounts of actual garbage.
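As a rough sketch (the dump path is only a placeholder, and the GCTimeLimit/GCHeapFreeLimit values shown are simply their documented defaults), the relevant startup flags could look like:

-XX:+UseGCOverheadLimit -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps

Lowering GCTimeLimit and/or raising GCHeapFreeLimit from those defaults makes the "GC overhead limit exceeded" error fire earlier, so the heap dump is written while the JVM is still reasonably responsive.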
The other possibility is that your JVM is taking a very long time to dump the heap. That might occur if the real problem is that your JVM is causing virtual memory thrashing because Java's memory usage is significantly greater than the amount of physical memory.
jmap is the utility that will create a heap dump for any running JVM. This allows you to create a heap dump before a crash:
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr014.html
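For instance (a minimal sketch; the output path is a placeholder and <pid> is the Tomcat process id):

jmap -dump:live,format=b,file=/tmp/heap-before.hprof <pid>

The live option forces a full GC first, so only reachable objects end up in the dump.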
It will be a matter of timing, though, to know when you should create it. You can take subsequent heap dumps and use tools to compare them. I highly recommend the Eclipse Memory Analyzer Tool (MAT) and its dominator tree view for identifying potential memory issues (https://www.eclipse.org/mat/).

Understanding Heap graph from visualvm

I have a program which essentially reads many files (a mix of PDFs, usually below 10 MB, plus XML files of a few KB) from disk and uploads them one at a time into a legacy system. I let it run under VisualVM before going away for the weekend, and this morning I came back to a graph showing that the heap usage was rather uneven over the hour it took the program to run. I was expecting a roughly level amount of heap usage over time.
I am looking for an explanation for this. This is with a 64-bit Oracle JVM under Ubuntu 17.04 (Desktop, not Server) with 32 GB RAM on a 4-core i5-2400 (no hyper-threading). The program is essentially single-threaded, utilizes about 50% of a core, and took the expected time to run.
I understand that memory which is not fully used is released over time. I do not understand why the usage goes down over time, as the load should be quite evenly distributed. Am I seeing the result of the CPU throttling down because the system is otherwise idle? Is some JVM optimization kicking in?
Actual GC logs would be more helpful, but it's probably just the GC heuristics adjusting their decisions about young and old heap size to meet throughput and pause time goals.
As long as you don't place limits on the JVM, it will happily use more memory than the bare minimum needed to keep the program running. If it did not, you would experience very frequent and inefficient GCs. In other words, heap size = n * live object set size, with n > 1. Depending on the collector used there may be no upper limit for n other than available memory, since ParallelGC defaults MaxHeapFreeRatio to 100 while G1 uses 70.
The heuristics are far from perfect; they can paint themselves into a corner in some edge cases or fluctuate unnecessarily between different equilibrium points. But the graph itself does not indicate a problem.
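If you want to see what your particular JVM actually uses for those ratios, a hedged sketch (HotSpot flag names; yourApp.jar is a placeholder, and whether the collector honors manual ratios depends on the collector and on adaptive sizing):

java -XX:+PrintFlagsFinal -version | grep HeapFreeRatio
# yourApp.jar stands in for the actual program
java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar yourApp.jar

Tighter ratios push the collector to shrink the committed heap back toward the live set sooner, at the cost of more frequent resizing.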

How to make the JVM use the maximum (all remaining) memory of a server

I have a DFS-algorithm Java console application that runs faster when more memory is provided. It is just a DFS algorithm, with no I/O or other resource usage outside the JVM; it consumes only CPU and memory. The application can run with 1 GB of memory, but runs much faster with 2 GB, and the more memory provided, the faster it runs. I haven't hit a speed limit even with 12 GB of memory provided. So I want to use all the remaining memory of a server to speed it up. The application does not need to run in parallel; it serves only one request at a time.
And I need to install the application on different servers with different memory sizes.
Is there a way to let the JVM use all the remaining memory of the server?
-XX:MaxRAMFraction=1
However, MaxRAMFraction=1 is not good on every server: on some servers the JVM fails to start because it cannot allocate that much memory, while on others it works fine.
I also tried a wrapper application that reads the system's remaining memory, subtracts an estimate of memory usage other than Xmx, and then starts the real application with the same Xms and Xmx. This method also results in JVM memory allocation errors, because the code below returns much more memory than we can actually use; it is not just a matter of subtracting Xss256m or some additional non-heap JVM memory.
import java.lang.management.ManagementFactory;

// Cast to the com.sun.management subinterface to read physical memory figures
com.sun.management.OperatingSystemMXBean mbean =
        (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
long size = mbean.getFreePhysicalMemorySize(); // free physical memory in bytes
So is there a good way to let the JVM use all the remaining memory of a server?
For large regions of memory I use off-heap storage, which reduces overhead on the GC. One of the benefits is that the region can be any size at runtime, and even larger than main memory if you do it carefully. You can use direct ByteBuffers, but I use a library I wrote which extends the ByteBuffer functionality (sizes well beyond 2 GB, and thread safe): Chronicle Bytes. The largest deployment I know of uses ~100 TB of virtual memory mapped to disk.
We have two data structures on top of Chronicle Bytes: a key-value store, Chronicle Map, and a queue/journal, Chronicle Queue. These can make storing data off heap easier through a higher-level interface.
The way the heap works, the JVM has to reserve the maximum heap size on start-up as a single contiguous block of virtual memory. In particular, the GC assumes random access to this memory during a clean-up, which means that if you have slightly over-utilised your memory, possibly because another process started after yours and some of the heap has been swapped out, you will see a dramatic fall in performance for your whole machine. Windows tends to start swapping your GUI, meaning you can't get back control without a power cycle; Linux isn't as bad, but you will want to kill your process at that point. This makes tuning the heap size to use all memory very hard if the usage of your machine changes.
By using virtual memory, by comparison, the GC doesn't touch it, so unused portions have little impact. You can have areas of virtual memory many times the size of main memory, but only your current working set matters, and that is a size entirely in your control at runtime. Note: on Linux you can have virtual memory sizes 1000x your free disk space, but use this with care; if you run out by touching too many pages your program will crash.
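As a minimal, hedged sketch of the plain-JDK version of this idea (not Chronicle Bytes itself; the file name and region size are arbitrary), a memory-mapped file gives you a region that lives outside the Java heap and can exceed physical RAM:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class OffHeapSketch {
    public static void main(String[] args) throws Exception {
        long size = 1L << 30; // 1 GB region, purely illustrative
        try (RandomAccessFile file = new RandomAccessFile("offheap.dat", "rw");
             FileChannel channel = file.getChannel()) {
            // The mapping is backed by the OS page cache, not the Java heap,
            // so the GC never scans, copies or compacts this memory.
            MappedByteBuffer region = channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
            region.putLong(0, 42L);                // absolute write at offset 0
            System.out.println(region.getLong(0)); // prints 42
        }
    }
}

A direct ByteBuffer (ByteBuffer.allocateDirect) gives a similar off-heap region limited to 2 GB per buffer; as described above, Chronicle Bytes layers a larger, thread-safe API on top of the same kind of mechanism.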

java heap size grows by portions up to some limit

I am writing a simple server application in Java, and I'm running some benchmarks with ApacheBench now.
Right after the start, the resident memory used by the server is about 40 MB. I run a series of 1000 requests (100 concurrent requests). After each series I check the memory usage again. The result seems very strange to me.
During the first runs of the benchmark, 99% of requests are processed in ≈20 ms and the remaining 1% in about 300 ms (!). Meanwhile the memory usage grows. After 5-7 runs this growth stops at 120 MB and requests begin to run much faster - about 10 ms per request. Moreover, the time and memory values remain the same when I increase the number of requests.
Why could this happen? If there were a memory leak, my server would require more and more resources. I can only guess that this is because of some adaptive JVM memory allocation mechanism which increases the heap size in advance.
The heap starts out at Xms and grows as needed up to the specified limit Xmx.
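For example (a hedged sketch; the sizes and jar name are placeholders only):

java -Xms40m -Xmx512m -jar yourServer.jar

With a small -Xms the committed heap grows in steps exactly as you observed; setting -Xms equal to -Xmx would make the JVM commit the full heap up front and remove that ramp-up.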
It is normal for the JVM to take a while to "warm up" - all sorts of optimisations happen at run time.
If the memory climbs up to a point and then stops climbing, you are fine.
If the memory climbs indefinitely and the program eventually throws an OutOfMemoryError then you have a problem.
From your description it looks as if the former is true, so no memory leak.

Tuning Garbage Collection parameters in Java

I have a server-side Java component which has a huge memory demand at startup that mellows down gradually. As an example, at startup the memory requirement might shoot up to 4 GB, and after the initial surge is over it goes down to 2 GB. I have configured the component to start with 5 GB of heap and it starts well; the used memory surges up to 4 GB and then comes down close to 2 GB. The memory held by the heap at this point still hovers around 4 GB, and I want to bring this down (essentially keep the free memory down to a few hundred MB rather than 2 GB). I tried lowering -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio from their default values, but this resulted in garbage collection not being triggered after the initial spike, and the used memory stayed at a higher-than-usual level. Any pointers would greatly help.
First, I would ask why you are worried about freeing up 2 GB of RAM on a server. 2 GB of RAM costs about $100 or less. If this is on a desktop, I guess I can understand the worry.
If you really do have a good reason to think about it, this may be tied to the garbage collection algorithm you are using. Some algorithms will release unused memory back to the OS, some will not. There are some graphs and such related to this at http://www.stefankrause.net/wp/?p=14 . You might want to try the G1 collector, as it seems to release memory back to the OS easily.
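As a hedged sketch of what that might look like (the flag values are illustrative, yourComponent.jar is a placeholder, and whether G1 actually returns memory to the OS depends on the JVM version):

java -XX:+UseG1GC -Xms1g -Xmx5g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 -jar yourComponent.jar

Keeping -Xms well below -Xmx leaves the collector room to shrink the committed heap once the startup surge is over; with -Xms equal to -Xmx there is nothing for it to give back.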
Edit from comments
What if they all choose to max out their load at once? Are you OK with some of them paging memory to disk and slowing the server to a crawl? I would run them on a server with enough memory to run ALL applications at max heap, plus another 4-6 GB for the OS / caching. Servers with 32 or 64 GB are pretty common, and you can get more.
You have to remember that the JVM reserves virtual memory on startup and never gives it back to the OS (until the program exits). The most you can hope for is that unused memory is swapped out (and this is not a good plan). If you don't want the application to use more than a certain amount of memory, you have to restrict it to that amount.
If you have a number of components which each use a spike of memory, you should consider rewriting them to use less memory on startup, or running multiple components in the same JVM so the extra memory is less significant.
