Find Java virtual memory resource hog

I'm currently facing a very strange problem. I have written a simple servlet which runs within a self-hosted Jetty container. The servlet is a logging endpoint for JS scripts, so it just runs very simple code to log to Graylog and to some files (managed by a log4j file appender).
The admin complained to me that the servlet hogs up to 10.5 GB of virtual memory, which caused the whole machine to slow down. This had an impact on the performance of some other monitoring services.
Restarting the servlet fixed the problem temporarily, but the question is: how can I find and fix the spots in the code causing such memory hogging?
Edit:
I start the application with the -Xmx50m switch.
Edit:
The following things have been investigated: I started Eclipse Memory Analyzer and JConsole to look into the application while some Ruby scripts sent requests (40 to 70 requests per minute, which is more than the servlet is getting in production at the moment).
With this setting:
Heap size: 4MB
Running threads average: 19 (peak at 23)
Virtual Memory: 5GB
Restarting the servlet sped up the server again. The only suspicious figure for the servlet was the 10.5 GB of virtual memory.

Virtual memory doesn't use many resources by itself; only resident memory matters. You can create a process which uses 8 TB of virtual memory and it still has little impact on resources.
On Linux the "simplest" way to inspect virtual memory is to read /proc/{pid}/maps, though this is pretty cryptic.
I would check the resident memory, as that is what really matters, but I suspect it is close to your 10.5 GB if they are complaining (assuming they know what they are talking about, which I wouldn't assume).
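For example, one way to compare virtual and resident size on Linux is to look at VmSize and VmRSS in /proc, or at the per-mapping breakdown from pmap (a hedged sketch; the PID is a placeholder):
# Replace 1234 with the Jetty process ID (e.g. from jps or pgrep).
PID=1234
# Total virtual size (VmSize) vs. resident size (VmRSS):
grep -E 'VmSize|VmRSS' /proc/$PID/status
# Per-mapping breakdown; large anonymous mappings are usually heap, thread stacks or direct buffers:
pmap -x $PID | sort -k3 -n | tail -20
If VmRSS is small while VmSize is huge, the admin is most likely looking at the wrong number.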

how can I find and fix spots in the code causing such memory hogging
Start by searching this site. There are literally thousands of results.
For your specific case, I'd look for the following (quick checks are sketched after this list):
An unreasonably large heap specification, via the -Xmx command-line argument when starting Java. For a simple servlet, 100-200 MB should be plenty.
An excessive number of threads. Each thread reserves space for its stack (typically 512 KB to 1 MB by default, controlled by -Xss).
Large memory-mapped files. The way you describe your servlet, you shouldn't be using any of these.
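A hedged sketch of checking each of those points from the command line (the PID is a placeholder; jcmd, jstack and pmap are standard JDK/Linux tools):
PID=1234   # the Jetty process ID, e.g. from jps
# 1. Which heap limit is actually in effect?
jcmd $PID VM.flags | tr ' ' '\n' | grep MaxHeapSize
# 2. How many live Java threads are there?
jstack $PID | grep -c 'java.lang.Thread.State'
# 3. Are there any large mappings (over ~100 MB)? Column 2 of pmap -x is the size in KB.
pmap -x $PID | awk '$2+0 > 100000'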

Related

How to prevent a Spring Boot / Tomcat (Java8) process be OOM-killed?

Since moving to Tomcat 8 / Java 8, now and then the Tomcat server is OOM-killed (OOM = out-of-memory kill by the Linux kernel).
How can I prevent the Tomcat server from being OOM-killed?
Can this be the result of a memory leak? I guess I would get a normal out-of-memory message in that case, but no OOM kill. Correct?
Should I change the heap size settings?
Should I change the Metaspace size settings?
Knowing which Tomcat process was killed, how can I retrieve information so that I can reconfigure the Tomcat server?
First, check that the OOM kill isn't being triggered by another process in the system, and that the server isn't simply overloaded with other processes. It could be that Tomcat is being unfairly targeted by the OOM killer when some other greedy process is the real culprit.
The maximum heap size (-Xmx) should be set smaller than the physical RAM on the server. If it is larger, paging will cause desperately poor performance when garbage collecting.
If it's caused by the metaspace growing in an unbounded fashion, then you need to find out why that is happening. Simply setting a maximum metaspace size will cause an OutOfMemoryError once you reach the limit you've set, and raising the limit is pointless, because eventually you'll hit any higher limit you set.
Run your application and, before it crashes (not easy of course, you'll have to judge it), capture diagnostics from the Tomcat process: kill -3 gives you a thread dump, and a heap dump (taken with a tool such as jmap) lets you analyse why metaspace is growing so big. That is usually caused by dynamically loading classes. Is this something your application is doing? More likely, it's some framework doing it. (NB: the OOM killer will kill -9 the Tomcat process, and you won't be able to gather diagnostics after that, so you need to let the app run and intervene before this happens.)
Also check out this question - there's an intriguing answer which claims that an obscure fix to an XML binding setting cleared the problem (highly questionable, but it may be worth a try): java8 "java.lang.OutOfMemoryError: Metaspace"
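A hedged sketch of gathering those diagnostics before the kernel steps in (the pgrep pattern and file paths are placeholders):
# Find the Tomcat JVM; adjust the pattern to your setup.
PID=$(pgrep -f catalina | head -n1)
# Thread dump (SIGQUIT), written to catalina.out -- a useful first look:
kill -3 "$PID"
# Heap dump of live objects, for analysis in MAT or VisualVM:
jmap -dump:live,format=b,file=/tmp/tomcat-heap.hprof "$PID"
# Class loader statistics (JDK 8) -- a steadily growing number of loaders or classes usually points at the source of metaspace growth:
jmap -clstats "$PID"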
Another very good solution is transforming your application into a Spring Boot JAR running in Docker. Normally such an application has a lot less memory consumption.
So, the steps to get huge improvements (if you can move to a Spring Boot application):
Migrate to a Spring Boot application. In my case, this took 3 simple actions.
Use a light-weight base image. See below.
VERY IMPORTANT - use the Java memory balancing options. See the last line of the Dockerfile below. This reduced my running container's RAM usage from over 650 MB to only 240 MB, running smoothly - a saving of over 400 MB out of 650 MB!
This is my Dockerfile:
# Light-weight Alpine base image keeps the footprint small
FROM openjdk:8-jdk-alpine
ENV JAVA_APP_JAR your.jar
ENV AB_OFF true
EXPOSE 8080
ADD target/$JAVA_APP_JAR /deployments/
# Let the JVM derive its heap size from the container's cgroup memory limit
CMD ["java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-jar", "/deployments/your.jar"]

All RAM usage getting exhausted for java project

My project is based on the Spring Framework (Java). The WAR size of my application is about 38 MB. I hosted my application on a VPS with 1 GB of RAM. Within a few days I noticed that all the RAM was getting exhausted.
I then extended the RAM by 1 GB, so a single WAR file is now running on 2 GB of RAM under Tomcat. After 2-3 days I checked again: the 2 GB of RAM is also exhausted, showing around 80 to 90 percent usage.
Currently the system is under development and no one is using the application, yet all the RAM is getting used.
Is that normal behavior, or is something wrong?
Or do I need to change any settings?
Can anyone tell me how much RAM is normally used for a Java project?
I checked the VPS RAM usage with the 'free -m' command. It shows the -/+ buffers/cache line as 557 [used] 1444 [free].
The Mem values are 2001 [total] 1736 [used] 265 [free] 38 [shared] 130 [buffers] 1048 [cached].
In addition to endless loops, check for memory leaks and for resources that are never released, such as DB connections. Refer to similar issues reported by the community, for example:
Why is this Java program taking up so much memory?
How to reduce Spring memory footprint
http://www.toptal.com/java/hunting-memory-leaks-in-java
In my opinion, 1 GB of RAM is normally enough for a small Java application. You need to look into your code for endless loops or schedulers that run forever.
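One sanity check worth doing here (a hedged sketch; the pgrep pattern is a placeholder): free counts the Linux page cache as "used", and the "-/+ buffers/cache" line quoted in the question suggests only about 557 MB is actually in use, so compare the OS-level numbers with what the JVM itself reports:
PID=$(pgrep -f tomcat | head -n1)
# Resident (RSS) and virtual (VSZ) size of the Tomcat JVM, in KB:
ps -o pid,rss,vsz,cmd -p "$PID"
# Heap occupancy and GC activity, one sample every 5 seconds; a heap that keeps growing after full GCs points to a leak:
jstat -gcutil "$PID" 5000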

Limit total memory consumption of Java process (in Cloud Foundry)

Related to these two questions:
How to set the maximum memory usage for JVM?
What would cause a java process to greatly exceed the Xmx or Xss limit?
I run a Java application on Cloud Foundry and need to make sure that the allocated memory is not exceeded. Otherwise, and this is the current issue, the process is killed by Cloud Foundry monitoring mechanisms (Linux CGROUP).
The Java Buildpack automatically sets sane values for -Xmx and -Xss. By tuning the arguments and configuring the (maximum) number of expected threads, I'm pretty sure that the memory consumed by the Java process should be less than the upper limit which I assigned to my Cloud Foundry application.
However, I still experience Cloud Foundry "out of memory" errors (NOT the Java OOM error!):
index: 3, reason: CRASHED, exit_description: out of memory, exit_status: 255
I experimented with the MALLOC_ARENA_MAX setting. Setting the value to 1 or 2 leads to slow startups. With MALLOC_ARENA_MAX=4 I still saw an error as described above, so this is not a solution to my problem.
Currently I test with very tight memory settings so that the problem is easier to reproduce. However, even with this, I have to wait about 20-25 minutes for the error to occur.
Which arguments and/or environment variables do I have to specify to ensure that my Java process never exceeds a certain memory limit? Crashing with a Java OOM Error is acceptable if the application actually needs more memory.
Further information regarding MALLOC_ARENA_MAX:
https://github.com/cloudfoundry/java-buildpack/pull/160
https://www.infobright.com/index.php/malloc_arena_max/#.VmgdprgrJaQ
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
EDIT: A possible explanation is this: http://www.evanjones.ca/java-bytebuffer-leak.html. As I currently see the OOM issue when doing lots of outgoing HTTP/REST requests, these buffers might be to blame.
Unfortunately, there is no way to definitively enforce a memory limit on the JVM. Most of the memory regions are configurable (-Xmx, -Xss, -XX:MaxPermSize, -XX:MaxMetaspaceSize, etc.), but the one you can't control is native memory. Native memory contains a whole host of things, from memory-mapped files to native libraries to JNI code. The best you can do is profile your application, find out where the memory growth is occurring, and either fix the growth or give yourself enough breathing room to survive it.
Certainly unsatisfying, but in the end not much different from other languages and runtimes that have no control over their memory footprint.
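For reference, a hedged sketch of the regions that can at least be capped with standard HotSpot flags; the values are placeholders rather than recommendations, and -XX:MaxDirectMemorySize is the relevant one if the direct ByteBuffer growth mentioned in the question's edit is to blame:
# Placeholder values -- tune for your application; app.jar is hypothetical.
JAVA_OPTS="-Xmx512m -Xss256k -XX:MaxMetaspaceSize=128m -XX:MaxDirectMemorySize=64m -XX:ReservedCodeCacheSize=64m"
java $JAVA_OPTS -jar app.jar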

Does the application server affect Java memory usage?

Let's say I have a very large Java application that's deployed on Tomcat. Over the course of a few weeks, the server will run out of memory, application performance is degraded, and the server needs a restart.
Obviously the application has some memory leaks that need to be fixed.
My question is.. If the application were deployed to a different server, would there be any change in memory utilization?
Certainly the services offered by the application server might vary in their memory utilization, and if the server includes its own unique VM -- i.e., if you're using J9 or JRockit with one server and Oracle's JVM with another -- there are bound to be differences. One relevant area that does matter is class loading: some app servers behave better than others when applications are redeployed or reconfigured. Warm-starting the application after a configuration change can result in serious memory leaks due to class-loading problems on some server/VM combinations.
But none of these are really going to help you with an application that leaks. It's the program using the memory, not the server, so changing the server isn't going to affect much of anything.
There will probably be a slight difference in memory utilisation, but only in as much as the footprint differs between servlet containers. There is also a slight chance that you've encountered a memory leak with the container - but this is doubtful.
The most likely issue is that your application has a memory leak - in any case, the cause is more important than a quick fix - what would you do if the 'new' container just happens to last an extra week etc? Moving the problem rarely solves it...
You need to start analysing the application's heap memory to locate the source of the problem. If your application is crashing with an OOME, you can add this to the JVM arguments.
-XX:+HeapDumpOnOutOfMemoryError
If the performance is just degrading until you restart the container manually, you should get into the routine of triggering periodic heap dumps. A timeline of dumps is often the most help, as you can see which object stores just grow over time.
To do this, you'll need a heap analysis tool:
JHat or IBM Heap Analyser or whatever your preference :)
Also see this question:
Recommendations for a heap analysis tool for Java?
Update:
And this may help (for obvious reasons):
How do I analyze a .hprof file?
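To build the timeline of dumps described above, one option is simply to capture a heap dump at regular intervals and compare which object populations grow between dumps (a hedged sketch; the pgrep pattern, interval and paths are placeholders):
PID=$(pgrep -f tomcat | head -n1)
# One live-object heap dump per hour, three in total:
for i in 1 2 3; do
    jmap -dump:live,format=b,file=/tmp/heap-$(date +%Y%m%d-%H%M%S).hprof "$PID"
    sleep 3600
done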

Grails application hogging too much memory

Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS: CentOS 5.x (64-bit)
VPS server with 384 MB of memory
JAVA_OPTS: tried many combinations, including the following:
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I created a blank Grails application, i.e. simply by running grails create-app, and then packaged it as a WAR.
I am running Tomcat on a VPS Server
When I simply start the Tomcat server, with no apps deployed, free memory is about 236 MB and used memory is about 156 MB.
When I deploy my "blank" application, memory consumption spikes to 360 MB and the Tomcat instance is eventually killed as soon as it takes up all the free memory.
As you have seen, my app is as light as it can be.
Not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed down to this scenario which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine
The memory consumption of the Java process shot from 32 MB to 107 MB, but it did not crash and remained within acceptable limits.
So the hunt for an answer continues... I wonder if something is wrong with my Linux box. Not sure what, though...
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple-blank app should work on my configuration.
It is a false economy to try to run a long-running Java-based application in the minimum possible memory. The garbage collector, and hence the application, will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out from a memory profiler, or by examining the GC logging (a sketch of enabling it follows this list).
Then set the Java heap size to be (say) twice the measured working set size or more. (This is the point I was trying to make above.)
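A hedged sketch of enabling GC logging for the Java 6 era described in the question (the log path is a placeholder; on Java 9+ the unified -Xlog:gc* option replaces these flags):
# GC logging flags for HotSpot 6/7/8; the heap sizes mirror the question's second JAVA_OPTS line.
export JAVA_OPTS="-Xms128M -Xmx128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat/gc.log"
The heap occupancy after each full GC approximates the working set; per step 3 above, -Xmx would then be set to roughly twice that value.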
Actually, there is another possible cause for your problems. Even though you are telling Java to use a heap of a given size, it may be unable to do so. When the JVM requests memory from the OS, there are a couple of situations where the OS will refuse.
If the machine (real or virtual) that the OS is running on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap and permgen numbers together. There is the memory used by the executable and its shared libraries, memory used for I/O buffers, and possibly other stuff.
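A hedged sketch of checking those conditions on the VPS (standard Linux commands; the beancounters file only exists on OpenVZ/Virtuozzo containers, which many small VPSes of that era were):
# Free RAM and swap, in MB:
free -m
# Per-process limits in effect (look at "max memory size" and "virtual memory"):
ulimit -a
# On OpenVZ/Virtuozzo VPSes, memory limits are exposed here instead:
cat /proc/user_beancounters 2>/dev/null | grep -E 'privvmpages|physpages'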
384MB is pretty small. I'm running a small Grails app in a 512MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200MB. I'm running a 32-bit Linux and JDK though, no sense wasting all that memory in 64-bit pointers if you don't have access to much memory anyway.
Can you try deploying a tomcat monitoring webapp e.g. psiprobe and see where the memory is being used?
