Troubleshooting Java memory usage exception - java

I'm trying to troubleshoot a Java program that requires increasingly more memory until it cannot allocate any more and then it crashes.
EDIT More information about the program: it is an indexer going through thousands of documents and indexing them for search. The documents are read from MongoDB and, after some processing, written back to MongoDB. During the processing I'm using RocksDB (rocksdb-jni version 5.13.4 from Maven). There is some mention in this GitHub issue of RocksDB memory usage growing uncontrollably, but I'm not sure it is related.
Monitoring the process with visualvm results in the following plot:
but running htop on the machine shows totally different stats:
There is a difference of several GBs of memory that I'm unable to trace the source of.
The program is launched with the following VM arguments:
jvm_args: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=<port> -Djava.rmi.server.hostname=<hostname> -Xmx12G -XX:+UseStringDeduplication
The system has 32GB of RAM and no swap. Of these 32GB, ~10GB are always taken by a tmpfs partition, ~8GB by MongoDB, and the remaining 12GB are assigned to the program. EDIT The visualvm screenshot above shows 20GB of heap size because it was from a previous run where I passed -Xmx20G; the behaviour, however, is the same whether I assign 12GB or 20GB to the heap. The behaviour also does not change if I remove the tmpfs partition, freeing 10 more GB: the program just takes longer, but it eventually runs out of memory.
I have no idea where this memory usage that is not shown in visualvm but appears in htop is coming from. What tools should I use to understand what is going on? The application is running on a remote server, so I would like a tool that only works in the console or can be configured to work remotely, like visualvm.

I always use JProfiler, but I hear JetBrains has a good one as well; both can connect remotely.
If possible, I would try to create a (local) setup where you can test it freely.
Several possible solutions are mentioned in the RocksDB issue; do any of them work?
RocksDB seems to need some configuration; how did you configure it?
I am not familiar with RocksDB, but I see there is some indexing and caching going on. How much data are you processing, and what index/caching configuration do you have? Are you sure all of this should fit in memory?
As far as I know, the memory mismatch arises because memory allocated through JNI is not shown by default by most tools. There are some flags to improve this: NativeMemoryTracking. I can't recall whether this will add those numbers to your visualvm overviews as well.
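A sketch of how NativeMemoryTracking is typically used (the PID and indexer.jar are placeholders for your own process). One caveat: NMT accounts for the JVM's own native allocations (metaspace, GC structures, thread stacks, code cache) but not memory malloc'd internally by JNI libraries such as RocksDB, so it narrows the gap down rather than fully explaining it:

```shell
# Launch with NMT enabled (summary mode; adds a small runtime overhead):
java -XX:NativeMemoryTracking=summary -Xmx12G -jar indexer.jar

# Then, against the running process:
jcmd <pid> VM.native_memory summary        # current native allocations by category
jcmd <pid> VM.native_memory baseline       # record a baseline
jcmd <pid> VM.native_memory summary.diff   # growth since the baseline
```

If the diff stays flat while htop's RSS keeps climbing, the growth is most likely outside the JVM's accounting, which would point back at the native RocksDB allocations.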
karbos-538, I just noticed this is quite an old question; I hope the application is working by now. Which part of the question is still relevant to you?

Related

Measure peak memory consumption (of a Java Application) at runtime?

I have to run a couple of java services on my machine to obtain a certain dev environment (and get my not-java-related work done)
java -Xmx400m -jar foo-app/target/foo-app-SNAPSHOT.jar
java -Xmx250m -jar bar-app/target/bar-app-SNAPSHOT.jar
...
To avoid running out of memory, I need to limit the memory usage. The default (512m, AFAIK) is too high for my machine, so I lowered the limits somewhat (on a wild-guess basis). Except for one app, where I learned the hard way (crashes, even freezes, and thankfully some .pid error files left behind in the project folder...) that I had better settle a little higher:
java -Xmx800m -jar doo-app/target/doo-app-SNAPSHOT.jar
Question: is there a way to track the memory usage of a certain app over time?
Via some Java command-line parameter, or even with ps -ae, htop, or similar? (That is, without fiddling with the source itself, swapping garbage collectors, etc.)
I see plenty of numbers, but I have no idea which ones belong to which running Java project, or what could roughly indicate a proper peak memory consumption (in a -Xmx___m sense).
I work under Ubuntu-MATE 16.04, x64.
The best way to analyze memory consumption is a profiler. Your JDK ships with the jvisualvm profiler, which is absolutely sufficient for this task. A (lengthy) tutorial can be found here: https://engineering.talkdesk.com/ninjas-guide-to-getting-started-with-visualvm-f8bff061f7e7
Other approaches are basically shotgun-style: reduce the -Xmx, then generate load in the system and see if it runs OOM. If you do NOT have a straight control flow, you have no way to predict the memory used.
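For the "ps or similar" part of the question, a minimal sketch that samples a process's resident set size (RSS) over time with nothing but standard tooling. It is demonstrated here against a short-lived background `sleep`; substitute the PID of your own java process (found with e.g. `pgrep -f foo-app`):

```shell
# Sample the resident set size (RSS, in kilobytes) of one process once per second.
sleep 10 &
pid=$!
for i in 1 2 3; do
  ps -o rss= -p "$pid"   # prints RSS in KB at this instant
  sleep 1
done
kill "$pid"
```

For a long-running measurement you would redirect to a log with timestamps, e.g. `while sleep 60; do date; ps -o rss= -p <pid>; done >> mem.log`, and take the maximum as a rough peak for sizing -Xmx (remembering that RSS includes non-heap memory, so it will sit above the -Xmx value).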

Jmeter is out of heap memory

I'm trying to run a performance test against multiple databases, reading the info from a CSV file, but after a while JMeter fails the test cases because it runs out of memory.
I tried increasing the heap like this: "java -XX:MaxPermSize=1024m -Xms3072m -Xmx3072m -jar Apache-JMeter.jar", but I get the same result.
JMeter is also creating MySQL connections to 5 different databases.
I'll assume you have a 64 bit operating system and JVM and lots of RAM available.
You can't guess about these things. Better to get some data using a profiler. Use something like dynaTrace, JProfiler or, if you have an Oracle JVM, the Visual VM tool that ships with the JVM.
You can't figure out what the problem is until you have some data. Maybe you need to write your tests differently so you don't have so much data in memory all at once.
First things first: are you modifying those command lines in the JMeter .bat file correctly? Based on your dump we should be able to see how much RAM you are ACTUALLY using.
e.g.
java.lang.OutOfMemoryError: Java heap space Dumping heap to java_pid999999.hprof ... Heap dump file created [591747609 bytes in 1321.244 secs] In this case this means I was running a JMeter configured with a heap of roughly 591747609 bytes, i.e. about 564 MB of RAM.
From this we can tell whether your settings are even being applied.
If that's fine, post some profiler info anyway and we'll see.
but consider these points:
1) JMeter in GUI mode is a real bottleneck if you want to see the performance of your Java application; consider switching to non-GUI/remote testing.
2) Check best practices for JMeter configuration; there are some sub-optimal default settings in JMeter which hog WAY too much memory, and you might want to turn them off.
Give us an update and we'll see what advice can be given.
Make sure that you
Run JMeter in non-GUI mode
Disable all listeners, especially View Results Tree and View Results in Table ones
Use CSV as results output format
Avoid queries producing large result sets
Use Post Processors and Assertions only when and where required
Use the latest version of JMeter on a 64-bit OS with the latest 64-bit JDK
If you still experience OOM errors you will need to either switch to machine having more RAM or consider distributed testing.
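Putting the first three points together, a hedged sketch of a non-GUI run with a larger heap (plan.jmx and results.csv are placeholders; JVM_ARGS is honored by the bin/jmeter startup script in recent JMeter versions):

```shell
# Run the test plan headless with a 3 GB heap; -n = non-GUI, -t = test plan,
# -l = results file (CSV by default via jmeter.save.saveservice settings).
JVM_ARGS="-Xms3g -Xmx3g" ./bin/jmeter -n -t plan.jmx -l results.csv
```

Setting the heap this way, rather than on a hand-rolled `java -jar` line, ensures the rest of the script's defaults (GC flags, classpath) stay intact.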

Java : Get heap dump without jmap or without hanging the application

In some circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. Since the application is using several GB of memory, jmap causes it to stop responding, which creates problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB in 6 hours. In an attempt to find the memory usage trend, we tried to collect a heap dump every hour after restarting the application. But, as said, since jmap causes the application to hang, we need to restart it, and so we are not able to get the trend of memory usage.
Is there a way to get a heap dump without hanging the application, or is there a utility other than jmap to collect heap dumps?
Thoughts on this are highly appreciated, since without the memory usage trend it is very difficult to fix the issue.
Note: Our application runs in CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap! The dump files are also much smaller, but they're enough to give you an idea of where the leaks are.
At the time of writing this answer, there are bugs in Memory Analyzer and IBM HeapAnalyzer that prevent them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!) that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is that you look at a single heap dump, and use the stats to figure out what kind(s) of object are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
You can use GDB to get the heap dump without running jmap on the target VM however this will still hang the application for the amount of time required to write the heap dump to disk. Assuming a disk speed of 100MB/s (a basic mirrored array or single disk) this is still 2 minutes of downtime.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
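A sketch of the GDB approach described in the blog post above (the PID and paths are illustrative). The JVM is still paused while the core is written, but dumping a raw core is much faster than a live jmap dump; the slow hprof conversion then happens offline against the core file:

```shell
# Dump a core of the running JVM (pauses it only for the raw core write):
gcore -o /tmp/jvm.core 25092

# Post-process the core into an hprof file offline (pre-JDK-9 jmap syntax:
# jmap <options> <java executable> <core file>):
jmap -dump:format=b,file=/tmp/heap.hprof /usr/bin/java /tmp/jvm.core.25092
```

The resulting /tmp/heap.hprof can then be opened in any heap-dump analyzer without the production JVM being involved.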
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM but it's not available yet. VMWare's hot-migration comes close but depends on your allocation rate not exceeding network bandwidth and it doesn't save snapshots. Petition them to add it for you, it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap. It is the best tool for tracking down memory leaks. However, collecting a heap dump is slow, let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue of where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to look for a problem.
Here's the command for collecting a histogram.
jcmd <pid> GC.class_histogram filename=histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM

Grails application hogging too much memory

Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS CentOS 5.x (64bit)
VPS server with 384M of memory
JAVA_OPTS : tried many combinations- including the following
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I have created a blank Grails application, i.e. simply by running the command grails create-app, and then packaged it as a WAR
I am running Tomcat on a VPS Server
When I simply start the Tomcat server, with no apps deployed, the free memory is about 236M
and used memory is about 156M
When I deploy my "blank" application, the memory consumption spikes to 360M and finally the Tomcat instance is killed as soon as it takes up all free memory
As you have seen, my app is as light as it can be.
Not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed down to this scenario which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine
The memory consumption of the Java process shot from 32 M to 107M. But it did not crash and it remained under acceptable limits
So the hunt for answer continues... I wonder if something is wrong about my Linux box. Not sure what though...
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple-blank app should work on my configuration.
It is a false economy to try to run a long running Java-based application in the minimal possible memory. The garbage collector, and hence the application will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out from a memory profiler, or by examining the GC logging.
Then set the Java heap size to be (say) twice the measured working set size or more. (This is the point I was trying to make above.)
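The working-set measurement in step 2 can come from GC logging; a hypothetical JAVA_OPTS for this, using the pre-Java-9 flag names that match the Java 1.6 stack described in the question (the log path is a placeholder):

```shell
# Enable GC logging for the Tomcat JVM so the steady-state working set can be read off:
export JAVA_OPTS='-Xmx512M -XX:MaxPermSize=256M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat/gc.log'
```

The heap occupancy remaining after each full GC in the log approximates the working set; per the advice above, size -Xmx to roughly twice that number.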
Actually, there is another possible cause for your problems. Even though you are telling Java to use heaps of a given size, it may be that it is unable to do this. When the JVM requests memory from the OS, there are a couple of situations where the OS will refuse.
If the machine (real or virtual) that you are running the OS on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap, and permgen numbers together. There is the memory used by the executable + DLLs, memory used for I/O buffers, and possibly other things.
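To see what is actually mapped beyond the heap and permgen, a sketch using standard Linux tooling (PID 1234 is hypothetical; pmap is from procps and is available on CentOS):

```shell
# Column 3 of `pmap -x` output is RSS in KB; show the ten largest resident mappings:
pmap -x 1234 | sort -n -k3 | tail -n 10

# The last line of the report is the process-wide total (virtual and resident):
pmap -x 1234 | tail -n 1
```

Large anonymous mappings here that don't correspond to the Java heap are usually thread stacks, I/O buffers, or native library allocations of the kind described above.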
384MB is pretty small. I'm running a small Grails app in a 512MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200MB. I'm running a 32-bit Linux and JDK though, no sense wasting all that memory in 64-bit pointers if you don't have access to much memory anyway.
Can you try deploying a tomcat monitoring webapp e.g. psiprobe and see where the memory is being used?

Alfresco Community on Tomcat starts very slow

We're currently testing out Alfresco Community on an old server (only 1GB of RAM). Because this is the Community version, we need to restart it every time we change the configuration (we're trying to add some features like generating previews of DWG files, etc.). However, restarting takes a very long time (about 4 minutes, I think). This is probably due to the limited amount of memory available. Does anybody know some features or settings that can improve this restart time?
As with all performance issues there is rarely a magic bullet.
Memory pressure - the app is starting up, but the 512m heap is only just barely enough to fit the application, and it is spending half of the start-up time running GC.
Have a look at any of the following:
1. -verbose:gc
2. jstat -gcutil
3. jvisualvm - much nicer UI
You are trying to see how much time is being spent in GC; look for many full garbage collection events that don't reclaim much of the heap, i.e. occupancy only dropping from 99% to 95%.
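The jstat option from the list above might be used like this (PID 1234 is hypothetical):

```shell
# Print GC utilization for the JVM every 1000 ms:
jstat -gcutil 1234 1000
```

In the output, watch the O column (old-generation occupancy, as a percentage) alongside FGC (cumulative full-GC count): repeated full GCs with O stuck near 100% is the "not reclaiming much" pattern described above, and means the heap is simply too small.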
Solution - more heap, nothing else for it really.
You may want to try -XX:+AggressiveHeap to get the JVM to max out its memory usage on the box; the only trouble is that with only 1GB of memory it's going to be limited. List of all JVM options
Disk IO - if the box itself is not running at close to 100% CPU during startup (assuming 100% of a single core; startup is normally single-threaded), then there may be some disk IO the application is doing that is the bottleneck.
Use operating system tools such as Windows Performance Monitor to check for disk IO. It may be that it isn't the application causing the IO; it could be swap activity (page faulting).
Solution: either fix the app (not too likely), get faster disks/a faster computer, or add more physical memory to the box.
Two of the most common reasons why Tomcat loads slowly:
You have a lot of web applications. Tomcat takes some time to create the web context for each of those.
Your webapp has a large number of files in its web application directory. Tomcat scans the web application directories at startup.
Also have a look at the Java performance tuning whitepaper; further, I would recommend Lambda Probe (www.lambdaprobe.org/d/index.htm) to see if you are satisfied with your GC settings. It has nice realtime GC and memory tracking for Tomcat.
I myself have Alfresco running with example 4.2.6 from the Java performance tuning whitepaper:
4.2.6 Tuning Example 6: Tuning for low pause times and high throughput
Memory settings are also very nicely explained in that paper.
kind regards Mahatmanich
