I'm running into an OutOfMemoryError (OOM) in my app. It seems to be caused by an object that takes up more and more memory, but I can't figure out which one.
I would like to use the Android Profiler to find the problem. But mine is a real-time app that makes heavy use of the device's processor and memory, and when I attach the Profiler the app becomes so slow it is almost impossible to use (I need to run the app for at least 3 minutes at normal speed in order to see the memory grow progressively).
My questions are:
Is there a way to use the Profiler without this slowdown?
If not, are there other tools or methods that could help me find which object is growing in memory?
Thanks!
You can collect your app's heap dump using adb shell:
adb shell am dumpheap <PID> <TARGET_FILE>
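If you don't know the PID, you can look it up with adb (the package name below is a placeholder):
adb shell pidof com.example.yourapp
Afterwards, pull the dump to your machine with adb pull <TARGET_FILE>.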
and view the exported file in the Android Studio Profiler.
Alternatively, if you know exactly when in your app the OOM happens, you can collect the heap dump from code using Debug.dumpHprofData(String).
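For example, a minimal sketch (the file path and log tag are placeholders; call it at the point where you suspect memory is building up):

import android.content.Context;
import android.os.Debug;
import android.util.Log;
import java.io.IOException;

final class HeapDumper {
    // Write an hprof file the moment the suspected growth happens.
    static void dump(Context context) {
        String path = context.getFilesDir() + "/suspect.hprof";
        try {
            Debug.dumpHprofData(path); // briefly blocks while the heap is written
            Log.d("HeapDump", "Wrote " + path);
        } catch (IOException e) {
            Log.e("HeapDump", "Heap dump failed", e);
        }
    }
}

Android Studio can open the resulting file directly; for other tools such as Eclipse MAT you'd first convert it with the SDK's hprof-conv utility.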
In my work, we are running into a difficult-to-reproduce OOM issue. Or, more accurately, it is very easy to reproduce on one system, making that system unusable, but difficult to reproduce anywhere else given the same inputs.
The application is being run as a service using a service wrapper. We did manage to get the configuration changed so that it launches with the option of writing a heap dump file on OOM, but unfortunately the dumps were truncated, most likely because the service wrapper timed out and killed the process as it wrote the file. This is readily apparent, since the max memory is set to 1 GB, and the hprof files are as small as 700 MB, which is too small to be the entire heap at OOM.
It would take a lot of jumping through hoops to additionally configure the wrapper to give the Java process a longer time to write out the heap, but we are pursuing this using these two options:
wrapper.jvm_exit.timeout=600
wrapper.shutdown.timeout=600
The question is: is there anything useful I can do with the truncated hprof files I have? Eclipse MAT chokes on them. jhat appears to load them, but then shows only 3 instances of java.lang.Object of size 0 and nothing else. I tried YourKit and it couldn't write its oids file.
It seems to me like these files should have some useful, accessible information in them. Is there a tool that can read what's there?
Thank you for your time!
The best option I've come across so far for analyzing such a dump file is a plain text editor like vim.
Use JProfiler (https://www.ej-technologies.com/products/jprofiler/overview.html). It's not free, but it has a trial period.
The live memory and CPU views are your best bet for isolating your issues. It generally runs reasonably well even on large dumps.
In some circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. But since the application is using several GB of memory, jmap causes it to stop responding, which creates problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB within 6 hours. In an attempt to find the trend in memory usage, we tried to collect a heap dump every hour after restarting the application. But, as said, since jmap causes the application to hang, we have to restart it, and so we are not able to capture the memory usage trend.
Is there a way to get a heap dump without hanging the application, or a utility other than jmap for collecting heap dumps?
Thoughts on this are highly appreciated, since without the trend in memory usage it is very difficult to fix the issue.
Note: Our application runs on CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap! The dump files are much smaller, but they're enough to give you an idea of where the leaks are.
At the time of writing this answer, Memory Analyzer and IBM HeapAnalyzer have bugs that prevent them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!), that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is that you look at a single heap dump, and use the stats to figure out what kind(s) of object are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
You can use GDB to get a heap dump without running jmap on the target VM; however, this will still hang the application for the time required to write the heap dump to disk. Assuming a disk speed of 100 MB/s (a basic mirrored array or single disk), a 12 GB heap still means about 2 minutes of downtime.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
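Roughly, the approach from that post looks like this (the PID and paths are placeholders; jmap must come from the same JDK as the target JVM, and core-file support was removed from jmap after JDK 8 in favour of jhsdb):

gcore -o /tmp/jvm-core <pid>
jmap -dump:format=b,file=/tmp/heap.hprof /usr/bin/java /tmp/jvm-core.<pid>

gcore pauses the process only while the core file is written; jmap then extracts the heap from the core offline, without touching the live JVM.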
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM but it's not available yet. VMWare's hot-migration comes close but depends on your allocation rate not exceeding network bandwidth and it doesn't save snapshots. Petition them to add it for you, it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap. It is the best tool for tracking down memory leaks. However, collecting a heap dump is slow, let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue of where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to look in.
Here's the command for collecting a histogram.
jcmd <pid> GC.class_histogram filename=histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM
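For HotSpot/Oracle JVMs, that API is the HotSpotDiagnostic MXBean. A minimal sketch (the output path in the usage line is an example):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public final class HeapDumper {
    // Trigger an hprof heap dump from inside the running application.
    public static void dumpHeap(String path, boolean liveOnly) throws IOException {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, liveOnly); // liveOnly=true dumps only reachable objects
    }
}

Usage: HeapDumper.dumpHeap("/tmp/heap.hprof", true);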
I am trying to analyze the CPU usage of a Java UI application running on Windows. I connected it to VisualVM, but it looks like the highest percentage of CPU usage is in
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run()
I believe this is being used to supply information to VisualVM, and hence VisualVM is skewing the results I'm trying to investigate. Does anyone have a way to get a better indication of what is occurring, or a better method to determine what in a running Java application is taking up so much CPU?
Try the sampler first.
For detailed information, use the profiler and set root methods. See Profiling With VisualVM, Part 1 and Profiling With VisualVM, Part 2 for more information about CPU and memory profiling.
That sounds awfully suspicious. Try cross-referencing the data with results from hprof. You won't need any external applications running, and the data will simply be dumped to a text file from your own process. Are you connecting to your process remotely?
I'm trying to find memory leaks and performance issues with my Java application. Is there a program out there that can help me debug my application and display performance results?
Thanks.
Have a look at jvisualvm in the JDK - a subset of the NetBeans profiler - which can attach to a running Java 6 process and allow you to profile it and do memory analysis.
https://visualvm.dev.java.net/gettingstarted.html
I used a lot of tools to find out why my program eats 100+ MB of RAM, and polished the code to remove any possible memory leaks. Later I found that once the JVM takes some memory from the OS, it doesn't always return it, even if that memory is not used, which often looks like a memory leak. This depends on -Xmx and -XX:MaxHeapFreeRatio. I set -Xmx to 40 MB, which is roughly how much memory my app should use, and memory usage stays within 10-15 MB of this figure instead of increasing uncontrollably.
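For example (the exact values are illustrative, and yourapp.jar is a placeholder):

java -Xmx40m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar yourapp.jar

The two free-ratio flags control how aggressively the JVM shrinks the heap back toward what is actually in use after a GC.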
Also, jconsole is a great tool. It comes with the JDK.
Eclipse has a good memory dump analyzer, but finding a memory leak can be very challenging and requires you to dive deeply into the way objects are allocated by your application.
It took me 2 full days to figure out that one of my custom JTable cell editor classes was allocating a JDialog upon instantiation, without actually opening it, and the native part of the dialog kept the cell editor instance locked, thus the table, thus the screen and thus all entity objects that were associated with it.
You can try the Performance Inspector tool. Here is the URL:
http://perfinsp.sourceforge.net/
Java application performance is directly tied to how the JVM is running your application, and this tool gives very good profiling information about the JVM. It's not a graphical tool; you have to go through the generated text file. But that's a one-time effort to get handy with the tool. I've used it many times for performance-related issues and it has helped me a lot.
We have a Java ERP-type application. Communication between server and client is via RMI. In peak hours there can be up to 250 users logged in, and about 20 of them are working at the same time. This means that about 20 threads are live at any given time during peak hours.
The server can run for hours without any problems, but all of a sudden response times get higher and higher, up to minutes.
We are running on Windows 2008 R2 with Sun's JDK 1.6.0_16. We have been using perfmon and Process Explorer to see what is going on. The only thing we find odd is that when the server starts to slow down, the number of handles the java.exe process has open is around 3500. I'm not saying that this is the actual problem.
I'm just curious whether there are some guidelines I should follow to pinpoint the problem. What tools should I use? ....
Can you access the log configuration of this application?
If you can, you should change the log level to DEBUG. Tracing the DEBUG logs of a request could give you useful information about the contention point.
If you can't, profiling tools can help you:
VisualVM (Free, and good product)
Eclipse TPTP (Free, but more complicated than VisualVM)
JProbe (not free but very powerful; it is my favorite Java profiler, but it is expensive)
If the application has been developed with JMX control points, you can plug in a JMX viewer to get information...
If you want to stress the application to trigger the problem (to verify whether it is a load problem), you can use a load-testing tool like JMeter.
Sounds like the garbage collector cannot keep up and starts "stop-the-world" collecting for some reason.
Attach with jvisualvm from the JDK when the server starts, and have a look at the collected data when the performance drops.
The problem you're describing is quite typical but general as well. Causes can range from memory leaks and resource contention to bad GC policies and heap/PermGen-space allocation. To point out the exact problems in your application, you need to profile it (I am aware of tools like YourKit and JProfiler). If you profile your application wisely, a few application cycles will reveal the problems; otherwise profiling isn't very easy in itself.
In a similar situation, I coded some simple profiling myself. Basically I used a ThreadLocal that holds a "StopWatch" (based on a LinkedHashMap), and I then insert code like this at various points in the application: watch.time("OperationX");
Then, after the thread finishes a task, I call watch.logTime(); and the class writes a log line that looks like this: [DEBUG] StopWatch time:Stuff=0, AnotherEvent=102, OperationX=150
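A minimal sketch of that idea (class and method names follow the snippets above; millisecond timing and plain stdout logging are simplifications):

import java.util.LinkedHashMap;
import java.util.Map;

public final class StopWatch {
    // One StopWatch per thread, fetched wherever a measurement is taken.
    private static final ThreadLocal<StopWatch> CURRENT =
            ThreadLocal.withInitial(StopWatch::new);

    // LinkedHashMap keeps events in the order they were recorded.
    private final Map<String, Long> times = new LinkedHashMap<>();
    private long last = System.currentTimeMillis();

    public static StopWatch get() { return CURRENT.get(); }

    // Record the time elapsed since the previous event under this label.
    public void time(String event) {
        long now = System.currentTimeMillis();
        times.put(event, now - last);
        last = now;
    }

    // Write one log line per task, then reset for the next task on this thread.
    public void logTime() {
        StringBuilder sb = new StringBuilder("[DEBUG] StopWatch time:");
        String sep = "";
        for (Map.Entry<String, Long> e : times.entrySet()) {
            sb.append(sep).append(e.getKey()).append('=').append(e.getValue());
            sep = ", ";
        }
        System.out.println(sb);
        times.clear();
        last = System.currentTimeMillis();
    }
}

A task would call StopWatch.get().time("OperationX") at each step and StopWatch.get().logTime() at the end.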
After this I wrote a simple parser that generates CSV from this log (per code path). The best thing you can do is create a histogram (easily done in Excel). Averages, the median and even the mode can fool you; I highly recommend creating a histogram.
Together with this histogram, you can create line graphs using the average/median/mode (whichever represents the data best; you can determine this from the histogram).
This way, you can be 100% sure exactly which operation is taking time. If you can't determine the culprit, binary search is your friend (make the events finer-grained).
Might sound really primitive, but it works. Also, if you make a library out of it, you can use it in any project. It's also nice because you can easily turn it on in production as well.
Aside from the GC that others have mentioned, try taking thread dumps every 5-10 seconds for about 30 seconds during your slowdown. There could be a case where DB calls, a web service, or some other dependency becomes slow. If you take a look at the thread dumps, you will be able to see threads which don't appear to move, and you can narrow down the culprit that way.
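For example, a quick loop like this (bash syntax; the PID is a placeholder) captures six dumps, five seconds apart:

for i in 1 2 3 4 5 6; do jstack <pid> > threads-$i.txt; sleep 5; done

Threads sitting in the same stack frame across several dumps are the prime suspects.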
From the GC standpoint, do you monitor your CPU usage during these times? If the GC is running frequently, you will see a jump in your overall CPU usage.
If only this was a Solaris box, prstat would be your friend.
For acute issues like this, a quick jstack <pid> should point out the problem area. Probably no need to get all fancy on it.
If I had to guess, I'd say HotSpot jumped in and tightly optimised some badly written code. NetBeans grinds to a halt where it uses a WeakHashMap with newly created objects to cache file data. When optimised, the entries can be removed from the map straight after being added. Obviously, if the cache is being relied upon, much file activity follows. You probably won't see the drive light up, because it'll all be cached by the OS.