Limiting the number of java core and heap dumps - java

My Java process often crashes and produces javacores and heap dumps.
When that happens, the resulting core files fill up my file system, so we have to remove them periodically.
Is there any way of limiting the size of the heap dumps or javacores?
More importantly, is there a way to make sure a previous javacore is overwritten by the next one, like a rollover mechanism (as in log4j) but for Java process cores?
Thanks,
Sashi

If -XX:+HeapDumpOnOutOfMemoryError is present in your application's launch options, the heap dump is generated because your application is asking for it to be generated on OutOfMemoryError. If you don't want that, remove the argument. But that will certainly not fix your actual problem.
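If you do want the dumps but need to keep them from filling the file system, a rough sketch of one setup (assuming Linux, a HotSpot JVM, and a placeholder application jar) is to send heap dumps to a dedicated location and switch off operating-system core files for the launching shell:
ulimit -c 0
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar yourapp.jar
The JVM has no built-in rollover for these files, so anything accumulating under /var/dumps still has to be cleaned up separately (for example from a cron job).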

Related

fast heap dumps but on out of memory

We all know the good old -XX:+HeapDumpOnOutOfMemoryError flag for taking heap dumps when the JVM runs out of memory. The problem is that for large heaps this takes longer and longer.
There is a way to take fast heap dumps using the GNU Debugger:
you effectively take a core file of the process (very fast) and then convert it to heap-dump format using jmap, which is the slowest part of the work.
However, that only works when you take the dump manually. When your Java apps run in containers, there is usually a fixed grace period before your app is killed non-gracefully; for Kubernetes I believe it is 30 seconds by default.
For various reasons I do not want to extend this timeout. Is there a way to trigger only the core file dump when Java runs out of memory, or are we limited to whatever the -XX:+HeapDumpOnOutOfMemoryError flag offers?
I can think of two possible solutions, although they will cover crashes as well, not only out-of-memory situations:
You can use Java's -XX:OnError option to run your own script, or gcore / (gdb) generate-core-file (depending on your OS), to create a core dump that you can later attach a debugger (like gdb) to; a sketch of this follows below.
You can enable automatic core dumps in your OS in whatever way it provides. For Red Hat:
To enable: edit the related line in /etc/systemd/system.conf to read DefaultLimitCORE=infinity.
Reboot and remove the core dump limit with ulimit -c unlimited.
When your application crashes, the dump should be created in its working directory.
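A rough sketch of the first option on Linux, assuming gcore is installed and a HotSpot JVM (the jar name and paths are placeholders). -XX:OnOutOfMemoryError fires specifically on OutOfMemoryError, while -XX:OnError covers fatal crashes, and both substitute %p with the process id:
java -XX:OnOutOfMemoryError="gcore -o /tmp/core %p" -jar yourapp.jar
The resulting core file can then be converted to an hprof heap dump offline, which is the slow part but is no longer racing the container's termination grace period:
jmap -dump:format=b,file=/tmp/heap.hprof $JAVA_HOME/bin/java /tmp/core.<pid>
On newer JDKs the core-file mode lives in jhsdb instead, roughly: jhsdb jmap --binary --exe $JAVA_HOME/bin/java --core /tmp/core.<pid>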

How do I analyze a Java heap dump when local memory is less than the size of the dumped heap? [duplicate]

I have a HotSpot JVM heap dump that I would like to analyze. The VM ran with -Xmx31g, and the heap dump file is 48 GB large.
I won't even try jhat, as it requires about five times the heap memory (that would be 240 GB in my case) and is awfully slow.
Eclipse MAT crashes with an ArrayIndexOutOfBoundsException after analyzing the heap dump for several hours.
What other tools are available for that task? A suite of command line tools would be best, consisting of one program that transforms the heap dump into efficient data structures for analysis, combined with several other tools that work on the pre-structured data.
Normally, what I use is ParseHeapDump.sh, included with Eclipse Memory Analyzer and described here, and I run it on one of our more beefed-up servers (download and copy over the Linux .zip distro, unzip there). The shell script needs fewer resources than parsing the heap from the GUI, plus you can run it on your beefy server with more resources (you can allocate more resources by adding something like -vmargs -Xmx40g -XX:-UseGCOverheadLimit to the end of the last line of the script).
For instance, the last line of that file might look like this after modification:
./MemoryAnalyzer -consolelog -application org.eclipse.mat.api.parse "$@" -vmargs -Xmx40g -XX:-UseGCOverheadLimit
Run it like ./path/to/ParseHeapDump.sh ../today_heap_dump/jvm.hprof
After that succeeds, it creates a number of "index" files next to the .hprof file.
After creating the indices, I generate reports from them, scp those reports to my local machine, and try to see if I can find the culprit just by that (just the reports, not the indices). Here's a tutorial on creating the reports.
Example report:
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects
Other report options:
org.eclipse.mat.api:overview and org.eclipse.mat.api:top_components
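Several report ids can also be passed in a single run, so the dump is parsed only once; something like:
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects org.eclipse.mat.api:overview org.eclipse.mat.api:top_components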
If those reports are not enough and I need to do some more digging (e.g. via OQL), I scp the indices as well as the hprof file to my local machine, and then open the heap dump (with the indices in the same directory as the heap dump) in my Eclipse MAT GUI. From there, it does not need too much memory to run.
EDIT:
I just want to add two notes:
As far as I know, only the generation of the indices is the memory intensive part of Eclipse MAT. After you have the indices, most of your processing from Eclipse MAT would not need that much memory.
Doing this with a shell script means I can do it on a headless server (and I normally do, because those are usually the most powerful ones). And if you have a server that can generate a heap dump of that size, chances are you have another server out there that can process that much of a heap dump as well.
First step: increase the amount of RAM you are allocating to MAT. By default it's not very much and it can't open large files.
If you use MAT on a Mac (OS X), you'll have a MemoryAnalyzer.ini file in MemoryAnalyzer.app/Contents/MacOS. Making adjustments to that file did not "take" for me. Instead you can create a modified startup command/shell script based on the contents of this file and run it from that directory. In my case I wanted a 20 GB heap:
./MemoryAnalyzer -vmargs -Xmx20g -XX:-UseGCOverheadLimit ... other params desired
Just run this command/script from the Contents/MacOS directory via a terminal to start the GUI with more RAM available.
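On Linux and Windows the same effect is usually achieved by raising the -Xmx value in the -vmargs section at the end of MemoryAnalyzer.ini (a sketch; the default value differs by MAT version):
-vmargs
-Xmx20g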
I suggest trying YourKit. It usually needs a little less memory than the heap dump size (it indexes the dump and uses that information to retrieve what you want).
The accepted answer to this related question should provide a good start for you (if you have access to the running process, it generates live jmap histograms instead of heap dumps, and it's very fast):
Method for finding memory leak in large Java heap dumps
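For reference, that histogram approach boils down to something like this (the pid is a placeholder; the :live option forces a full GC first so only live objects are counted):
jmap -histo:live <pid> | head -n 30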
Most other heap analysers (I use the IBM one, http://www.alphaworks.ibm.com/tech/heapanalyzer) require at least somewhat more RAM than the heap itself if you're expecting a nice GUI tool.
Other than that, many developers use alternative approaches, like live stack analysis to get an idea of what's going on.
Although I must question why your heaps are so large. The effect on allocation and garbage collection must be massive. I'd bet a large percentage of what's in your heap should actually be stored in a database or a persistent cache.
This person, http://blog.ragozin.info/2015/02/programatic-heapdump-analysis.html, wrote a custom "heap analyzer" that just exposes a "query style" interface over the heap dump file, instead of actually loading the file into memory.
https://github.com/aragozin/heaplib
Though I don't know whether its "query language" is better than the Eclipse OQL mentioned in the accepted answer here.
The latest snapshot build of Eclipse Memory Analyzer has a facility to randomly discard a certain percentage of objects to reduce memory consumption and allow the remaining objects to be analyzed. See Bug 563960 and the nightly snapshot build to test this facility before it is included in the next release of MAT. Update: it is now included in released version 1.11.0.
A not-so-well-known tool, http://dr-brenschede.de/bheapsampler/, works well for large heaps. It works by sampling, so it doesn't have to read the entire thing, though it is a bit finicky.
This is not a command line solution, however I like the tools:
Copy the heap dump to a server large enough to host it. It may well be possible to use the original server.
Enter the server via ssh -X to run the graphical tool remotely, and use jvisualvm from the Java binary directory to load the .hprof file of the heap dump.
The tool does not load the complete heap dump into memory at once, but loads parts as they are required. Of course, if you look around enough in the file, the required memory will eventually reach the size of the heap dump.
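A minimal sketch of that workflow (hostname and path are placeholders):
ssh -X user@beefy-server
jvisualvm
and then load /path/to/jvm.hprof via File > Load in the GUI, which is displayed back on your local machine.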
I came across an interesting tool called JXray. It provides a limited evaluation trial license. I found it very useful for finding memory leaks. You may give it a shot.
Try using JProfiler. It works well for analyzing large .hprof files; I have tried it with a file of around 22 GB.
https://www.ej-technologies.com/products/jprofiler/overview.html
It's $499 per developer license, but it has a free 10-day evaluation.
When the problem can be "easily" reproduced, one unmentioned alternative is to take heap dumps before memory grows that big (e.g., jmap -dump:format=b,file=heap.bin <pid>).
In many cases you will already get an idea of what's going on without waiting for an OOM.
In addition, MAT provides a feature to compare different snapshots, which can come in handy (see https://stackoverflow.com/a/55926302/898154 for instructions and a description).

What flag do I need to pass to java to limit the running time and memory usage of a .class file?

For testing purposes, I want to know if there is a flag I can pass to java in order to limit the total execution time and memory usage of a .class file.
Motivation: let's say that I automatically download someone else's code and run it automatically, but I don't know if the code is (unintentionally) buggy; it could crash my automated system if it gets caught in an infinite loop or keeps allocating memory up to the limit of the system.
I read here that you can limit memory usage with the -Xmx flag, but I can't seem to find a way to limit running time. Reading the documentation, it seems there is a way of limiting CPU usage during execution, but that is not what I want: I want the program ended (killed if necessary) if it has been running for more than, say, 5 minutes.
Example of what I want:
java -Xmx1m -time5m a_java_program
for limiting memory usage to 1 MB and time to 5 minutes.
Is there something like this for java?
Note that the -Xmx option limits the memory for the whole virtual machine, not a single class. For the running time, unless you have the full source code and can run a profiler, you are in the same situation. A whole different thing would be if you were able to obtain the source code: then you have several options to profile it for running time and memory usage (bear in mind that the profiler adds some overhead). For example, with Eclipse: http://www.eclipse.org/tptp/home/documents/tutorials/profilingtool/profilingexample_32.html
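There is no JVM flag for a wall-clock limit, but if an external wrapper is acceptable, one sketch on Linux is GNU coreutils timeout around the launch (a_java_program is the class from the question, shown with a somewhat larger heap than the 1 MB in the example):
timeout --signal=KILL 5m java -Xmx64m a_java_program
This kills the JVM after 5 minutes while -Xmx still caps the heap.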

How to increase heap size in java?

I create an autorun CD which contains DICOM images.
It takes around 10-15 minutes to display the DICOM viewer on screen.
So I want to increase the JVM heap size at runtime, programmatically (not from the command line).
Suppose I have to allocate 500 MB to my app when I start the app.
Is it possible?
I am using the Windows platform.
Literally, no. The max heap size is set at JVM launch time and cannot be increased.
In practice, you could just set the max heap size to as large as your platform will allow, and let the JVM grow the heap as it needs. There is an obvious risk in doing this; i.e. that your application will use all of the memory and cause the user's machine to grind to a halt. But that risk is implicit in your question.
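If a fixed 500 MB at startup is all that is needed, the usual route is simply to set it on the launch command (the jar name is a placeholder):
java -Xms500m -Xmx500m -jar dicomviewer.jar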
One approach you can take in a Windows environment is to install a service that starts up your application. Via this method you can make the Windows service point to a wrapper file that calls all the relevant files to start up your application. There you can specify something like:
set JAVA_OPTS=-Xrs -Xms6G -Xmx6G -XX:MaxPermSize=384M
You can use this to specify your JVM memory settings on startup.
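A minimal sketch of such a wrapper batch file, scaled down to the 500 MB from the question (the jar name is a placeholder):
set JAVA_OPTS=-Xrs -Xms500m -Xmx500m
java %JAVA_OPTS% -jar dicomviewer.jar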
Please see http://docs.oracle.com/cd/E19900-01/819-4742/abeik/index.html for more information about the parameters.
Hope this helps,
V
Since the problem is about the duration of the picture display, and knowing that you cannot change the heap size programmatically, what about optimizing your program to load the pictures faster?
You can use multiple threads or asynchronous loading to speed up the display.
You may also use paging in the user interface.
Can you edit the code of the user interface?

Java big list object causing out of memory

I am using Java, Spring and iBATIS.
I have a Java-based reporting application which displays large amounts of data. I notice that when the system tries to process a large amount of data it throws an "out of memory" error.
I know we can either increase the memory size or introduce paging in the reporting application.
Any other idea? I am curious whether there is something like: if the list object is large enough, split it between memory and disk, so we don't have to make any major change to the application code.
Any suggestion appreciated.
The first thing to do should be to check exactly what is causing you to run out of memory.
Add the following to your command line
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/where/you/want
This will generate a heap dump hprof file.
You can use something like the Eclipse Memory Analyser Tool to see which part of the heap (if at all) you need to increase or whether you have a memory leak.
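For example, the full launch might look roughly like this (the jar name and dump directory are placeholders):
java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar reporting-app.jar
When the OutOfMemoryError occurs, an hprof file named after the pid is written to /var/dumps and can be opened in MAT.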
