The log file from a JVM crash contains all sorts of useful information for debugging, such as shared libraries loaded and the complete environment. Can I force the JVM to generate one of these programmatically; either by executing code that crashes it or some other way? Or alternatively access the same information another way?
You can try throwing an OutOfMemoryError and adding the -XX:+HeapDumpOnOutOfMemoryError JVM argument. This is new as of 1.6, as are the other tools suggested by McDowell.
http://blogs.oracle.com/watt/resource/jvm-options-list.html
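A minimal sketch of trying this (the class name OomTrigger and the /tmp dump path are placeholders of mine, not from the original answer): start the JVM with the flag and deliberately exhaust the heap.

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp OomTrigger

import java.util.ArrayList;
import java.util.List;

// OomTrigger.java: keeps allocating until the heap is exhausted, at which
// point the JVM throws OutOfMemoryError and writes the heap dump.
public class OomTrigger {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<>();
        while (true) {
            hog.add(new byte[1024 * 1024]); // 1 MB per iteration
        }
    }
}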
Have a look at the JDK Development Tools, in particular the Troubleshooting Tools for dumping the heap, printing config info, etcetera.
On Ubuntu 20.04.1 LTS I force a core dump of a JDK 11 process via
kill -4 <PID>
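For example (the PID 12345 is a placeholder; jps just lists the running Java processes), the JVM handles the signal by writing an hs_err_pid<PID>.log file into its working directory, or wherever -XX:ErrorFile points, and then terminating:

jps -l
kill -4 12345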
I am pretty sure this can be done with the IBM JDK, as I was playing around with their stack analyzer some time ago. One option to force the dump would be simply to cause an OutOfMemoryError.
These tools may provide some clues http://www.ibm.com/developerworks/java/library/j-ibmtools1/
My application is crashing from time to time. Looking at the Windows crash dump, the following seems interesting:
ExceptionAddress: 000000006abc0608 (jvm!JVM_ResolveClass+0x000000000001d6b8)
ExceptionCode: c0000005 (Access violation)
DEFAULT_BUCKET_ID: NULL_CLASS_PTR_READ
ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.
FAILURE_BUCKET_ID: NULL_CLASS_PTR_READ_c0000005_jvm.dll!JVM_ResolveClass
BUCKET_ID: X64_APPLICATION_FAULT_NULL_CLASS_PTR_READ_jvm!JVM_ResolveClass+1d6b8
Can you please assist with how I should analyze this? How can I tell whether it's something in my application code or in the JVM code?
Thanks
Your Java program should not be able to crash the JVM (I assume you're not using JNI or similar).
I would upgrade to a newer JVM version and try again.
Seems to be a JVM issue. I would recommend trying your application on another computer just to be sure. Make sure you're using the latest version of Java as well.
I have a Windows memory dump (DMP) file of a JVM process.
Is there any way I can use Java tooling to do a heap analysis of this? The SDK tools (jhat etc.) don't seem to help - they all seem to expect a Java heap dump.
(I've plenty of Windbg experience, but I am a complete ignoramus when it comes to Java debugging)
This similar question: Dump file analysis of Java process? has no answer on this point.
See my other answer, which covers exactly that: how to get Java information from a Windows minidump.
If I understood your question properly, then I would suggest using JConsole, which you can find in the JDK.
You can find the documentation here:
http://docs.oracle.com/javase/6/docs/technotes/tools/share/jconsole.html
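For a quick start, JConsole can attach to a local JVM by PID or to a remote one by host and port (both values below are placeholders):

jconsole 12345
jconsole myhost.example.com:9010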
There are the following two options among the Java HotSpot VM options:
-XX:OnError="<cmd args>;<cmd args>" Run user-defined commands on fatal error. (Introduced in 1.4.2 update 9.)
-XX:OnOutOfMemoryError="<cmd args>;<cmd args>" Run user-defined commands when an OutOfMemoryError is first thrown. (Introduced in 1.4.2 update 12, 6.)
As far as I can see there are no such options in IBM JVM.
Is it correct?
I need to call a shell script when a heap dump is generated.
What is the simplest way to do it?
The IBM J9 JDK offers this ability via the -Xdump flag; this is the preferred way of registering dump agents.
A typical way of configuring the JVM to produce heap dumps on OOME is to catch all Out Of Memory Errors thrown by the application or by the JVM, and to prepare the dump for "walking" (with a heap inspector).
-Xdump:system+heap+java:events=systhrow+user,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk+compact
Ref: Eclipse Memory Analyzer Guide
The JAVA_DUMP_OPTS environment variable can also be used. More information on this is available in the IBM JDK diagnostics guide.
EDIT
For the purpose of running a command on an OOME, the tool agent type needs to be specified in the -Xdump option.
-Xdump is your friend and is very powerful.
For your OOM case, something like:
"-Xdump:tool:events=throw,filter=*OutOfMemoryError,exec=cmd_to_run
I would expect IBM's JVM to support the same flags, as it is an instrumented version of the Sun JVM if I remember correctly. Is it possible you are comparing command-line options between major versions of Java (i.e. Sun 1.6 versus IBM 1.4.2)?
If you do not find a solution with the flags, you could take advantage of the fact that the IBM JVM updates the file /tmp/dump-locations by appending the full path of the dump file. A cron job can run your script whenever that file has changed since the last run.
What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:
-agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt
This generates output in the hprof ("JAVA PROFILE 1.0.1") format.
I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason the files that are generated using the current version of the Sun JVM fail to load in the current version of HPjmeter:
java.lang.NullPointerException
at com.hp.jmeter.f.jb.a(Unknown Source)
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
Exception in thread "HPeprofDataFileReaderThread" java.lang.AssertionError: null pointer exception from loader
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
(Why would they obfuscate the bytecode for a free product?!)
Two questions arise from this:
Does anyone know the cause of this HPjmeter error? (EDIT: Yes--see below)
What other tools exist to read hprof files? And why are there none from Sun (are there)?
I know the Eclipse TPTP and other tools can monitor JVMTI data on the fly, but I need a solution that can process the generated hprof files after the fact, since the deployed machine only has a JRE (not a JDK) installed.
EDIT: A very helpful HPjmeter developer replied to my question on an HP ITRC forum indicating that heap=dump needs to be included in the -agentlib options temporarily until a bug in HPjmeter is fixed. This information makes HPjmeter viable again, but I will still leave the question open to see if anyone knows of any other tools.
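In other words, the workaround amounts to adding heap=dump to the agent options, something like:

-agentlib:hprof=heap=dump,cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt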
EDIT: As of version 4.0.00 of HPjmeter (available 05/2009) this bug has been fixed.
YourKit Java Profiler is able to read hprof snapshots (I am not sure whether only for memory profiling or for CPU as well). It is not free, but it is by far the best Java profiler I have ever used. It presents the results in a clear, intuitive way and performs well on large data sets. The documentation is also pretty good.
For viewing and analyzing the output of hprof=samples or hprof=cpu I have used PerfAnal with good results. The GUI is a bit spartan, but very useful.
PerfAnal is a free download (GPL, originally an example project in the book Java Programming on Linux).
See this article:
http://www.oracle.com/technetwork/articles/javase/perfanal-137231.html
for more information and the download.
Normally you can just run
java -jar PerfAnal.jar hprof.java.txt
You may need to fiddle with -Xmx for large hprof files.
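For example, to give the viewer a larger heap for a big file:

java -Xmx2g -jar PerfAnal.jar hprof.java.txt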
I am not 100% sure it'll work (it sounds like it will), and I am not sure it'll show the data in the format you want... but have you thought about VisualVM?
I believe it'll open up the resulting file.
I have been using Eclipse Memory Analyzer for analyzing different performance problems successfully. First of all, install the tool as described in the project webpage in Eclipse.
After that, you can create a dump file if you know the PID of the JVM to be analyzed:
jmap -dump:format=b,file=<filename>.hprof <jvm_pid>
Then just import the .hprof file into Eclipse. It has some automatic reports that try to point out possible problems (for me they usually do not work).
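For example, assuming jps is available to find the process id (the PID and file name below are placeholders):

jps -l
jmap -dump:format=b,file=myapp.hprof 12345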
Edit:
Answering the comment: you are right, it is more of a leak finder for Java. For performance problems, I have played with JRat for small projects. It shows the time consumed per method, the number of times a method is called, the hierarchy of calls, etc. The only problem is that, as far as I know, it does not support .hprof files. To use it, you need to execute your program adding a VM argument:
-javaagent:<path>/shiftone-jrat.jar
This will generate a directory with the profile captured by the tool. Then, execute
java -jar shiftone-jrat.jar
And open the trace. Even though it is a simple tool, I think it could be useful.
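For completeness, a full run might look like this (the jar location and main class are placeholders): first run the application under the agent, then open the captured trace in the viewer.

java -javaagent:/opt/jrat/shiftone-jrat.jar MyApp
java -jar /opt/jrat/shiftone-jrat.jar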
What is the best practice to solve a Java VM crash if the follow conditions are true:
No own or third-party native code; 100% pure Java.
The same program runs on many other systems without any problems.
PS: By VM crash I mean that the VM writes a dump file like hs_err_pid1234.log and terminates.
Read the hs_err_pid1234.log file (or whatever the error log file name is). There are usually clues in there. The next step depends on what you discover in the log.
Yes, it could be a bug in the specific version of the JVM implementation you are using, but I have also seen problems caused by memory fragmentation in the operating system. Windows, for example, is prone to pinning DLLs at inappropriate locations, and as a result it fails to allocate a contiguous block of memory when the JVM asks for it. Other out-of-memory problems can also manifest themselves through crash dumps of this type.
Update or replace your JVM. If you currently have the newest version, try an older one; if you don't have the latest version, try updating to it. Maybe it's a known issue in your particular version?
Assuming the JVM version across machines is the same:
Figure out what is different about the machine where the JVM is crashing. Same OS and OS version? We have had problems with JVMs crashing on a particular version of Red Hat, for example. And we have also found some older Red Hat versions unable to cope with extra memory properly, resulting in running out of swap space. (Our solution was to upgrade Red Hat.)
Also, is the program doing exactly the same thing across machines? Is it accessing a shared filesystem? Is the file system mounted similarly on your machines (SMB/NFS etc)? Something must be different.
The log file should give you some idea of where the crash occurred (malloc for example).
Take a look at the stacktraces in the dump file, as it should tell you what was going on when the crash occurred.
As well as digging into the hs_err dump file, I'd also submit it to Sun or whoever made your JVM (I believe there are instructions on how to do so at the top of the file). It can't hurt.
32-bit? 64-bit? Amount of RAM in the client machine? Processor? OS? See if there is any connection between the systems; a connection may lead to a clue. If all else fails, consider using different major/minor versions of the JVM. Also, if the problem just started, can you get back (via version control) to a version where the program didn't crash? Look through the hs_err log; you may get an idea of what caused the crash. It could be the version of some other client library the JVM uses. Lastly, run the program under a debugger/profiler and maybe you'll see some symptoms before the crash (assuming you can reproduce it).