[Outside any known module], VTune Amplifier error - Java

Currently I'm using the VTune analyzer on a Linux system to profile Java code.
I generated a report by attaching to the running process.
However, in the top-down tree I usually see [Outside any known module], which accounts for a certain amount of time.
When I click it, I can't see anything.
The strange thing is that sometimes it can generate a proper top-down report.
When VTune can generate a proper report, the trace file is usually about 500 MB.
On the other hand, when it can't, the trace file is only about 5 MB.
There are plenty of opinions that this is caused by code generated on the fly.
So I tried these steps after turning off the JIT option in the JDK.
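(Turning off the JIT here means running the JVM in interpreted-only mode; a sketch, where the jar name is a placeholder:
java -Xint -jar myapp.jar)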
Of course, I ran it as root.
But it still doesn't work.
My Ubuntu version is 14.04.1 LTS.
Please help!
Any probable ideas would be helpful.
Thanks

When you start profiling, do you see a message like "Cannot profile the managed part of the target process. There is no Java* Attach API available. Only the native part of the target process will be profiled."?
If yes, it means you are using a standalone JRE (which is not part of a JDK). The JRE package does not include the Java Attach API needed to attach to and profile Java code. Could you please try a JDK?
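A quick way to verify which you have: on JDK 8 and earlier the Attach API ships in tools.jar, so checking for that file (conventional path shown; adjust for your install) tells you whether a full JDK is in use:
ls "$JAVA_HOME/lib/tools.jar"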
Thanks,
Denis

Related

Generating heap dumps with Java JRE 7

I'm trying to generate a heap dump from my Java program, but no matter what I try I can't figure out how to do so.
I downloaded the Eclipse Memory Analyzer (first the plugin, then the standalone version), which is supposed to be able to acquire heap dumps from active JRE processes, yet it lists none. The documentation lists several other ways of generating them, but I can't make any of them work, or they refer to something that just doesn't seem to exist on my system. The same applies to anything I've managed to find on the web.
The program isn't causing an out-of-memory exception; it's just using far more resources than I'm expecting it to.
I'm at a complete loss as to how exactly it's supposed to be done.
Any help would be appreciated, thanks.
You can do it manually, using the Java JDK's jmap.exe:
Get the PID of your process.
Navigate to %JAVA_HOME%/bin/ (JDK).
In the console (Command Prompt), type: jmap.exe -dump:format=b,file=C:\dump\dump.bin PID
The dump file is saved in the path you provide (in my example, C:\dump\dump.bin).
Then you can use the NetBeans IDE to analyze the dump. It has built-in tools; just import dump.bin.
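Alternatively, if you want the dump triggered from inside the application itself, HotSpot exposes a diagnostic MXBean for this; a minimal sketch, assuming a HotSpot JVM (the output path is an example):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Look up the HotSpot diagnostic bean on the platform MBean server.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Write a binary heap dump; "true" limits it to live (reachable) objects.
        bean.dumpHeap("C:\\dump\\dump.bin", true);
    }
}

The resulting file opens in the same tools (NetBeans, Eclipse Memory Analyzer) as a jmap dump.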

How do I stop .mdmp files from being created?

I have an instance of Solr, hosted with Tomcat, that recently started creating minidump files. There are no errors in any of the logs, and Solr continues to work without a hitch.
The files are approximately 14 GB and are filling up the hard drive. Is there a way to turn this off while we investigate the issue?
Generally speaking, when the JVM crashes, the content of the hs_err error log file (controlled by -XX:ErrorFile) is often enough to point to what the trouble may be.
To prevent the Oracle HotSpot JVM from generating Windows minidumps (.mdmp files), the JVM option to use on the command line is: -XX:-CreateMinidumpOnCrash
It has existed since 2011 but was very difficult to find: How to disable minidump (mdmp) files generation with Java Hotspot JVM on Windows
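In a Tomcat setup, the flag would typically go wherever the JVM options are configured; for example, assuming options are passed via CATALINA_OPTS in a setenv.bat (this is an assumption about your setup):
set CATALINA_OPTS=%CATALINA_OPTS% -XX:-CreateMinidumpOnCrash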
This article has decent information on both Linux and Windows JVM dump files. I have yet to test it myself on my current version of Java 7.
From that site (note that these -XX options are specific to the JRockit JVM and are not recognized by HotSpot):
Disabling text dump files
If you suspect problems with the creation of text dump files, you can turn off the text dump file by using the option: -XXnoJrDump.
Disabling the binary crash files
You can turn off the binary crash file by using the option: -XXdumpSize:none.
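On a JRockit launch line, both options together would look something like this (the jar name is a placeholder):
java -XXnoJrDump -XXdumpSize:none -jar myapp.jar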
Are you using Java 7? In that case, revert to Java 5 or 6. Lucene/Solr and Java 7 don't go well together, and it could be that this is what creates the dump files. Otherwise, if everything is working, just disable the dumping of files.
I never found a way to disable the Java minidumps on Windows. The strange part here is that everything on the server worked correctly, aside from the hard drive filling up with minidumps.
We eventually re-installed everything, the same versions of Solr/Java/Tomcat, onto a Linux machine and didn't have the problem anymore. I would imagine that re-installing everything onto a Windows machine would also have fixed the problem. This was a strange one.

How should I diagnose and prevent JVM crashes?

What should I (as a Java programmer who doesn't know anything about JVM internals) do when I come across a JVM crash?
In particular, how would you produce a reproducible test case? What should I be searching for in Sun's (or IBM's) bug database? What information can I get from the log files produced (e.g. hs_err_pidXYZ.log)?
If the crashes occur only on one specific machine, run memtest. I've seen recurring JVM crashes only twice, and in both cases the culprit turned out to be a hardware problem, namely faulty RAM.
In my experience they are nearly always caused by native code using JNI, either mine or someone else's. If you can, try re-running without the native code to see if you can reproduce it.
Sometimes it is worth trying with the JIT compiler turned off, if your bug is easily reproducible.
As others have pointed out, faulty hardware can also cause this; I've seen it for both memory and video cards (when the crash was in Swing code). Try running whatever hardware diagnostics are most appropriate for your system.
As JVM crashes are rare, I'd report them to Sun. This can be done via their bug database. Use category Java SE, subcategory jvm_exact or jit.
Under Unix/Linux you might get a core dump. Under Windows the JVM will usually tell you where it has stored a log of what has happened. These files often give some hints, but will vary from JVM to JVM. Sun gives full details of these files on their website. For IBM, the files can be analysed using the Java Core Analyzer and Java Heapdump Analyzer from IBM's alphaWorks.
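On Linux, core dumps are often disabled by default; enabling them for the shell that launches the JVM might look like this (the jar name is a placeholder):
ulimit -c unlimited
java -jar myapp.jar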
Unfortunately, Java debuggers in my experience tend to hurt more than help. However, attaching an OS-specific debugger (e.g. Visual Studio) can help if you are familiar with reading C stack traces.
Trying to get a reproducible test case is hard. If you have a large amount of code that always (or nearly always) crashes, it is easier: just slowly remove parts while it keeps crashing, getting the result as small as possible. If you have no reproducible test code at all, it is very difficult. I'd suggest getting hints from the suggestions above.
Sun documents the details of the crash log here. There is also a nice tutorial written up here, if you want to get into the dirty details (it sounds like you don't, though).
However, as a commenter mentioned, a JVM crash is a pretty rare and serious event, and it might be worthwhile to call Sun or IBM professional support in this situation.
When an IBM JVM crashes, it might have written to the file /tmp/dump_locations; there it lists any heapdump or javacore files it has written.
These files can be analysed using the Java Core Analyzer and Java Heapdump Analyzer from IBM's alphaWorks.
There's a great page on the Oracle website to troubleshoot these types of problems.
Check out the relevant sections for:
Hung processes (e.g. the jstack utility; see the sketch after this list)
Post Mortem diagnostics
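A typical jstack invocation against a hung process looks like this (the -l flag adds lock detail):
jstack -l <jvm_pid>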

HPjmeter-like graphical tool to view -agentlib:hprof profiling output

What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:
-agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt
This generates output in the hprof ("JAVA PROFILE 1.0.1") format.
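For completeness, a full launch line might look like this (the application jar is a placeholder):
java -agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt -jar myapp.jar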
I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason the files that are generated using the current version of the Sun JVM fail to load in the current version of HPjmeter:
java.lang.NullPointerException
at com.hp.jmeter.f.jb.a(Unknown Source)
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
Exception in thread "HPeprofDataFileReaderThread" java.lang.AssertionError: null pointer exception from loader
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
(Why would they obfuscate the bytecode for a free product?!)
Two questions arise from this:
Does anyone know the cause of this HPjmeter error? (EDIT: Yes--see below)
What other tools exist to read hprof files? And why are there none from Sun (are there)?
I know the Eclipse TPTP and other tools can monitor JVMTI data on the fly, but I need a solution that can process the generated hprof files after the fact, since the deployed machine only has a JRE (not a JDK) installed.
EDIT: A very helpful HPjmeter developer replied to my question on an HP ITRC forum indicating that heap=dump needs to be included in the -agentlib options temporarily until a bug in HPjmeter is fixed. This information makes HPjmeter viable again, but I will still leave the question open to see if anyone knows of any other tools.
EDIT: As of version 4.0.00 of HPjmeter (available 05/2009) this bug has been fixed.
YourKit Java Profiler is able to read hprof snapshots (I am not sure if only for memory profiling or for CPU as well). It is not free, but it is by far the best Java profiler I have ever used. It presents the results in a clear, intuitive way and performs well on large data sets. The documentation is also pretty good.
For viewing and analyzing the output of hprof=samples or hprof=cpu I have used PerfAnal with good results. The GUI is a bit spartan, but very useful.
PerfAnal is a free download (GPL, originally an example project in the book Java Programming on Linux).
See this article:
http://www.oracle.com/technetwork/articles/javase/perfanal-137231.html
for more information and the download.
Normally you can just run
java -jar PerfAnal.jar hprof.java.txt
You may need to fiddle with -Xmx for large hprof files.
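For example (the 2 GB heap value is arbitrary):
java -Xmx2g -jar PerfAnal.jar hprof.java.txt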
I am not 100% sure it'll work (it sounds like it will), and I am not sure it'll show it in the format you want, but have you thought about VisualVM?
I believe it'll open up the resulting file.
I have been successfully using the Eclipse Memory Analyzer for analyzing different performance problems. First of all, install the tool in Eclipse as described on the project webpage.
After that, you can create a dump file, knowing the pid of the JVM to be analyzed:
jmap -dump:format=b,file=<filename>.hprof <jvm_pid>
Then just import the .hprof file into Eclipse. It has some automatic reports that try to point out possible problems (for me they usually do not work).
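If you do not know the pid, the JDK's jps tool lists running JVMs along with their main classes:
jps -l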
Edit:
Answering the comment: you are right, it is more like a leak finder for Java. For performance problems, I have played with JRat for small projects. It shows time consumed per method, number of times a method is called, hierarchy of calls, etc. The only problem is that, as far as I know, it does not support .hprof files. To use it, you need to execute your program adding a VM argument:
-javaagent:<path>/shiftone-jrat.jar
This will generate a directory with the profile captured by the tool. Then execute
java -jar shiftone-jrat.jar
and open the trace. Even though it is a simple tool, I think it can be useful.

Can I force generation of a JVM crash log file?

The log file from a JVM crash contains all sorts of useful information for debugging, such as shared libraries loaded and the complete environment. Can I force the JVM to generate one of these programmatically; either by executing code that crashes it or some other way? Or alternatively access the same information another way?
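One well-known way to provoke a genuine native crash (and hence an hs_err log) is an unchecked write through sun.misc.Unsafe; a deliberately destructive sketch, assuming a HotSpot JVM where sun.misc.Unsafe is still reflectively accessible:

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class CrashJvm {
    public static void main(String[] args) throws Exception {
        // Unsafe is not publicly constructible; grab the singleton via reflection.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);
        // Writing to address 0 causes a SIGSEGV; HotSpot's fatal error handler
        // then writes hs_err_pid<pid>.log with loaded libraries, environment, etc.
        unsafe.putInt(0L, 42);
    }
}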
You can try throwing an OutOfMemoryError and adding the -XX:+HeapDumpOnOutOfMemoryError JVM argument (note that this produces a heap dump rather than an hs_err crash log). This is new as of 1.6, as are the other tools suggested by McDowell.
http://blogs.oracle.com/watt/resource/jvm-options-list.html
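A launch line combining the flag with an explicit dump location might look like this (paths and jar name are placeholders):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar myapp.jar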
Have a look at the JDK Development Tools, in particular the Troubleshooting Tools for dumping the heap, printing config info, etcetera.
On Ubuntu 20.04.1 LTS I force a core dump on a JDK 11 process via
kill -4 <PID>
I am pretty sure this can be done with the IBM JDK, as I was playing around with their stack analyzer some time ago. One option to force the dump would be simply to cause an OutOfMemoryError.
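On Linux, an IBM JVM will also write a javacore file when it receives SIGQUIT, without terminating the process (assuming default dump agent settings):
kill -3 <jvm_pid>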
These tools may provide some clues http://www.ibm.com/developerworks/java/library/j-ibmtools1/
