HPjmeter-like graphical tool to view -agentlib:hprof profiling output - java

What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:
-agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt
This generates output in the hprof ("JAVA PROFILE 1.0.1") format.
I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason the files that are generated using the current version of the Sun JVM fail to load in the current version of HPjmeter:
java.lang.NullPointerException
at com.hp.jmeter.f.jb.a(Unknown Source)
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
Exception in thread "HPeprofDataFileReaderThread" java.lang.AssertionError: null pointer exception from loader
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
(Why would they obfuscate the bytecode for a free product?!)
Two questions arise from this:
Does anyone know the cause of this HPjmeter error? (EDIT: Yes--see below)
What other tools exist to read hprof files? And why are there none from Sun (are there)?
I know Eclipse TPTP and other tools can monitor JVMTI data on the fly, but I need a solution that can process the generated hprof files after the fact, since the deployed machine has only a JRE (not a JDK) installed.
EDIT: A very helpful HPjmeter developer replied to my question on an HP ITRC forum indicating that heap=dump needs to be included in the -agentlib options temporarily until a bug in HPjmeter is fixed. This information makes HPjmeter viable again, but I will still leave the question open to see if anyone knows of any other tools.
EDIT: As of version 4.0.00 of HPjmeter (available 05/2009) this bug has been fixed.

YourKit Java Profiler is able to read hprof snapshots (I am not sure whether only for memory profiling or for CPU as well). It is not free, but it is by far the best Java profiler I have ever used. It presents the results in a clear, intuitive way and performs well on large data sets. The documentation is also pretty good.

For viewing and analyzing the output of hprof=samples or hprof=cpu I have used PerfAnal with good results. The GUI is a bit spartan, but very useful.
PerfAnal is a free download (GPL, originally an example project in the book Java Programming on Linux).
See this article:
http://www.oracle.com/technetwork/articles/javase/perfanal-137231.html
for more information and the download.
Normally you can just run
java -jar PerfAnal.jar hprof.java.txt
You may need to fiddle with -Xmx for large hprof files.
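For example, to give PerfAnal a larger heap for a big input file (the 2 GB here is just a guess; size it to your file):
java -Xmx2g -jar PerfAnal.jar hprof.java.txt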

I am not 100% sure it'll work (it sounds like it will), and I am not sure it'll show the data in the format you want... but have you thought about VisualVM?
I believe it'll open up the resulting file.

I have been using Eclipse Memory Analyzer successfully to analyze different performance problems. First of all, install the tool in Eclipse as described on the project web page.
After that, you can create a dump file, knowing the pid of the JVM to be analyzed:
jmap -dump:format=b,file=<filename>.hprof <jvm_pid>
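If you do not know the pid, the jps tool that ships with the JDK will list running JVMs along with their main classes:
jps -l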
Then just import the .hprof file in Eclipse. It has some automatic reports that try to point out possible problems (though for me they usually do not work).
Edit:
Answering the comment: you are right, it is more like a leak finder for Java. For performance problems, I have played with JRat on small projects. It shows the time consumed per method, the number of times each method is called, the hierarchy of calls, etc. The only problem is that, as far as I know, it does not support .hprof files. To use it, you need to execute your program with an extra VM argument:
-javaagent:<path>/shiftone-jrat.jar
This will generate a directory with the profile captured by the tool. Then, execute
java -jar shiftone-jrat.jar
And open the trace. Even though it is a simple tool, I think it can be useful.
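For example, a full launch might look like this (the /opt/jrat path and myapp.jar are placeholders for your own install location and application):
java -javaagent:/opt/jrat/shiftone-jrat.jar -jar myapp.jar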

Related

<Outside any known module>, VTune Amplifier Error

Currently I'm using VTune Amplifier on a Linux system to profile Java code.
I generated a report by attaching it to the running process.
However, in the top-down tree I usually see [Outside any known module], which accounts for a certain amount of time.
When I click it, I can't see anything.
The strange thing is that sometimes it does generate a proper top-down report.
When VTune can generate a proper report, the trace file is usually about 500 MB; when it can't, the trace file is only about 5 MB.
There are plenty of opinions that this is caused by "code on the fly" (JIT compilation).
So I tried these steps after turning off the JIT option in the JDK.
Of course, I ran it as root.
But it doesn't work well.
My Ubuntu version is 14.04.1 LTS.
Please help me! Any probable ideas may be helpful.
Thanks
When you start profiling, do you see a message like "Cannot profile the managed part of the target process. There is no Java* Attach API available. Only native part of the target process will be profiled."?
If yes - it means you are using a standalone JRE (which is not part of a JDK). The JRE package does not include the Java Attach API needed to attach to and profile Java code. Could you please try a JDK instead?
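On Linux you can confirm which java binary is actually being picked up (and whether it belongs to a JDK) with, for example:
readlink -f $(which java)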
Thanks,
Denis

Generating Heap Dumps Java JRE7

I'm trying to generate a heap dump from my Java program, but no matter what I try, I can't figure out how to do so.
I downloaded the Eclipse Memory Analyzer (plugin and then standalone), which is supposed to be able to acquire heap dumps from active JRE processes... yet it lists none. The documentation lists several other ways of generating them, but I can't seem to make any of them work, or they refer to something that just doesn't seem to exist on my system. The same applies to anything I've managed to find on the web...
The program isn't causing an out-of-memory exception; it's just using far more resources than I expect it to.
I'm just at a complete loss as to how exactly it's supposed to be done :/
Any help would be appreciated thanks.
You can do it manually, using Java JDK's jmap.exe.
Get the PID of your process.
Navigate to %JAVA_HOME%/bin/ (JDK).
In the console (Command Prompt), type jmap.exe -dump:format=b,file=C:\dump\dump.bin PID
The dump file is saved in the path you provide (in my example, C:\dump\dump.bin).
Then you can use the NetBeans IDE to analyze this dump. It has built-in tools; just import dump.bin.
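If you prefer not to use an IDE, JDK 6 and later also ship with jhat, which serves the dump over HTTP for browsing (port 7000 by default):
jhat C:\dump\dump.bin
Then point a browser at http://localhost:7000.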

Debugging the "Too many files open" issue

The application I am working on suddenly crashed with:
java.io.IOException: ... Too many open files
As I understand it, the issue means that files are being opened but not closed.
The stack trace, of course, is captured after the fact, and can only help you understand near which event the error occurred.
What would be an intelligent way to search your code base for this issue, which only seems to occur when the app is under a high-stress load?
use lsof -p <pid> to see which file descriptors the process currently has open and what might be leaking;
use ulimit -n to see the limit on open file descriptors for a single process;
check all the IO resources in your project: are they released in time? Note that File, Process, and Socket (and HTTP connection) handles are all IO resources - see the sketch after this list;
sometimes, too many threads will cause this problem too.
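As a concrete illustration of releasing IO resources promptly, here is a minimal sketch using Java 7's try-with-resources, which closes the stream even when an exception is thrown (the file name is just an example):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class FirstLine {
    public static void main(String[] args) throws IOException {
        // The reader is closed automatically when the try block exits,
        // even if readLine() throws, so no file descriptor leaks.
        try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
            System.out.println(in.readLine());
        }
    }
}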
I think the best way is to use a tool specifically designed for the purpose, such as this one:
This little Java agent is a tool that keeps track of where/when/who opened files in your JVM. You can have the agent trace these operations to find out about the access pattern or handle leaks, and dump the list of currently open files and where/when/who opened them.
In addition, upon "too many open files" exception, this agent will dump the list, allowing you to find out where a large number of file descriptors are in use.
I seem to remember YourKit also having some facilities around this, but can't find any specific information at the moment.
What OS? If it's Linux/Mac, there is information under /proc that should help. On Windows, use Process Explorer.
As far as searching the code base goes, perhaps look for code that catches or raises IOException - I think I/O methods that already catch/raise it have a high likelihood of needing a close() call.
Have you tried attaching to the running process using jvisualvm (Java 5.0 and later, in the JDK bin directory)? You can open the running process and take a heap dump (which, if you have an older JDK, you will need to analyze using Eclipse, IntelliJ, NetBeans, et al.).
In JDK 7 the heap dump button is under the "Monitor" tab. It will create a heap dump tab; in its "Classes" sub-tab you can check whether any classes that open files exist in high quantity. Another very useful feature is heap dump comparison: take a reference heap dump, let your app run a bit, then take another and compare the two (the link to compare is on the "[heapdump]" tab you get when you take one). There is also a flag in java for taking a heap dump on crash or OOM exception; you can go down that route if comparing heap dumps does not reveal an obvious culprit class. The "Instances" sub-tab in the heap dump diff will also show you what was allocated between the two dumps, which may help as well.
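The flag alluded to above is, on HotSpot VMs, -XX:+HeapDumpOnOutOfMemoryError, optionally combined with -XX:HeapDumpPath to choose where the dump lands (myapp.jar and the path are placeholders):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar myapp.jar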
jvisualvm is an awesome tool that does not get enough mentions.

Java .jar uses too much memory

I'm making an application in Java using Eclipse Indigo. When I run it from Eclipse, Task Manager shows javaw.exe using 50 MB of memory. When I export the application as a runnable .jar and execute it, Task Manager shows javaw.exe using 500 MB.
Why is this? How could I fix this?
Edit: I'm using Windows 7 64-bit, and my system says I have Java 1.7 installed. Apparently the memory problem is caused by a while loop. I'll study what's inside the while loop causing the problem.
Edit: Problem found. At one point in the while loop, new BufferedImage instances were created instead of reusing the same BufferedImage.
Without any additional details about your code, I would suggest using a profiler to analyze the problem. I know YourKit and the profiler available for NetBeans are very good.
Once you run your app under the profiler, you should initially look at the objects and listeners created by your application's packages. If the issue is not there, expand your search to other packages until you identify things that are growing out of control, and then look at the code that handles those entities.
If you run certain parts of the code multiple times and still see memory utilization growing after that code has stopped running, then you might have a leak and may consider nulling out or emptying variables/listeners on exit.
This should be a good starting point, but please report your results back so we know how it goes. By the way, what operating system are you using, and what version of Java?
--Luiz
You need to profile your code to get the exact answer, but from my experience, when I see things like this I often put it down to garbage collection. For example, I ran the same job twice, giving one run 10 GB and the other 2 GB. Both completed, but the 10 GB run used more memory (and finished faster), while the 2 GB run, I believe, garbage collected more aggressively, so it still completed but took a bit longer with less memory. I'm a bit new to Java, so I may be wrong in pinning this on garbage collection, but I have seen what you are talking about.
You need to profile your code (check out JConsole, which is included with Java, or VisualVM).
That sounds most peculiar.
I can think of two possible explanations:
You looked at the wrong javaw.exe instance. Perhaps you looked at the instance that is running Eclipse ... which is likely to be that big, or bigger.
You have (somehow) managed to configure Java to run with a large heap by default. On Linux you could do this with a wrapper script, a shell function or a shell alias. You can do at least the first of those on Windows.
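One specific thing worth checking (an assumption on my part, not something from your description) is whether a _JAVA_OPTIONS or JAVA_TOOL_OPTIONS environment variable is set, since every JVM launch silently picks these up:
set _JAVA_OPTIONS=-Xmx512m
If one is in effect, the JVM prints a "Picked up _JAVA_OPTIONS: ..." line at startup.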
I don't think it is the JAR file itself. AFAIK, you can't set JVM parameters in a JAR file. It is possible that you've somehow included a different version of something in the JAR file, but that's a bit of a stretch ...
If none of these ideas help, try profiling.
Problem found. At one point in the while loop, new BufferedImage instances were created instead of reusing the same BufferedImage.
Ah yes. BufferedImage uses large amounts of out-of-heap memory and that needs to be managed carefully.
But this doesn't explain why your application used more memory when run from the JAR than when launched from Eclipse ... unless you were telling the application to do different things.
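For reference, a minimal sketch of the fix the poster describes: allocate the BufferedImage once and redraw into it, rather than constructing a new one on each pass of the loop (the sizes, loop, and drawing code are all illustrative stand-ins):
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class FrameLoop {
    public static void main(String[] args) {
        // One image reused across iterations, instead of
        // "new BufferedImage(...)" inside the loop body.
        BufferedImage frame = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = frame.createGraphics();
        for (int i = 0; i < 100; i++) {         // stand-in for the real while loop
            g.clearRect(0, 0, frame.getWidth(), frame.getHeight());
            g.drawString("frame " + i, 10, 20); // stand-in for the real drawing code
        }
        g.dispose(); // release the graphics context when done
    }
}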

How should I diagnose and prevent JVM crashes?

What should I (as a Java programmer who doesn't know anything about JVM internals) do when I come across a JVM crash?
In particular, how would you produce a reproducible test case? What should I be searching for in Sun's (or IBM's) bug database? What information can I get from the log files produced (e.g. hs_err_pidXYZ.log)?
If the crashes occur only on one specific machine, run memtest. I've seen recurring JVM crashes only twice, and in both cases the culprit turned out to be a hardware problem, namely faulty RAM.
In my experience they are nearly always caused by native code using JNI, either mine or someone else's. If you can, try re-running without the native code to see if you can reproduce it.
Sometimes it is worth trying with the JIT compiler turned off, if your bug is easily reproducible.
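For HotSpot, for example, you can disable the JIT entirely with -Xint, which forces purely interpreted mode (myapp.jar is a placeholder):
java -Xint -jar myapp.jar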
As others have pointed out, faulty hardware can also cause this; I've seen it with both memory and video cards (when the crash was in Swing code). Try running whatever hardware diagnostics are most appropriate for your system.
As JVM crashes are rare, I'd report them to Sun. This can be done at their bug database. Use category Java SE, subcategory jvm_exact or jit.
Under Unix/Linux you might get a core dump. Under Windows, the JVM will usually tell you where it has stored a log of what happened. These files often give some hint, but vary from JVM to JVM; Sun gives full details of these files on their website. For IBM JVMs, the files can be analysed using the Java Core Analyzer and Java Heapdump Analyzer from IBM's alphaWorks.
Unfortunately, Java debuggers in my experience tend to hurt more than help here. However, attaching an OS-specific debugger (e.g. Visual Studio) can help if you are familiar with reading C stack traces.
Getting a reproducible test case is hard. If you have a large amount of code that always (or nearly always) crashes, it is easier: slowly remove parts while it keeps crashing, making the result as small as possible. If you have no reproducible test code at all, it is very difficult. I'd suggest working from the hints in the suggestions above.
Sun documents the details of the crash log here. There is also a nice tutorial written up here, if you want to get into the dirty details (it sounds like you don't, though).
However, as a commenter mentioned, a JVM crash is a pretty rare and serious event, and it might be worthwhile to call Sun or IBM professional support in this situation.
When an IBM JVM crashes, it might have written to the file /tmp/dump_locations, where it lists any heapdump or javacore files it has written.
These files can be analysed using the Java Core Analyzer and Java Heapdump Analyzer from IBM's alphaWorks.
There's a great page on the Oracle website to troubleshoot these types of problems.
Check out the relevant sections for:
Hung processes (e.g. the jstack utility)
Post-mortem diagnostics
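For instance, jstack prints the thread stacks of a running (or hung) JVM given its pid (12345 is a placeholder; -F forces a dump from an unresponsive process):
jstack 12345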
