Why are some java libraries compiled without debugging information - java

I've noticed recently that a few Java libs (the JDK, Joda-Time, iText) are compiled without some or all of the debugging information. Either the local variable info is missing, or both the local variable info and the line numbers are missing.
Is there any reason for this? I realise it makes the compiled code larger, but I don't believe that's a particularly large consideration. Or is it just that they build with the default compile options?
Thanks.

The default compile options don't include debugging information, you must specifically tell the compiler to include it. There are several reasons why most people omit it:
Some libraries are used in embedded systems (like mobile phones). Until recently, every bit counted. Today, most mobiles come with more memory than all computers in 1985 had together ;)
When compiled with debugging active, the code runs 5% slower. Not much but again, in some cases every cycle counts.
Today's Senior Developers were born in a time when 64KB of RAM was enormous. Yesterday, I added another 2TB drive to my server in the cellar. That's 7 orders of magnitude in 25 years. Humans need more time to adjust.
[EDIT] As John pointed out, Java bytecode isn't optimized (much) anymore today, so the bytecode in the class files will be the same in both cases (the class file with debug information will just be bigger). The code is optimized by the JIT at runtime, which allows the runtime to tailor the code to the CPU, the memory (amount and layout), etc.
The mentioned 5% penalty applies only when you run the code with the command-line options that allow a remote debugger to attach to the process. If you don't enable remote debugging, there is no penalty (except during class loading, but that happens only once).
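For reference, these are the javac switches involved (Hello.java is a made-up source file):

```shell
# Full debug info: local variables, line numbers, and source file name
javac -g Hello.java

# No debug info at all
javac -g:none Hello.java

# Only line numbers and source file name (the common middle ground)
javac -g:lines,source Hello.java

# Inspect which debug attributes a class file actually contains
javap -l Hello
```

With `-g:none` the `javap -l` output contains no LineNumberTable or LocalVariableTable attributes; with `-g` both appear.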

Probably the size of the installation. Debug information adds size to the jar files, which Sun probably wanted to avoid.
I had to investigate a Java Web Start issue recently - no debug information available - so adding full tracing to the java console and downloading the source code helped some, but the code is rather tangled so I'd just like a debug build.
The JDK should be compiled with FULL debug information everywhere!


How to profile a gwt client application with jprofiler?

I have a memory leak problem in my GWT application and I'm trying to profile it using JProfiler.
I can't manage to get pertinent results, as I don't see my Java classes in the memory profile view; I just see the GWT lib classes.
I've added the parameter to profile a remote application using JProfiler (-agentpath:C:\PROGRA~1\JPROFI~1\bin\WINDOW~1\jprofilerti.dll=port=8849). I launch the project in superDevMode through the Eclipse IDE. JProfiler shows me the GWT classes in memory but it doesn't show my own java classes.
In this video youtube.com/watch?v=zUJUSxXOOa4 we can see that JProfiler can show the Java classes directly; that's what I'm trying to do.
Is there any option to activate in JProfiler for that? Any help on that matter would be welcome. Thank you.
Super Dev mode does not work with a Java profiler. The old Dev mode executed the client-side code in the JVM via a special plugin. These days, the Dev mode browser plugin does not work with modern browsers. The last browsers that supported the plugins were Chrome 21.0.1180.89 and Firefox 26.
As of now, Firefox 24 ESR is still supported:
https://www.mozilla.org/en-US/firefox/organizations/all/
and the Dev mode plugin works in that version. For more information on dev mode see
http://www.gwtproject.org/doc/latest/DevGuideCompilingAndDebugging.html
I don't think so. GWT compiles your classes to JS, so JProfiler won't see them (I think, but maybe I'm wrong). Maybe you can give MemoryAnalyzer a try with a heap dump.
Chrome comes with some awesome built-in CPU and heap profiling tools. Firefox now has its own built-in CPU profiler, and Firebug has a different one. IE (at least 10, and I think 9) has a built-in CPU profiler, though it has been a long time since I dug too far into it.
Memory is historically a difficult thing to track in browsers, not least because old IE versions just won't die, and leak just from looking at them funny. If you are facing one of those memory leaks, a different plan of attack is required.
But if you suspect you are dealing with a leak in your own application code, Chrome's dev tools can help! Compile in PRETTY (or DETAILED if you have an extremely wide screen), and bring up your app in Chrome with the developer tools open.
In the Profiles tab, there are three kinds of profile to capture, two about memory. I typically prefer the Take Heap Snapshot, and take a 'before' and 'after' look at whatever action I believe to be leaking memory, but the Record Heap Allocations view will give you another way to consider the memory usage of your application.
Start by picking a supposedly 'stable' state of your memory usage: turn the app on, use it for a bit, make sure all the various singletons etc. are instantiated, and probably do whatever action you suspect of causing a problem, once. Once you are at a point you can get back to (memory-wise, at least if the leak were fixed), take a snapshot, do the behavior that leaks, return to the 'stable' state, and take another snapshot. Only take one step when checking for leaks; more on this in a bit.
With the two snapshots, you can compare objects allocated and freed - we're mostly interested in cases where more objects were created than deleted, ideally where zero were deleted. If you find N objects are deleted but N+1 are created, then make sure N is veeeery small before digging in - it is often possible to fix a leak only by going after individual objects, tracing them back to their actual leaked source, fixing it, and measuring again.
Once you have an object that was created in one step, but not deleted at the end of that step (but it should have been) use the 'Retainers' view to see why they are still kept. This will more or less show you the field in the object that holds them and the type of that holding object, all the way up to window or some other global object.
Ignore anything in ()s like (compiled code), (array), (system), (string), etc. I'd generally ignore DOM element allocation (assuming you suspect a leak in app code, not JSNI). Look for a few high-level objects leaked rather than many low-level ones; that makes it more likely that you are close to the source of the leak.
The names of compiled constructors and fields in PRETTY generally map very closely to the original Java source. All constructors get _X appended to them, where X is 0, 1, etc - this is to distinguish from the type itself. This makes for an easy way to recognize Java types in the Constructor column, as they all have _s near the end of their name.

Is there a way to profile a java application without the use of an external tool?

External tools are giving me trouble. Is there a way to get simple cpu usage/time spent per function without the use of some external gui tool?
I've been trying to profile my Java program with VisualVM, but I'm having terrible, soul-crushing, ambition-killing results. It will only display heap usage; what I'm interested in is CPU usage, but that panel simply says "Not supported for this JVM." It doesn't tell me which JVM to use, by the way. I've downloaded JDK 6 and launched it using that, and I made sure my program targets the same VM, but nothing! Still the same unhelpful error message.
My needs are pretty simple. I just want to find out where the program is spending its time. Python has an excellent built in profiler that print out where time was spent in each function, both with per call, and total time formats. That's really the extent of what I'm looking for right now. Anyone have any suggestions?
It's not pretty, but you could use the built in hprof profiling mechanism, by adding a switch to the command line.
-Xrunhprof:cpu=times
There are many options available; see the Oracle documentation page for HPROF for more information.
So, for example, if you had an executable jar you wanted to profile, you could type:
java -Xrunhprof:cpu=times -jar Hello.jar
When the run completes, you'll have a (large) text file called "java.hprof.txt".
That file will contain a pile of interesting data, but the part you're looking for is the part which starts:
CPU TIME (ms) BEGIN (total = 500) Wed Feb 27 16:03:18 2013
rank self accum count trace method
1 8.00% 8.00% 2000 301837 sun.nio.cs.UTF_8$Encoder.encodeArrayLoop
2 5.40% 13.40% 2000 301863 sun.nio.cs.StreamEncoder.writeBytes
3 4.20% 17.60% 2000 301844 sun.nio.cs.StreamEncoder.implWrite
4 3.40% 21.00% 2000 301836 sun.nio.cs.UTF_8.updatePositions
Alternatively, if you've not already done so, I would try installing the VisualVM-Extensions, VisualGC, Threads Inspector, and at least the Swing, JVM, Monitor, and Jvmstat Tracer Probes.
Go to Tools->Plugins to install them. If you need more details, comment, and I'll extend this answer further.

Profiling Java Code

I'm attempting to profile a Java web search program called Nutch from source. As far as I understand, to profile I need to enable profiling in the compiler in order to generate a profile file to be opened in a program such as GProf. How do I do this if all I do to compile the software is run Ant within the source root directory?
If you're running a newer JDK (1.6 update 7 or later), you don't need to do anything to prepare your Java process for profiling. Simply use JVisualVM (which comes with the JDK) to attach to your process, and click the profile button.
You say in response to Charlie's answer that ideally you would like information about how the program spends its time.
There's another viewpoint - you need to know why the program spends its time.
The reason each cycle is spent is a chain of reasons, where each link is a line of code on the call stack. The chain is no stronger than its weakest link.
Unless the program is as fast as possible, you've got "bottlenecks".
For example, if a "bottleneck" is wasting 20% of the time, then it consists of an optimizable line of code (i.e. poorly justified) that is on the stack 20% of the time. All you have to do is find it.
If 10,000 samples of the stack are taken, it will be on about 2,000 of them. If 10 samples are taken, it will be on 2 of them, on average.
In fact, if you randomly pause the program several times and study the call stack, if you see an optimizable line of code on as few as 2 samples, you've found a "bottleneck".
You can fix it, get a nice speedup, and repeat the whole process.
That is the basis of this technique.
Regardless, thinking in terms of gprof concepts will not serve you well.
You're really asking an Ant question here. You can add command line flags for the compiler as attributes in the ant file for the compile target. See the <compilerarg> tag here.
There are a lot of good profiling tools, by the way. Have a look at this google search.
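A sketch of what that looks like in build.xml (the target and directory names here are made up):

```xml
<target name="compile">
  <javac srcdir="src" destdir="build" debug="true" debuglevel="lines,vars,source">
    <!-- Extra flags can be passed straight through to javac: -->
    <compilerarg value="-Xlint:all"/>
  </javac>
</target>
```

The `debug` and `debuglevel` attributes of the javac task cover the common case; `<compilerarg>` is the escape hatch for anything else.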

How should I diagnose and prevent JVM crashes?

What should I (as a Java programmer who doesn't know anything about JVM internals) do when I come across a JVM crash?
In particular, how would you produce a reproducible test case? What should I be searching for in Sun's (or IBM's) bug database? What information can I get from the log files produced (e.g. hs_err_pidXYZ.log)?
If the crashes occur only on one specific machine, run memtest. I've seen recurring JVM crashes only twice, and in both cases the culprit turned out to be a hardware problem, namely faulty RAM.
In my experience they are nearly always caused by native code using JNI, either mine or someone else's. If you can, try re-running without the native code to see if you can reproduce it.
Sometimes it is worth trying with the JIT compiler turned off, if your bug is easily reproducible.
As others have pointed out, faulty hardware can also cause this; I've seen it for both memory and video cards (when the crash was in Swing code). Try running whatever hardware diagnostics are most appropriate for your system.
As JVM crashes are rare I'd report them to Sun. This can be done at their bug database. Use category Java SE, Subcategory jvm_exact or jit.
Under Unix/Linux you might get a core dump. Under Windows the JVM will usually tell you where it has stored a log of what has happened. These files often give some hint, but they vary from JVM to JVM. Sun gives full details of these files on their website. For IBM JVMs, the files can be analysed using the Java Core Analyzer and Java heapdump Analyzer from IBM's alphaWorks.
Unfortunately Java debuggers in my experience tend to hurt more than help. However, attaching an OS specific debugger (eg Visual Studio) can help if you are familiar with reading C stack traces.
Trying to get a reproducible test case is hard. If you have a large amount of code that always (or nearly always) crashes it is easier, just slowly remove parts while it keeps crashing, getting the result as small as possible. If you have no reproducible test code at all then it is very difficult. I'd suggest getting hints from my numbered selection above.
Sun documents the details of the crash log here. There is also a nice tutorial written up here, if you want to get into the dirty details (it sounds like you don't, though)
However, as a commenter mentioned, a JVM crash is a pretty rare and serious event, and it might be worthwhile to call Sun or IBM professional support in this situation.
When an IBM JVM crashes, it might have written to the file /tmp/dump_locations, which lists any heapdump or javacore files it has written.
These files can be analysed using the Java Core Analyzer and Java heapdump Analyzer from IBM's alphaworks.
There's a great page on the Oracle website to troubleshoot these types of problems.
Check out the relevant sections for:
Hung Processes (e.g. the jstack utility)
Post Mortem diagnostics

HPjmeter-like graphical tool to view -agentlib:hprof profiling output

What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:
-agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt
This generates output in the hprof ("JAVA PROFILE 1.0.1") format.
I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason the files that are generated using the current version of the Sun JVM fail to load in the current version of HPjmeter:
java.lang.NullPointerException
at com.hp.jmeter.f.jb.a(Unknown Source)
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
Exception in thread "HPeprofDataFileReaderThread" java.lang.AssertionError: null pointer exception from loader
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
(Why would they obfuscate the bytecode for a free product?!)
Two questions arise from this:
Does anyone know the cause of this HPjmeter error? (EDIT: Yes--see below)
What other tools exist to read hprof files? And why are there none from Sun (are there)?
I know the Eclipse TPTP and other tools can monitor JVMTI data on the fly, but I need a solution that can process the generated hprof files after the fact, since the deployed machine only has a JRE (not a JDK) installed.
EDIT: A very helpful HPjmeter developer replied to my question on an HP ITRC forum indicating that heap=dump needs to be included in the -agentlib options temporarily until a bug in HPjmeter is fixed. This information makes HPjmeter viable again, but I will still leave the question open to see if anyone knows of any other tools.
EDIT: As of version 4.0.00 of HPjmeter (available 05/2009) this bug has been fixed.
YourKit Java Profiler is able to read hprof snapshots (I am not sure if only for memory profiling or for CPU as well). It is not free, but it is by far the best Java profiler I have ever used. It presents the results in a clear, intuitive way and performs well on large data sets. The documentation is also pretty good.
For viewing and analyzing the output of hprof=samples or hprof=cpu I have used PerfAnal with good results. The GUI is a bit spartan, but very useful.
PerfAnal is a free download (GPL, originally an example project in the book Java Programming on Linux).
See this article:
http://www.oracle.com/technetwork/articles/javase/perfanal-137231.html
for more information and the download.
Normally you can just run
java -jar PerfAnal.jar hprof.java.txt
You may need to fiddle with -Xmx for large hprof files.
I am not 100% sure it'll work (it sounds like it will), and I am not sure it'll show it in the format you want... but have you thought about VisualVM?
I believe it'll open up the resulting file.
I have been using Eclipse Memory Analyzer to analyze various performance problems successfully. First, install the tool in Eclipse as described on the project webpage.
After that, you can create a dump file, knowing the pid of the JVM to be analyzed:
jmap -dump:format=b,file=<filename>.hprof <jvm_pid>
Then just import the .hprof file into Eclipse. It has some automatic reports that try to point out possible problems (for me they usually do not work).
Edit:
Answering the comment: you are right, it is more like a leak finder for Java. For performance problems, I have played with JRat on small projects. It shows the time consumed per method, the number of times each method is called, the hierarchy of calls, etc. The only problem is that, as far as I know, it does not support .hprof files. To use it, you need to run your program with a VM argument
-javaagent:<path>/shiftone-jrat.jar
This will generate a directory with the profile captured by the tool. Then, execute
java -jar shiftone-jrat.jar
Then open the trace. Even though it is a simple tool, I think it could be useful.
