Is there a tool that can collect and count (Java) stack traces in a large log file, so that you get an overview of which errors occur most often?
I am not aware of any automatic tool, but LogMX will give you a nice, clean overview of your log file with search options.
This probably isn't the best answer, but I am going to try to answer the spirit of your question: try Dynatrace. It's not free and it doesn't work with log files per se, but it can give you very detailed reports of which types of exceptions are thrown, from where and when, on top of a lot of other information.
I'm not too sure if there is a tool available to evaluate log files, but you may have more success with a tool like AppDynamics. This is a monitoring tool that can be used to evaluate live application performance and can be configured to monitor exception frequency.
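If no off-the-shelf tool fits, a quick-and-dirty count can also be scripted. Here is a minimal sketch in Java that counts the first line of each stack trace in a log file; the regular expression is only an assumption about what an exception line looks like in your log, so adjust it to your format:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class StackTraceCounter {
        // Assumed shape of the first line of a stack trace, e.g.
        // "java.lang.NullPointerException: something went wrong".
        private static final Pattern EXCEPTION_LINE =
                Pattern.compile("\\b([a-zA-Z_$][\\w$]*\\.)+[A-Z]\\w*(Exception|Error)\\b");

        public static void main(String[] args) throws Exception {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = EXCEPTION_LINE.matcher(line);
                if (m.find()) {
                    Integer c = counts.get(m.group());
                    counts.put(m.group(), c == null ? 1 : c + 1);
                }
            }
            in.close();
            // Print raw counts; sorting by frequency is left out of this sketch.
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                System.out.println(e.getValue() + "\t" + e.getKey());
            }
        }
    }

Note that lines such as "Caused by: ..." will also be counted, which may or may not be what you want.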
We are facing an issue in our production environment.
The file generated using log4j is getting some special characters prepended at the start of the file, before any logging starts.
This results in a binary file, which keeps tools like Splunk from reading these files, since they expect text files.
Please help me figure out what the issue could be here.
According to Google, my best guess is that you are using GC logs (JVM Garbage Collector logs), based on what I read here: https://developer.jboss.org/message/529671#529671 and here: https://developer.jboss.org/thread/148848?tstart=0&_sscc=t.
It seems that there is no real solution, except perhaps using the right combination of ASCII encoding and the right locale, according to the pages linked above.
Since you said in your question that you have this problem in a production environment, I would suggest simply disabling GC logs there, because enabling them has a performance/storage impact anyway. In your JVM start options, look for something like -XX:+PrintGC or -verbose:gc.
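If you are not sure which options the process was actually started with, one way to check is to print the JVM input arguments from inside the application. A minimal sketch using the standard java.lang.management API (the class name is just an illustration):

    import java.lang.management.ManagementFactory;

    public class PrintJvmArgs {
        public static void main(String[] args) {
            // Prints the options the JVM was started with, e.g. -verbose:gc or -XX:+PrintGC.
            for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
                System.out.println(arg);
            }
        }
    }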
We have a few Java applications (jars) running as backend server applications on localhost. These programs run inside a virtual box (RHEL 6.2).
After one of the jars had run for 5 days, it stopped working. No exceptions were thrown (we didn't see any output from the errors that would have been caught in the catch blocks). To find out what caused this, we put in some println's and redirected the output to a text file using the > operator on the command line in a shell script.
After about 4 or 5 days, we found that the jar was still running, but it wasn't writing anything to the text file or to the database to which the application was supposed to write entries.
Perhaps the text file became too large for the virtual box to handle, but basically we wanted to know this:
How are such runtime problems located in Java? In C++ we have Valgrind, Purify etc., but
1. are there such tools in Java?
2. How would you recommend we output println's without facing the extremely-large-text-file problem? Or is there a better way to do it?
Rather than printing to System.out, how about using a logging framework like log4j? Log4j allows for log file sizing, versioning and purging.
see http://logging.apache.org/log4j/1.2/
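For instance, a rolling file appender can be set up programmatically with the log4j 1.2 API. This is only a minimal sketch; the file name, pattern, size limit and backup count are illustrative values, not recommendations:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import org.apache.log4j.RollingFileAppender;

    public class LoggingSetup {
        public static void main(String[] args) throws Exception {
            RollingFileAppender appender = new RollingFileAppender(
                    new PatternLayout("%d %-5p [%t] %c - %m%n"), "server.log");
            appender.setMaxFileSize("10MB");  // roll over once the file reaches 10 MB
            appender.setMaxBackupIndex(5);    // keep at most 5 rolled-over files
            Logger.getRootLogger().addAppender(appender);
            Logger.getRootLogger().setLevel(Level.INFO);

            Logger.getLogger(LoggingSetup.class).info("logging configured");
        }
    }

The same settings can of course also go in a log4j.properties or log4j.xml file instead of code.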
You may also want to reconsider your server architecture.
How are such runtime problems located in Java? In C++ we have Valgrind, Purify etc., but 1. are there such tools in Java?
There are a lot of Java profilers available, and a few are free as well. One is VisualVM, which ships with the Java distribution. You can attach your process to the profiler, but profilers will only help you find certain kinds of problems, such as memory leaks, CPU-intensive tasks, etc.
How would you recommend we output println's without facing the extremely-large-text-file problem? Or is there a better way to do it?
System.out is not a good way to deal with this problem. Loggers such as log4j provide a very robust and easy-to-use API. Log4j also makes it easy to configure features such as rolling over your log files.
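As a rough sketch of what replacing the println's could look like (the class and method names are made up for illustration):

    import org.apache.log4j.Logger;

    public class Worker {
        private static final Logger log = Logger.getLogger(Worker.class);

        void process() {
            log.info("processing started");        // instead of System.out.println
            try {
                // ... application work ...
            } catch (Exception e) {
                log.error("processing failed", e); // logs the message plus the full stack trace
            }
        }
    }

Combined with a rolling file appender configuration, the output file can never grow without bound.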
I need the Wikipedia deletion log for my project. I was able to find the deletion log here:
http://en.wikipedia.org/w/index.php?title=Special:Log&type=delete&user=&page=&year=&month=-1&tagfilter=&hide_review_log=1
I can download 5000 entries at a time, but it will take a lot of time due to the large number of pages. Is there a dump available?
Why not ask at Wikipedia? There are various dumps available, including tools on the Toolserver that may be of use. Your best bet is asking at the technical Village pump.
What should I (as a Java programmer who doesn't know anything about JVM internals) do when I come across a JVM crash?
In particular, how would you produce a reproducible test case? What should I be searching for in Sun's (or IBM's) bug database? What information can I get from the log files produced (e.g. hs_err_pidXYZ.log)?
If the crashes occur only on one specific machine, run memtest. I've seen recurring JVM crashes only twice, and in both cases the culprit turned out to be a hardware problem, namely faulty RAM.
In my experience they are nearly always caused by native code using JNI, either mine or someone else's. If you can, try re-running without the native code to see if you can reproduce it.
Sometimes it is worth trying with the JIT compiler turned off, if your bug is easily reproducible.
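On Sun's HotSpot JVM, for example, you can force interpreted-only mode with something like the following (the jar name is just a placeholder):
java -Xint -jar yourapp.jar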
As others have pointed out, faulty hardware can also cause this; I've seen it for both memory and video cards (when the crash was in Swing code). Try running whatever hardware diagnostics are most appropriate for your system.
As JVM crashes are rare, I'd report them to Sun. This can be done via their bug database. Use category Java SE, subcategory jvm_exact or jit.
Under Unix/Linux you might get a core dump. Under Windows the JVM will usually tell you where it has stored a log of what happened. These files often give some hint, but they vary from JVM to JVM. Sun gives full details of these files on their website. For IBM JVMs, the files can be analysed using the Java Core Analyzer and Java Heapdump Analyzer from IBM's alphaWorks.
Unfortunately, Java debuggers in my experience tend to hurt more than help here. However, attaching an OS-specific debugger (e.g. Visual Studio) can help if you are familiar with reading C stack traces.
Getting a reproducible test case is hard. If you have a large amount of code that always (or nearly always) crashes, it is easier: just keep removing parts while it still crashes, to get the failing case as small as possible. If you have no reproducible test code at all, it is very difficult. I'd suggest working through the hints above.
Sun documents the details of the crash log here. There is also a nice tutorial written up here, if you want to get into the dirty details (it sounds like you don't, though).
However, as a commenter mentioned, a JVM crash is a pretty rare and serious event, and it might be worthwhile to call Sun or IBM professional support in this situation.
When an IBM JVM crashes, it might have written to the file /tmp/dump_locations; there it lists any heapdump or javacore files it has written.
These files can be analysed using the Java Core Analyzer and Java heapdump Analyzer from IBM's alphaworks.
There's a great page on the Oracle website to troubleshoot these types of problems.
Check out the relevant sections for:
Hung processes (e.g. the jstack utility)
Post-mortem diagnostics
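For a hung process, for example, a thread dump can usually be captured with the jstack utility that ships with the JDK (replace <jvm_pid> with the process id):
jstack <jvm_pid> > threaddump.txt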
What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:
-agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt
This generates output in the hprof ("JAVA PROFILE 1.0.1") format.
I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason the files that are generated using the current version of the Sun JVM fail to load in the current version of HPjmeter:
java.lang.NullPointerException
at com.hp.jmeter.f.jb.a(Unknown Source)
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
Exception in thread "HPeprofDataFileReaderThread" java.lang.AssertionError: null pointer exception from loader
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
(Why would they obfuscate the bytecode for a free product?!)
Two questions arise from this:
Does anyone know the cause of this HPjmeter error? (EDIT: Yes--see below)
What other tools exist to read hprof files? And why are there none from Sun (are there)?
I know the Eclipse TPTP and other tools can monitor JVMTI data on the fly, but I need a solution that can process the generated hprof files after the fact, since the deployed machine only has a JRE (not a JDK) installed.
EDIT: A very helpful HPjmeter developer replied to my question on an HP ITRC forum indicating that heap=dump needs to be included in the -agentlib options temporarily until a bug in HPjmeter is fixed. This information makes HPjmeter viable again, but I will still leave the question open to see if anyone knows of any other tools.
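In other words, the workaround was to add heap=dump to the options from the question, along the lines of:
-agentlib:hprof=heap=dump,cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt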
EDIT: As of version 4.0.00 of HPjmeter (available 05/2009) this bug has been fixed.
YourKit Java Profiler is able to read hprof snapshots (I am not sure if only for memory profiling or for CPU as well). It is not free, but it is by far the best Java profiler I have ever used. It presents the results in a clear, intuitive way and performs well on large data sets. The documentation is also pretty good.
For viewing and analyzing the output of hprof=samples or hprof=cpu I have used PerfAnal with good results. The GUI is a bit spartan, but very useful.
PerfAnal is a free download (GPL, originally an example project in the book Java Programming on Linux).
See this article:
http://www.oracle.com/technetwork/articles/javase/perfanal-137231.html
for more information and the download.
Normally you can just run
java -jar PerfAnal.jar hprof.java.txt
You may need to fiddle with -Xmx for large hprof files.
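For example (2g is only a guess; size it to your file):
java -Xmx2g -jar PerfAnal.jar hprof.java.txt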
I am not 100% sure it'll work (it sounds like it will) and I am not sure it'll show the data in the format you want... but have you thought about VisualVM?
I believe it'll open up the resulting file.
I have been using Eclipse Memory Analyzer to analyze various performance problems successfully. First of all, install the tool in Eclipse as described on the project web page.
After that, you can create a dump file, given the pid of the JVM to be analyzed:
jmap -dump:format=b,file=<filename>.hprof <jvm_pid>
Then just import the .hprof file into Eclipse. It has some automatic reports that try (for me they usually do not work) to point out what the possible problems could be.
Edit:
Answering the comment: you are right, it is more like a leak finder for Java. For performance problems, I have played with JRat on small projects. It shows the time consumed per method, the number of times a method is called, the hierarchy of calls, etc. The only problem is that, as far as I know, it does not support .hprof files. To use it, you need to execute your program adding a VM argument:
-javaagent:<path>/shiftone-jrat.jar
This will generate a directory with the profile captured by the tool. Then execute:
java -jar shiftone-jrat.jar
and open the trace. Even though it is a simple tool, I think it could be useful.