What is the easiest way to track down (i.e., find the cause of) a 'GC overhead limit exceeded' error?
What I do not consider good options:
Adding the -XX:-UseGCOverheadLimit parameter to the JVM call. That error is telling me there is something incredibly inefficient in my implementation, and I want to fix that.
"Go and look carefully at your code". The project is very large, so I need some clues regarding where to look for inefficiencies.
Should I use a profiler? If yes, which one would you suggest?
Should I look into the GC log? I have tried doing that, but I have little understanding of it, and it does not seem to offer clear pointers to the code (for example, which objects are being GC'ed).
Many questions have been asked on SO about this error, but no one seems to answer this specific question.
Simplest tools to start profiling your app
NetBeans comes with a built-in profiler.
JConsole can also help a bit.
VisualVM can also help a bit.
A really good commercial tool is DynaTrace.
Now for the approach to fix your problem:
There can be other ways to tackle it, but the following things may help.
1) The symptoms you are seeing are probably the result of too many short-lived objects being created in your code. This is not a memory leak situation; rather, there is too much garbage for the JVM to clean up, and the JVM is failing to keep up with it. You need to check your code for where these objects are getting created.
2) Second, you can take several heap dumps at regular intervals between two GC runs and compare them in NetBeans or some other tool of your choice. You need to do this before your app goes into the bad state. The comparison will tell you what has grown on the heap and may give you a pointer to where to look in your code (one way to trigger the dumps programmatically is sketched below).
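For point 2, here is a minimal sketch of triggering heap dumps programmatically, assuming a HotSpot/Sun JVM (the com.sun.management API is HotSpot-specific); the file names and the workload placeholder are illustrative only:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {
        // Writes an .hprof file that can be opened in NetBeans, VisualVM, MAT, etc.
        static void dumpHeap(String filePath) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(filePath, true); // true = only live objects (forces a GC first)
        }

        public static void main(String[] args) throws Exception {
            dumpHeap("before.hprof");
            // ... run the part of the application you suspect here ...
            dumpHeap("after.hprof");
        }
    }

You can get equivalent dumps without touching the code by running jmap -dump:live,format=b,file=heap.hprof <pid> against the running process.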
I hope this helps in solving your issue. :)
Related
Suppose we have a Java task that is working in isolation and we are able to monitor it using VisualVM, and we notice continuous garbage creation and periodic GC.
How do we detect what exactly is causing this issue?
Is there a way to see which method's execution is generating garbage? How do we see where the garbage comes from?
Yes, we can see exactly what objects are allocated, but that's not helpful on its own. I believe a lot of objects are created and garbage-collected later, but I can't figure out where that happens and what exactly causes it.
How is this usually done? What tools should we use? Any links to topics about this are appreciated.
NOTE: the problem here is not GC parameter tuning but code optimization; we want to eliminate unnecessary object creation, maybe use primitives instead, etc.
The easiest way is to use a tool like JProfiler and record allocations. The "Allocation HotSpot" view will show in which methods your application is allocating objects. More details can be found here.
When you cannot use a profiler, another approach is to take a heap dump and investigate the objects it contains, then use that information to work out in which methods they are instantiated.
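To make that concrete, here is a hypothetical example of what an allocation hot spot often looks like (and of the NOTE above about preferring primitives): the boxed version allocates a new Long on almost every iteration, while the primitive version allocates nothing.

    public class AllocationHotSpot {
        // Autoboxing in a hot loop: "total += i" unboxes, adds and re-boxes,
        // creating a new Long object on almost every iteration; this is exactly
        // the kind of short-lived garbage an allocation recording will flag.
        static long sumBoxed(int n) {
            Long total = 0L;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        // Same logic with a primitive accumulator: no per-iteration allocation.
        static long sumPrimitive(int n) {
            long total = 0L;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sumBoxed(1000000));
            System.out.println(sumPrimitive(1000000));
        }
    }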
I would suggest installing the VisualGC plugin in jvisualvm. It will give you a very good idea of the number of minor and full GCs happening.
If you are looking for garbage-collected objects and possible memory leaks, then you should inspect heap dumps taken at two different points in your code's workflow.
How can I trace a Java program's performance? For example, how long does each method take, how many resources are used, and so on? I need this information to work on optimizing my Java program.
As others have mentioned, profilers are the way to go. A long time ago I used http://www.yourkit.com/ and found it quite easy to use and informative.
If you are keen, you could investigate using AOP for method timing etc. Just search Google for "AOP method timing" for some ideas (a framework-free sketch of the idea is shown below).
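If a full AOP setup is more than you need, here is a minimal hand-rolled sketch of the same timing idea; doWork() is just a placeholder for whatever method you want to measure:

    public class MethodTiming {
        public static void main(String[] args) {
            long start = System.nanoTime();
            doWork(); // the method you actually want to measure
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("doWork() took " + elapsedMs + " ms");
        }

        // Placeholder standing in for the real work being measured.
        private static void doWork() {
            double sum = 0;
            for (int i = 0; i < 1000000; i++) {
                sum += Math.sqrt(i);
            }
            System.out.println(sum);
        }
    }

An AOP framework essentially weaves this start/stop bookkeeping around your methods for you, so you don't have to edit every method by hand.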
You could try opening your project in NetBeans; from there you can use the Profiler tool and get performance data for methods, load times, etc. It's really easy to use and the data is very complete.
NetBeans Profiler
You can try a JVM profiling tool such as JProfiler.
Making a good and robust benchmark in Java is hard. Take a look at the following articles:
http://www.ibm.com/developerworks/java/library/j-benchmark1/index.html
http://www.ibm.com/developerworks/java/library/j-benchmark2/index.html
When optimizing, it's hard to tell what causes the slow performance. Usually a bad algorithm (complexity) causes it. Maybe you can start with the algorithm first before going into more detailed tweaks; a hypothetical example follows.
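As a hypothetical illustration of the "algorithm first" point: building a large string with += is quadratic, because each concatenation copies everything built so far, while StringBuilder is linear.

    public class ConcatVsBuilder {
        // O(n^2): every += copies the whole string built so far into a new String.
        static String concat(int n) {
            String s = "";
            for (int i = 0; i < n; i++) {
                s += i;
            }
            return s;
        }

        // O(n): appends into a single growing buffer.
        static String build(int n) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < n; i++) {
                sb.append(i);
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(concat(20000).length());  // noticeably slow
            System.out.println(build(20000).length());   // effectively instant
        }
    }

No amount of low-level tweaking will make the first version fast for large n; fixing the algorithm is the real win.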
There is a specification of the Java memory model.
And I want to dive into the source code to actually investigate how those mechanisms are implemented. (e.g., synchronized, volatile, ..., etc.)
But the codebase is so huge, I have no idea where to start with.
(http://www.java2s.com/Open-Source/Java-Document/CatalogJava-Document.htm)
Could anyone give me some clues?
Thanks a lot!
You might start by looking at the synchronizer.cpp file in the current version of the JDK. Prepare yourself a strong pot of coffee; you've picked one of the most complex areas of the JVM's source code to start delving into.
If you haven't already done so, I would also suggest that you take a look at Bill Pugh's page on the Java Memory Model and Doug Lea's recommendations for compiler writers on implementing the Java memory model.
You may also glean something from running the debug JVM with the option turned on to output the JIT-compiled assembly, which you can then inspect. (This won't tell you everything, but it might give you some pointers: I think some of the things it prints will, if nothing else, give you things to search for in the JDK source code.)
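For reference, this is the kind of invocation I mean, assuming a HotSpot JVM with the hsdis disassembler plugin on its library path (MyApp is a placeholder; -XX:+PrintCompilation works even without the plugin and simply lists what the JIT compiles):

    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompilation -XX:+PrintAssembly MyApp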
This may be a bit off-topic for "right answer, not discussion."
However, I am trying to debug my thought process, so maybe someone can help me:
I use compilers all the time, and the fact that I'm giving up control over machine code generation (the layout of my caches, and the flow of electrons) does not bother me.
However, giving up control of memory layout (being able to place stuff in memory) and memory management (garbage collection) still bothers me these days.
Have others dealt with this? If so, how did you get past it? (In particular, I often feel "safer" in C++ than in Java.)
Thanks!
Your feeling is, naturally, very subjective.
You might feel comfortable managing your own memory space in C++.
Others might appreciate the easiness of Java managing the heap for you, and reducing memory management overhead to a minimum.
The programming domain has an influence as well. For example, in an embedded environment you most likely will not have the privilege of a garbage collection mechanism, leaving you to manage your own memory whether you like it or not.
Bottom line - subjective and domain-dependent.
Confront your nightmare! Profile a busy application in NetBeans and watch the garbage collector do its job.
If you trust the JVM with code generation, why not trust it with data generation too?
Please note that things like CPU cache sizes may influence the optimal placement of your objects, and that the JIT basically knows better than you because it can measure and take action while the program runs.
If you've ever used COM under C++, it's really no different from calling Release(). The memory may or may not be freed right then, or it may be freed somewhere down the line when whatever is using it has finished with it.
Best thing to do is just assume it works and stop worrying about it.
The original poster asked about (a) memory layout and (b) memory management. The previous answers only talk about memory management.
Regarding memory layout, the keyword to search for seems to be "struct".
C and C++ both have memory layout control. D should as well.
It appears (based on a quick search) Java does not.
C# grants memory layout control via structs. See:
Stack Overflow: incorrect members order in a C# structure
http://www.developerfusion.com/article/84519/mastering-structs-in-c/
Go's data structures are called "structs", but I cannot tell if they grant control over memory layout. (I suspect they do, but have not been able to confirm this.)
I welcome any corrections/additions to the above.
(And regarding memory management, I'm quite happy to let the language/platform do it.)
If we have 300 classes in an application, is it possible to monitor how many instances of each class we have at a given time? Is it possible to know how much memory each instance is consuming?
Thanks
JDK 1.6 includes a tool called jvisualvm, which allows you to view lots of information about your running Java program, including memory usage, threads, etc. You could also use a profiler to see this kind of information. The profiler in NetBeans looks a lot like JVisualVM.
I personally like YourKit. It has a very good UI and comes with a 30-day trial. The detail it provides is also pretty extensive.
The online help documentation on that site should show you how to set things up to run it.
Use Profiler4j or PMD.
Personally, I like Profiler4j for its ease of use and simple graphics. :)
Use jvisualvm.exe; it is part of JDK 6.
Most profilers will give you this information. I'm personally familiar with JProfiler, but I expect any worthwhile profiler would let you do this.
For a more low-tech solution, you could even trigger a heap dump from your application and then look through it with an application like jhat. The interface leaves a lot to be desired, though, and profilers would be much more comfortable to use in any non-trivial case.
Edit: here is an example of the memory screen for JProfiler, and you can also investigate the reference chain.
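A command-line sketch of the heap-dump approach mentioned above, assuming a Sun/Oracle JDK on your PATH; <pid> stands for your application's process id:

    jps                                              # find the pid of your Java process
    jmap -histo:live <pid>                           # per-class instance counts and total bytes
    jmap -dump:live,format=b,file=heap.hprof <pid>   # full dump for jhat or a profiler

The histogram prints one row per class with the number of instances and the total bytes they occupy, which answers the "300 classes" question directly; dividing bytes by instances gives a rough per-instance (shallow) size for that class.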
You could use a Java profiler. Depending on which web container (if it's a web app) you're deploying to, you can try a lot of different profilers: http://java-source.net/open-source/profilers