My question is different from this one: I have profiled my application because it is too slow.
After one process completes, I see some live objects in the heap walker.
We cache some data from the database into a HashMap, but the heap walker shows live objects attributed to ResultSet.getString and Statement.executeQuery, which should not be there.
HashMap.put() accounts for much less memory than those two methods.
Is this analysis right, or am I missing something? Is the memory actually occupied by the HashMap itself, with the heap walker just showing me the JDBC methods (getString and executeQuery) that allocated the objects?
Since you're talking about methods, I guess you're looking at the "Allocations" view of the heap walker. That view shows where objects have been created, not where objects are referenced. There's a screencast that explains allocation recording in JProfiler.
HashMap.put will not allocate a lot of memory; it just creates small "Entry" objects that are used to store key-value pairs. The objects that take a lot of memory are created before you put them into the hash map.
Methods like ResultSet.getString create the String objects that you read from your database, so it's reasonable to assume that some of these objects are longer-lived.
To find out why objects are still on the heap, you should go to the reference view, select incoming references and search for the "path to GC root". The "biggest objects" view is also very helpful in tracking down excessive memory usage.
What you may be seeing is cached data held by the connection (perhaps its buffer cache) or the statement or result set.
This could be because the connection, statement or result set is not closed, or it could be due to connection pooling. If you look at the memory profile, you may be able to see the "path to GC root" (the reference chain from an object root), and this would indicate what is holding your ResultSet strings. Check whether it's in your code, cached within something you retain, or held by a pool.
N.B. I've not used JProfiler but this is how I would track it with YourKit.
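The usual fix for statements and result sets that are never closed is try-with-resources. A minimal sketch, using a made-up stand-in resource since a real JDBC connection would need a live database (Connection, Statement and ResultSet are all AutoCloseable, so the same pattern applies to them):

```java
// Stand-in for a JDBC Connection/Statement/ResultSet. Illustrative only.
class FakeResource implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true;
    }
}

class ResourceDemo {
    // try-with-resources closes the resource even on exceptions or
    // early returns, so no buffered result data stays reachable.
    static void useSafely(FakeResource resource) {
        try (FakeResource r = resource) {
            // ... read rows, copy the values you need into your cache ...
        }
    }
}
```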
I recently modified a big chunk of my code, and the garbage collector went crazy; I can't figure out what it deletes. So I'd like to get the names of the things that the garbage collector deletes.
I've tried Android Device Monitor, but I can't figure out where or how to find this information, or whether it is even possible.
What should I do to figure out where to modify my code?
finalize()
This API is provided by Java; the JVM calls it whenever an object is about to be garbage collected, although, per the Javadoc, there is no guarantee it will be called. You can try printing an identifying name there.
Hope this solves your issue.
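As a sketch (with the caveat above that the JVM may never call it, and noting that finalize() is deprecated since Java 9), it could look like this; the Tracked class and its name field are made up for the example:

```java
class Tracked {
    private final String name;

    Tracked(String name) {
        this.name = name;
    }

    String name() {
        return name;
    }

    // Called by the JVM when the object is about to be collected,
    // if it is called at all; there is no guarantee.
    @Override
    protected void finalize() throws Throwable {
        try {
            System.out.println("About to collect: " + name);
        } finally {
            super.finalize();
        }
    }
}
```

You would then drop all references to the object and call System.gc() to request (not force) a collection.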
(Actually, if the GC has "gone crazy", a more likely problem is the objects it can't delete than the ones that it deletes.)
Firstly, objects don't (in general) have names.
Secondly, I don't think it is possible to track when the GC actually deletes objects.
So tracking object deletion is unlikely to work ... and probably wouldn't help anyway.
What I think you really need is a memory profiler tool (e.g. DDMS) that can tell you where the objects are allocated, and can help you analyse all of the objects that are live (reachable) and garbage (unreachable).
The two most likely things that can make the GC use a lot of time (or "go crazy") are:
Having too much reachable data. As the heap gets close to full, the percentage of time spent attempting to collect garbage increases dramatically. A common cause of having too much reachable data is a storage leak in your application.
Creating too many temporary objects. A higher allocation rate means that the collector has to run more often.
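A minimal sketch of the first pattern (the names are made up): a static, ever-growing collection that keeps everything it holds reachable:

```java
import java.util.ArrayList;
import java.util.List;

class LeakyCache {
    // Nothing ever removes entries, so every byte[] added here stays
    // reachable from a GC root (the static field) forever.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]);
    }
}
```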
References:
https://developer.android.com/tools/debugging/debugging-memory.html
http://developer.android.com/tools/debugging/ddms.html
I read the article http://www.cubrid.org/blog/tags/Garbage%20Collection/, which gives a high-level picture of GC in Java. It says:
The compaction task is to remove memory fragmentation by compacting memory in order to remove the empty space between allocated memory areas.
Do objects have to be moved to other places in order to fill the holes?
I think that objects are moved. If so, that means addresses change, and so references to those objects also have to be updated?
It seems like too complicated a task to find all the back references and update them...
Yes, arbitrary objects are moved arbitrarily through memory, and yes this requires updating the references to those objects. One could also use indirection but that has various downsides and I'm not aware of any high performance GC doing it.
It's indeed somewhat complicated, but as far as GC optimizations go it's rather benign. Basic mark-compact works rather well and it basically just goes over all objects in address order, moves them to the smallest available address, and builds a "break table" as it goes which contains the necessary information (start address -> displacement) for quickly fixing up references, which it then does in a second pass. None of this requires information or bookkeeping beyond what any mark-sweep collector already needs (object types, locations of references, etc).
And when you move objects out of the nursery in a generational setting, you also know (roughly) where the old references are. You needed to know that to do a minor collection.
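To make the break-table idea concrete, here is a toy sketch, not a real collector: "objects" are slots in an array, live slots slide toward address 0, and a table of (old address -> displacement) entries is built in the first pass and used to rewrite references in the second. In this toy version every live old address gets its own table entry; a real break table stores one entry per moved run and looks up the nearest preceding entry.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class MarkCompactSketch {
    // heap[i] < 0 means "no reference"; otherwise heap[i] is the index
    // of another slot. live[i] says whether the marker found slot i live.
    static int[] compact(int[] heap, boolean[] live) {
        Map<Integer, Integer> breakTable = new LinkedHashMap<>();
        int[] compacted = new int[heap.length];
        int next = 0;

        // Pass 1: slide live slots down, recording each displacement.
        for (int i = 0; i < heap.length; i++) {
            if (live[i]) {
                breakTable.put(i, i - next);
                compacted[next++] = heap[i];
            }
        }

        // Pass 2: fix up every reference using the break table.
        for (int i = 0; i < next; i++) {
            int ref = compacted[i];
            if (ref >= 0) {
                compacted[i] = ref - breakTable.get(ref);
            }
        }
        return Arrays.copyOf(compacted, next);
    }
}
```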
I am debugging an OutOfMemory issue using MAT (analyzing a heap dump) in an old Java application.
MAT shows that an RMI thread has created an array (BO[150K+]) of my business object (BO) with 150k+ instances, consuming around 358 MB (Xmx is 512 MB). It is a memory-leak situation.
Another interesting thing I noticed: in all the dumps (created after a server crash), the number of instances in the array object is the same.
I am not able to understand how I can find out where this array object is created, i.e. in which class. Is there any direct/indirect feature available in MAT for this?
Please suggest if any such option is available in VisualVM or some other tool, or some memory analyzer which I can run over the codebase.
In the Eclipse MAT histogram, select the array object, right-click, and select "Merge shortest paths to GC Root" (exclude weak references).
This should show you the reference path all the way up to the object that holds this array.
If you need a profiler that can show you where instances were allocated, you can try JProfiler. The heap walker has an "Allocations" view where you can see the cumulated call stacks for any set of objects. To get allocation call stacks, you have to switch on allocation recording, possibly at startup.
Disclaimer: My company develops JProfiler
There is functionality to use OQL on your heap dump in vanilla jVisualVM. One of the functions in OQL is heap.livepaths, which takes an instance as a parameter and outputs all paths that prevent garbage collection.
If you have a concrete class or object that you know should not be there (relevant windows closed for swing and garbage collector forced several times), you can just list those paths and get several examples of reference paths.
Rinse and repeat until all suspect objects DO get garbage collected. Then it becomes more difficult since you no longer have a lead, but you likely found several bugs in your app and may have fixed it to an acceptable degree.
Note: I usually go the hard way around analyzing memory leaks, and it may not work for very complex applications.
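For example (assuming the suspect class is javax.swing.JFrame; substitute your own), a query like this in the OQL console lists, for each instance, the reference paths keeping it alive:

```
select heap.livepaths(f) from javax.swing.JFrame f
```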
I'm using collection objects (mainly ArrayList and HashMap).
My program runs 24*7. Sometimes it throws an OutOfMemoryError: Java heap space.
I have already given 1 GB of heap space to the JVM.
My question is whether I should use global Collection objects, or local objects in each method.
(I have to process almost 1,000,000 records per day, continuously, 24*7.)
You could also set the heap space to 2 GB and see if the problem still occurs. That's the poor man's memory-leak detection process. Otherwise, use a profiler like VisualVM and check for memory leaks.
You can use a source code quality tool like Sonar.
You can also use the Eclipse Memory Analysis tool. It enables you to analyse the heap dump and figure out which objects are using the most memory: you can analyse production heap dumps with hundreds of millions of objects, quickly calculate the retained sizes of objects, see what is preventing the garbage collector from collecting objects, and run a report to automatically extract leak suspects.
I always use it to fix OutOfMemory exceptions.
All the answers were really helpful:
1. When the program needs to run 24*7, use local variables within methods.
2. The program must be thread-safe (if threads are used).
3. Use connection pooling: when your connection objects are used in an infinite loop, creating and destroying them every time is a big performance issue, so create 10 or 15 connections at the beginning and check connections out and back in.
4. Use a memory-analysis tool to analyse the heap dump and figure out what is using the most memory.
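The check-out/check-in idea in point 3 can be sketched with a tiny generic pool built on a BlockingQueue (not tied to JDBC; a real connection pool would also validate connections and support timeouts):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class SimplePool<T> {
    private final BlockingQueue<T> idle;

    // Create all resources up front, e.g. 10-15 connections.
    SimplePool(List<T> resources) {
        idle = new ArrayBlockingQueue<>(resources.size(), false, resources);
    }

    // Blocks until a resource is available instead of creating a new one.
    T checkOut() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting", e);
        }
    }

    void checkIn(T resource) {
        idle.offer(resource);
    }

    int available() {
        return idle.size();
    }
}
```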
You should use local variables unless a value is used across methods, and try to set global variables to null when their values will no longer be used; but be sure the object really is no longer needed before nulling it.
Such unreferenced objects are easily garbage collected, which helps you avoid memory exceptions. Also review your code for infinite loops when iterating over collections, arrays, etc.
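A sketch of the difference (the class and method names are made up): state held in a local variable becomes unreachable as soon as the method returns, while a field on a long-lived object keeps growing:

```java
import java.util.ArrayList;
import java.util.List;

class BatchProcessor {
    // Risky in a 24*7 loop: retained for the lifetime of the processor.
    private final List<String> retained = new ArrayList<>();

    int processWithLocal(List<String> rows) {
        List<String> work = new ArrayList<>(rows);
        // ... process work ...
        return work.size(); // 'work' is eligible for GC after this returns
    }

    int processWithField(List<String> rows) {
        retained.addAll(rows); // grows forever unless explicitly cleared
        return retained.size();
    }
}
```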
I'm reading the Android training article Performance Tips:
Object creation is never free. A generational garbage collector with
per-thread allocation pools for temporary objects can make
allocation cheaper, but allocating memory is always more expensive
than not allocating memory.
What are "per-thread allocation pools for temporary objects"?
I didn't find any docs about this.
Read it as: a generational garbage collector with per-thread allocation, pools for temporary objects.
Per-thread garbage collection means that objects associated only with the thread that created them are tracked. At collection time for a particular thread, the collector determines which thread-only objects remain reachable from a restricted root set associated with that thread. Any thread-only objects not determined to be reachable are garbage collected.
What they are saying, and they are right, is object creation (and subsequent collection) can be a major time-taker.
If you look at this example you'll see that at one point, memory management dominated the time, and was fixed by keeping used objects of each class in a free-list, so they could be efficiently re-used.
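The free-list idea from that example can be sketched like this (the Point class and the pool API are illustrative):

```java
import java.util.ArrayDeque;

class Point {
    int x, y;
}

class PointPool {
    private final ArrayDeque<Point> free = new ArrayDeque<>();

    // Reuse a recycled instance when one exists; allocate only as a
    // last resort, so steady-state allocation drops toward zero.
    Point obtain() {
        Point p = free.poll();
        return (p != null) ? p : new Point();
    }

    void recycle(Point p) {
        free.push(p);
    }

    int freeCount() {
        return free.size();
    }
}
```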
However, also note in that example, memory management was not the biggest problem at first.
It only became the biggest problem after even bigger problems were removed.
For example, suppose you have a team of people who want to lose weight, relative to another team.
Suppose the team has
1) a 400-lb person, (corresponding to some other problem)
2) a 200-lb person (corresponding to the memory management problem), and
3) a 100-lb person (corresponding to some other problem).
If the team as a whole wants to lose the most weight, where should it concentrate first?
Obviously, they need to work on all three, but if they miss out on the big guy, they're not going to get very far.
So the most aggressive procedure is first find out what the biggest problem is (not by guessing), and fix that.
Then the next biggest, and so on.
The big secret is don't guess.
Everybody knows that, but what do they do? - they guess anyway.
Guesses, by definition, are often wrong, missing the biggest issues.
Let the program tell you what the biggest problem is.
(I use random pausing as in that example.)