I have a memory leak problem in my GWT application and I'm trying to profile it using JProfiler.
I can't manage to get pertinent results, as I don't see my Java classes in the memory profiling view; I just see the GWT library classes.
I've added the parameter to profile a remote application using JProfiler (-agentpath:C:\PROGRA~1\JPROFI~1\bin\WINDOW~1\jprofilerti.dll=port=8849). I launch the project in Super Dev Mode through the Eclipse IDE. JProfiler shows me the GWT classes in memory, but it doesn't show my own Java classes.
In this video (youtube.com/watch?v=zUJUSxXOOa4) we can see that JProfiler shows the Java classes directly; that's what I'm trying to achieve.
Is there an option to enable in JProfiler for that? Any help on the matter would be welcome. Thank you.
Super Dev mode does not work with a Java profiler. The old Dev mode executed the client-side code in the JVM via a special plugin. These days, the Dev mode browser plugin does not work with modern browsers. The last browsers that supported the plugins were Chrome 21.0.1180.89 and Firefox 26.
As of now, Firefox 24 ESR is still supported:
https://www.mozilla.org/en-US/firefox/organizations/all/
and the Dev mode plugin works in that version. For more information on dev mode see
http://www.gwtproject.org/doc/latest/DevGuideCompilingAndDebugging.html
I don't think so. GWT compiles your classes to JS, and because of that JProfiler won't see them (I think, but maybe I'm wrong). Maybe you can give Memory Analyzer (MAT) a try with a heap dump.
Chrome comes with some awesome built-in CPU and heap profiling tools. Firefox now has its own built-in CPU profiler, and Firebug has a different one. IE (at least 10, and I think 9) has a built-in CPU profiler, though it has been a long time since I dug too far in there.
Memory is historically a difficult thing to track in browsers, not least because old IE versions just won't die, and leak just from looking at them funny. If you are facing one of those memory leaks, a different plan of attack is required.
But if you suspect you are dealing with a leak in your own application code, Chrome's dev tools can help! Compile in PRETTY (or DETAILED if you have an extremely wide screen), and bring up your app in Chrome with the developer tools open.
In the Profiles tab, there are three kinds of profile to capture, two about memory. I typically prefer the Take Heap Snapshot, and take a 'before' and 'after' look at whatever action I believe to be leaking memory, but the Record Heap Allocations view will give you another way to consider the memory usage of your application.
Start by picking a supposedly 'stable' state of your memory usage - turn the app on, use it for a bit, make sure all the various singletons etc. are instantiated, and probably do whatever action you suspect of causing the problem, once. Once you are at a point you can get back to (memory-wise, at least if the leak were fixed), take a snapshot, do the behavior that leaks, return to the 'stable' state, and take another snapshot. Only take one step when checking for leaks; more on this in a bit.
With the two snapshots, you can compare objects allocated and freed - we're mostly interested in cases where more objects were created than deleted, ideally where zero were deleted. If you find N objects are deleted but N+1 are created, then make sure N is very small before digging in - it is often possible to fix a leak only by going after individual objects, tracing them back to their actual leaked source, fixing it, and measuring again.
Once you have an object that was created in one step, but not deleted at the end of that step (but it should have been) use the 'Retainers' view to see why they are still kept. This will more or less show you the field in the object that holds them and the type of that holding object, all the way up to window or some other global object.
Ignore anything in ()s like (compiled code), (array), (system), (string), etc. I'd generally ignore DOM element allocation (assuming you suspect you have a leak in app code, not JSNI). Look for a few high-level objects leaked rather than many low-level ones; that makes it more likely that you are close to the source of the leak.
The names of compiled constructors and fields in PRETTY generally map very closely to the original Java source. All constructors get _X appended to them, where X is 0, 1, etc - this is to distinguish from the type itself. This makes for an easy way to recognize Java types in the Constructor column, as they all have _s near the end of their name.
I have created an application using Vaadin with its respective UI.
I am running it on a server with a maximum heap of 250 MB. The application crashes because the heap fills up and is not reclaimed by garbage collection.
I tried analyzing it with VisualVM and found a lot of instances; somehow the Vaadin scssCache is making this mess.
How can I rectify this? Is it because of the browser cache settings, or should I do something with the VaadinServlet cache entry?
I really don't understand; please help. I have attached my VisualVM screenshot for reference. Thank you very much. I am using Vaadin 7.6.3.
The attached VisualVM screenshot shows that the entire scssCache retains 1248 KB of memory, of which 1200 KB is used for the actual cached CSS contents. This is less than 1% of your 250 MB heap size and most likely not the problem.
That 1200 KB char[] with the compiled CSS might be the biggest individual object on the heap, but there's only one such object. You will thus have to look for something else that consumes lots of memory. I'd recommend looking at the list of classes sorted by their retained size, ignoring low-level classes such as char[], java.lang.String or java.util.HashMap, and instead trying to pinpoint anything related to your own application.
I would also encourage you to verify that your application is actually running in production mode since the only code path that I can identify that does anything with scssCache is through VaadinServlet.serveOnTheFlyCompiledScss which checks whether production mode is enabled and in that case returns before touching the cache.
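For reference, a minimal sketch of how production mode is typically enabled in Vaadin 7 via the servlet annotation (MyUI and MyServlet are placeholder names, not from the question; the productionMode servlet init parameter in web.xml is an alternative):

import javax.servlet.annotation.WebServlet;

import com.vaadin.annotations.VaadinServletConfiguration;
import com.vaadin.server.VaadinServlet;

// Placeholder servlet; MyUI stands in for the application's own UI class.
@WebServlet(urlPatterns = "/*", asyncSupported = true)
@VaadinServletConfiguration(productionMode = true, ui = MyUI.class)
public class MyServlet extends VaadinServlet {
}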
We have a Java ERP type of application. Communication between server and client is via RMI. In peak hours there can be up to 250 users logged in, and about 20 of them are working at the same time. This means that about 20 threads are live at any given time in peak hours.
The server can run for hours without any problems, but all of a sudden response times get higher and higher. Response times can be in minutes.
We are running on Windows 2008 R2 with Sun's JDK 1.6.0_16. We have been using perfmon and Process Explorer to see what is going on. The only thing we find odd is that when the server starts to work slowly, the number of handles the java.exe process has opened is around 3500. I'm not saying that this is the actual problem.
I'm just curious if there are some guidelines I should follow to be able to pinpoint the problem. What tools should I use?
Can you access the log configuration of this application?
If you can, you should change the log level to "DEBUG". Tracing the DEBUG logs of a request could give you useful information about the contention point.
If you can't, profiling tools can help you:
VisualVM (Free, and good product)
Eclipse TPTP (Free, but more complicated than VisualVM)
JProbe (not free but very powerful; it is my favorite Java profiler, but it is expensive)
If the application has been developed with JMX control points, you can plug in a JMX viewer to get information...
If you want to stress the application to trigger the problem (to verify whether it is a load problem), you can use stress tools like JMeter.
Sounds like the garbage collection cannot keep up and starts "stop-the-world" collecting for some reason.
Attach with jvisualvm in the JDK when starting and have a look at the collected data when the performance drops.
The problem you're describing is quite typical but also quite general. Causes can range from memory leaks and resource contention to bad GC policies and heap/PermGen-space allocation. To pinpoint exact problems in your application, you need to profile it (I am aware of tools like YourKit and JProfiler). If you profile your application wisely, only a few application cycles should be needed to reveal the problems; otherwise profiling isn't very easy in itself.
In a similar situation, I coded a simple profiling utility myself. Basically I used a ThreadLocal that holds a "StopWatch" (based on a LinkedHashMap), and I then insert code like this at various points of the application: watch.time("OperationX");
Then, after the thread finishes a task, I call watch.logTime(); and the class writes a log line that looks like this: [DEBUG] StopWatch time:Stuff=0, AnotherEvent=102, OperationX=150
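A minimal sketch of such a ThreadLocal stopwatch (my own illustration; the class and method names mirror the description above but are not from any library):

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal per-thread stopwatch: records the elapsed time from the start of the
// current task at each named checkpoint, then logs everything in one line.
public final class StopWatch {

    private static final ThreadLocal<StopWatch> CURRENT = new ThreadLocal<StopWatch>() {
        @Override
        protected StopWatch initialValue() {
            return new StopWatch();
        }
    };

    private final Map<String, Long> times = new LinkedHashMap<String, Long>();
    private final long start = System.currentTimeMillis();

    public static StopWatch get() {
        return CURRENT.get();
    }

    // Record how long it took to reach this named point, e.g. StopWatch.get().time("OperationX");
    public void time(String label) {
        times.put(label, System.currentTimeMillis() - start);
    }

    // Emit one line like: [DEBUG] StopWatch time:Stuff=0, AnotherEvent=102, OperationX=150
    public void logTime() {
        StringBuilder line = new StringBuilder("[DEBUG] StopWatch time:");
        boolean first = true;
        for (Map.Entry<String, Long> entry : times.entrySet()) {
            if (!first) {
                line.append(", ");
            }
            line.append(entry.getKey()).append('=').append(entry.getValue());
            first = false;
        }
        System.out.println(line);
        CURRENT.remove(); // start fresh for this thread's next task
    }
}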
After this I wrote a simple parser that generates CSV from this log (per code path). The best thing you can do is to create a histogram (easily done in Excel). Averages, median and even mode can fool you; I highly recommend creating a histogram.
Together with this histogram, you can create line graphs using the average/median/mode (whichever represents the data best - you can determine this from the histogram).
This way, you can be 100% sure exactly which operation is taking time. If you can't determine the culprit, binary search is your friend (fine-grain the events).
Might sound really primitive, but it works. Also, if you make a library out of it, you can use it in any project. It's also nice because you can easily turn it on in production as well.
Aside from the GC that others have mentioned, try taking thread dumps every 5-10 seconds for about 30 seconds during your slowdown. There could be a case where DB calls, a web service, or some other dependency becomes slow. If you take a look at the thread dumps you will be able to see threads which don't appear to move, and you can narrow down your culprit that way.
From the GC stand point, do you monitor your CPU usage during these times? If the GC is running frequently you will see a jump in your overall CPU usage.
If only this was a Solaris box, prstat would be your friend.
For acute issues like this, a quick jstack <pid> should point out the problem area. Probably no need to get all fancy about it.
If I had to guess, I'd say HotSpot jumped in and tightly optimised some badly written code. NetBeans grinds to a halt where it uses a WeakHashMap with newly created objects to cache file data. When optimised, the entries can be removed from the map straight after being added. Obviously, if the cache is being relied upon, much file activity follows. You probably won't see the drive light up, because it'll all be cached by the OS.
We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every 3 weeks, after which a server restart is needed to fix it.
Upon analysis of the heap dumps we find the problem to be objects used in JSPs.
Can JSP objects be the real cause of app server memory issues? How do we free up JSP objects (objects which are being instantiated using useBean or other tags)?
We have a clustered WebSphere app server with 2 nodes and an IHS.
EDIT: The findings above are based on the heap dump and native stderr log analysis given below, using the IBM Support Assistant.
Native stderr log analysis:
(chart: http://saregos.com/wp-content/uploads/2010/03/chart.jpg)
Heap dump analysis:
Heap dump analysis showing the immediate dominators (2 levels up from the hashtable entry in the image above):
The last image shows that the immediate dominators are in fact objects being used in JSPs.
EDIT2: More info available at http://saregos.com/?p=43
I'd first attach a profiling tool to tell you what these "objects" are that are taking up all the memory.
Eclipse has TPTP,
or there is JProfiler
or JProbe.
Any of these should show the object heap creeping up and allow you to inspect it to see what is on the heap.
Then search the code base to find who is creating these.
Maybe you have a cache or tree/map object with elements in it, and you have only implemented the "equals()" method on these objects when you also need to implement "hashCode()".
The map/cache/tree would then get bigger and bigger until it falls over.
This is only a guess though.
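To illustrate that guess with a minimal, made-up example: a key class that overrides equals() but not hashCode() makes a HashMap grow on every put of a logically equal key, because the keys land in different buckets.

import java.util.HashMap;
import java.util.Map;

public class MissingHashCodeDemo {

    // Hypothetical cache key that overrides equals() but NOT hashCode().
    static final class CacheKey {
        private final String id;
        CacheKey(String id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            return o instanceof CacheKey && ((CacheKey) o).id.equals(id);
        }
        // hashCode() is inherited from Object, so two equal keys usually hash
        // differently and HashMap treats them as distinct entries.
    }

    public static void main(String[] args) {
        Map<CacheKey, String> cache = new HashMap<CacheKey, String>();
        for (int i = 0; i < 1000; i++) {
            // The "same" key is re-put on every iteration, but because the hash
            // codes differ, the map grows by roughly one entry each time.
            cache.put(new CacheKey("user-42"), "payload");
        }
        System.out.println("cache size: " + cache.size()); // close to 1000, not 1
    }
}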
JProfiler would be my first call
JavaWorld has an example screenshot of what is in memory...
And a screenshot of the object heap building up and being cleaned up (hence the sawtooth edge).
UPDATE:
Ok, I'd look at...
http://www-01.ibm.com/support/docview.wss?uid=swg1PK38940
Heap usage increases over time which leads to an OutOfMemory
condition. Analysis of a heapdump shows that the following
objects are taking up an increasing amount of space:
40,543,128 [304] 47 class com/ibm/wsspi/rasdiag/DiagnosticConfigHome
40,539,056 [56] 2 java/util/Hashtable 0xa8089170
40,539,000 [2,064] 511 array of java/util/Hashtable$Entry
6,300,888 [40] 3 java/util/Hashtable$HashtableCacheHashEntry
Triggering the garbage collection manually doesn't solve your problem - it won't free resources that are still in use.
You should use a profiling tool (like JProfiler) to find your leaks. You probably have code that stores references in lists or maps that are not released at runtime - probably static references.
If you run under the Sun Java 6 JVM, strongly consider using the jvisualvm program in the JDK to get an initial overview of what actually goes on inside the program. The snapshot comparison is really good at helping you narrow down which objects sneak in.
If Sun 6 JVM is not an option, then investigate which profiling tools you have. Trials can get you really far.
It can be something as simple as gigantic character arrays underlying substrings you are collecting in a list, e.g. for housekeeping.
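A hedged illustration of that point: on the Java 6 JVMs discussed here, String.substring() shares the parent string's backing char[] (JDK 7u6 and later copy instead), so a tiny substring can pin a huge array.

import java.util.ArrayList;
import java.util.List;

public class SubstringRetentionDemo {
    public static void main(String[] args) {
        List<String> keepers = new ArrayList<String>();
        for (int i = 0; i < 100; i++) {
            // Imagine this is a huge line read from a feed (one million chars).
            String hugeLine = buildHugeLine();
            // On JDK 6 the 10-char substring keeps a reference to the full
            // backing char[] of hugeLine, so the whole array stays reachable.
            keepers.add(hugeLine.substring(0, 10));
            // Defensive copy that drops the reference to the big array:
            // keepers.add(new String(hugeLine.substring(0, 10)));
        }
        System.out.println("kept " + keepers.size() + " short strings");
    }

    private static String buildHugeLine() {
        StringBuilder sb = new StringBuilder(1 << 20);
        for (int i = 0; i < (1 << 20); i++) {
            sb.append('x');
        }
        return sb.toString();
    }
}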
I suggest reading Effective Java, chapter 2. Following it, together with a profiler, will help you identify the places where your application produces memory leaks.
Freeing up memory isn't the way to solve extensive memory consumption. The extensive memory consumption may be a result of two things:
not properly written code - the solution is to write it properly, so that it does not consume more than is needed - Effective Java will help here.
the application simply needs this much memory. Then you should increase the VM memory using -Xmx, -Xms, -XX:MaxHeapSize, ...
There is no specific way to free up objects allocated in JSPs, at least as far as I know. Rather than investigating such options, I'd rather focus on finding the actual problem in your application code and fixing it.
Some hints that might help:
Check the scope of your beans. Aren't you, e.g., storing something user- or request-specific in "application" scope by mistake?
Check the web session timeout settings in your web application and in the app server.
You mentioned the heap consumption grows gradually. If it's indeed so, try to see by how much the heap size grows with various user scenarios: grab a heap dump, run a test, let the session data time out, grab another dump, compare the two. That might give you some idea where the objects on the heap come from.
Check your beans for any obvious memory leaks, for sure :)
EDIT: Checking for unreleased static resources that Daniel mentions is another worthwhile thing :)
As I understand it, those top-level memory eaters are the cache storage and the objects stored in it. You should probably make sure that your cache frees objects when it takes up too much memory. You may want to use weak references if you only need to cache objects that are still alive elsewhere.
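As a rough sketch of that idea (my own, not the poster's code): hold the cached values through SoftReference so the GC may reclaim them under memory pressure, or use WeakHashMap when entries should live only as long as the keys are referenced elsewhere.

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of a cache whose values the GC is allowed to reclaim when memory runs low.
public class SoftCache<K, V> {

    private final ConcurrentMap<K, SoftReference<V>> map =
            new ConcurrentHashMap<K, SoftReference<V>>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {
            map.remove(key); // the value was collected, drop the stale reference
        }
        return value;
    }
}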
This is a problem I have been trying to track down for a couple of months now. I have a Java app running that processes XML feeds and stores the result in a database. There have been intermittent resource problems that are very difficult to track down.
Background:
On the production box (where the problem is most noticeable), I do not have particularly good access to the box, and have been unable to get JProfiler running. That box is a 64-bit quad-core, 8 GB machine running CentOS 5.2, Tomcat 6, and Java 1.6.0_11. It starts with these JAVA_OPTS:
JAVA_OPTS="-server -Xmx5g -Xms4g -Xss256k -XX:MaxPermSize=256m -XX:+PrintGCDetails -
XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC -XX:+PrintTenuringDistribution -XX:+UseParNewGC"
The technology stack is the following:
Centos 64-bit 5.2
Java 6u11
Tomcat 6
Spring/WebMVC 2.5
Hibernate 3
Quartz 1.6.1
DBCP 1.2.1
Mysql 5.0.45
Ehcache 1.5.0
(and of course a host of other dependencies, notably the jakarta-commons libraries)
The closest I can get to reproducing the problem is a 32-bit machine with lower memory requirements, which I do have control over. I have probed it to death with JProfiler and fixed many performance problems (synchronization issues, precompiling/caching XPath queries, reducing the thread pool, removing unnecessary Hibernate pre-fetching, and trimming overzealous "cache-warming" during processing).
In each case, the profiler showed these as taking up huge amounts of resources for one reason or another, and that these were no longer primary resource hogs once the changes went in.
The Problem:
The JVM seems to completely ignore the memory usage settings, fills all memory and becomes unresponsive. This is an issue for the customer-facing end, which expects a regular poll (5-minute basis with a 1-minute retry), as well as for our operations team, who are constantly notified that a box has become unresponsive and have to restart it. There is nothing else significant running on this box.
The problem appears to be garbage collection. We are using the ConcurrentMarkSweep collector (as noted above) because the original STW collector was causing JDBC timeouts and became increasingly slow. The logs show that as memory usage increases, it begins to throw CMS failures and kicks back to the original stop-the-world collector, which then seems to not collect properly.
However, when running with JProfiler, the "Run GC" button seems to clean up the memory nicely rather than showing an increasing footprint, but since I cannot connect JProfiler directly to the production box, and resolving proven hotspots doesn't seem to be working, I am left with the voodoo of tuning garbage collection blind.
What I have tried:
Profiling and fixing hotspots.
Using STW, Parallel and CMS garbage collectors.
Running with min/max heap sizes at 1/2,2/4,4/5,6/6 increments.
Running with permgen space in 256M increments up to 1Gb.
Many combinations of the above.
I have also consulted the JVM [tuning reference](http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html) , but can't really find anything explaining this behavior or any examples of _which_ tuning parameters to use in a situation like this.
I have also (unsuccessfully) tried JProfiler in offline mode and connecting with JConsole and VisualVM, but I can't seem to find anything that will interpret my GC log data.
Unfortunately, the problem also pops up sporadically, it seems to be unpredictable, it can run for days or even a week without having any problems, or it can fail 40 times in a day, and the only thing I can seem to catch consistently is that garbage collection is acting up.
Can anyone give any advice as to:
a) Why a JVM is using 8 physical GB and 2 GB of swap space when it is configured to max out at less than 6.
b) A reference to GC tuning that actually explains or gives reasonable examples of when and what kind of setting to use the advanced collections with.
c) A reference to the most common Java memory leaks (I understand unclaimed references, but I mean at the library/framework level, or something more inherent in data structures, like hashmaps).
Thanks for any and all insight you can provide.
EDIT
Emil H:
1) Yes, my development cluster is a mirror of production data, down to the media server. The primary difference is the 32/64-bit split and the amount of RAM available, which I can't replicate very easily, but the code, queries and settings are identical.
2) There is some legacy code that relies on JAXB, but in reordering the jobs to try to avoid scheduling conflicts, I have that execution generally eliminated since it runs once a day. The primary parser uses XPath queries which call down to the javax.xml.xpath package. This was the source of a few hotspots: for one, the queries were not being pre-compiled, and two, the references to them were hardcoded strings. I created a thread-safe cache (a hashmap) and factored the references to the XPath queries out to final static Strings, which lowered resource consumption significantly (a sketch of such a cache follows below). The querying is still a large part of the processing, but it should be, because that is the main responsibility of the application.
3) An additional note: the other primary consumer is image operations from JAI (reprocessing images from a feed). I am unfamiliar with Java's graphics libraries, but from what I have found they are not particularly leaky.
(thanks for the answers so far, folks!)
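As a side note, a pre-compiled XPath cache of the kind described in point 2) might look roughly like this (names are illustrative, not the asker's actual code; note that XPathExpression itself is not guaranteed thread-safe, so evaluation may still need synchronization or per-thread expressions):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

public final class XPathCache {

    // Query strings kept as constants instead of scattered literals (example path).
    public static final String ITEM_TITLE = "/feed/item/title";

    private static final ConcurrentMap<String, XPathExpression> CACHE =
            new ConcurrentHashMap<String, XPathExpression>();

    // Compile each expression once and reuse it; compilation is the expensive part.
    public static XPathExpression get(String expression) throws XPathExpressionException {
        XPathExpression compiled = CACHE.get(expression);
        if (compiled == null) {
            compiled = XPathFactory.newInstance().newXPath().compile(expression);
            XPathExpression previous = CACHE.putIfAbsent(expression, compiled);
            if (previous != null) {
                compiled = previous;
            }
        }
        return compiled;
    }

    private XPathCache() {
    }
}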
UPDATE:
I was able to connect to the production instance with VisualVM, but it had the GC visualization / run-GC option disabled (though I could view it locally). The interesting thing: the heap allocation of the VM is obeying the JAVA_OPTS, and the actual allocated heap is sitting comfortably at 1-1.5 GB and doesn't seem to be leaking, but the box-level monitoring still shows a leak pattern that is not reflected in the VM monitoring. There is nothing else running on this box, so I am stumped.
Well, I finally found the issue that was causing this, and I'm posting a detailed answer in case someone else has these issues.
I tried jmap while the process was acting up, but this usually caused the JVM to hang further, and I would have to run it with -F (force). This resulted in heap dumps that seemed to be missing a lot of data, or at least missing the references between them. For analysis, I tried jhat, which presents a lot of data but not much in the way of how to interpret it. Secondly, I tried the Eclipse-based memory analysis tool ( http://www.eclipse.org/mat/ ), which showed that the heap was mostly classes related to Tomcat.
The issue was that jmap was not reporting the actual state of the application; it was only catching the classes on shutdown, which were mostly Tomcat classes.
I tried a few more times, and noticed that there were some very high counts of model objects (actually 2-3x more than were marked public in the database).
Using this I analyzed the slow query logs, and a few unrelated performance problems. I tried extra-lazy loading ( http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html ), as well as replacing a few hibernate operations with direct jdbc queries (mostly where it was dealing with loading and operating on large collections -- the jdbc replacements just worked directly on the join tables), and replaced some other inefficient queries that mysql was logging.
These steps improved pieces of the frontend performance, but still did not address the issue of the leak; the app was still unstable and acting unpredictably.
Finally, I found the option -XX:+HeapDumpOnOutOfMemoryError. This finally produced a very large (~6.5 GB) hprof file that accurately showed the state of the application. Ironically, the file was so large that jhat could not analyze it, even on a box with 16 GB of RAM. Fortunately, MAT was able to produce some nice-looking graphs and showed some better data.
This time what stuck out was that a single quartz thread was taking up 4.5 GB of the 6 GB of heap, and the majority of that was a Hibernate StatefulPersistenceContext ( https://www.hibernate.org/hib_docs/v3/api/org/hibernate/engine/StatefulPersistenceContext.html ). This class is used by Hibernate internally as its primary cache (I had disabled the second-level and query caches backed by EHCache).
This class is used to enable most of the features of Hibernate, so it can't be directly disabled (you can work around it, but Spring doesn't support the stateless session), and I would be very surprised if it had such a major memory leak in a mature product. So why was it leaking now?
Well, it was a combination of things:
The quartz thread pool is instantiated with certain things being ThreadLocal, and Spring was injecting a session factory that created a session at the start of each quartz thread's lifecycle, which was then reused to run the various quartz jobs that used the Hibernate session. Hibernate was then caching in the session, which is its expected behavior.
The problem is that the thread pool was never releasing the session, so Hibernate stayed resident and maintained the cache for the lifecycle of the session. Since this was using Spring's Hibernate template support, there was no explicit use of the sessions (we use a dao -> manager -> driver -> quartz-job hierarchy; the dao is injected with Hibernate configs through Spring, so the operations are done directly on the templates).
So the session was never being closed, Hibernate kept references to the cached objects, and they were never garbage collected; each time a new job ran it would just keep filling up the cache local to the thread, so there was not even any sharing between the different jobs. Also, since this is a write-intensive job (very little reading), the cache was mostly wasted, but the objects kept getting created.
The solution: create a DAO method that explicitly calls session.flush() and session.clear(), and invoke that method at the beginning of each job.
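For illustration only, with Spring's HibernateTemplate that method could look roughly like this (class and method names are mine, not the actual code):

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

public class MaintenanceDao extends HibernateDaoSupport {

    // Called at the start of every quartz job so the long-lived, thread-bound
    // session does not keep accumulating first-level cache entries.
    public void clearSessionCache() {
        getHibernateTemplate().flush();  // push any pending changes to the database
        getHibernateTemplate().clear();  // evict everything from the session cache
    }
}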
The app has been running for a few days now with no monitoring issues, memory errors or restarts.
Thanks for everyone's help on this. It was a pretty tricky bug to track down, as everything was doing exactly what it was supposed to, but in the end a 3-line method managed to fix all the problems.
Can you run the production box with JMX enabled?
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=<port>
...
Monitoring and Management Using JMX
And then attach with JConsole, VisualVM?
Is it ok to do a heap dump with jmap?
If yes, you could then analyze the heap dump for leaks with JProfiler (which you already have), jhat, VisualVM, or Eclipse MAT. Also compare heap dumps; that might help to find leaks/patterns.
And as you mentioned jakarta-commons: there is a known problem when using jakarta-commons-logging related to holding onto the classloader. For a good read on that, check
A day in the life of a memory leak hunter (release(Classloader))
It seems like memory other than the heap is leaking; you mention that the heap is remaining stable. A classical candidate is permgen (the permanent generation), which consists of 2 things: loaded class objects and interned strings. Since you report having connected with VisualVM, you should be able to see the number of loaded classes; check whether there is a continuous increase in loaded classes (important: VisualVM also shows the total number of classes ever loaded - it's okay if that goes up, but the number of currently loaded classes should stabilize after a certain time).
If it does turn out to be a permgen leak then debugging gets trickier since tooling for permgen analysis is rather lacking in comparison to the heap. Your best bet is to start a small script on the server that repeatedly (every hour?) invokes:
jmap -permstat <pid> > somefile<timestamp>.txt
jmap with that parameter will generate an overview of loaded classes together with an estimate of their size in bytes; this report can help you identify whether certain classes do not get unloaded. (Note: <pid> means the process id, and <timestamp> should be some generated timestamp to distinguish the files.)
Once you have identified certain classes as being loaded and not unloaded, you can figure out mentally where these might be generated; otherwise you can use jhat to analyze dumps generated with jmap -dump. I'll keep that for a future update, should you need the info.
I would look for directly allocated ByteBuffers.
From the javadoc:
A direct byte buffer may be created by invoking the allocateDirect factory method of this class. The buffers returned by this method typically have somewhat higher allocation and deallocation costs than non-direct buffers. The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious. It is therefore recommended that direct buffers be allocated primarily for large, long-lived buffers that are subject to the underlying system's native I/O operations. In general it is best to allocate direct buffers only when they yield a measurable gain in program performance.
Perhaps the Tomcat code uses this to do I/O; configure Tomcat to use a different connector.
Failing that, you could have a thread that periodically executes System.gc(). "-XX:+ExplicitGCInvokesConcurrent" might be an interesting option to try.
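If you do try the periodic System.gc() route (purely as an experiment - the JVM may ignore the request, and -XX:+ExplicitGCInvokesConcurrent keeps the explicit call from forcing a full stop-the-world pause under CMS), a minimal sketch could be:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicGc {

    public static void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Request a GC every 5 minutes; with direct buffers this can help release
        // native memory that is only freed when the owning Java objects are collected.
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.gc();
            }
        }, 5, 5, TimeUnit.MINUTES);
    }
}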
Any JAXB? I find that JAXB is a perm space stuffer.
Also, I find that visualgc, now shipped with JDK 6, is a great way to see what's going on in memory. It shows the eden, generational, and perm spaces and the transient behavior of the GC beautifully. All you need is the PID of the process. Maybe that will help while you work on JProfiler.
And what about the Spring tracing/logging aspects? Maybe you can write a simple aspect, apply it declaratively, and do a poor man's profiler that way.
"Unfortunately, the problem also pops up sporadically, it seems to be unpredictable, it can run for days or even a week without having any problems, or it can fail 40 times in a day, and the only thing I can seem to catch consistently is that garbage collection is acting up."
Sounds like this is bound to a use case that is executed up to 40 times a day and then not at all for days. I hope you are not just tracking the symptoms. This must be something that you can narrow down by tracing the actions of the application's actors (users, jobs, services).
If this happens during XML imports, you should compare the XML data of a 40-crashes day with data that is imported on a zero-crash day. Maybe it's some sort of logical problem that you won't find by looking at your code alone.
I had the same problem, with a couple of differences...
My technology is the following:
grails 2.2.4
tomcat7
quartz-plugin 1.0
I use two datasources in my application. That particularity turned out to be decisive for the cause of the bug.
Another thing to consider is that the quartz plugin injects a Hibernate session into the quartz threads, just like #liam says, and the quartz threads stay alive until I shut the application down.
My problem was a bug in the Grails ORM combined with the way the plugin handles the session and my two datasources.
The quartz plugin has a listener to init and destroy Hibernate sessions:
import org.codehaus.groovy.grails.support.PersistenceContextInterceptor; // Grails 2.x package
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.listeners.JobListenerSupport;

public class SessionBinderJobListener extends JobListenerSupport {

    public static final String NAME = "sessionBinderListener";

    private PersistenceContextInterceptor persistenceInterceptor;

    public String getName() {
        return NAME;
    }

    public PersistenceContextInterceptor getPersistenceInterceptor() {
        return persistenceInterceptor;
    }

    public void setPersistenceInterceptor(PersistenceContextInterceptor persistenceInterceptor) {
        this.persistenceInterceptor = persistenceInterceptor;
    }

    // Opens/binds a persistence context before each quartz job runs.
    public void jobToBeExecuted(JobExecutionContext context) {
        if (persistenceInterceptor != null) {
            persistenceInterceptor.init();
        }
    }

    // Flushes and destroys the persistence context after the job has run.
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException exception) {
        if (persistenceInterceptor != null) {
            persistenceInterceptor.flush();
            persistenceInterceptor.destroy();
        }
    }
}
In my case, persistenceInterceptor is an instance of AggregatePersistenceContextInterceptor, which holds a List of HibernatePersistenceContextInterceptor - one for each datasource.
Every operation done on the AggregatePersistenceContextInterceptor is passed on to those HibernatePersistenceContextInterceptors without any modification or treatment.
When we call init() on HibernatePersistenceContextInterceptor, it increments the static variable below:
private static ThreadLocal<Integer> nestingCount = new ThreadLocal<Integer>();
I don't know the purpose of that static count. I just know that it gets incremented twice, once per datasource, because of the AggregatePersistence implementation.
Up to here I have just explained the scenario.
The problem comes now...
When my quartz job finishes, the plugin calls the listener to flush and destroy the Hibernate sessions, as you can see in the source code of SessionBinderJobListener above.
The flush occurs perfectly, but the destroy does not, because HibernatePersistence does a validation before closing the Hibernate session: it examines nestingCount to see whether the value is greater than 1, and if so, it does not close the session.
Simplifying what Hibernate does:
if(--nestingCount.getValue() > 0)
do nothing;
else
close the session;
That's the root of my memory leak.
The quartz threads stay alive with all the objects used in the session, because the Grails ORM does not close the session, due to a bug triggered by my having two datasources.
To solve it, I customized the listener to call clear before destroy, and to call destroy twice (once for each datasource), ensuring my session was cleared and destroyed - and if the destroy failed, the session was at least cleared.
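Roughly, the customized jobWasExecuted ends up looking like this (my own sketch of the workaround described above, assuming the Grails PersistenceContextInterceptor API with flush/clear/destroy):

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Sketch: subclass of the plugin listener shown above, forcing the session closed.
public class TwoDatasourceSessionBinderJobListener extends SessionBinderJobListener {

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException exception) {
        if (getPersistenceInterceptor() != null) {
            getPersistenceInterceptor().flush();
            getPersistenceInterceptor().clear();   // drop cached entities even if destroy() bails out
            getPersistenceInterceptor().destroy(); // first call: nestingCount drops from 2 to 1
            getPersistenceInterceptor().destroy(); // second call: nestingCount reaches 0, session closes
        }
    }
}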
I'm not experienced with Java applications, but I have found that locating static pointers etc. to these applications' memory addresses is often (nearly) impossible, apparently because of the Java engine that handles the code (correct me if this way of naming it is wrong, please).
Now, I've used VisualVM (https://visualvm.dev.java.net/) and it's great. I can select my Java process and create a heap dump. It then shows me all classes and their values.
Can I use this method to continuously poll the heap dump and receive object values, for example the X, Y and Z of a game? How would I programmatically interact with such an application, and if this should not be done with VisualVM, what would be an alternative?
Edit: this is what I need to do:
I need to be able to find all classes with properties that have a certain value. For example: I'd search for the X coordinate (a float) and it should return the class "PlayerCoordsHandler" (just an example) and the corresponding float with its value... or alternatively just a way to find this same float again (after restarting, for example). This process does not have to be programmatic, as long as requesting the value of the now-known property (the x float) can be done programmatically (for example with a command-line utility or by reading from a file).
Edit2:
The target application is a Windows executable (but made with Java) and launches its own Java VM. It's not possible to add Java parameters for debugging. This does not seem to be required though, as VisualVM is able to inspect the process just fine. Does anyone know how?
Thanks in advance.
It looks like you want to debug running Java applications.
The "official" Java debugger is JDB. I believe it's part of the JDK. It has the ability to set breakpoints, examine heaps, list and display and even change variables, show running threads and so on. The usual debugger stuff. But it's command line, which makes it a pain in the neck to work with.
Instead, it makes a lot of sense to use an IDE with integrated debugger. I use Eclipse. You can do all the usual debuggery things, including displaying windows with variables. You can set conditional breakpoints and there's much more. Specifically in answer to your question, you can set up watch expressions, which will be evaluated during the program's execution and their displays refreshed with new values when they change.
You may not want to run your Java app inside the IDE; or it may be running in a Web application server. That's no problem for JDB or Eclipse (or other IDEs, like NetBeans or IntelliJ Idea): They can connect to a running JVM and debug remotely with the same level of convenience.
A program being debugged like this, remotely or otherwise, runs somewhat more slowly than it otherwise would. Your game, while being debugged, will run at rather bad-looking FPS, but it should still respond more or less normally to gameplay interaction.
Remote debugging:
To be able to attach your Eclipse/NetBeans debugger to a running Java process, you need to start that process with the following Java options:
-Xdebug -Xrunjdwp:transport=dt_socket,address=3704,server=y,suspend=n
Have a look at YourKit. You can monitor CPU, memory and threads live, and generate dumps whenever you want. It can even compare different memory dumps to show you which objects were added/removed.
It's not free though, it has a 15 day (or 30 day?) fully functional eval period. If free is not a real concern it's definitely a great tool.
A good starting point is the jps and jstat tools added in Java 6 (I think). jps gives you the pid and main class for each Java application; jstat gives you more details about a process.
Triggering a heap dump is useful for post-mortem analysis of, say, memory leaks, but as the Java garbage collector moves objects around, you cannot use the memory values from a heap dump to reliably access those objects later.
If you need a way to query internal values from outside of the application you could look into setting up an RMI service API via which you can retrieve the values you need.
Another method (if you just need to test something) could be to connect to the process via the Java debugging API.
If you know the JRE location that is used, you could rename java.exe and write a (C/C++) wrapper that adds the debug options listed by Carl and calls the renamed java.exe in turn.
Another possibility might be to add or update classes in the .jar file of the application. You do not need the source to do this.
Tom, are you trying to reverse-engineer an application that specifically tries to obfuscate its workings? If so, you might get further by contacting the manufacturer and asking them what possibilities they see for what you're trying to achieve.
You can easily generate a heap dump by creating your own JMX connection to the JVM, just like VisualVM does. Analyzing the heap dump is very possible (the data is there and totally disconnected from the JVM, so there is no interference from the GC).
However, unless it is a very specific scenario you are looking for, you are probably much better off giving the heap dump to MAT and finding a good workflow in there to use.
Edit: In this particular case it is probably better to create some kind of specific API to access the values from the outside (and maybe publish the values as MBeans using JMX). Taking a heap dump is way too much work if all you want to do is monitor a few values.
Edit2: Based on your edits, it seems to me like you could really benefit from publishing your own MBean over JMX. I have to run for a meeting but, unless someone else does it while I am away, I will try to remember to give you some pointers later. Either in an edit of this one or in a new post.
If you want to poll the values of specific objects while your Java application is running you would probably find that using JMX is a better and more efficient approach rather than using a heap dump. With JMX you can define what values should be exposed and use tools such as VisualVM or JConsole to view them at runtime.
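A minimal sketch of that approach, assuming you control the application's source (all names here are made up for illustration):

// GameStateMBean.java - standard MBean interface, named <ImplClass>MBean by convention
public interface GameStateMBean {
    float getX();
    float getY();
}

// GameState.java - implementation registered with the platform MBean server
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class GameState implements GameStateMBean {

    private volatile float x;
    private volatile float y;

    public float getX() { return x; }
    public float getY() { return y; }

    // Called from the game loop whenever the tracked position changes.
    public void update(float newX, float newY) {
        this.x = newX;
        this.y = newY;
    }

    public static GameState register() throws Exception {
        GameState state = new GameState();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // The ObjectName is arbitrary; the bean appears under the MBeans tab.
        server.registerMBean(state, new ObjectName("game:type=GameState"));
        return state;
    }
}

The attributes then show up under the MBeans tab in VisualVM or JConsole and can also be read programmatically through a JMX connection.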
With VisualVM and a heap dump you can find all classes with a certain property using OQL:
var out = "";
var cls = filter(heap.classes(), "/java./(it.name)")
while (cls.hasNext()) {
var cl = cls.next();
var fls = cl.fields;
while (fls.hasMoreElements()) {
var fl = fls.nextElement();
if (/size/(fl.name)) {
out = toHtml(cl) + "." + fl.name + "()\n";
}
}
}
out.toString()
You can also write custom logging with BTrace.
It is an alternative to debugging.
FusionReactor could be a good alternative. For example;
VisualVM doesn't give you a lot of insight into application memory except for the total heap allocation. Heap is a good metric to start with, but I feel this is not enough to troubleshoot the actual cause of a memory-related issue.
FusionReactor will display all of the memory spaces it detects, which depends on the version of Java you're running:
Heap allocation
Non-Heap allocation
CodeHeap (profiled and non-profiled methods)
Compressed Class Space
FusionReactor also shows the amount of memory that each generation takes:
Eden Space
Old Space
Survivor Space
https://www.fusion-reactor.com/blog/java-visualvm-alternatives/