How to overcome the heap usage attributed to the Vaadin ScssCache?

I have created an application with a Vaadin UI.
I am running it on a server with a maximum heap of 250 MB. The application crashes because the heap fills up and is not garbage collected.
I analyzed it with VisualVM and found a lot of instances; somehow the Vaadin ScssCache seems to be making this mess.
How can I rectify this? Is it caused by browser cache settings, or should I do something with the VaadinServlet cache entry?
I really do not understand, please help. I have attached my VisualVM screenshot for reference. Thank you very much. I am using Vaadin 7.6.3.

The attached VisualVM screenshot shows that the entire scssCache retains 1,248 KB of memory, of which 1,200 KB is the actual cached CSS content. That is less than 1% of your 250 MB heap and most likely not the problem.
That 1,200 KB char[] with the compiled CSS might be the biggest individual object on the heap, but there is only one such object. You will therefore have to look for something else that consumes lots of memory. I'd recommend sorting the list of classes by retained size, ignoring low-level classes such as char[], java.lang.String or java.util.HashMap, and instead trying to pinpoint anything related to your own application.
I would also encourage you to verify that your application is actually running in production mode: the only code path I can identify that does anything with scssCache is VaadinServlet.serveOnTheFlyCompiledScss, which checks whether production mode is enabled and, in that case, returns before touching the cache.
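If the servlet is not yet in production mode, in Vaadin 7 you can enable it with a context parameter in web.xml (a sketch; adjust to your deployment, and note that the same can be done with the @VaadinServletConfiguration(productionMode = true, ...) annotation on your servlet class):

```xml
<context-param>
  <param-name>productionMode</param-name>
  <param-value>true</param-value>
</context-param>
```

With production mode on, the on-the-fly SCSS compilation path is skipped entirely.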

Related

DeploymentRuleSet occupying most of the heap space

We are getting an OutOfMemoryError in our Java Web Start application.
When analyzing the heap in JVisualVM we see a large increase in memory over a short period.
The memory consumption is more or less the same every time we run the application.
I have imported the heap dump into Eclipse MAT.
I can see that most of the heap is occupied by DeploymentRuleSet objects.
The internal drs map of the DeploymentRuleSet class contains tens of thousands of entries, all containing the same jar.
I've investigated at least 50 of these HashMap entries; they all contain the same data. Screenshot attached: I removed some sensitive company data, but I really triple-checked that the values are the same. I've pasted screenshots of just two RuleId objects, but you get the picture.
Here are my questions/doubts:
The obvious one: has anyone encountered anything like this?
Looking at the RuleId class, it implements neither equals() nor hashCode(). It is therefore expected that duplicate entries get inserted into the HashMap, since keys are compared by reference.
However, I've analyzed heap dumps of other Java Web Start applications within our company. None of them have this problem: their drs maps contain only a couple hundred entries, with only unique jars from the classpath, which is exactly how I would expect it to work.
Therefore I doubt this is a core Java bug; it is more likely a problem specific to our app.
Do you have an idea why the same jar would be inserted into the drs map over and over again?
I've attached a remote debugger to our Java Web Start app, imported the core Java "deploy.jar" into the classpath, and put breakpoints in some of the DeploymentRuleSet methods. Sadly, none of them were hit even though DeploymentRuleSet keeps growing in memory. Is there any way to debug core Java classes like this? It would be really helpful to see when and why those DRS entries are created.
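For reference, here is a minimal sketch (the RuleId class here is illustrative, not the real deploy.jar code) of how a HashMap key type that overrides neither equals() nor hashCode() accumulates duplicate entries for the same logical value:

```java
import java.util.HashMap;
import java.util.Map;

// Mimics a key class that overrides neither equals() nor hashCode():
// two instances with identical field values are still distinct map keys.
class RuleId {
    final String jar;
    RuleId(String jar) { this.jar = jar; }
}

public class DuplicateKeyDemo {
    public static void main(String[] args) {
        Map<RuleId, String> drsMap = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            // Same logical key every time, but compared by reference,
            // so every put() adds a fresh entry instead of replacing one.
            drsMap.put(new RuleId("app.jar"), "rule");
        }
        System.out.println(drsMap.size()); // 10000 entries, not 1
    }
}
```

This matches the symptom you describe: the map grows without bound even though every entry holds the same data.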

How to profile a gwt client application with jprofiler?

I have a memory leak problem in my GWT application and I'm trying to profile it using JProfiler.
I can't manage to get pertinent results: I don't see my own Java classes in the memory profiling view, only the GWT library classes.
I've added the parameter to profile a remote application with JProfiler (-agentpath:C:\PROGRA~1\JPROFI~1\bin\WINDOW~1\jprofilerti.dll=port=8849). I launch the project in Super Dev Mode through the Eclipse IDE. JProfiler shows me the GWT classes in memory, but it doesn't show my own Java classes.
In this video (youtube.com/watch?v=zUJUSxXOOa4) JProfiler shows the Java classes directly; that's what I'm trying to achieve.
Is there any option to activate in JProfiler for that? Any help on the matter would be welcome. Thank you.
Super Dev mode does not work with a Java profiler. The old Dev mode executed the client-side code in the JVM via a special plugin. These days, the Dev mode browser plugin does not work with modern browsers. The last browsers that supported the plugins were Chrome 21.0.1180.89 and Firefox 26.
As of now, Firefox 24 ESR is still supported:
https://www.mozilla.org/en-US/firefox/organizations/all/
and the Dev mode plugin works in that version. For more information on dev mode see
http://www.gwtproject.org/doc/latest/DevGuideCompilingAndDebugging.html
I don't think so. GWT compiles your classes to JS, so JProfiler won't work on them (I think, but maybe I'm wrong). Maybe you can give MemoryAnalyzer a try with a heap dump.
Chrome comes with some awesome built-in CPU and heap profiling tools. Firefox now has its own built-in CPU profiler, and Firebug has a different one. IE (at least 10, and I think 9) has a built in CPU profiler, though it has been a long time since I dug too far into there.
Memory is historically a difficult thing to track in browsers, not least because old IE versions just won't die, and leak just from looking at them funny. If you are facing one of those memory leaks, a different plan of attack is required.
But if you suspect you are dealing with a leak in your own application code, Chrome's dev tools can help! Compile in PRETTY (or DETAILED if you have an extremely wide screen), and bring up your app in Chrome with the developer tools open.
In the Profiles tab, there are three kinds of profile to capture, two about memory. I typically prefer the Take Heap Snapshot, and take a 'before' and 'after' look at whatever action I believe to be leaking memory, but the Record Heap Allocations view will give you another way to consider the memory usage of your application.
Start by picking a supposedly 'stable' state of your memory usage: turn the app on, use it for a bit, make sure all the various singletons etc. are instantiated, and probably do whatever action you suspect of causing the problem once. Once you are at a point you can come back to (memory-wise, at least, if the leak were fixed), take a snapshot, do the behavior that leaks, return to the 'stable' state, and take another snapshot. Only take one step when checking for leaks; more on this in a bit.
With the two snapshots, you can compare objects allocated and freed. We're mostly interested in cases where more objects were created than deleted, ideally where zero were deleted. If you find N objects were deleted but N+1 created, make sure N is very small before digging in: it is often only possible to fix a leak by going after individual objects, tracing them back to their actual leaked source, fixing it, and measuring again.
Once you have an object that was created in one step, but not deleted at the end of that step (but it should have been) use the 'Retainers' view to see why they are still kept. This will more or less show you the field in the object that holds them and the type of that holding object, all the way up to window or some other global object.
Ignore anything in ()s like (compiled code), (array), (system), (string), etc. I'd generally ignore dom element allocation (assuming you suspect you have a leak in app code, not JSNI). Look for few, high level objects leaked, rather than many, low level, it will make it more likely that you are closer to the source of the leak.
The names of compiled constructors and fields in PRETTY generally map very closely to the original Java source. All constructors get _X appended to them, where X is 0, 1, etc - this is to distinguish from the type itself. This makes for an easy way to recognize Java types in the Constructor column, as they all have _s near the end of their name.

want to look at memory used by one Java object in eclipse

I have a java project written in eclipse (RAD, actually); it uses a significant amount of memory by virtue of using iText. I am looking at using a different way of generating my iText document that is supposed to use less memory. I want to know how much less memory it uses.
I know which object will be the root of the largest portion of the memory; it would be fine for my purposes if I could set a breakpoint and then do something that tells me the retained size of that object (i.e., the memory used by it and all of its direct and indirect references).
I've been looking at memory monitors and heap dump analyzers and so forth for an hour now, and am pretty confused. All of them appear to be pointed at answering a different problem, or at least pointed to such a general class of problems that, even if I could get them installed and working, it is not clear whether they would answer MY question.
Can someone point me to a reasonably simple way to answer this limited question? I figure if I run the code once the current way, find out the memory used by this object and maybe one or two others, then run it again and look at the same values, I'll know how much good the new technique is doing.
Try Eclipse MAT (http://www.eclipse.org/mat/). It works great for me, and tutorials are included.
You can fire up JVisualVM, which ships with the Oracle JDK and is also available independently.
Monitor your process and, at some point, take a heap dump.
In the heap dump tab, you can go to the OQL console and select your object(s).
When viewing an object instance, you can ask it to compute the retained size. That will give you the total size of your object.
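If you want to capture the dump at an exact point in your code (e.g. right before and after generating the iText document) rather than clicking in JVisualVM, a sketch using the HotSpot-specific diagnostic MBean looks like this. It assumes an Oracle/OpenJDK HotSpot JVM, and the file names are illustrative:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumpHelper {
    // Writes an .hprof file you can open in MAT or JVisualVM.
    // Note: dumpHeap fails if the target file already exists.
    public static void dumpHeap(String filePath, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveOnly = true dumps only reachable objects (a GC runs first)
        bean.dumpHeap(filePath, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("before-itext.hprof", true);
        // ... build the iText document here ...
        dumpHeap("after-itext.hprof", true);
    }
}
```

You can then open both files in MAT and compare the retained size of your root object between the two runs.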

How to free up memory?

We have been facing OutOfMemory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every three weeks, after which a server restart is needed.
Upon analysis of the heap dumps we find the problem to be objects used in JSPs.
Can JSP objects be the real cause of app server memory issues? How do we free up JSP objects (objects instantiated using useBean or other tags)?
We have a clustered Websphere appserver with 2 nodes and an IHS.
EDIT: The findings above are based on the heap-dump and nativestderr log analysis given below using the IBM support assistant
nativestderr log analysis:
(chart: http://saregos.com/wp-content/uploads/2010/03/chart.jpg)
Heap dump analysis:
(heap dump analysis screenshot)
Heap dump analysis showing the immediate dominators (two levels up from the Hashtable entry in the image above):
(screenshot of the immediate dominators)
The last image shows that the immediate dominators are in fact objects used in JSPs.
EDIT2: More info available at http://saregos.com/?p=43
I'd first attach a profiling tool to tell you what these "objects" are that are taking up all the memory.
Eclipse has TPTP,
or there is JProfiler
or JProbe.
Any of these should show the object heap creeping up and let you inspect it to see what is on the heap.
Then search the code base to find out who is creating these objects.
Maybe you have a cache or tree/map with elements in it, where you have only implemented the equals() method on the keys and also need to implement hashCode().
The map/cache/tree would then get bigger and bigger until it falls over.
This is only a guess, though.
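To illustrate that guess with a sketch (the Key class is hypothetical): a key type that implements equals() but not hashCode() makes a HashMap treat logically equal keys as distinct, so the map keeps growing:

```java
import java.util.HashMap;
import java.util.Map;

public class LeakyMapDemo {
    static class Key {
        final String id;
        Key(String id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id.equals(id);
        }
        // hashCode() is deliberately NOT overridden: logically equal keys
        // get different identity hash codes, so put() never finds and
        // replaces the existing entry.
    }

    public static void main(String[] args) {
        Map<Key, String> cache = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            cache.put(new Key("same-logical-key"), "value");
        }
        // Grows to ~1000 entries instead of staying at 1.
        System.out.println(cache.size());
    }
}
```

Adding a hashCode() consistent with equals() (e.g. id.hashCode()) makes the map stay at one entry.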
JProfiler would be my first call
Javaworld has an example screenshot of what is in memory, and a screenshot of the object heap building up and being cleaned up, hence the saw-tooth edge (source: javaworld.com).
UPDATE:
Ok, I'd look at...
http://www-01.ibm.com/support/docview.wss?uid=swg1PK38940
Heap usage increases over time which leads to an OutOfMemory
condition. Analysis of a heapdump shows that the following
objects are taking up an increasing amount of space:
40,543,128 [304] 47 class
com/ibm/wsspi/rasdiag/DiagnosticConfigHome
40,539,056 [56] 2 java/util/Hashtable 0xa8089170
40,539,000 [2,064] 511 array of java/util/Hashtable$Entry
6,300,888 [40] 3 java/util/Hashtable$HashtableCacheHashEntry
Triggering the garbage collection manually doesn't solve your problem: it won't free resources that are still in use.
You should use a profiling tool (like JProfiler) to find your leaks. You probably have code that stores references in lists or maps that are never released at runtime, probably static references.
If you run on the Sun Java 6 JVM, strongly consider using the jvisualvm program in the JDK to get an initial overview of what actually goes on inside the program. The snapshot comparison is really good at helping you track down which objects sneak in.
If the Sun 6 JVM is not an option, investigate which profiling tools you have; trial versions can get you really far.
It can be something as simple as gigantic character arrays underlying a substring you are collecting in a list, e.g. for housekeeping (before Java 7u6, String.substring shared the parent string's backing char[]).
I suggest reading Effective Java, chapter 2. Following it, together with a profiler, will help you identify the places where your application produces memory leaks.
Freeing up memory isn't the way to solve extensive memory consumption. Extensive memory consumption may be the result of two things:
not properly written code - the solution is to write it properly, so that it does not consume more than is needed; Effective Java will help here.
the application simply needs this much memory - then you should increase the VM memory using -Xms, -Xmx, -XX:MaxHeapSize, ...
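For the second case, the heap limits are passed as JVM options when starting the server; the values here are purely illustrative:

```shell
java -Xms512m -Xmx1024m -jar yourapp.jar
```

On WebSphere, the same settings live in the admin console under the server's Process Definition > Java Virtual Machine panel (initial and maximum heap size) rather than on a command line you edit directly.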
There is no specific way to free up objects allocated in JSPs, at least as far as I know. Rather than investigating such options, I'd focus on finding the actual problem in your application code and fixing it.
Some hints that might help:
- Check the scope of your beans. Aren't you e.g. storing something user- or request-specific in "application" scope by mistake?
- Check the web session timeout settings in your web application and appserver.
- You mentioned the heap consumption grows gradually. If that's indeed so, try to see by how much the heap grows under various user scenarios: grab a heap dump, run a test, let the session data time out, grab another dump, and compare the two. That might give you some idea where the objects on the heap come from.
- Check your beans for any obvious memory leaks, for sure :)
EDIT: Checking for unreleased static resources that Daniel mentions is another worthwhile thing :)
As I understand it, those top-level memory eaters are cache storage and the objects stored in it. You should probably make sure that your cache frees objects when it takes up too much memory. You may want to use weak references if you need to cache live objects only.
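A minimal sketch of that weak-reference approach using java.util.WeakHashMap (the payload size is illustrative):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    public static void main(String[] args) {
        // Keys are held only weakly: once no strong reference to a key
        // remains, its entry becomes eligible for garbage collection.
        Map<Object, byte[]> cache = new WeakHashMap<>();

        Object key = new Object();
        cache.put(key, new byte[1024 * 1024]); // 1 MB cached payload

        System.out.println(cache.size()); // 1 while 'key' is strongly reachable

        key = null;   // drop the last strong reference to the key
        System.gc();  // only a hint; the entry may now be reclaimed
        // After a collection, cache.size() will eventually report 0.
    }
}
```

Note that the values are held strongly, so this only helps when the keys themselves are the objects whose lifetime should control eviction; otherwise a size- or time-bounded cache is a better fit.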

An alternative of software like VisualVM to programmatically find running java applications' values etc. by searching heap dumps?

I'm not experienced with Java applications, but I found that finding static pointers to these applications' memory addresses is often nearly impossible, apparently because of the JVM that manages the memory (correct me if this way of naming it is wrong, please).
Now, I've used VisualVM (https://visualvm.dev.java.net/) and it's great. I can select my java process and create a heap dump. It then shows me all classes and their values.
Can I use this method to continuously poll the heap and read object values, for example the X, Y and Z of a game? How would I programmatically interact with such an application, and if this should not be done with VisualVM, what would be an alternative?
Edit: this is what I need to do:
I need to be able to find all classes with properties that have a certain value. For example: I'd search for the X coordinate (a float) and it should return the class "PlayerCoordsHandler" (just an example) together with the corresponding float and its value... or alternatively just a way to find this same float again (after a restart, for example). This process does not have to be programmatic, as long as the value of the now-known property (the x float) can be retrieved programmatically (for example with a command-line utility or by reading from a file).
Edit2:
The target application is a Windows executable (but made with Java) and launches its own JVM. It's not possible to add Java parameters for debugging. This does not seem to be required, though, as VisualVM is able to inspect the process just fine. Does anyone know how?
Thanks in advance.
It looks like you want to debug running Java applications.
The "official" Java debugger is JDB. I believe it's part of the JDK. It has the ability to set breakpoints, examine heaps, list, display and even change variables, show running threads and so on. The usual debugger stuff. But it's command-line only, which makes it a pain in the neck to work with.
Instead, it makes a lot of sense to use an IDE with integrated debugger. I use Eclipse. You can do all the usual debuggery things, including displaying windows with variables. You can set conditional breakpoints and there's much more. Specifically in answer to your question, you can set up watch expressions, which will be evaluated during the program's execution and their displays refreshed with new values when they change.
You may not want to run your Java app inside the IDE; or it may be running in a Web application server. That's no problem for JDB or Eclipse (or other IDEs, like NetBeans or IntelliJ Idea): They can connect to a running JVM and debug remotely with the same level of convenience.
A program being debugged like this, remotely or otherwise, runs somewhat more slowly than it would otherwise. Your game, while being debugged, will run at rather bad-looking FPS, but it should still respond more or less normally to gameplay interaction.
Remote debugging:
To be able to attach your Eclipse/NetBeans debugger to a running Java process, you need to start that process with the following Java options:
-Xdebug -Xrunjdwp:transport=dt_socket,address=3704,server=y,suspend=n
(On Java 5 and later, the equivalent single option is -agentlib:jdwp=transport=dt_socket,address=3704,server=y,suspend=n.)
Have a look at YourKit. You can monitor CPU, memory and threads live, and generate dumps whenever you want. It can even compare different memory dumps to show you which objects were added/removed.
It's not free though, it has a 15 day (or 30 day?) fully functional eval period. If free is not a real concern it's definitely a great tool.
A good starting point is the jps and jstat tools that ship with the JDK (since Java 5). jps gives you the pid and main class of each running Java application; jstat gives you more details about a process.
Triggering a heap dump is useful for post-mortem analysis of, say, memory leaks, but since the Java garbage collector moves objects around, you cannot use memory addresses from a heap dump to reliably access those objects later.
If you need a way to query internal values from outside of the application you could look into setting up an RMI service API via which you can retrieve the values you need.
Another method (if you just need to test something) could be to connect to the process via the Java debugging API.
If you know which JRE location is used, you could rename java.exe and write a (C/C++) wrapper that adds the debug options listed by Carl and calls the renamed java.exe in turn.
Another possibility might be to add or update classes in the application's .jar file. You do not need the source to do this.
Tom, are you trying to reverse engineer an application that specifically tries to obfuscate its working? If so you might get further if you contact the manufacturer and ask them what possibilities they see for what you try to achieve?
You can easily generate a heap dump by creating your own JMX connection to the JVM, just like VisualVM does. Analyzing the heap dump is entirely possible: the data is there, and it is completely disconnected from the JVM, so there is no interference from the GC.
However, unless there is a very specific scenario you are looking for, you are probably much better off giving the heap dump to MAT and finding a good workflow there.
Edit: In this particular case it is probably better to create some kind of specific API to access the values from the outside (and maybe publish the values as MBeans using JMX). Taking a heap dump is way too much work if all you want to do is monitor a few values.
Edit2: Based on your edits, it seems to me like you could really benefit from publishing your own MBean over JMX. I have to run for a meeting but, unless someone else does it while I am away, I will try to remember to give you some pointers later. Either in an edit of this one or in a new post.
If you want to poll the values of specific objects while your Java application is running you would probably find that using JMX is a better and more efficient approach rather than using a heap dump. With JMX you can define what values should be exposed and use tools such as VisualVM or JConsole to view them at runtime.
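As a sketch of that idea (all names here, PlayerStatsMBean, the game:type=PlayerStats object name and the X attribute, are hypothetical): register a standard MBean on the platform MBean server, and any JMX client such as JConsole or VisualVM can read the value live:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxExample {
    // Standard MBean naming convention: the interface must be named
    // <implementation class name> + "MBean".
    public interface PlayerStatsMBean {
        float getX();
    }

    public static class PlayerStats implements PlayerStatsMBean {
        private volatile float x = 42.0f; // updated by the game loop in a real app
        @Override
        public float getX() { return x; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("game:type=PlayerStats");
        server.registerMBean(new PlayerStats(), name);

        // JConsole/VisualVM (or a remote JMX connector) can now read the
        // "X" attribute at runtime, no heap dump needed. Locally:
        System.out.println(server.getAttribute(name, "X"));
    }
}
```

The getX() getter is exposed as the attribute "X" per the JavaBeans convention; polling it is far cheaper than repeatedly dumping and parsing the heap.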
With VisualVM and a heap dump you can find all classes with a certain property using OQL (this example lists fields of java.* classes whose name contains "size"):
var out = "";
var cls = filter(heap.classes(), "/java./(it.name)");
while (cls.hasNext()) {
    var cl = cls.next();
    var fls = cl.fields;
    while (fls.hasMoreElements()) {
        var fl = fls.nextElement();
        if (/size/(fl.name)) {
            out += toHtml(cl) + "." + fl.name + "\n"; // += so all matches accumulate
        }
    }
}
out.toString();
You can also write custom tracing with BTrace.
This is an alternative to debugging.
FusionReactor could be a good alternative. For example:
"VisualVM doesn't give you a lot of insight into application memory beyond the total heap allocation. Heap is a good metric to start with, but I feel this is not enough to troubleshoot the actual cause of a memory-related issue. FusionReactor will display all of the memory spaces it detects, which depends on the version of Java you're running: heap allocation, non-heap allocation, CodeHeap (profiled and non-profiled methods), and Compressed Class Space. FusionReactor also shows the amount of memory that each generation takes: Eden Space, Old Space, Survivor Space."
https://www.fusion-reactor.com/blog/java-visualvm-alternatives/
