I was able to call ObjectHeap.iterateObjectsOfKlass (with the help of SA) to obtain all objects belonging to a certain class. The result is exactly what I expected, but the performance is not.
It took me >800 seconds to get my result, during which the target VM is suspended. The target VM heap is about 2GB. I know iterateObjectsOfKlass calls iterateExact.
My question is: do these methods iterate/traverse the entire heap just to obtain the objects of one class? I am disappointed, since my expectation was that for a single class the result would return within 10 seconds.
The HotSpot Serviceability Agent is a really powerful technology, but indeed very slow. I have explained how it works in this answer.
The JVM has no means to quickly find all instances of a specific class, so yes, it has to scan the entire heap. Moreover, in order to read the memory of a foreign process, SA uses a ptrace system call for every single word of data. That's why it is so slow.
You have several options to scan heap faster:
Create a coredump of the foreign process and then run the SA tool against the coredump. This is much faster than reading the memory of a suspended process. See the related question.
Inject a JVMTI agent into the running process using the Dynamic Attach mechanism (see the sketch below). The agent can scan the heap of the local JVM using the IterateOverInstancesOfClass function. This will be dramatically faster compared to SA, because it will just be reading memory from within the same process, without any syscalls. I believe it will take just a few seconds for a 2GB heap.
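For the attach side, a minimal Java sketch of injecting a native agent might look like this. The agent library path, the option string, and the AgentInjector class are placeholders; the agent itself would be a separate native library (written against the JVMTI C API) that calls IterateOverInstancesOfClass and reports what it finds.

import com.sun.tools.attach.VirtualMachine;

public class AgentInjector {
    public static void main(String[] args) throws Exception {
        String pid = args[0]; // process id of the target JVM
        VirtualMachine vm = VirtualMachine.attach(pid);
        try {
            // Load a native JVMTI agent; both the library path and the
            // option string (here, the class to scan for) are placeholders.
            vm.loadAgentPath("/path/to/libheapscan.so", "com.example.TargetClass");
        } finally {
            vm.detach();
        }
    }
}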
Related
I'm using collection objects (mainly ArrayList and HashMap).
My program runs 24*7. Sometimes it throws an out-of-memory error: Java heap space.
I have already given 1GB of heap space to the JVM.
My question is: should I use global collection objects, or local objects in each method?
(I have to process almost 1,000,000 records per day, continuously, 24*7.)
You could also set the heap space to 2GB and see if the problem still occurs. That's the poor man's memory-leak detection process. Otherwise, use a profiler like VisualVM and check for memory leaks.
You can use a source code quality tool like Sonar.
You can also use the Eclipse Memory Analyzer tool. It enables you to analyze a heap dump and figure out which objects are using the most memory. It can analyze productive heap dumps with hundreds of millions of objects, quickly calculate the retained sizes of objects, show who is preventing the garbage collector from collecting objects, and run a report to automatically extract leak suspects.
I always use it to fix OutOfMemoryError problems.
All the answers were really helpful:
1. When the requirement is for the program to run 24*7, use local variables within methods.
2. The program must be thread-safe (if threads are used).
3. Use connection pooling, because when your connection objects are used in an infinite loop, creating and destroying them every time is a big performance issue; instead, create 10 or 15 connections at the beginning and check them out and back in (see the sketch after this list).
4. Use a memory analysis tool to analyze the heap dump and figure out which objects are using the most memory.
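For point 3, a minimal hand-rolled pool, just for illustration; in production you would normally use an established library such as HikariCP, and the JDBC URL below is a placeholder.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class SimpleConnectionPool {
    private final BlockingQueue<Connection> pool;

    SimpleConnectionPool(String url, int size) throws SQLException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(DriverManager.getConnection(url)); // create once, up front
        }
    }

    Connection checkout() throws InterruptedException {
        return pool.take(); // blocks until a connection is free
    }

    void checkin(Connection c) {
        pool.offer(c); // return the connection instead of closing it
    }
}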
You should use local variables unless the value is needed across methods, and try to set global variables to null when their values are not going to be used anymore; just be sure the object really is no longer needed before nulling the reference.
Objects whose references have been nulled can be garbage collected, which helps you avoid memory errors. Also review your code for infinite loops when iterating over collections, arrays, etc.
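As a minimal sketch of the nulling advice (the class and field names are made up):

import java.util.ArrayList;
import java.util.List;

class ReportService {
    private List<String> batch; // long-lived field on a long-lived object

    void process() {
        batch = loadBatch();    // large temporary data
        System.out.println("processing " + batch.size() + " records");
        batch = null;           // drop the reference so the GC can reclaim the list
    }

    private List<String> loadBatch() {
        List<String> records = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) records.add("record-" + i);
        return records;
    }
}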
No, really, that's what I'm trying to do. The server holds 1600 users - it's a long-running back-end process, not a web server - but sometimes the users generate more activity than usual, so it needs to cut its load down, specifically when it runs out of "resources," which pretty much means heap memory. This is a big design question: how do I design for this?
This would most likely involve preventing OOM rather than recovering from it. Ideally, something like
if(nearlyOutOfMemory()) throw new MyRecoverableOOMException();
would happen.
But I don't really know what that nearlyOutOfMemory() function might look like.
Split the server into shards, each holding fewer users but residing on different physical machines.
If you have lots of caches, try to use soft references, which get cleared when the VM runs low on heap (see the sketch after this list).
In any case, profile, profile, profile first to see where CPU time is consumed and where memory is allocated and held onto.
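As a minimal sketch of the soft-reference cache idea (loadData() is a placeholder for whatever expensive work fills the cache):

import java.lang.ref.SoftReference;

class SoftCache {
    private SoftReference<byte[]> ref = new SoftReference<>(null);

    byte[] get() {
        byte[] data = ref.get();
        if (data == null) {          // cleared under memory pressure, or never loaded
            data = loadData();
            ref = new SoftReference<>(data);
        }
        return data;
    }

    private byte[] loadData() { return new byte[10 * 1024 * 1024]; }
}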
I have actually asked a similar question about handling OOM, and it turns out that there aren't many options to recover from it. Basically you can:
1) Invoke an external shell script (-XX:OnOutOfMemoryError="cmd args;cmd args") which would trigger some action. The problem is that if the OOM has happened in a thread which doesn't have a decent recovery strategy, you're doomed.
2) Define a threshold for the Old gen, which technically isn't OOM but a few steps ahead of it, say 80%, and act when the threshold has been reached (a sketch follows below). More details here.
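A minimal sketch of option 2 using the standard java.lang.management API; the 80% figure and the "Old Gen" name match are assumptions (pool names vary by collector: "PS Old Gen", "G1 Old Gen", and so on):

import java.lang.management.*;
import javax.management.*;

public class OldGenWatcher {
    // Runs the callback when old-gen usage crosses 80% of its maximum.
    public static void watch(Runnable onThresholdExceeded) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax(); // may be -1 if undefined
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported()
                    && pool.getName().contains("Old Gen")
                    && max > 0) {
                pool.setUsageThreshold((long) (max * 0.8));
            }
        }
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                    .equals(notification.getType())) {
                onThresholdExceeded.run(); // e.g. shed load, clear caches
            }
        }, null, null);
    }
}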
You could use Runtime.getRuntime() and the following methods:
freeMemory()
totalMemory()
maxMemory()
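Using those, a minimal sketch of the nearlyOutOfMemory() check from the question might look like this (the 80% threshold is an arbitrary choice):

static boolean nearlyOutOfMemory() {
    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory(); // bytes currently held by objects
    return used > rt.maxMemory() * 0.8;             // maxMemory() reflects -Xmx
}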
But I agree with the other posters: using SoftReference, WeakReference or a WeakHashMap will probably save you the trouble of manually recovering from that condition.
A throttling, resource-regulating servlet filter may be of use too. For instance, I came across the DoSFilter from Jetty/Eclipse.
I'm still learning the ropes of Java, so sorry if there's an obvious answer to this. I have a program that is taking a ton of memory and I want to figure out a way to reduce its usage, but after reading many SO questions I have the idea that I need to prove where the problem is before I start optimizing.
So here's what I did: I set a breakpoint at the start of my program and ran it, then I started VisualVM and had it profile the memory (I also did the same thing in NetBeans just to compare the results, and they are the same). My problem is I don't know how to read them; the highest area just says char[] and I can't see any code or anything (which makes sense because VisualVM is connecting to the JVM and can't see my source, but NetBeans also does not show me the source as it does when doing CPU profiling).
Basically what I want to know is which variable (and hopefully more details, like in which method) all the memory is being used, so I can focus on working there. Is there an easy way to do this? Right now I am using Eclipse and Java to develop (and installed VisualVM and NetBeans specifically for profiling, but am willing to install anything else that you feel gets this job done).
EDIT: Ideally, I'm looking for something that will take all my objects and sort them by size (so I can see which ones are hogging memory). Currently it returns generic information such as String[] or int[], but I want to know which objects it's referring to, so I can work on making their size more optimized.
Strings are problematic
Basically, in Java, String references (things that use char[] behind the scenes) dominate most business applications memory-wise. How they are created determines how much memory they consume in the JVM.
That's just because they are so fundamental to most business applications as a data type, and they are one of the most memory-hungry as well. This isn't just a Java thing: String data types take up lots of memory in pretty much every language and runtime library, because at best they are arrays of 1 byte per character, or at worst (Unicode) arrays of multiple bytes per character.
Once, when profiling CPU usage on a web app that also had an Oracle JDBC dependency, I discovered that StringBuffer.append() dominated the CPU cycles by many orders of magnitude over all other method calls combined, much less any other single method call. The JDBC driver did lots and lots of String manipulation; kind of the trade-off of using PreparedStatements for everything.
What you are concerned about you can't control, not directly anyway
What you should focus on is what is in your control, which is making sure you don't hold on to references longer than you need to, and that you are not duplicating things unnecessarily. The garbage collection routines in Java are highly optimized, and if you learn how their algorithms work, you can make sure your program behaves in the optimal way for those algorithms to work.
Java heap memory isn't like manually managed memory in other languages; those rules don't apply
What are considered memory leaks in other languages aren't the same thing/root cause as in Java with its garbage collection system.
Most likely in Java, memory isn't consumed by one single uber-object that is leaking (a dangling reference, in other environments).
It is most likely lots of smaller allocations, because of StringBuffer/StringBuilder objects not sized appropriately on first instantiation and then having to automatically grow their char[] arrays to hold subsequent append() calls.
These intermediate objects may be held around longer than expected by the garbage collector because of the scope they are in and lots of other things that can vary at run time.
EXAMPLE: the garbage collector may identify collection candidates, but decide that, since there is still plenty of free memory, it would be too expensive time-wise to flush them out at that point, and wait until memory pressure gets higher.
The garbage collector is really good now, but it isn't magic; if you are doing degenerate things, it will not work optimally. There is lots of documentation on the internet about the garbage collector settings for all versions of the JVM.
These unreferenced objects may simply not have reached the point at which the garbage collector decides to expunge them from memory, or there could be references to them held by some other object (a List, for example) that you don't realize still points to them. This is what is most commonly referred to as a leak in Java; a reference leak, more specifically.
EXAMPLE: If you know you need to build a 4K String using a StringBuilder, create it with new StringBuilder(4096); not with the default constructor, whose capacity is 16 and which will immediately start creating garbage that can represent many times the size you think the object should be.
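A minimal sketch of the difference (the parts array stands in for whatever you are concatenating):

String[] parts = {"alpha", "beta", "gamma"};  // stand-in data
StringBuilder sb = new StringBuilder(4096);   // capacity hint for a ~4K result
for (String part : parts) {
    sb.append(part);                          // no intermediate char[] growth
}
String result = sb.toString();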
You can discover how many of which types of objects are instantiated with VisualVM; this will tell you what you need to know. There isn't going to be one big flashing light that points at a single instance of a single class and says "this is the big memory consumer!"; that would only happen if there were a single char[] that you were reading some massive file into, and even that wouldn't stand out, because lots of other classes use char[] internally; and then you pretty much knew that already.
I don't see any mention of OutOfMemoryError
You probably don't have a problem in your code; the garbage collection system just might not be getting put under enough pressure to kick in and deallocate objects that you think it should be cleaning up. What you think is a problem probably isn't, unless your program is crashing with OutOfMemoryError. This isn't C, C++, Objective-C or any other manual memory management language/runtime. You don't get to decide what is in memory at the level of detail you are expecting.
In JProfiler, you can go to the heap walker and activate the biggest-objects view. You will see the objects that retain the most memory. "Retained" memory is the memory that would be freed by the garbage collector if you removed the object.
You can then open the object nodes to see the reference tree of the retained objects.
Disclaimer: My company develops JProfiler
I would recommend capturing heap dumps and using a tool like Eclipse MAT that lets you analyze them. There are many tutorials available. It provides a view of the dominator tree to provide insight into the relationships between the objects on the heap. Specifically for what you mentioned, the "path to GC roots" feature of MAT will tell you where the majority of those char[], String[] and int[] objects are being referenced. JVisualVM can also be useful in identifying leaks and allocations, particularly by using snapshots with allocation stack traces. There are quite a few walk-throughs of the process of getting the snapshots and comparing them to find the allocation point.
The JDK comes with JVisualVM in its bin folder. Once your application (for example, an application server) is running, you can launch VisualVM and connect it to your local JVM; it will show you memory allocation and let you perform a heap dump.
If you use VisualVM to check your memory usage, it focuses on the data, not the methods. Maybe your big char[] data is caused by many String values? Unless you are using recursion, the data will not be from local variables. So you can focus on the methods that insert elements into large data structures. To find out which precise statements cause your "memory leakage", I suggest you additionally:
read Josh Bloch's Effective Java, Item 6: Eliminate obsolete object references;
use a logging framework and log instance creations at the highest verbosity level.
There are generally two distinct approaches to analyse Java code to gain an understanding of its memory allocation profile. If you're trying to measure the impact of a specific, small section of code – say you want to compare two alternative implementations in order to decide which one gives better runtime performance – you would use a microbenchmarking tool such as JMH.
While you can pause the running program, the JVM is a sophisticated runtime that performs a variety of housekeeping tasks and it's really hard to get a "point in time" snapshot and an accurate reading of the "level of memory usage". It might allocate/free memory at a rate that does not directly reflect the behaviour of the running Java program. Similarly, performing a Java object heap dump does not fully capture the low-level machine specific memory layout that dictates the actual memory footprint, as this could depend on the machine architecture, JVM version, and other runtime factors.
Tools like JMH get around this by repeatedly running a small section of code, and observing a long-running average of memory allocations across a number of invocations. E.g., in the GC profiling sample JMH benchmark, the derived ·gc.alloc.rate.norm metric gives a reasonably accurate per-invocation normalised memory cost.
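As a minimal sketch, a benchmark run with the GC profiler attached might look like this (the benchmark body is made up; the runner calls are the standard JMH OptionsBuilder API):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class AllocBenchmark {
    @Benchmark
    public String concat() {
        return "hello" + System.nanoTime(); // the allocation under test
    }

    public static void main(String[] args) throws Exception {
        new Runner(new OptionsBuilder()
                .include(AllocBenchmark.class.getSimpleName())
                .addProfiler("gc") // enables the ·gc.alloc.rate.norm metric
                .build()).run();
    }
}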
In the more general case, you can attach a profiler to a running application and get JVM-level metrics, or perform a heap dump for offline analysis. Some commonly used tools for profiling full applications are Async Profiler and the newly open-sourced Java Flight Recorder in conjunction with Java Mission Control to visualise results.
I'm writing a Java/Swing application with ~30 classes. My problem is that when I run my program it takes more than 150MB of memory; is that normal? The application has 4 threads, parses some XML files, loads some icon files, and draws some JFreeChart charts.
If it's not normal, what can I do to minimize the amount of memory used by the application? Does setting some variables to null help? Does loading the XML files once and reusing them for the whole application life cycle help, or do I have to load them every time I need them? Are there other tips that would help?
PS: I'm developing on a computer with 8GB of memory, in case that affects the memory used by my program.
EDIT: It appears that the program doesn't actually occupy all of those 150MB, because I got that value from the top command on Linux. By running this code in my application, as vilmantas advised me:
long free = Runtime.getRuntime().freeMemory();
long total = Runtime.getRuntime().totalMemory();
long max = Runtime.getRuntime().maxMemory();
long used = total - free;
I found that it occupies much less than that (~40MB), so I decided to run it with the "-Xmx40M" argument, and I reduced memory usage as reported by top by more than 40%.
The question that remains is: what is occupying the rest of the memory, since the JVM (as far as I know) has its own process? And how can I make this operation automatic? Because when you choose an inappropriate value, you can get a memory error, as I did by running with the "-Xmx30M" argument:
Exception in thread "Thread-2" java.lang.OutOfMemoryError: Java heap space
It is. This is Java, usually your VM/GC will do the job for you. Worry about memory usage when and if it becomes a problem.
If you want, there are several tools that can help you analyze what is going on. How to monitor Java memory usage?
Setting variables to null can help prevent memory leaks when the referring variable's life cycle is longer than that of the referred instance. Variables that live through the whole application life cycle are better off not holding references to temporary objects that are used only for a short time.
Loading the XMLs only once can help if you're fine with reading their information only once. That is, if an XML file can be changed outside your application and you need the updates, you'll have to reload it (and if the stale XML data is no longer needed, get rid of it).
You could use a Java memory heap analyzer like http://www.eclipse.org/mat/ to identify the parts of your application that use up most of the memory. You can then either optimize your data structures, or decide to release parts of the data by setting all references to them to null.
Unintended references to data that is not needed anymore are also referred to as "memory leaks". Setting those references to null will let the garbage collector remove the data from the Java heap.
Along that line, you might find WeakReferences helpful.
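For example, a minimal sketch of a WeakHashMap-based cache; entries vanish automatically once the key object is no longer referenced anywhere else, so the cache cannot keep data alive on its own (the class and names are made up):

import java.util.Map;
import java.util.WeakHashMap;

class MetadataCache {
    private final Map<Object, String> metadata = new WeakHashMap<>();

    String describe(Object item) {
        // The entry is dropped by the GC once 'item' is unreachable elsewhere.
        return metadata.computeIfAbsent(item, k -> "computed for " + k);
    }
}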
Where do you observe those 150M? Is that how much your JVM process occupies (e.g. visible in the top command on linux/unix) or is it really the memory used (and necessary) by your application?
Try writing out the following four values while your application runs:
long free  = Runtime.getRuntime().freeMemory();   // unused heap currently claimed by the JVM
long total = Runtime.getRuntime().totalMemory();  // heap currently claimed by the JVM
long max   = Runtime.getRuntime().maxMemory();    // upper heap limit (-Xmx)
long used  = total - free;                        // heap actually occupied by objects
System.out.printf("used=%dM total=%dM max=%dM%n", used >> 20, total >> 20, max >> 20);
If the value for "used" is much lower than 150M, you may add a JVM start parameter, e.g. "-Xmx30M", to limit the heap size of your application to 30MB. Note that the JVM process will still occupy a bit more than 30MB in that case.
The memory usage by JVM is somewhat tricky.
Let's say I have a Java application which does roughly the following:
Initialize (takes a long time because this is complicated)
Do some stuff quickly
Wait idly for a long time (your favorite mechanism here)
Go to step 2.
Is there a way to encourage or force the JVM to flush its memory out to disk during long periods of idleness? (e.g. at the end of step 2, make some function call that effectively says "HEY JVM! I'm going to be going to sleep for a while.")
I don't mind using a big chunk of virtual memory, but physical memory is at a premium on the machine I'm using because there are many background processes.
The operating system should handle this, I'd think.
Otherwise, you could manually store your application's state to disk or to a database post-initialization, and then do a quicker initialization from that data, maybe?
Instead of having your program sit idle and use up resources, why not schedule it with cron? Or better yet, since you're using Java, schedule it with Quartz (see the sketch below)? Do your best to cache elements of your lengthy initialization procedure so you don't have to pay a big penalty each time the scheduled task runs.
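A minimal sketch of the Quartz approach, using the standard Quartz 2.x builder API; the job body and cron expression are placeholders:

import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ScheduledTask implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        // do the periodic work here instead of sitting idle between runs
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = JobBuilder.newJob(ScheduledTask.class)
                .withIdentity("work").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 * * * ?")) // hourly
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}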
The very first thing you must make sure of is that your objects are garbage-collectable. But that's just the first step.
Secondly, the memory used by the JVM may not be returned to the OS at all.
For instance, let's say you have 100MB of Java objects; your VM size will be approximately 100MB. After garbage collection you may reduce the heap usage to 10MB, but the VM will stay at around 100MB. This strategy is used so that the VM has memory readily available for new objects.
To have the application returning "physical" memory to the system you have to check if your VM supports such a thing.
There are additional VM options that may allow your app to return more memory to the OS:
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
In my own interpretation, with those options the VM will shrink the heap if more than 70% of it is free after a GC. But quite frankly I don't know whether the freed heap is actually returned to the OS or only shrunk inside the VM.
For a complete description of how HotSpot memory management works, see:
Description of HotSpot GCs: Memory Management in the Java HotSpot Virtual Machine White Paper: https://www.oracle.com/technetwork/java/javase/memorymanagement-whitepaper-150215.pdf
And please, please: give it a try, measure, and let us know back here whether it effectively reduces memory consumption.
It's a bit of a hack to say the very least, but assuming you are on Win32 and if you are prepared to give up portability - write a small DLL that calls SetProcessWorkingSetSize and call into it using JNI. This allows you to suggest to the OS what the WS size should be. You can even specify -1, in which case the OS will attempt to page out as much as possible.
Assuming this is something like a server that's waiting for a request, could you do this?
Make two classes, Server and Worker.
Server only listens and launches Worker when required.
If Worker has never been initialised, initialise it.
After Worker has finished doing whatever it needed to do, serialize it, write it to disk, and set the Worker object to null.
Wait for a request.
When a request is received, read the serialized Worker object from disk and load it into memory.
Perform Worker tasks, when done, serialize, write out and set Worker object to null.
Rinse and repeat.
This means that the memory-intensive Worker object gets unloaded from memory (when the GC next runs; you can encourage the GC to run by calling System.gc() after setting the Worker object to null), but since you saved its state, you can reload it from disk and let it do its work without going through initialization again. If it needs to run every "x" hours, you can put a java.util.Timer in the Server class instead of listening on a socket.
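A minimal sketch of that serialize/unload/reload cycle (Worker's state and the file name are placeholders):

import java.io.*;

class Worker implements Serializable {
    private static final long serialVersionUID = 1L;
    private final long[] expensiveState = new long[10_000_000]; // built during slow init

    void doWork() { /* use expensiveState */ }
}

class Server {
    private static final File SWAP = new File("worker.ser");

    static void unload(Worker w) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(SWAP))) {
            out.writeObject(w);
        }
        // the caller then sets its Worker reference to null and may call System.gc()
    }

    static Worker reload() throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(SWAP))) {
            return (Worker) in.readObject();
        }
    }
}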
EDIT: There is also a JVM option -Xmx which sets the maximum size of the JVM's heap. This is probably not helpful in this case, but just thought I'd throw it in there.
Isn't this what page files are for? If your JVM is idle for any length of time and doesn't access its memory pages, they'll very likely get paged out, and thus won't be using much actual RAM.
One thing you could do, though: most daemon programs have a startup phase (where they parse files, create data structures, etc.) and a running phase where they use the objects created at startup. If the JVM is allowed to, it will start the second phase without doing a garbage collection, potentially causing the size of the process to grow and then stay that big for the lifetime of the process (since GC never or only infrequently reduces the actual size of the process).
If you make sure that all memory allocated at each distinct phase of the programs life is GCable before the next phase starts then you can use the -Xmx setting to force down the maximum size of the process and cause your program to constantly GC between phases. I've done that before with some success.