I'm writing a Java/Swing application with ~30 classes. My problem is that when I run my program it uses more than 150 MB of memory. Is that normal? The application has 4 threads, parses some XML files, loads some icon files, and draws some JFreeChart charts.
If not, what can I do to minimize the amount of memory used by the application? Does setting some variables to null help? Does loading the XML files once and reusing them for the whole application life cycle help, or do I have to load them every time I need them? Are there any other tips that would help?
PS: I'm developing on a machine with 8 GB of memory, in case that can affect the memory used by my program.
EDIT: It turned out that the program doesn't actually occupy all of the 150 MB; I got that value from the top command on Linux. By running this code in my application, as vilmantas advised:
long free = Runtime.getRuntime().freeMemory();
long total = Runtime.getRuntime().totalMemory();
long max = Runtime.getRuntime().maxMemory();
long used = total - free;
I found that it occupies much less than that (~40 MB), so I decided to run it with the "-Xmx40M" argument, and that reduced the memory usage reported by top by more than 40%.
The question is: what is occupying the rest of the memory, since the JVM (as far as I know) has its own process overhead? And how can I choose this value automatically? Picking an inappropriate value can cause a memory exception, as I got when running with the "-Xmx30M" argument:
Exception in thread "Thread-2" java.lang.OutOfMemoryError: Java heap space
It is. This is Java, usually your VM/GC will do the job for you. Worry about memory usage when and if it becomes a problem.
If you want, there are several tools that can help you analyze what is going on. How to monitor Java memory usage?
Setting variables to null can help prevent memory leaks when the referring variable's life cycle is longer than that of the referred instance. In other words, variables that live for the whole application life cycle should not hold references to temporary objects that are only used for a short time.
Loading the XML files only once helps if you are fine with reading their information only once. That is, if an XML file can be changed outside your application and you need to pick up the update, you will have to reload it (and if the outdated XML data is no longer needed, get rid of it).
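As a small illustration of the first point (class and method names here are made up, not from the question): a long-lived object drops its reference to a temporarily needed helper as soon as the work is done, so the helper becomes eligible for garbage collection.
public class MainWindow {
    private Object startupConfig; // potentially large object, only needed during start-up

    public void init() {
        startupConfig = loadConfigFromXml();   // large temporary object
        applyConfig(startupConfig);
        startupConfig = null;                  // drop the reference so the GC can reclaim it
    }

    private Object loadConfigFromXml() { return new Object(); } // placeholder
    private void applyConfig(Object config)  { /* ... */ }      // placeholder
}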
You could use a Java heap analyzer like http://www.eclipse.org/mat/ to identify the parts of your application that use up most of the memory. You can then either optimize your data structures or decide to release parts of the data by setting all references to them to null.
Unintended references to data that is no longer needed are also referred to as "memory leaks". Setting those references to null allows the garbage collector to remove the data from the Java heap.
Along that line, you might find WeakReferences helpful.
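For illustration, here is a small sketch (the cached object is just a stand-in) of how a java.lang.ref.WeakReference lets the garbage collector reclaim a cached object once no strong reference to it remains:
import java.lang.ref.WeakReference;

public class WeakReferenceDemo {
    public static void main(String[] args) {
        Object icon = new Object();                       // stands in for a large cached resource
        WeakReference<Object> cache = new WeakReference<>(icon);

        System.out.println(cache.get() != null);          // true: a strong reference still exists

        icon = null;                                      // drop the only strong reference
        System.gc();                                      // suggest a collection (not guaranteed)

        System.out.println(cache.get());                  // likely null: the object was collectable
    }
}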
Where do you observe those 150 MB? Is that how much your JVM process occupies (e.g. visible in the top command on Linux/Unix), or is it really the memory used (and needed) by your application?
Try writing the following 4 values when your application runs:
long free = Runtime.getRuntime().freeMemory();
long total = Runtime.getRuntime().totalMemory();
long max = Runtime.getRuntime().maxMemory();
long used = total - free;
If the value for "used" is much lower than 150 MB, you may add a Java start parameter such as "-Xmx30M" to limit the heap size of your application to 30 MB. Note that the JVM process will still occupy a bit more than 30 MB in that case.
The memory usage by JVM is somewhat tricky.
I have some code that throws an OutOfMemoryError.
I set the JVM to dump on OOM and I opened the dump in Java Flight Recorder.
When inspecting the Live Objects in JFR, I see very few objects (less than 60).
How can I find out the largest object(s) being held in memory and non-collectable at the moment the OOM was triggered?
Objects are sampled, so there is no way you can be sure to see the largest object before OOM.
That said, 60 samples are usually sufficient to find a memory leak, at least if the application has been running for some time and the leak is not negligible in size.
Samples that happen at the beginning are typically singletons and static objects that exist for the whole duration of the application. Samples that happen at the end are typically short-lived objects that are about to be garbage collected. In JMC you can click in "the middle" of the timeline at the top to find memory leak candidates. Then you can look at the stack trace and the path to the GC root and see if anything looks suspicious.
You can also use the command line tool and do:
$ jfr print --events OldObjectSample --stack-depth 64 recording.jfr
It will list the samples in chronological order. It may be easier to look at each sample individually than at an aggregate. The command line approach is described in detail here.
You can't do this in an automated way (like memory analyzer tools do it with heap dumps) due to the nature of data being collected.
It is totally fine that you only see a handful of objects. The reason is how the low-overhead sampling works: on every new TLAB allocation, JFR steps in and samples a few objects from the old TLAB. Therefore you don't get every object recorded, only a representative sample of the objects being allocated. This should be enough to give you the ratio of objects in the heap. Also, all of the reported objects are live at the point the recording was dumped.
If you think you are getting too few samples to reach a proper conclusion, it might be that your heap is small relative to the TLAB size, and you might want to reduce the TLAB size. This is not advisable in a production environment, as an improper TLAB setting can reduce application performance.
If you had "Memory Leak Detection" set to "Object types + Allocation Stack Traces + Path to GC Root" in the profiling configuration during the recording, you can trace where live objects go in the code after being created, and you can reconstruct a representative dominator tree that way.
If you care about objects that are large in themselves (rather than objects retaining most of the heap), you can find objects larger than the TLAB by looking at the "TLAB Allocations" page and the "Total Allocations Outside TLAB" column. This data is collected only if the profiling configuration had "Memory Profiling" set to "Object Allocation and Promotion".
By profiling configuration I mean the file that you specify with the settings option when you start recording with JFR. This file can be created using the JMC application's "Flight Recording Template Manager".
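For example (profile.jfc and app.jar are placeholders for your own template file and application; adjust to your setup), a recording using such a template could be started like this:
java -XX:StartFlightRecording=settings=profile.jfc,filename=recording.jfr,duration=120s -jar app.jar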
I am getting the following error on execution of a multi-threaded program:
java.lang.OutOfMemoryError: Java heap space
The above error occurred in one of the threads.
To my knowledge, heap space is occupied by instance variables only. If this is correct, then why did this error occur after running fine for some time, given that space for instance variables is allocated at object creation time?
Is there any way to increase the heap space?
What changes should I make to my program so that it will use less heap space?
If you want to increase your heap space, you can use java -Xms<initial heap size> -Xmx<maximum heap size> on the command line. By default, the values are based on the JRE version and system configuration. You can find out more about the VM options on the Java website.
However, I would recommend profiling your application to find out why your heap is being eaten up. NetBeans has a very good profiler included with it; I believe it uses jvisualvm under the hood. With a profiler, you can try to find where many objects are being created, when objects get garbage collected, and more.
1.- Yes, but it pretty much refers to the whole memory used by your program.
2.- Yes see Java VM options
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
E.g.
java -Xmx2g assigns a maximum of 2 gigabytes of RAM to your app
But you should see if you don't have a memory leak first.
3.- It depends on the program. Try to spot memory leaks; this question is too hard to answer in general. You can also profile using JConsole to try to find out where your memory is going.
You may want to look at this site to learn more about memory in the JVM:
http://developer.streamezzo.com/content/learn/articles/optimization-heap-memory-usage
I have found it useful to use visualgc to watch how the different parts of the memory model fill up, to determine what to change.
It is difficult to determine which part of memory filled up, hence visualgc, as you may want to change just the part that is having a problem, rather than simply saying,
Fine! I will give 1 GB of RAM to the JVM.
Try to be more precise about what you are doing, in the long run you will probably find the program better for it.
To determine where the memory leak may be, you can use unit tests: check what the memory was before the test and after it, and if the difference is too big you may want to examine it; note that you need to do the check while your test is still running. A rough sketch of this idea follows.
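This sketch assumes JUnit 4; the class, the method under test and the threshold are invented, and the Runtime numbers are only indicative because the GC may run at any time:
import org.junit.Test;

public class MemoryUsageTest {

    @Test
    public void reportsMemoryUsedByOperation() {
        Runtime rt = Runtime.getRuntime();
        System.gc();                                   // best effort: reduce noise from existing garbage
        long before = rt.totalMemory() - rt.freeMemory();

        Object result = doTheOperationUnderTest();     // keep a reference so the result stays live

        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("approx. bytes retained: " + (after - before));
        // assert against a generous threshold rather than an exact value
    }

    private Object doTheOperationUnderTest() {
        return new byte[10 * 1024 * 1024];             // placeholder for the real work
    }
}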
You can get your current heap size with the program below.
public class GetHeapSize {
    public static void main(String[] args) {
        // totalMemory() reports the heap currently reserved by the JVM,
        // not the -Xmx maximum (Runtime.getRuntime().maxMemory() reports that)
        long heapsize = Runtime.getRuntime().totalMemory();
        System.out.println("heapsize is :: " + heapsize);
    }
}
Then you can also increase the heap size accordingly by using:
java -Xmx2g
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
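If you want to verify that the -Xmx flag took effect, a small check (a sketch along the same lines as the program above) is to print Runtime.getRuntime().maxMemory(), which reports the configured maximum rather than the currently reserved heap:
public class GetMaxHeapSize {
    public static void main(String[] args) {
        // with "java -Xmx2g GetMaxHeapSize" this prints roughly 2 GB, in bytes
        System.out.println("max heap size is :: " + Runtime.getRuntime().maxMemory());
    }
}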
To increase the heap size you can use the -Xmx argument when starting Java; e.g.
-Xmx256M
To my knowledge, heap space is occupied by instance variables only. If this is correct, then why did this error occur after running fine for some time, given that space for instance variables is allocated at object creation time?
That means you are continuously creating more objects in your application over time. New objects are stored in heap memory, and that is the reason for the growth in heap memory.
The heap does not contain only instance variables. It stores all non-primitive data types (objects). These objects' lifetimes may be short (a method block) or long (as long as the object is referenced somewhere in your application).
Is there any way to increase the heap space?
Yes. Have a look at this oracle article for more details.
There are two parameters for setting the heap size:
-Xms, which sets the initial and minimum heap size
-Xmx, which sets the maximum heap size
What changes should I make to my program so that it will use less heap space?
It depends on your application.
Set the maximum heap memory as per your application's requirements
Don't cause memory leaks in your application
If you find memory leaks in your application, find the root cause with the help of profiling tools like MAT, VisualVM, jconsole, etc. Once you find the root cause, fix the leaks.
Important notes from the Oracle article:
Cause: The detail message Java heap space indicates that an object could not be allocated in the Java heap. This error does not necessarily imply a memory leak.
Possible reasons:
Improper configuration (not allocating sufficient memory)
The application is unintentionally holding references to objects, and this prevents the objects from being garbage collected
Applications that make excessive use of finalizers. If a class has a finalize method, then objects of that type do not have their space reclaimed at garbage collection time. If the finalizer thread cannot keep up with the finalization queue, the Java heap can fill up and this type of OutOfMemoryError is thrown (see the sketch right after this list).
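A minimal, made-up illustration of the finalizer point: every instance of a class like this stays on the heap until the finalizer thread has run its finalize() method, so a slow finalizer combined with a fast allocation rate can fill the heap.
class HeavyResource {
    private final byte[] payload = new byte[1024 * 1024];   // ~1 MB kept alive until finalization completes

    @Override
    protected void finalize() throws Throwable {
        try {
            Thread.sleep(10);   // slow "cleanup": the finalization queue grows faster than it drains
        } finally {
            super.finalize();
        }
    }
}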
On a different note, consider using better garbage collection algorithms (CMS or G1GC).
Have a look at this question for understanding G1GC
In most cases, the code is simply not optimized. Release objects that you think will not be needed any further. Avoid creating objects inside your loop on each iteration. Try to use caches. I don't know how your application works, but in programming, one rule of normal life applies as well:
Prevention is better than cure. "Don't create unnecessary objects"
Local variables are located on the stack. Heap space is occupied by objects.
You can use the -Xmx option.
Basically, heap space is used up every time you allocate a new object with new, and it is freed some time after the object is no longer referenced. So make sure that you don't keep references to objects that you no longer need.
No, I think you are thinking of stack space. Heap space is occupied by objects. The way to increase it is -Xmx256m, replacing the 256 with the amount you need on the command line.
To avoid that exception, if you are using JUnit and Spring, try adding this to every test class:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
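For context, a sketch of where the annotation goes, assuming JUnit 4 with spring-boot-test (the test class and its contents are hypothetical):
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class MyServiceTest {
    // the application context built for this class is discarded after the last test,
    // so contexts from many test classes do not pile up on the heap
}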
I tried all of the solutions above, but none of them worked for me.
Solution: in my case I had 4 GB of RAM, and overall RAM usage was at 98%, so the required amount of memory simply wasn't available. Please check for this as well; if that is the issue, upgrading the RAM will make it work fine.
Hope this saves someone some time.
In NetBeans, go to the 'Run' toolbar --> 'Set Project Configuration' --> 'Customize' --> 'Run' in the popped-up window --> 'VM Options' --> fill in '-Xms2048m -Xmx2048m'. That can solve the heap size problem.
I just proposed an algorithm, and I want to demonstrate its superiority compared with another algorithm in terms of time and space consumption. I will implement my algorithm in Java; does anyone know how to monitor the memory consumption of a Java program while it is executing? Thanks.
From JDK 6 onwards you have a tool in the bin directory which monitors CPU and memory consumption as well as the number of threads spawned. It's called VisualVM. Just start it and then start your Java process. You will see the Java process in the tool on the left-hand side. Double-click on the process and view the statistics. Hope this helps. :)
While the tools mentioned in other answers will tell you how much memory Java has allocated to your heap, this can only be taken as an upper bound on your program's memory consumption: the JVM does not garbage collect objects the moment they are no longer needed; it can even increase the heap size instead of garbage collecting, and only perform garbage collection when it reaches the given heap limit (-Xmx).
So, the graphs you'll see may reach much higher than what the program really needs.
One way to deal with this is to lower the heap size limit (-Xmx) incrementally until the program crashes with an OutOfMemoryError.
Another way can be used if you know at which point in your algorithm it will consume the most memory: place a breakpoint at that point, make a heap dump and then examine it to see how much memory the live objects occupy.
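For the heap dump step, one common way (heap.hprof is an arbitrary file name and <pid> is the id of the running Java process) is the jmap tool shipped with the JDK; the live option forces a full GC first, so the dump contains only reachable objects:
jmap -dump:live,format=b,file=heap.hprof <pid>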
You can use
Runtime runtime = Runtime.getRuntime();
long memory = runtime.totalMemory() - runtime.freeMemory();
The page below also covers the Runtime API, so it might be pretty useful for what you're trying to do as well:
http://www.vogella.com/tutorials/JavaPerformance/article.html#runtimeinfo
Answer is found with a quick Google search.
Use the totalMemory() and freeMemory() methods of java.lang.Runtime [...]
from http://crunchify.com/java-runtime-get-free-used-and-total-memory-in-java/
so
Runtime runtime = java.lang.Runtime.getRuntime();
long totalmem = runtime.totalMemory();
long freemem = runtime.freeMemory();
long consump = totalmem - freemem;
After searching the web for a while, I decided to ask you for help with my problem.
My program should analyze log files, which are really big: about 100 MB up to 2 GB. I want to read the files using NIO classes like FileChannel.
I don't want to keep the files in memory; I want to process the lines immediately. The code works.
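For reference, a minimal sketch of this kind of streaming read (this is not the asker's actual code; the file name and the processing are placeholders):
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LogReader {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("big.log"), StandardOpenOption.READ);
             BufferedReader reader = new BufferedReader(
                     Channels.newReader(channel, StandardCharsets.UTF_8.name()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                process(line);                   // handle each line immediately, keep nothing around
            }
        }
    }

    private static void process(String line) { /* placeholder for the actual analysis */ }
}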
Now my problem: I analyzed the memory usage with the Eclipse MAT plugin and it says about 18 MB of data is kept (that fits). But the Task Manager in Windows says that about 180 MB are used by the JVM.
Can you tell me WHY this is?
I don't want to keep the data read with the FileChannel; I just want to process it. I am closing the channel afterwards; I thought all that data would be released then?
I hope you guys can help me understand the difference between the used space shown in MAT and the used space shown in Task Manager.
MAT will only show objects that are actively referenced by your program. The JVM uses more memory than that:
Its own code
Non-object data (classes, compiled bytecode, etc.)
Heap space that is not currently in use but has already been allocated.
The last case is probably the most significant one. Depending on how much physical memory there is on the computer, the JVM will set a default maximum size for its heap. To improve performance, it will keep using up to that amount of memory with minimal garbage collection activity. That means objects that are no longer referenced remain in memory rather than being garbage collected immediately, thus increasing the total amount of memory used.
As a result, the JVM will generally not free any memory it has allocated as part of its heap back to the system. This shows up as an inordinate amount of used memory in the OS monitoring utilities.
Applications with high object allocation/deallocation rates are worse: I have an application that uses 1.8 GB of memory while actually requiring less than 100 MB. Reducing the maximum heap size to 120 MB, though, increases the execution time by almost a full order of magnitude.
Let's say I have a Java application which does roughly the following:
Initialize (takes a long time because this is complicated)
Do some stuff quickly
Wait idly for a long time (your favorite mechanism here)
Go to step 2.
Is there a way to encourage or force the JVM to flush its memory out to disk during long periods of idleness? (e.g. at the end of step 2, make some function call that effectively says "HEY JVM! I'm going to be going to sleep for a while.")
I don't mind using a big chunk of virtual memory, but physical memory is at a premium on the machine I'm using because there are many background processes.
The operating system should handle this, I'd think.
Otherwise, you could manually store your application's state to disk or a database post-initialization and do a quicker initialization from that data, maybe?
Instead of having your program sit idle and use up resources, why not schedule it with cron? Or better yet, since you're using Java, schedule it with Quartz? Do your best to cache elements of your lengthy initialization procedure so you don't have to pay a big penalty each time the scheduled task runs.
The very first thing you must make sure of is that your objects are garbage collectable. But that's just the first step.
Secondly, the memory used by the JVM may not be returned to the OS at all.
For instance, let's say you have 100 MB of Java objects; your VM size will then be approximately 100 MB. After garbage collection you may reduce the heap usage to 10 MB, but the VM will stay at around 100 MB. This strategy is used so that the VM has memory readily available for new objects.
To have the application return "physical" memory to the system, you have to check whether your VM supports such a thing.
There are additional VM options that may allow your app to return more memory to the OS:
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
In my own interpretation, with those options the VM will shrink the heap when more than 70% of it is free after a GC. But quite frankly I don't know whether the freed heap is actually returned to the OS or whether it only shrinks inside the VM.
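For example (the heap sizes, ratio values and jar name are only illustrative), a start line that encourages the VM to keep the heap tighter after GC could look like this:
java -Xms32m -Xmx512m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar app.jar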
For a complete description of how HotSpot memory management works, see:
Memory Management in the Java HotSpot Virtual Machine white paper: https://www.oracle.com/technetwork/java/javase/memorymanagement-whitepaper-150215.pdf
And please, please: give it a try, measure, and let us know back here whether it effectively reduces the memory consumption.
It's a bit of a hack, to say the very least, but assuming you are on Win32 and are prepared to give up portability: write a small DLL that calls SetProcessWorkingSetSize and call into it using JNI. This allows you to suggest to the OS what the working set size should be. You can even specify -1, in which case the OS will attempt to page out as much as possible.
Assuming this is something like a server that's waiting for a request, could you do this?
Make two classes, Server and Worker.
Server only listens and launches Worker when required.
If Worker has never been initialised, initialise it.
After Worker has finished doing whatever it needed to do, serialize it, write it to disk, and set the Worker object to null.
Wait for a request.
When a request is received, read the serialized Worker object from disk and load it into memory.
Perform Worker tasks, when done, serialize, write out and set Worker object to null.
Rinse and repeat.
This means that the memory-intensive Worker object gets unloaded from memory (when the GC next runs, and you can encourage the GC to run by calling System.gc() after setting the Worker object to null), but since you saved its state, you can reload it from disk and let it do its work without going through initialization again. If it needs to run every "x" hours, you can put a java.util.Timer in the Server class instead of listening on a socket.
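A compact sketch of that idea, assuming Worker implements Serializable (all class, method and file names here are hypothetical):
import java.io.*;

public class Server {
    private static final File SWAP_FILE = new File("worker.ser");

    public static void main(String[] args) throws Exception {
        Worker worker = new Worker();      // expensive initialization happens only once
        park(worker);                      // persist the state and drop the reference
        worker = null;
        System.gc();                       // encourage the GC to reclaim the Worker's memory

        // ... later, when a request arrives (socket, java.util.Timer, etc.) ...
        worker = unpark();                 // reload the initialized state from disk
        worker.handleRequest();
        park(worker);
        worker = null;
    }

    private static void park(Worker w) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(SWAP_FILE))) {
            out.writeObject(w);
        }
    }

    private static Worker unpark() throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(SWAP_FILE))) {
            return (Worker) in.readObject();
        }
    }
}

class Worker implements Serializable {
    private final int[] bigState = new int[1_000_000];   // stands in for the expensively initialized state
    void handleRequest() { /* the quick work from step 2 goes here */ }
}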
EDIT: There is also a JVM option -Xmx which sets the maximum size of the JVM's heap. This is probably not helpful in this case, but just thought I'd throw it in there.
Isn't this what page files are for? If your JVM is idle for any length of time and doesn't access its memory pages, they'll very likely get paged out and thus won't use much actual RAM.
One thing you could do, though... Most daemon programs have a startup phase (where they parse files, create data structures, etc.) and a running phase where they use the objects created at startup. If the JVM is allowed to, it will enter the second phase without doing a garbage collection, potentially causing the size of the process to grow and then stay that big for the lifetime of the process (since GC rarely, if ever, reduces the actual size of the process).
If you make sure that all memory allocated in each distinct phase of the program's life is GC-able before the next phase starts, then you can use the -Xmx setting to force down the maximum size of the process and cause your program to GC between phases. I've done that before with some success.