Increasing JVM heap size without effect - Review of defined options - java

For a large data mining experiment run in Java (Weka), I am trying to assign a larger heap size, which is said to increase performance.
I do so by setting the following options in the menu 'Run > Run Configurations > Arguments':
[The selected application is indeed the one containing the code to be run.]
Two things confuse me:
The performance doesn't increase at all. Is it possible that additional memory simply doesn't make the program run faster?
Even if I set values that exceed my machine's RAM (4096m), no error is raised. Is that correct, or should there be one?

Increasing memory will never increase performance if your program does not need that much memory, so first make sure that it actually does.
Understand the meaning of -Xms and -Xmx:
-Xms (initial heap size) is the amount of heap the system allocates for initial use, and by specifying -Xmx (maximum heap size) you are telling the system the most heap you may ever use. The maximum is only an upper bound, not an up-front allocation, which is why your system does not raise an error.
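As a quick sanity check, you can print the heap the JVM actually started with and confirm the flags took effect. A minimal sketch using the standard Runtime API (the class name is illustrative):

```java
// Run with e.g. `java -Xms256m -Xmx1g HeapInfo` to see the flags take effect.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory(); // heap currently allocated (starts near -Xms)
        long max = rt.maxMemory();     // heap ceiling (-Xmx); a limit, not an allocation
        System.out.println("committed heap MB: " + total / (1024 * 1024));
        System.out.println("max heap MB:       " + max / (1024 * 1024));
    }
}
```

If the printed maximum does not change when you edit the run configuration, the arguments are not reaching the JVM.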

Related

Why is JVM -Xms option not simply 0?

Why does the JVM have an -Xms option? Why do we care about a minimum heap size? Why not just 0? It's super easy to allocate RAM, so I don't see the point of forcing a minimum heap size.
In my searching, I see that it's customary to set -Xms (minimum heap size) and -Xmx (maximum heap size) to the same value.
I am having a hard time finding a clear and rational basis for this custom or why -Xms even exists. Rather, I find a lot of communal reinforcement. On occasion, I see it justified by a flaky theory, such as the claim that the JVM is unusually slow at allocating additional RAM as it grows the heap.
While this came up as I was optimizing Solr, it seems that fussing with the heap size is a common consideration with JVMs.
As a curious data point, my memory-usage graph showed two dips:
Before dip 1: -Xms14g -Xmx14g
Between dip 1 and 2: -Xms0g -Xmx14g
After dip 2: -Xmx14g
After dip 2, Solr reported to me that it was only using a couple hundred MBs of heap space even though the JVM gobbled up many GBs of RAM.
In case it matters, I am on the current release of OpenJDK.
To summarize, is there a rational and fact-based basis for:
Setting -Xms to something other than 0.
The custom of setting -Xms and -Xmx to the same value.
Why -Xms even exists.
I think a fact-based answer will lead to more informed management of heap-size options.
You have three questions.
First question:
Is there a rational and fact-based basis for setting -Xms to something other than 0?
Looking as far back as version 8, we can see (among other things) the following for -Xms:
Sets the minimum and the initial size (in bytes) of the heap.
So, setting -Xms to 0 would mean setting the heap to 0. I'm not sure what your expectation would be, but the JVM needs at least some amount of heap to do things (like run a garbage collector).
Second question:
Is there a rational and fact-based basis for the custom of setting -Xms and -Xmx to the same value?
Yes, if you expect to use a certain amount of memory over time, but not necessarily right away. You could allocate the full amount of memory up front so that any allocation costs are out of the way.
For example, consider an app that launches needing less than 1GB of memory, but over time it grows (normally, correctly) to 4GB. You could run that app with -Xms1g -Xmx4g – so, start with 1GB and do periodic allocations over time to reach 4GB. Or you could run with -Xms4g -Xmx4g – tell the JVM to allocate 4GB now, even if it's not going to be used right away.
Allocating memory from an underlying operating system has a cost, and might be expensive enough that you'd like to do that early in the application life, instead of some later point where that cost might be more impactful.
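The grow-over-time behavior can be made concrete with a small sketch (assuming the common case where -Xms is lower than -Xmx; the class name and block sizes are illustrative). The committed heap may grow as the program allocates:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("committed before: " + rt.totalMemory() / (1024 * 1024) + " MB");
        List<byte[]> blocks = new ArrayList<>();
        for (int i = 0; i < 64; i++) {
            blocks.add(new byte[1024 * 1024]); // 1 MB per block, kept reachable
        }
        // With -Xms == -Xmx the committed figure would not move: the full heap
        // was obtained at startup and no resizing happens during these allocations.
        System.out.println("committed after:  " + rt.totalMemory() / (1024 * 1024) + " MB");
    }
}
```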
Third question:
Is there a rational and fact-based basis for why -Xms even exists?
Yes, it allows tuning JVM behavior. Some applications don't need to do this, but others do. It's useful to be able to set values for lower, upper, or both (lower and upper together). Way beyond this, there's a whole world of garbage collector tuning, too.
A little more detail on how -Xms is used (below) could give you some initial garbage collection topics to read about (old generation, young generation):
If you do not set this option, then the initial size will be set as the sum of the sizes allocated for the old generation and the young generation.

How come Java can allocate more memory than the specified heap size

We are basically tuning our JVM options.
-J-Xms1536M -J-Xmx1536M -J-Xss3M
-J-Djruby.memory.max=1536M -J-Djruby.thread.pool.enabled=true -J-Djruby.compile.mode=FORCE
-J-XX:NewRatio=3 -J-XX:NewSize=256M -J-XX:MaxNewSize=256M
-J-XX:+UseParNewGC -J-XX:+CMSParallelRemarkEnabled -J-XX:+UseConcMarkSweepGC
-J-XX:CMSInitiatingOccupancyFraction=75 -J-XX:+UseCMSInitiatingOccupancyOnly
-J-XX:SurvivorRatio=5 -J-server
-J-Xloggc:/home/deploy/gcLog/gc.log -J-XX:+PrintGCDateStamps -J-XX:+PrintGCDetails
-J-XX:+PrintGCApplicationStoppedTime -J-XX:+PrintSafepointStatistics -J-XX:PrintSafepointStatisticsCount=1
We have set both -J-Xms and -J-Xmx to 1536M. Now:
If I understood this correctly, -J-Xmx represents the maximum size of the heap.
The system is a 4-core, 15GB RAM machine.
But when I check the RSS (using top) of my running Java process, I see it consuming more than -J-Xmx1536M, around ~2GB.
So the process has clearly grown beyond the specified -Xmx.
So my questions are:
Why am I not seeing any Java out-of-memory exception?
And what is an ideal setting for -Xmx with 4 cores and 15GB RAM (given that no other process is running on the system besides the Java application)?
Why am I not seeing any Java out-of-memory exception?
Because you did not run out of heap memory. Start VisualVM and examine the process after setting -Xmx: you'll notice there's a region called Metaspace (1G max by default). Besides that, there are other ways in which the process might use additional memory, e.g. the code cache (JIT-compiled native code).
And what is an ideal setting for -Xmx with 4 cores and 15GB RAM (given that no other process is running on the system besides the Java application)?
There's no "clear" answer for that; it varies from application to application, so you should monitor your memory usage under various scenarios. A first instinct might be to set the heap high, but if you're not using most of it and you have a memory leak, an oversized heap will only complicate things.
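The heap-versus-footprint distinction can be inspected from inside the process. A minimal sketch using the standard java.lang.management API (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class FootprintDemo {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();       // bounded by -Xmx
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // Metaspace, code cache, ...
        System.out.println("heap used MB:     " + heap.getUsed() / (1024 * 1024));
        System.out.println("non-heap used MB: " + nonHeap.getUsed() / (1024 * 1024));
        // RSS seen in `top` is roughly heap + non-heap + thread stacks + other
        // native allocations, which is why it exceeds -Xmx alone.
    }
}
```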

Dropbox "java.lang.outofmemoryerror java heap space" when trying to upload a large file [duplicate]

I am getting the following error on execution of a multi-threading program
java.lang.OutOfMemoryError: Java heap space
The above error occurred in one of the threads.
To my knowledge, heap space is occupied by instance variables only. If this is correct, then why did this error occur after running fine for some time, given that space for instance variables is allotted at object-creation time?
Is there any way to increase the heap space?
What changes should I make to my program so that it will grab less heap space?
If you want to increase your heap space, you can use java -Xms<initial heap size> -Xmx<maximum heap size> on the command line. By default, the values are based on the JRE version and system configuration. You can find out more about the VM options on the Java website.
However, I would recommend profiling your application to find out why your heap size is being eaten. NetBeans has a very good profiler included with it. I believe it uses the jvisualvm under the hood. With a profiler, you can try to find where many objects are being created, when objects get garbage collected, and more.
1.- Yes, but it pretty much refers to the whole memory used by your program.
2.- Yes, see the Java VM options:
-Xms<size> sets the initial Java heap size
-Xmx<size> sets the maximum Java heap size
E.g.
java -Xmx2g assigns 2 gigabytes of RAM as the maximum for your app
But you should first check whether you have a memory leak.
3.- It depends on the program. Try to spot memory leaks; that question is too hard to answer in general. Later you can profile using JConsole to try to find out where your memory is going.
You may want to look at this site to learn more about memory in the JVM:
http://developer.streamezzo.com/content/learn/articles/optimization-heap-memory-usage
I have found it useful to use visualgc to watch how the different parts of the memory model fill up, to determine what to change.
It is difficult to determine which part of memory filled up, hence visualgc; you may want to change just the part that is having a problem, rather than simply saying,
Fine! I will give 1G of RAM to the JVM.
Try to be more precise about what you are doing, in the long run you will probably find the program better for it.
To determine where the memory leak may be, you can use unit tests: measure the memory before and after a test, and if there is too big a change you may want to examine it. Note that you need to do the check while your test is still running.
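The before/after check described above can be sketched like this. It is a rough heuristic rather than a precise leak detector, since GC timing makes single measurements noisy; the class and method names are illustrative:

```java
public class MemoryDelta {
    // Approximate heap in use: allocated heap minus the free part of it.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();
        int[][] work = new int[100][10_000]; // stand-in for the code under test
        long after = usedHeap();
        System.out.println("approx delta bytes: " + (after - before));
        System.out.println(work.length); // keep `work` reachable during measurement
    }
}
```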
You can get your heap memory size with the program below:
public class GetHeapSize {
    public static void main(String[] args) {
        // totalMemory() returns the heap currently allocated to the JVM
        long heapSize = Runtime.getRuntime().totalMemory();
        System.out.println("heap size is :: " + heapSize);
    }
}
Then you can increase the heap size accordingly by using:
java -Xmx2g
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
To increase the heap size you can use the -Xmx argument when starting Java; e.g.
-Xmx256M
To my knowledge, heap space is occupied by instance variables only. If this is correct, then why did this error occur after running fine for some time, given that space for instance variables is allotted at object-creation time?
That means you are continuously creating more objects in your application over time. New objects are stored in heap memory, and that's the reason for growth in heap memory.
The heap does not only contain instance variables. It stores all non-primitive data types (objects). These objects' lifetimes may be short (a method block) or long (as long as the object is referenced in your application).
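To illustrate the long-lived case, a small hypothetical sketch: a static collection keeps every added object reachable indefinitely, which is the classic pattern behind heap usage that grows until an OutOfMemoryError:

```java
import java.util.ArrayList;
import java.util.List;

public class RetentionDemo {
    // A static collection is reachable for the life of the JVM, so everything
    // it holds can never be garbage collected.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024]; // would be short-lived on its own...
        CACHE.add(buffer);              // ...but retaining it makes it permanent
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println("retained buffers: " + CACHE.size());
    }
}
```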
Is there any way to increase the heap space?
Yes. Have a look at this oracle article for more details.
There are two parameters for setting the heap size:
-Xms, which sets the initial and minimum heap size
-Xmx, which sets the maximum heap size
What changes should I make to my program so that it will grab less heap space?
It depends on your application.
Set the maximum heap memory as per your application requirement
Don't cause memory leaks in your application
If you find memory leaks in your application, find the root cause with the help of profiling tools like MAT, VisualVM, JConsole, etc. Once you find the root cause, fix the leaks.
Important notes from oracle article
Cause: The detail message Java heap space indicates object could not be allocated in the Java heap. This error does not necessarily imply a memory leak.
Possible reasons:
Improper configuration (not allocating sufficient memory)
The application is unintentionally holding references to objects, and this prevents the objects from being garbage collected
Applications that make excessive use of finalizers. If a class has a finalize method, then objects of that type do not have their space reclaimed at garbage collection time. If the finalizer thread cannot keep up with the finalization queue, the Java heap can fill up and this type of OutOfMemoryError is thrown.
On a different note, use better Garbage collection algorithms ( CMS or G1GC)
Have a look at this question for understanding G1GC
In most cases, the code is not optimized. Release objects that you think will not be needed further. Avoid creating objects inside your loop on each iteration. Try to use caches. I don't know how your application works, but in programming, one rule of normal life applies as well:
Prevention is better than cure: don't create unnecessary objects.
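The "don't create unnecessary objects" advice, sketched with a common example (the class name and inputs are illustrative):

```java
public class LoopAllocation {
    // Allocates new intermediate String/StringBuilder objects every iteration.
    static String wasteful(String[] words) {
        String out = "";
        for (String w : words) {
            out = out + w + " "; // each += builds a fresh String
        }
        return out;
    }

    // Reuses one StringBuilder for the whole loop: far less garbage.
    static String frugal(String[] words) {
        StringBuilder sb = new StringBuilder();
        for (String w : words) {
            sb.append(w).append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] words = {"less", "garbage", "less", "GC"};
        System.out.println(frugal(words));
    }
}
```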
Local variables are located on the stack. Heap space is occupied by objects.
You can use the -Xmx option.
Basically, heap space is used up every time you allocate a new object with new, and freed some time after the object is no longer referenced. So make sure that you don't keep references to objects that you no longer need.
No, I think you are thinking of stack space. Heap space is occupied by objects. The way to increase it is -Xmx256m, replacing the 256 with the amount you need on the command line.
To avoid that exception, if you are using JUnit and Spring, try adding this to every test class:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
I tried all of the above solutions, but nothing worked.
Solution: In my case the machine had 4GB of RAM, and RAM usage was around 98%, so the required amount of memory wasn't available. Do check for this as well; if this is the issue, freeing up or upgrading RAM will make it work fine.
Hope this saves someone some time.
In NetBeans, go to the 'Run' toolbar --> 'Set Project Configuration' --> 'Customize' --> 'Run' in the popped-up window --> 'VM Options' --> fill in '-Xms2048m -Xmx2048m'. That can solve the heap size problem.

How to calculate the java heap required

I have always given an assumed heap size to my application, and while the app is running I monitor and modify/tune the heap size.
Is there a way I can calculate the initial heap required more or less accurately?
For the best performance in a Java EE style environment, i.e. when the application is meant to be running for very long periods of time (weeks or months), it is best to set the minimum and maximum heap size to the same value.
The best way to size the heap in this case is to gather data on how your application runs over time. With a week's worth of verbose GC logs, you can import that data into GCViewer. Looking at the troughs of the graph, take an average to see the minimum retained set after each garbage collection. This is the amount of data, on average, kept in the heap during normal running, so any heap size should be at least that level. Since we're setting minimum and maximum to the same value, add more space on top to absorb spikes. How much to add depends on your use case, but somewhere between 25-35% is a start.
With this method, always remember to keep monitoring your heap and GC behaviour using verbose GC (which Oracle recommends running even in production).
Important: This assumes that your application is an always-on type of application. If you're considering a desktop Java application, the behaviour will be very different and this should not be seen as a reliable method. As @juhist said in their answer, just the maximum should be set and the JVM will handle the rest.
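As a programmatic complement to verbose GC logs, the standard java.lang.management API exposes per-collector counts and times. A minimal sketch (the class name is illustrative):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Each bean covers one collector (typically one young-gen and one old-gen).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", total ms=" + gc.getCollectionTime());
        }
    }
}
```

Sampling these counters periodically gives a cheap in-process view of collection frequency alongside the log-based analysis.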
Why do you monitor and modify / tune the heap size?
Java has been designed in a way that the heap automatically grows to the needed size as long as the maximum heap size is large enough. You shouldn't modify the heap size manually at all.
I would rely on the automatic Java algorithms for adjusting the heap size and just specify the maximum heap size at program startup. You should take into account how many java processes you're running in parallel when deciding the maximum heap size.

Is it good to set the max and min JVM heap size the same?

Currently in our testing environment the max and min JVM heap size are set to the same value, basically as much as the dedicated server machine will allow for our application. Is this the best configuration for performance or would giving the JVM a range be better?
Peter's answer is correct in that -Xms is allocated at startup and the heap will grow up to -Xmx (max heap size), but it's a little misleading in how he has worded it. (Sorry Peter, I know you know this stuff cold.)
Setting ms == mx effectively turns off this resizing behavior. While this used to be a good idea with older JVMs, it is no longer the case. Growing and shrinking the heap allows the JVM to adapt to increases in memory pressure, yet reduce pause times by shrinking the heap when pressure drops. Sometimes this behavior doesn't give you the performance benefits you'd expect, and in those cases it's best to set mx == ms.
An OOME is thrown when more than 98% of total time is spent collecting and the collections recover less than 2% of the heap. If you are not at the max heap size, the JVM will simply grow the heap rather than hit those limits. You cannot get an OutOfMemoryError at startup unless your heap hits the max heap size and meets the other conditions that define an OutOfMemoryError.
For the comments that have come in since I posted. I don't know what the JMonitor blog entry is showing but this is from the PSYoung collector.
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
                           min_gen_size());
I could do more digging about but I'd bet I'd find code that serves the same purpose in the ParNew and PSOldGen and CMS Tenured implementations. In fact it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF the serial collector will run and that should include a compaction after which top of heap would most likely be clean and therefore eligible to be deallocated.
The main reason to set -Xms is if you need a certain heap size at startup (it prevents OutOfMemoryErrors from happening at startup). As mentioned above, if you need the startup heap to match the max heap, that is when you would match them. Otherwise you don't really need it; it just asks the application to take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load-testing and exercising your application should give you a good feel for what to set them to. But it isn't the worst thing to set them to the same value at startup. For a lot of our apps, I actually start with something like 128, 256, or 512 MB for min (startup) and one gigabyte for max (this is for non-application-server applications).
Just found this question on stack overflow which may also be helpful side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth the look.
AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you pretty much know how much heap you are going to use. Also, having a large heap size reduces GC invocations to the point that they happen very rarely. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same, pretty large value (around 8GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
Also, I found an interesting discussion here.
Definitely yes for a server app. What's the point of having so much memory but not using it?
(No it doesn't save electricity if you don't use a memory cell)
The JVM loves memory. For a given app, the more memory the JVM has, the less GC it performs. The best part is that more objects will die young and fewer will tenure.
Especially during a server startup, the load is even higher than normal. It's brain dead to give server a small memory to work with at this stage.
From what I see here at http://java-monitor.com/forum/showthread.php?t=427
the JVM under test begins with the Xms setting, but WILL deallocate memory it doesn't need, and it will take it up to the Xmx mark when it needs it.
Unless you need a chunk of memory dedicated to a big memory consumer initially, there's not much of a point in setting a high Xms=Xmx. It looks like deallocation and allocation occur even with Xms=Xmx.
