How do I start a JVM with no maximum heap memory restriction, so that it can take as much memory as it needs?
I searched for such an option, but I only seem to find the -Xmx and -Xms options.
EDIT:
Let's say I have a server with 4GB of RAM that only runs my application, and another server with 32GB of RAM. I don't want to start my application with a 4GB memory limit, because the second machine should be able to handle more objects.
-XX:MaxRAMFraction=1 will auto-configure the max heap size to 100% of your physical RAM, or to the limits imposed by cgroups if -XX:+UseCGroupMemoryLimitForHeap is set.
OpenJDK 10 will also support a percentage-based option, -XX:MaxRAMPercentage, allowing more fine-grained selection (JDK-8186248). That finer control matters because you want to leave some spare capacity for non-heap data structures, to avoid swapping.
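For example (a sketch; app.jar stands in for your application, UseCGroupMemoryLimitForHeap is an experimental flag on JDK 8u131+ and needs unlocking, and MaxRAMPercentage requires JDK 10+):
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -jar app.jar
java -XX:MaxRAMPercentage=100.0 -jar app.jar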
You can't (or at least, I don't know of a JVM implementation that supports it). The JVM needs to know at startup how much memory it may allocate, so that it can reserve a contiguous range in virtual memory. This allows, among other things, simpler reasoning in memory management.
If the virtual memory range could be expanded at runtime, this could lead to fragmented virtual memory ranges, making it harder to track and reference memory.
However, recent Java versions have introduced options like -XX:MaxRAMPercentage=n, which lets you specify the percentage of available memory to allocate to the Java heap instead of an absolute value in bytes. For example, -XX:MaxRAMPercentage=80 will allocate 80% of the available memory to the Java heap (the default is 25%).
-XX:MaxRAMPercentage only applies on systems with more than about 200MB of memory (on smaller systems, -XX:MinRAMPercentage, default 50%, is used instead).
You can also use -XX:InitialRAMPercentage to specify the initial memory allocated to Java (MaxRAMPercentage is similar to -Xmx, InitialRAMPercentage is similar to -Xms).
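A combined example (a sketch; app.jar stands in for your application, and the verification step assumes a Unix shell):
java -XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0 -jar app.jar
To check what heap size the JVM actually computed:
java -XX:MaxRAMPercentage=80.0 -XX:+PrintFlagsFinal -version | grep -i maxheapsize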
The JVM runs on a physical computer that has limited memory, so the JVM cannot have unlimited memory; memory is limited by definition. You can, however, supply a very large limit in the -Xmx option.
The question, however, is why you need unlimited memory, and the even better question is how much memory you really need.
On Linux, you can use the free and awk commands to calculate an inline value like this:
JAVA_OPT_MAX_MEM=$(free -m | awk '/Mem:/ {printf "-Xmx%dm", 0.80*$2}')
Example result (on a machine with 3950 MB of total memory):
JAVA_OPT_MAX_MEM=-Xmx3160m
The calculated option is 80% of the total reported memory.
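You can then pass the computed option straight to the launcher (app.jar standing in for your application):
java $JAVA_OPT_MAX_MEM -jar app.jar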
Related
I am really confused by this.
-Xmx, according to the Java docs, is the maximum allowable heap size.
-Xms is the minimum required Java heap size, and is allocated at JVM start.
On a 32-bit JVM (4GB RAM), java -Xmx1536M HelloWorld gives a cannot-allocate-enough-memory error.
On a 64-bit JVM (4GB RAM), java -Xmx20G HelloWorld works just fine. But I don't even have that much virtual or physical memory allocated.
So from this, I conclude that Java 32 bit is allocating the 1536M at JVM startup, but Java 64 bit is not.
Why? A simple Hello World should not need 1536M to run. I am just specifying that 1536M is the maximum, not that it is needed.
Explanations anyone?
There is a distinction between allocating the memory and allocating the address space. The Oracle JVM is allocating the address space on startup to ensure the heap is contiguous. This allows for certain optimizations to be used with the heap.
If the allocation fails, then Java won't start... as you have seen. It isn't necessarily using that much memory, but it is allocating the required address space up-front. Since you are passing -Xmx1536m, it says: OK, I need to allocate that in case you need it... and since it must be contiguous, it does it up-front so it can guarantee it (or fails trying).
This behavior is the same on both 32-bit and 64-bit JVMs. What you are seeing is the 2GB per-process address space limitation of 32-bit processes (at least, on Windows this is the limitation - it may be slightly larger on other platforms) causing this allocation to fail on 32-bit, whereas 64-bit has no issues since it has a much larger address space. But, you say, 1536m is less than 2GB, so I should be good, right? Not quite - the heap is not the only thing being allocated in the address space; DLLs and other things are allocated there too... so getting a contiguous 1536m chunk out of the 2GB maximum on 32-bit is unfortunately very unlikely. I've seen values below 1000m fail on 32-bit processes with particularly bad fragmentation, and usually 1200-1300m is the maximum heap you can specify on 32-bit.
On modern OSes, ASLR (Address Space Layout Randomization) makes fragmentation of 32-bit process address space worse. It intentionally loads DLLs into random addresses for security reasons... making it even less likely you can get a big, contiguous heap in 32-bit.
In 64-bit, the address space is so large that fragmentation is no longer a problem and giant heaps can be allocated without issues. Even if you have 4GB of RAM on 32-bit, though, the 2GB per-process address space limitation (at least on Windows) means the max heap is usually only 1300m or so.
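You can see this up-front reservation without any real workload (a sketch; the exact error text varies by JVM):
java -Xmx1536m -version
On a fragmented 32-bit process this can fail immediately with "Could not reserve enough space for object heap", even though -version itself allocates almost nothing.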
Actually, the application is not allocating the -Xmx memory at startup.
The -Xms parameter configures the startup memory. (What are the Xms and Xmx parameters when starting JVMs?)
The 64-bit environment allows a bigger memory allocation than 32-bit. But, in fact, it's using hard disk (swap) space, not physical RAM.
See this other post for more info.
Estimating maximum safe JVM heap size in 64-bit Java
Under Windows, native low-level WinAPI functions like VirtualAlloc distinguish between different memory allocation operations.
"Reserving" means allocation of a continuous area within process' address space without actually making this area of virtual memory usable. Allocated area is not backed by actual physical RAM or swap space and does not consume any free memory. Any application can reserve any amount of address space limited only by processor's memory addressing capability (bitness).
"Committing" means backing some of previously "reserved" memory with real memory - RAM or swap space, making it actually readable/writable by the process. This memory is taken from available OS virtual memory pool (RAM and swap).
An alternative to "committing" memory (taking a blank page from the pool) is "mapping" a file into the previously "reserved" memory. This does not consume memory from the swap pool but uses the mapped file in a manner similar to a dedicated swap space for that specific region of a process' address space.
Native Windows applications (like JVM itself) reserve memory for all heaps needed in the future, but commit it only as needed.
High-level memory operations like malloc()-style functions or "new" operators really do "commit" as needed, or even run their own heap management logic in user mode with memory committed ahead of time in large chunks, since committing is a CPU-intensive kernel call and works at page (4k) granularity.
During process startup the JVM "reserves" the -Xmx amount of memory, but "commits" only the -Xms amount of it. The remaining reserved memory is committed on demand as the heaps grow. So the heaps can grow up to the available memory or the -Xmx value, whichever is smaller.
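You can observe this reserve/commit split from inside Java itself (a minimal sketch; exact numbers vary by JVM and GC): maxMemory() roughly reflects the reserved ceiling (-Xmx), while totalMemory() reflects the currently committed heap, which starts close to -Xms.
public class ReserveVsCommit {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // reserved ceiling, corresponds to -Xmx
        System.out.println("max:   " + rt.maxMemory() / mb + " MB");
        // currently committed heap, starts near -Xms and grows on demand
        System.out.println("total: " + rt.totalMemory() / mb + " MB");
    }
}
Run it with, for example, java -Xms256m -Xmx2g ReserveVsCommit and compare the two numbers.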
I am using a JBoss instance that has a start memory of 4 GB and a max memory of 12 GB. My question is: how does the JVM decide when to extend the heap (from 4GB upward) versus when to force a full GC? I ask because, watching the memory profile of the JVM, in one instance I noticed it increased the memory ceiling from 4GB to a higher value to accommodate the memory growth demand, and in another case it decided to perform a full GC to bring the used memory lower. Any insight into this?
You did not say which JVM you are using, but to my knowledge they all grow the heap to accommodate more objects. It is very likely that something else caused the full GC.
If you have the 12 GB available (which you should, if you allow the JVM to grow that far), then I would also recommend setting -Xms12g as well, as sketched below. This makes it easier to work out what generation sizes you want without having to deal with heap resizing.
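For example (a sketch; how JBoss picks up JAVA_OPTS varies by version, and -verbose:gc works on both old and new JVMs):
JAVA_OPTS="-Xms12g -Xmx12g -verbose:gc"
With -Xms equal to -Xmx, any full GC you then see in the log is driven by allocation pressure, not by heap resizing.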
I am using 64-bit Linux and a Java JVM. I want to confirm: if the memory used by the JVM is smaller than the physical memory of the machine, will there be no disk swapping by the OS?
No, that's not necessarily true. Physical memory is shared by all processes, as well as by a bunch of other kernel things (e.g. the disk cache). So the amount of virtual memory used by your application is not the only consideration.
You can start your Java application with the JVM argument -Xmx512m, which will tell the JVM to use a max of 512MB of RAM for your heap. Take into account also that there is another parameter for thread stack size, -Xss512k. So the amount of memory that your JVM will use will be roughly: max heap + (threadCount * threadStackSize) + some more RAM for JIT compilation and GC data structures, depending on the GC collector that you use.
Taking this into account, you can make sure your JVM won't use more RAM than is present in your machine (see the worked example below).
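A quick worked example under those assumptions: with -Xmx512m, -Xss512k and 100 threads, the rough footprint is 512 MB + 100 × 0.5 MB = 562 MB, plus the JIT and GC overhead on top.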
I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
java -Xmx1400m ...
Today I tried the same option on a new Windows XP machine using Java 1.5_16 and 1.6.0_07 and got the error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely you'll have a fragmented address space. Things that are likely to cut into your address space aside from the usual stuff include security software, CBT software, spyware and other forms of malware. Likely causes of the variance are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2GB of the 4GB 32-bit space).
You could try going through the DLL bindings in your JVM process and look at rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have much more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
This has to do with contiguous memory.
Here's some info I found online for somebody asking that before, supposedly from a "VM god":
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.
Usually we don't have trouble getting modest contiguous regions (up to about 1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the problem is mostly that there are some libraries that get loaded before the JVM starts up that break up the address space. Using the /3GB switch won't rebase those libraries, so they are still a problem for us.
We know how to make chunked heaps, but there would be some overhead to using them. We have more requests for faster storage management than we do for larger heaps in the 32-bit JVM. If you really want large heaps, switch to the 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.
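To make the card-mark arithmetic concrete, here is a toy Java simulation (the real card table is C++ inside the VM; heapBase, the addresses, and the 64 MB heap size are made up for illustration):
public class CardMarkSketch {
    static final int CARD_SHIFT = 9; // 2^9 = 512 heap bytes per card byte

    public static void main(String[] args) {
        long heapBase = 0x20000000L;            // pretend start of a contiguous heap
        long heapSize = 64L * 1024 * 1024;      // 64 MB simulated heap
        byte[] cardTable = new byte[(int) (heapSize >>> CARD_SHIFT)];

        long storeAddress = heapBase + 123456;  // a reference store lands here
        int cardIndex = (int) ((storeAddress - heapBase) >>> CARD_SHIFT);
        cardTable[cardIndex] = 1;               // mark the corresponding card dirty

        System.out.println("store at offset 123456 dirtied card " + cardIndex);
    }
}
The index works as a plain shift only because the heap is one contiguous range starting at heapBase; a chunked heap would need an extra lookup on every reference store.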
The Java heap size limits for Windows are:
maximum possible heap size on 32-bit Java: 1.8 GB
recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
This doesn't help you get a bigger Java heap, but now you know you can't go beyond these values.
Oracle JRockit, which can handle a non-contiguous heap, can have a Java heap size of 2.85 GB on Windows 2003/XP with the /3GB switch. It seems that fragmentation can have quite an impact on how large a Java heap can be.
The JVM needs contiguous memory, and depending on what else is running, what was running before, and how Windows has managed memory, you may be able to get up to 1.4GB of contiguous memory. I think 64-bit Windows will allow larger heaps.
Sun's JVM needs contiguous memory. So the maximal amount of available memory is dictated by memory fragmentation. In particular, drivers' DLLs tend to fragment the memory when loading into their predefined base addresses. So your hardware and its drivers determine how much memory you can get.
Two sources for this, with statements from Sun engineers: a forum post and a blog post.
Maybe another JVM? Have you tried Harmony? I think they planned to allow non-contiguous memory.
I think it has more to do with how Windows is configured as hinted by this response:
Java -Xmx Option
Some more testing: I was able to allocate 1300MB on an old Windows XP machine with only 768MB of physical RAM (plus virtual memory). On my 2GB RAM machine I can only get 1220MB. On various other corporate machines (with older Windows XP) I was able to get 1400MB. The machine with the 1220MB limit is pretty new (just purchased from Dell), so maybe it has newer (and more bloated) Windows and DLLs (it's running Windows XP Pro Version 2002 SP2).
I got this error message when running a Java program on a (limited-memory) Virtuozzo VPS. I had not specified any memory arguments, and found I had to explicitly set a small amount, as the default must have been too high. E.g. -Xmx32m (obviously this needs to be tuned depending on the program you run).
Just putting this here in case anyone else gets the above error message without specifying a large amount of memory like the questioner did.
Sun's JDK/JRE needs a contiguous amount of memory if you allocate a huge block.
The OS and initial apps tend to allocate bits and pieces during loading, which fragments the available RAM. If a contiguous block is NOT available, the Sun JDK cannot use it. JRockit from BEA (acquired by Oracle) can allocate memory from pieces.
Everyone seems to be answering about contiguous memory, but have neglected to acknowledge a more pressing issue.
Even with 100% contiguous memory allocation, you can't have a 2 GiB heap size on a 32-bit Windows OS (*by default). This is because 32-bit Windows processes cannot address more than 2 GiB of space.
The Java process will contain the perm gen (pre-Java 8), a stack per thread, and the JVM / library overhead (which pretty much increases with each build), all in addition to the heap.
Furthermore, JVM flags and their default values change between versions. Just run the following and you'll get some idea:
java -XX:+PrintFlagsFinal -version
Lots of the options affect the memory division in and out of the heap, leaving you with more or less of that 2 GiB to play with...
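To zero in on the heap-related flags (a sketch; findstr assumes a Windows shell, use grep -i elsewhere):
java -XX:+PrintFlagsFinal -version | findstr /i heapsize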
To reuse portions of this answer of mine (about Tomcat, but applies to any Java process):
The Windows OS limits the memory allocation of a 32-bit process to 2 GiB in total (by default).
[You will only be able] to allocate around 1.5 GiB heap space because there is also other memory allocated to the process (the JVM / library overhead, perm gen space etc.).
Why does 32-bit Windows impose a 2 GB process address space limit, but 64-bit Windows impose a 4 GB limit?
Other modern operating systems [cough Linux] allow 32-bit processes to use all (or most) of the 4 GiB addressable space.
That said, 64-bit Windows OSes can be configured to increase the limit of 32-bit processes to 4 GiB (3 GiB on 32-bit):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
Here is how to increase the paging file size:
Right-click on My Computer -> Properties -> Advanced.
In the Performance section, click Settings.
Click the Advanced tab.
In the Virtual memory section, click Change. It will show your current paging size.
Select a drive where HDD space is available.
Provide an initial size and a max size, e.g. initial size 0 MB and max size 4000 MB (as much as you will require).
There are numerous ways to change the heap size, like:
File -> Settings -> Build, Execution, Deployment -> Compiler: here you will find a heap size setting.
File -> Settings -> Build, Execution, Deployment -> Compiler -> Android: here you will also find a heap size setting; you can refer to this for an Android project if you are facing the same issue.
What worked for me was:
Set the appropriate JAVA_HOME path in case your Java got updated.
Create a new system variable: Computer -> Properties -> Advanced settings -> create new system variable
name: _JAVA_OPTIONS value: -Xmx750m
FYI: you can find the default VM options in IntelliJ under Help -> Edit Custom VM Options; in this file you see the min and max heap size.
First, using a page file when you have 4 GB of RAM is useless. Windows can't access more than 4GB (actually, less, because of memory holes), so the page file is not used.
Second, the address space is split in two: half for the kernel, half for user mode. If you need more RAM for your applications, use the /3GB option in boot.ini and make sure java.exe is marked as "large address aware" (google for more info); a sketch follows below.
Third, I think you can't allocate the full 2 GB of address space because Java wastes some memory internally (for threads, the JIT compiler, VM initialization, etc.). Use the /3GB switch for more.
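For example (a sketch; editbin ships with Visual Studio, and the java.exe path is hypothetical):
editbin /LARGEADDRESSAWARE "C:\Program Files\Java\jre\bin\java.exe"
Then add the /3GB switch to the relevant boot entry in boot.ini and reboot.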
I can set the max memory as 1000 and not more than that; if I set the memory higher than that, it throws the following error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
My question is: why does the JVM look for the max memory at startup?
Thanks in advance.
The Sun JVM needs a contiguous area of memory for its heap. Using the tool vmmap from the Sysinternals suite you can examine the memory layout of the Java process exactly. To do that, write a simple Java program like this:
public class MemoryLayout {
    public static void main(String[] args) throws java.io.IOException {
        System.in.read(); // block so the process stays alive while you inspect it with vmmap
    }
}
Compile that program and run it using the large heap settings
javac MemoryLayout.java
java -Xmx1000m -Xms1000m MemoryLayout
Then, start vmmap, select the java process and look for the yellow memory region whose size is larger than 1000000k. This is the JVM heap. Look further below, and you will eventually find a purple row indicating that there is a DLL file mapped. This DLL file prevents your JVM heap from growing bigger.
If you know what you are doing, you can then rebase that DLL, so it will be loaded at a different address. Microsoft provides a tool called rebase.exe as part of the Microsoft Platform SDK (I have version 5.2.3790.1830).
There are two command line parameters that directly control the size of the (normal) heap:
-Xmx<nnn> sets the maximum heap size
-Xms<nnn> sets the initial heap size
In both cases <nnn> is a number of bytes, with a k or m on the end to indicate kilobytes and megabytes respectively. The initial size gives the heap size allocated when the JVM starts, and the maximum size puts a limit on how big it can grow. (But the JVM also allocates memory for buffers, the "permgen" heap, stacks and other things ... in addition to the normal heap.)
It is not clear what options you are actually giving. (A value of 1000 doesn't make any sense. The -Xmx size has to be more than 2 megabytes and the -Xms size has to be more than 1 megabyte; see this page.)
There are advantages and disadvantages in making the initial heap size smaller than the maximum heap size; e.g. -Xms100m -Xmx1000m. But there is no point making the maximum heap size larger than the amount of virtual memory your machine can allocate to the JVM.
why does the JVM look for the max memory at startup?
It wants to make sure that it can eventually allocate the maximum amount which you said it could have.
Why do you need to set a higher maximum than your machine actually supports?
Answer: It would make JVM configuration easier if you could just set it to basically unlimited, especially if you deploy to different machines. This was possible in earlier versions, but for the current Sun JVM you have to figure out a "proper" value for every machine. Hopefully there will be more clever/automatic memory settings in the future.
As elaborated here, for implementation reasons (basically, it makes performance faster, and that is their priority) the JVM requires contiguous memory addressing, so it has to establish at startup that it has it; otherwise it might not be available later.
The fact of the matter is that the JVM in many ways is a server-side oriented technology. That is where Java is popular so that is what gets the development attention.
If you use a 64-bit JVM on a 64-bit OS you won't have this problem. This is only a problem on 32-bit OSes.