I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
java -Xmx1400m ...
Today I tried the same option on a new Windows XP machine using Java 1.5_16 and 1.6.0_07 and got the error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely you'll have a fragmented address space. Things that are likely to cut into your address space, aside from the usual stuff, include security software, CBT software, spyware and other forms of malware. Likely causes of the variance are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2GB of the 4GB 32-bit space).
You could try going through the DLL bindings in your JVM process and look at rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have much more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
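If you do get the VM to start, a quick way to confirm how much heap it actually reserved is to ask the runtime directly. A minimal sketch (nothing specific to this answer, just the standard Runtime API):

public class MaxHeapCheck {
    public static void main(String[] args) {
        // Reports the maximum amount of memory the JVM will attempt to use,
        // which reflects the -Xmx value it accepted at startup.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}

Run it with, e.g., java -Xmx1400m MaxHeapCheck on both machines to compare what each VM actually accepts.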
This has to do with contiguous memory.
Here's some info I found online for somebody asking that before, supposedly from a "VM god":
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.
Usually we don't have trouble getting modest contiguous regions (up to about 1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the problem is mostly that there are some libraries that get loaded before the JVM starts up that break up the address space. Using the /3GB switch won't rebase those libraries, so they are still a problem for us.
We know how to make chunked heaps, but there would be some overhead to using them. We have more requests for faster storage management than we do for larger heaps in the 32-bit JVM. If you really want large heaps, switch to the 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.
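To make the card-marking arithmetic above concrete, here is a minimal sketch of the bookkeeping the quote describes. It is illustrative only: the real card table lives in the JVM's C++ code, Java has no raw object addresses, and the heap base and sizes below are invented.

public class CardMarkSketch {
    static final int CARD_SHIFT = 9;           // 2^9 = 512 bytes of heap per card
    static byte[] cardTable;
    static long heapStart;

    static void init(long heapBase, long heapSizeBytes) {
        heapStart = heapBase;
        cardTable = new byte[(int) (heapSizeBytes >>> CARD_SHIFT)];
    }

    // Conceptually invoked on every reference store into the heap:
    // the store address is offset from the heap base and right-shifted
    // to index the card table.
    static void markCard(long storeAddress) {
        long index = (storeAddress - heapStart) >>> CARD_SHIFT;
        cardTable[(int) index] = 1;             // mark the card "dirty"
    }

    public static void main(String[] args) {
        init(0x1000_0000L, 64L * 1024 * 1024);  // pretend 64 MB heap at an invented base
        markCard(0x1000_0000L + 4096);          // a store 4096 bytes into the heap
        System.out.println("card 8 dirty? " + (cardTable[8] == 1));
    }
}

This only works because the card table is indexed by a simple offset from a single heap base, which is exactly why the heap has to be one contiguous region.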
The Java heap size limits for Windows are:
maximum possible heap size on 32-bit Java: 1.8 GB
recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
This doesn't help you get a bigger Java heap, but now you know you can't go beyond these values.
Oracle JRockit, which can handle a non-contiguous heap, can have a Java heap size of 2.85 GB on Windows 2003/XP with the /3GB switch. It seems that fragmentation can have quite an impact on how large a Java heap can be.
The JVM needs contiguous memory, and depending on what else is running, what was running before, and how Windows has managed memory, you may be able to get up to 1.4GB of contiguous memory. I think 64-bit Windows will allow larger heaps.
Sun's JVM needs contiguous memory. So the maximum amount of available heap is dictated by memory fragmentation. Driver DLLs in particular tend to fragment memory when they load at predefined base addresses, so your hardware and its drivers determine how much memory you can get.
Two sources for this with statements from Sun engineers: a forum post and a blog entry.
Maybe another JVM? Have you tried Harmony? I think they planned to allow non-contiguous memory.
I think it has more to do with how Windows is configured as hinted by this response:
Java -Xmx Option
Some more testing: I was able to allocate 1300MB on an old Windows XP machine with only 768MB physical RAM (plus virtual memory). On my 2GB RAM machine I can only get 1220MB. On various other corporate machines (with older Windows XP) I was able to get 1400MB. The machine with a 1220MB limit is pretty new (just purchased from Dell), so maybe it has newer (and more bloated) Windows and DLLs (it's running Windows XP Pro Version 2002 SP2).
I got this error message when running a java program from a (limited memory) virtuozzo VPS. I had not specified any memory arguments, and found I had to explicitly set a small amount as the default must have been too high. E.g. -Xmx32m (obviously needs to be tuned depending on the program you run).
Just putting this here in case anyone else gets the above error message without specifying a large amount of memory like the questioner did.
Sun's JDK/JRE needs a contiguous amount of memory if you allocate a huge block.
The OS and initial apps tend to allocate bits and pieces during loading, which fragments the available address space. If a contiguous block is NOT available, the Sun JDK cannot use it. JRockit from BEA (acquired by Oracle) can allocate memory from pieces.
Everyone seems to be answering about contiguous memory, but has neglected to acknowledge a more pressing issue.
Even with 100% contiguous memory allocation, you can't have a 2 GiB heap size on a 32-bit Windows OS (*by default). This is because 32-bit Windows processes cannot address more than 2 GiB of space.
The Java process will contain perm gen (pre Java 8), stack size per thread, JVM / library overhead (which pretty much increases with each build) all in addition to the heap.
Furthermore, JVM flags and their default values change between versions. Just run the following and you'll get some idea:
java -XX:+PrintFlagsFinal -version
Lots of the options affect memory division in and out of the heap. Leaving you with more or less of that 2 GiB to play with...
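As a rough illustration of how the heap is only one slice of the process (and why the usable heap is less than the 2 GiB address space), here is a generic sketch using the standard java.lang.management API; it is not taken from the answer above:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class MemorySplit {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
        // Heap max corresponds roughly to -Xmx; non-heap covers things like
        // class metadata (perm gen / metaspace) and the JIT code cache,
        // all of which share the same process address space.
        System.out.println("Heap max:      " + heap.getMax() / (1024 * 1024) + " MB");
        System.out.println("Non-heap used: " + nonHeap.getUsed() / (1024 * 1024) + " MB");
    }
}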
To reuse portions of this answer of mine (about Tomcat, but applies to any Java process):
The Windows OS limits the memory allocation of a 32-bit process to 2 GiB in total (by default).
[You will only be able] to allocate around 1.5 GiB heap space because there is also other memory allocated to the process (the JVM / library overhead, perm gen space etc.).
Why does 32-bit Windows impose a 2 GB process address space limit, but 64-bit Windows imposes a 4 GB limit?
Other modern operating systems [cough Linux] allow 32-bit processes to use all (or most) of the 4 GiB addressable space.
That said, 64-bit Windows OS's can be configured to increase the limit of 32-bit processes to 4 GiB (3 GiB on 32-bit):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
Here is how to increase the paging file size:
Right-click on My Computer -> Properties -> Advanced.
In the Performance section, click Settings.
Click the Advanced tab.
In the Virtual memory section, click Change. It will show your current paging size.
Select a drive where HDD space is available.
Provide the initial size and max size, e.g. initial size 0 MB and max size 4000 MB (as much as you will require).
There are numerous ways to change the heap size, e.g.:
File -> Settings -> Build, Execution, Deployment -> Compiler: here you will find the heap size.
File -> Settings -> Build, Execution, Deployment -> Compiler -> Android: here you will also find a heap size setting. You can refer to this for an Android project if you are facing the same issue.
What worked for me was:
Set the appropriate JAVA_HOME path in case your Java got updated.
Create a new system variable: Computer -> Properties -> Advanced settings -> create new system variable
Name: _JAVA_OPTIONS, Value: -Xmx750m
FYI:
You can find the default VM options in IntelliJ under Help -> Edit Custom VM Options; in this file you see the min and max heap size.
First, using a page file when you have 4 GB of RAM is useless. Windows can't access more than 4 GB (actually, less, because of memory holes) so the page file is not used.
Second, the address space is split in two: half for the kernel, half for user mode. If you need more address space for your applications, use the /3GB option in boot.ini and make sure java.exe is marked as "large address aware" (google for more info).
Third, I think you can't allocate the full 2 GB of address space because Java uses some memory internally (for threads, the JIT compiler, VM initialization, etc.). Use the /3GB switch for more.
Related
The theoretical maximum heap value that can be set with -Xmx in a 32-bit system is of course 2^32 bytes, but typically (see: Understanding max JVM heap size - 32bit vs 64bit) one cannot use all 4GB.
For a 64-bit JVM running in a 64-bit OS on a 64-bit machine, is there any limit besides the theoretical limit of 2^64 bytes or 16 exabytes?
I know that for various reasons (mostly garbage collection), excessively large heaps might not be wise, but in light of reading about servers with terabytes of RAM, I'm wondering what is possible.
If you want to use 32-bit references, your heap is limited to 32 GB.
However, if you are willing to use 64-bit references, the size is likely to be limited by your OS, just as it is with 32-bit JVM. e.g. on Windows 32-bit this is 1.2 to 1.5 GB.
Note: you will want your JVM heap to fit into main memory, ideally inside one NUMA region. That's about 1 TB on the bigger machines. If your JVM spans NUMA regions, memory access, and the GC in particular, will take much longer. If your JVM heap starts swapping, it might take hours to GC, or even make your machine unusable as it thrashes the swap drive.
Note: You can access large direct memory and memory mapped sizes even if you use 32-bit references in your heap. i.e. use well above 32 GB.
Compressed oops in the Hotspot JVM
Compressed oops represent managed pointers (in many but not all places in the JVM) as 32-bit values which must be scaled by a factor of 8 and added to a 64-bit base address to find the object they refer to. This allows applications to address up to four billion objects (not bytes), or a heap size of up to about 32Gb. At the same time, data structure compactness is competitive with ILP32 mode.
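A hedged sketch of the decode arithmetic described above, with an invented base address; the real decoding is done inside the JVM, not in application code:

public class CompressedOopSketch {
    // Invented heap base address, for illustration only.
    static final long HEAP_BASE = 0x0000_0008_0000_0000L;
    static final int SHIFT = 3; // scale by 8, the default object alignment

    // A 32-bit compressed reference is scaled by 8 and added to the base
    // to recover the full 64-bit address of the object it refers to.
    static long decode(int compressedOop) {
        return HEAP_BASE + ((compressedOop & 0xFFFF_FFFFL) << SHIFT);
    }

    public static void main(String[] args) {
        // The largest 32-bit value reaches base + (2^32 - 1) * 8,
        // i.e. roughly 32 GB above the base -- hence the ~32 GB heap limit.
        System.out.printf("Highest reachable address: 0x%x%n", decode(0xFFFF_FFFF));
    }
}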
The answer clearly depends on the JVM implementation. Azul claim that their JVM
can scale ... to more than a 1/2 Terabyte of memory
By "can scale" they appear to mean "runs well", as opposed to "runs at all".
Windows imposes a memory limit per process; you can see what it is for each version here.
See:
User-mode virtual address space for each 64-bit process:
With IMAGE_FILE_LARGE_ADDRESS_AWARE set (the default): x64: 8 TB; Intel IPF: 7 TB
With IMAGE_FILE_LARGE_ADDRESS_AWARE cleared: 2 GB
I tried -Xmx32255M, and it is accepted by the VM args for compressed oops.
For a 64-bit JVM running in a 64-bit OS on a 64-bit machine, is there any limit besides the theoretical limit of 2^64 bytes or 16 exabytes?
You also have to take hardware limits into account. While pointers may be 64-bit, current CPUs can only address less than 2^64 bytes of virtual memory (x86-64, for example, uses 48-bit virtual addresses).
With uncompressed pointers the HotSpot JVM needs a contiguous chunk of virtual address space for its heap, so the second hurdle after hardware is the operating system providing such a large chunk; not all OSes support this.
And the third one is practicality. Even if you can have that much virtual memory, it does not mean the CPUs support that much physical memory, and without physical memory you will end up swapping, which will adversely affect the performance of the JVM because the GCs generally have to touch a large fraction of the heap.
As other answers mention regarding compressed oops: by bumping the object alignment higher than 8 bytes, the compressed-oops limit can be increased beyond 32 GB.
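As a back-of-the-envelope illustration of that alignment trade-off (2^32 possible references times the object alignment gives the addressable heap), assuming HotSpot's -XX:ObjectAlignmentInBytes flag is what raises the alignment:

public class CompressedOopLimit {
    public static void main(String[] args) {
        // 2^32 possible compressed references, each scaled by the object alignment.
        for (int alignment : new int[] {8, 16, 32}) {
            long maxHeapBytes = (1L << 32) * alignment;
            System.out.println("alignment " + alignment + " bytes -> ~"
                    + (maxHeapBytes >> 30) + " GB max compressed-oops heap");
        }
    }
}

The catch is that a larger alignment wastes padding in every object, so the extra addressable heap is not free.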
In theory everything is possible, but in reality you will find the numbers much lower than you might expect.
I have often tried to address huge heaps on servers and found that even though a server can have huge amounts of memory, most software can never actually use it in real scenarios, simply because the CPUs are not fast enough to really work through it.
Why would I say that? Timing: that is the endless downfall of every enormous machine I have worked on.
So I would advise not to go overboard by addressing huge amounts just because you can, but to use what you think could actually be used.
Actual values are often much lower than what you expected.
Of course, none of us really uses HP 9000 systems at home, and most of you will never actually come near the capacity of your home system.
For instance, most users do not have more than 16 GB of memory in their system. Of course some casual gamers use workstations for a game once a month, but I bet that is a very small percentage.
So coming down to earth: on an 8 GB 64-bit system I would address not much more than 512 MB for heap space, or if you go overboard, try 1 GB. I am pretty sure even these numbers are pure overkill.
I have constantly monitored memory usage during gaming to see if the setting would make any difference, but did not notice any difference at all whether I set much lower or larger values. Even on servers/workstations there was no visible change in performance no matter how large I set the values.
That does not mean some Java users could not make use of more addressed space, but so far I have not seen any application needing that much.
Of course, I assume there would be a small difference in performance if Java instances actually ran out of heap space to work with.
So far I have not found any of that at all; however, a lack of real installed memory caused instant drops in performance when too much heap space was set.
When you have a 4 GB system, you quickly run out of memory when too much heap space is addressed that is not actually free in the system, so the OS starts using drive space to make up for the shortage and begins to swap, and you see errors and slowdowns.
I have defined a jnlp file with initial-heap-size="512m" max-heap-size="1024m" on a machine that has 16Gb with 12Gb available. The JVM running is a 32-bit JVM because of native libraries. I understand that I must have 1Gb of contiguous memory available to allocate that max. If I reduce the max-heap-size to 768, then it runs as normal, and sometimes I don't need to reduce it.
Two questions:
Why is the machine checking max-heap-size initially before the JVM starts up? Are there assertions that are being performed?
Why would I not be able to allocate the full 1Gb from the get go if I have 12Gb available - assuming that there is a contiguous 1Gb block available?
If you can't use the 64-bit JVM...
If on Windows, using 32-bit JVM, you need to research "large address aware" (JVM compiled/linked with /LARGEADDRESSAWARE option). It will allow you to use larger memory footprint with 32-bit. You can set the bit on a particular executable.
Drawbacks of using /LARGEADDRESSAWARE for 32 bit Windows executables?
If on Linux, look at your system hard or soft limits. That may be limiting the max size of your process. Also, you may have process control groups.
For ulimits, on Linux/UNIX try
ulimit -a
Specifically look at ulimit -m or -v settings. If those are unlimited, you may be experiencing another type of control mechanism.
Partial educated guess to:
Why would I not be able to allocate the full 1Gb from the get go if I have 12Gb available - assuming that there is a contiguous 1Gb block available?
You wrote that the JVM running is a 32-bit JVM because of native libraries. Note that those native libraries are then part of your process, and even if they don't allocate huge amounts of memory, they may nevertheless fragment the 2 GB virtual address space (which you're limited to without LARGEADDRESSAWARE) so that you may not have a contiguous 1 GB block left. You might want to study the native libraries' base addresses and rebasing. There's a free tool called VMMap, which is great for studying these issues.
I have a question that bothered me after reading an article on analyzing thread dumps. There was one paragraph which mentioned that the logical maximum heap size in a 32-Bit JVM is 4GB.
This link states that the maximum heap size on a 32-bit Windows machine will be around 1.4 - 1.6 GB.
My question is: say you have around 8GB of RAM, does this mean I can only utilize 1.4-1.6 GB of it if I were to use a 32-bit JVM? And what will be the maximum size allowed for a 64-bit JVM?
I'd appreciate your help regarding this, as I am confused about it.
Specifically on Windows, the reason is a combination of the implementation of HotSpot (the Sun/Oracle JVM) and Windows DLLs.
32-bit code has access to a 4 GB virtual address space (there are extensions that allow more, but I won't be going into those).
On 32-bit Windows the upper 2 GB of this virtual address space is reserved for operating system use (some versions of the OS accept the /3GB flag as a boot parameter to allow for 3 GB of user-accessible space).
Also, any libraries (*.dlls) you use are mapped into parts of this address space. By default the Windows base *.dll files are loaded at the ~1.6 GB mark (this differs slightly by OS version and patch level).
On top of all this, the HotSpot JVM only supports allocating a single, contiguous chunk of memory for use as heap space.
So, if you try to picture this in your head, you'll see that you have a free area of ~2 GB with a "wall" of Windows *.dlls loaded at ~1.6 GB. This is the logic behind that figure. It also means that even if you provide the /3GB flag, the Sun/Oracle JVM will not be able to make use of it. Some other VMs are better at handling a fragmented heap, like the JRockit VM.
You could also try rebasing the Windows DLLs so that they load into higher memory addresses and squeeze out some more usable heap space, but the process is fragile.
Also note that it's very possible that drivers/applications loaded on a particular machine (like antivirus software) will inject their own *.dlls into a Java process, and those DLLs can be loaded at even lower memory addresses, further shrinking your usable heap space.
On 64-bit versions of Windows the addressable limit is 8-128 TB, and the physical limit stands at 64 TB right now.
2^32 = 4 GB is the maximum total memory you can address with 32 bits.
The JVM only gets 1.4-1.6 GB on 32-bit machines because you still have to accommodate the operating system.
2^64 = (2^32)^2 is the maximum total memory you can address with 64 bits. As you can see, it's a much larger number.
The JVM and the OS use a paged virtual memory management system that gives each process a 4 GB virtual address space on a 32-bit OS.
But if you have 8 GB of RAM, you must use a 64-bit version of the OS to get the most out of it.
It depends on your operating system; 32-bit versions of Mac OS X and Linux have some ability to access more than 4 GB in the kernel but still limit processes to 4 GB. Other operating systems may restrict process memory further since they need part of the 4 GB for themselves. In general you want to avoid your JVM being swapped out to virtual memory, so you need to know how much free memory your system has.
I have a program that needs a lot of memory and want to set the maximum heap space at 6024MB.
Java gives me the error:
Invalid maximum heap size: -Xmx6024m
The specified size exceeds the maximum representable size.
Is there a workaround?
There are big differences in how much heap one can allocate between the different Java VMs. E.g., Sun's VM needs to allocate the memory as a single block from the OS; this limitation does not exist for Oracle's JRockit VM. It is also OS-dependent -- e.g. I was able to allocate more heap with Sun's VM on Linux than was possible with Windows XP. Also note that I read somewhere that the problem goes away on 64-bit OSes...
Edit:
Here's a blog entry about Sun's JVM and Java heap space issues on 32bit Windows OSes.
Is this a 64-bit VM? If so, you should be able to use the switch as you did.