Maximum amount of memory per Java process on Windows? - java

What is the maximum heap size that you can allocate on 32-bit Windows for a Java process using -Xmx?
I'm asking because I want to use the ETOPO1 data in OpenMap and the raw binary float file is about 910 MB.

There's nothing better than an empirical experiment to answer your question.
I wrote a Java program and ran it while specifying the -Xmx flag (and set -Xms equal to -Xmx to force the JVM to pre-allocate all of the memory).
To further protect against JVM optimizations, the program actively allocates a given number of 10 MB objects.
I ran a number of tests on several JVMs, increasing the -Xmx value together with the number of megabytes allocated, on different 32-bit operating systems using both Sun and IBM JVMs. Here's a summary of the results:
OS:Windows XP SP2, JVM: Sun 1.6.0_02, Max heap size: 1470 MB
OS: Windows XP SP2, JVM: IBM 1.5, Max heap size: 1810 MB
OS: Windows Server 2003 SE, JVM: IBM 1.5, Max heap size: 1850 MB
OS: Linux 2.6, JVM: IBM 1.5, Max heap size: 2750 MB
Here's the detailed run attempts together with the allocation class helper source code:
WinXP SP2, SUN JVM:
C:>java -version
java version "1.6.0_02"
Java(TM) SE Runtime Environment (build 1.6.0_02-b06)
Java HotSpot(TM) Client VM (build 1.6.0_02-b06, mixed mode)
java -Xms1470m -Xmx1470m Class1 142
...
about to create object 141
object 141 created
C:>java -Xms1480m -Xmx1480m Class1 145
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
WinXP SP2, IBM JVM
C:>c:\ibm\jdk\bin\java.exe -version
java version "1.5.0"
Java(TM) 2 Runtime Environment, Standard Edition (build pwi32devifx-20070323 (if
ix 117674: SR4 + 116644 + 114941 + 116110 + 114881))
IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows XP x86-32 j9vmwi3223ifx-2007
0323 (JIT enabled)
J9VM - 20070322_12058_lHdSMR
JIT - 20070109_1805ifx3_r8
GC - WASIFIX_2007)
JCL - 20070131
c:\ibm\jdk\bin\java.exe -Xms1810m -Xmx1810m Class1 178
...
about to create object 177
object 177 created
C:>c:\ibm\jdk\bin\java.exe -Xms1820m -Xmx1820m Class1 179
JVMJ9VM015W Initialization error for library j9gc23(2): Failed to instantiate he
ap. 1820M requested
Could not create the Java virtual machine.
Win2003 SE, IBM JVM
C:>"C:\IBM\java" -Xms1850m -Xmx1850m Class1
sleeping for 5 seconds.
Done.
C:>"C:\IBM\java" -Xms1880m -Xmx1880m
Class1
JVMJ9VM015W Initialization error for library j9gc23(2): Failed to instantiate he
ap. 1880M requested
Could not create the Java virtual machine.
Linux 2.6, IBM JVM
[root@myMachine ~]# /opt/ibm/java2-i386-50/bin/java -version
java version "1.5.0"
Java(TM) 2 Runtime Environment, Standard Edition (build pxi32dev-20060511 (SR2))
IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux x86-32 j9vmxi3223-20060504 (JIT enabled)
J9VM - 20060501_06428_lHdSMR
JIT - 20060428_1800_r8
GC - 20060501_AA)
JCL - 20060511a
/opt/ibm/java2-i386-50/bin/java -Xms2750m -Xmx2750m Class1 270
[root@myMachine ~]# /opt/ibm/java2-i386-50/bin/java -Xms2800m -Xmx2800m Class1 270
JVMJ9VM015W Initialization error for library j9gc23(2): Failed to instantiate heap. 2800M requested
Could not create the Java virtual machine.
Here's the code:
public class Class1 {

    public Class1() {}

    // Each BigObject holds a 10 MB byte array, so N objects means roughly N * 10 MB of heap.
    private class BigObject {
        byte _myArr[];

        public BigObject() {
            _myArr = new byte[10000000];
        }
    }

    public static void main(String[] args) {
        (new Class1()).perform(Integer.parseInt(args[0]));
    }

    public void perform(int numOfObjects) {
        System.out.println("creating 10 MB arrays.");
        BigObject arr[] = new BigObject[numOfObjects];
        for (int i = 0; i < numOfObjects; i++) {
            System.out.println("about to create object " + i);
            arr[i] = new BigObject();
            System.out.println("object " + i + " created");
        }
        System.out.println("sleeping for 5 seconds.");
        try {
            Thread.sleep(5000);
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("Done.");
    }
}

For a large file I suggest you use a memory mapped file. This doesn't use heap space (or very little) so maximum heap size shouldn't be a problem in this case.
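As a rough illustration (a minimal sketch, not OpenMap's own API: the file name etopo1_bed.flt and the little-endian byte order are assumptions about the raw ETOPO1 float grid, and try-with-resources needs Java 7+):

import java.io.RandomAccessFile;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedEtopoReader {
    public static void main(String[] args) throws Exception {
        // Map the ~910 MB raw float file into virtual memory; nothing is copied onto the Java heap.
        try (RandomAccessFile raf = new RandomAccessFile("etopo1_bed.flt", "r");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            buf.order(ByteOrder.LITTLE_ENDIAN); // assumed byte order of the raw grid
            // Read samples on demand; only the pages actually touched become resident.
            float firstSample = buf.getFloat(0);
            System.out.println("first sample = " + firstSample);
        }
    }
}

A single MappedByteBuffer is limited to 2 GB, which is comfortably enough for a 910 MB file.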

We recently ported from Windows to Linux (because of VM size issues).
I have heard lots of numbers thrown around in the past for Windows VM size (1200, 1400, 1600, 1800). On our Windows Server 2003 machines, in our environment, with our applications, I have never successfully used more than 1280 MB. Beyond that our application started exhibiting GC and OOM issues.
Every time I got a new VM version I tried changing the number, and it never varied.
You have a 900 MB file now; what if it grows to 1300 MB? What will you do then?
You have a number of options:
Port to Linux/Solaris. This just needs hardware and software, and is often a simple porting exercise.
Use 64-bit Windows. This may not be free of GC issues though; I have heard different tales about 64-bit VMs.
Redesign the app to process the file differently. Can you split the file logically in some way? Can you read it in chunks and process it that way?
Other people using OpenMap must have encountered this issue. Can you tap into their knowledge rather than re-invent the wheel?

As noted in the question mentioned in the comment, there is a practical limit, circa 1200 MB.
However, the situation you're describing has more depth to it than sheer memory size.
When you read 910 MB of binary data and build a network of objects from it (as opposed to just keeping the data as an array of bytes), you end up consuming much more memory than 910 MB. A reasonable estimate is that the in-memory representation will consume twice as much memory, because (1) each object carries an additional pointer (to the object's class); and (2) there is a lot of bookkeeping data. For instance, if you use a HashMap to manage your objects, then in addition to each object you also allocate a Map.Entry object, which can easily consume 16 or 20 bytes (implementation dependent).
On the other hand, there is still hope: do you really need to keep all 910 MB in memory? Can't you build something that reads the data lazily? Combined with WeakReferences, I think you can pull this off.
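Something along these lines, as a rough sketch (class and method names are hypothetical; it uses SoftReference rather than WeakReference so cached blocks survive until memory actually gets tight, and a collected block is simply re-read from disk):

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class TileCache {
    // Maps a tile index to a softly-referenced block of elevation samples.
    private final Map<Integer, SoftReference<float[]>> cache = new HashMap<Integer, SoftReference<float[]>>();

    public float[] getTile(int tileIndex) {
        SoftReference<float[]> ref = cache.get(tileIndex);
        float[] tile = (ref != null) ? ref.get() : null;
        if (tile == null) {
            tile = loadTileFromDisk(tileIndex); // re-read if it was never loaded or got collected
            cache.put(tileIndex, new SoftReference<float[]>(tile));
        }
        return tile;
    }

    private float[] loadTileFromDisk(int tileIndex) {
        // Placeholder: seek to the right offset in the ETOPO1 file and read one block of floats.
        return new float[1024 * 1024];
    }
}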

On 32-bit Windows, by default, every application can use up to 2 GB of virtual address space. I guess this means -Xmx2048M. However, if you have more RAM installed, you can increase the virtual address space up to 3 GB by using boot-time parameters.
In boot.ini, you can create a new boot option like this:
[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional - magyar" /noexecute=optin /fastdetect /usepmtimer
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional - magyar 3GB" /noexecute=optin /fastdetect /usepmtimer /3GB /USERVA=2800
Here by adjusting the /USERVA=2800 parameter, you can tune your machine. But be aware that some configurations don't like high values in this parameter - expect crashes.

Related

Maximum heap size for a Java process on Windows 10 64-bit running a 64-bit JVM

What is the maximum heap size for a Java process running on Windows 10 64-bit with a 64-bit JVM? My machine has 8 GB of RAM, and I am running Java 8.
I am trying to run BFS on a huge graph for experimental purposes. While running BFS I am monitoring the heap size used in Java VisualVM. According to VisualVM, heap utilization is always less than 2000 MB regardless of providing the following JVM parameters:
-Xms2048m
-Xmx3072m
-XX:ReservedCodeCacheSize=240m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
I did some research on the internet but could not find any specific answer for the system specification I am using. Can a Java process use more than 2 GB on Windows 10 64-bit with a 64-bit JVM? According to the guidelines for Java heap sizing, the limit for Windows XP/2008/7 is 2 GB.
On a 64-bit machine with a 64-bit JVM you can work with multi-gigabyte heaps (dozens and dozens of GB). I'm not sure it's limited by anything except the available memory (and the theoretical address space of a 64-bit pointer).
Of course, if you're working with a huge heap, the GC has a lot more work to do, and you may find that you need to scale horizontally instead of vertically to maintain good performance.
If VisualVM isn't showing you using more than 2GB (the initial heap size given with -Xms), then it probably just doesn't need more than that. You've given the permission to use up to 3GB (-Xmx), but the JVM won't allocate more memory just for the fun of it.
The maximum heap that can be allocated for a 32-bit JVM is 2^32 = 4 GB, and out of that 4 GB, 1+ GB is taken by the VM for runtime classes and its own use. It also varies by OS: on Windows it is ~2 GB and on Linux ~3 GB.
As you are using a 64-bit machine, the maximum theoretical heap is 2^64, which is easily big enough to run BFS.
You can check the available maximum heap using the VM flag -XX:+PrintFlagsFinal, e.g. java -XX:+PrintFlagsFinal -version | grep -iE HeapSize, which will tell you the maximum heap size that can be used. Configure slightly less than that and start from there.
There is no definite size you can specify for a 64-bit architecture, but a simple test helps you find the maximum contiguous space that could be allocated for a process. It can be tested with the following simple command.
Try, for example:
java -Xmx<size> -version
If the above command prints the version output, your system will accept an -Xmx of that size; if it fails, you can't specify that value.
A few tests from my system:
I tested with 20G, 40G, 100G, 160G and 300G; all of these gave the java -version output, but 1600G threw the error below.
Output of the test
C:\Users\mpalanis>java -Xmx300G -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
C:\Users\mpalanis>java -Xmx1600G -version
Error occurred during initialization of VM
Unable to allocate 52431424KB bitmaps for parallel garbage collection for the requested 1677805568KB heap.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Hope this explanation helps.
If you are using IntelliJ IDEA as your IDE, you can do this directly from it:
From the main menu, select Help | Change Memory Settings
Set the necessary amount of memory that you want to allocate and click Save and Restart.
This changes the value of the -Xmx option used by the JVM and restarts IntelliJ IDEA with the new setting.

java on osx - xmx ignored?

I've got two computers, one running Mac OS X El Capitan and one running Ubuntu 16.04 LTS. Java SDK 1.8.0_101 is installed on both.
When I try to start a game server on Ubuntu with more memory than is available, I get the following output:
$ java -Xms200G -Xmx200G -jar craftbukkit.jar nogui
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f9b6e580000, 71582613504, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 71582613504 bytes for committing reserved memory.
# An error report file with more information is saved as:
#
On Mac OS I don't get this error:
$ java -Xms200G -Xmx200G -jar craftbukkit.jar nogui
Loading libraries, please wait...
Both computers have 8 GB of memory. I also tried with another Apple computer - same problem.
Is that a bug in the Mac version of Java?
(Is there a way to force-limit the memory usage, e.g. to 1 GB? -Xmx1G doesn't do it, and I wasn't able to find another way, neither here nor on Google.)
Thank you!
Sorry for my bad english...
It's a difference in how the operating systems work. Linux has a concept of a 'fixed' swap - this is based on physical RAM + the various swapfiles/swap partitions added to the system. This is considered the maximum limit of memory that can be committed.
OSX doesn't consider swap as fixed. It will continue to add swapfiles as more and more memory is committed on the operating system (you can see the files being added in /var/vm).
As a result, you can ask OSX for significantly more memory than is available and it will effectively reply with 'ok', while under linux it will go 'no'.
The upper-bound limit is still enforced by Java: once the heap grows above the specified size, it will throw a java.lang.OutOfMemoryError: Java heap space exception, so if you're specifying -Xmx1G then it should be enforced by the JRE.
You can see the difference with a simple test program:
import java.util.Vector;

public class memtest {
    public static void main(String args[]) throws Exception {
        // Keep allocating 128 KB arrays until the heap limit is hit.
        Vector<byte[]> v = new Vector<byte[]>();
        while (true) {
            v.add(new byte[128 * 1024]);
            System.out.println(v.size());
        }
    }
}
If this program is run with -Xmx100M it dies with a Java heap space message after ~730 iterations, when run with -Xmx1G it dies with a Java heap space message after ~7300 iterations, showing that the limit is being enforced by the java virtual machine.

JVM ignoring Xms setting

I have the following Java program that I don't expect to need much java heap:
SleepTest.java
public class SleepTest {
    public static void main(String[] args) {
        while (true) {
            try {
                Thread.sleep(100000);
            } catch (InterruptedException e) {
            }
        }
    }
}
Therefore I try to tell the JVM this by specifying an initial heap size using the -Xms setting. However, it seems the JVM just ignores it and allocates the default size. Why is this?
To illustrate I start my program three times:
abc@yyy:~> java SleepTest & // default - 512 MB
[1] 15745
abc@yyy:~> java -Xms10M SleepTest & // specifying initial size of 10 MB
[2] 15756
abc@yyy:~> java -Xmx10M SleepTest & // forcing max heap size of 10 MB
[3] 15766
And then investigate the memory consumption on Linux:
abc@yyy:~/ebbe> grep VmSize /proc/15745/status /proc/15756/status /proc/15766/status
/proc/15745/status:VmSize: 688828 kB // default
/proc/15756/status:VmSize: 688892 kB // specifying initial size of 10 MB
/proc/15766/status:VmSize: 166596 kB // forcing max heap size of 10 MB
Here the size of process 15756 indicates that it just allocated the same as default.
I'm using the following JVM:
java version "1.6.0"
Java(TM) SE Runtime Environment (build pxa6460sr16fp1-20140706_01(SR16 FP1))
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64 jvmxa6460sr16-20140626_204542 (JIT enabled, AOT enabled)
J9VM - 20140626_204542
JIT - r9_20130920_46510ifx7
GC - GA24_Java6_SR16_20140626_1848_B204542)
JCL - 20140704_01
And I'm running on
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3
The VM does honor the -Xms hint, but if you set it too low, the heap will immediately grow to satisfy allocation requests.
You conclude your code "doesn't need much memory" based on what's in your code. You do not take into account the memory consumed by the startup of the VM/JRE. Beware that due to the dependencies the VM has to load a lot of the JRE classes before it even begins executing your code.
The -Xms option is meant to ensure that the VM allocates at least that much heap; it's the lower limit for the heap size. The VM is always free to allocate more. That's where -Xmx comes into the picture (it defines the upper limit).
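A quick way to see both limits from inside a program (a minimal sketch; exactly how much is committed at startup still varies by VM and GC):

public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is the heap currently committed (driven by -Xms at startup),
        // maxMemory() is the ceiling set by -Xmx.
        System.out.println("committed: " + (rt.totalMemory() / (1024 * 1024)) + " MB");
        System.out.println("max:       " + (rt.maxMemory() / (1024 * 1024)) + " MB");
    }
}

Running it as, say, java -Xms10m -Xmx64m HeapReport versus running it with no options makes the difference between the lower hint and the upper limit visible.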

What in Java is using 400M in virtual memory and how do I lower that usage?

Simple program:
public class SleepTest {
    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(60 * 1000);
    }
}
Then
$ javac SleepTest.java
$ java -cp . SleepTest
For OpenJDK 1.6.0_20 this uses 600M of virtual memory on my machine! That is, "top" shows "VIRT" 600M and RES 10m. (I am on Ubuntu 10.04, 32-bit or 64-bit).
For Sun's Java 1.6.0_22 it uses 400M of virtual memory.
What is using all that virtual memory, and how do I lower that usage?
Full "java -version":
OpenJDK:
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.4) (6b20-1.9.4-0ubuntu1~10.04.1)
OpenJDK Client VM (build 19.0-b09, mixed mode, sharing)
Sun:
$ /usr/lib/jvm/java-6-sun-1.6.0.22/jre/bin/java -version
java version "1.6.0_22"
Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
Java HotSpot(TM) Client VM (build 17.1-b03, mixed mode, sharing)
Edit:
Compiling with javac from either package doesn't seem to help.
Adding some code to print used memory is as follows:
private static String megabyteString(long bytes) {
    return String.format("%.1f", ((float) bytes) / 1024 / 1024);
}

private static void printUsedMemory() {
    Runtime run = Runtime.getRuntime();
    long free = run.freeMemory();
    long total = run.totalMemory();
    long max = run.maxMemory();
    long used = total - free;
    System.out.println("Memory: used " + megabyteString(used) + "M"
            + " free " + megabyteString(free) + "M"
            + " total " + megabyteString(total) + "M"
            + " max " + megabyteString(max) + "M");
}
shows
Sun:
Memory: used 0.3M free 15.2M total 15.5M max 247.5M
OpenJDK:
Memory: used 0.2M free 15.3M total 15.5M max 494.9M
even with -Xmx5m, so it must have a minimum? I've read about the defaults before (they depend on the JVM and the machine; a common default strategy is one quarter of physical memory), but is that what causes the large virtual memory use, and can I not decrease it?
Edit #2:
Adding -Xmx changes things:
$ java -Xmx5m -cp . SleepTest
OpenJDK:
$ java -Xmx5m SleepTest
Memory: used 0.2M free 4.7M total 4.9M max 5.8M
uses "only" 150M of virtual memory for either JVM.
Edit #3:
nos, bestsss, Mikaveli, and maybe others pointed out that virtual memory does not use swap. nos claims the OOM killer is smart enough to go by real memory usage. If those things are true, then I guess I don't care about virtual memory usage. RES (resident size) is small, so I'm good.
Edit #4:
Not sure which answer to accept. Either of these, if it shows up as an answer: "Don't worry about it because virtual memory is cheap" or some explanation of why Java reserves at least 150M in virtual memory no matter what -Xmx or -Xms I give it, even though real memory usage is tiny.
Edit #5:
This is a dup. I voted to close.
"RES" is resident set size - essentially the physical memory you're using for that process.
From your example, top states that it's using 10 MiB - that's roughly what I'd expect for a -Xmx setting of 5m (the total physical memory used often seems to be double, from my experience of Java on *nix systems.
Are you actually getting any memory issues or are you just concerned about the misleading output from "top"?
Also, the *nix virtual memory includes of the available memory space - physical and swap. If the process isn't using any "swap", then it is only using physical "resident" memory.
Stack Overflow answer to why JVM uses more memory than just heap setting.
$ java -cp . SleepTest -Xmx5m
Wow! Placed there, -Xmx5m is an argument you will find in public static void main(String[] a). It is a parameter to the Java program, NOT to the VM.
Move it first:
$ java -Xmx5m -cp . SleepTest
Try using -Xms to specify minimum memory?
The JVM takes some defaults for heap space (that is where all that memory usage comes from) into account. These defaults come from the machine characteristics (RAM, cores, etc.) and the JVM implementation (Sun/Oracle, OpenJDK).
I think -Xmx5m is too low and the JVM silently ignores the parameter.
Here is a good reference with lots of links to read:
Java Memory explained (SUN JVM)

Maximum Java heap size of a 32-bit JVM on a 64-bit OS

The question is not about the maximum heap size on a 32-bit OS, given that 32-bit OSes have a maximum addressable memory size of 4GB, and that the JVM's max heap size depends on how much contiguous free memory can be reserved.
I'm more interested in knowing the maximum (both theoretical and practically achievable) heap size for a 32-bit JVM running in a 64-bit OS. Basically, I'm looking at answers similar to the figures in a related question on SO.
As to why a 32-bit JVM is used instead of a 64-bit one, the reason is not technical but rather administrative/bureaucratic - it is probably too late to install a 64-bit JVM in the production environment.
You can ask the Java Runtime:
public class MaxMemory {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long totalMem = rt.totalMemory();
        long maxMem = rt.maxMemory();
        long freeMem = rt.freeMemory();
        double megs = 1048576.0;
        System.out.println("Total Memory: " + totalMem + " (" + (totalMem / megs) + " MiB)");
        System.out.println("Max Memory: " + maxMem + " (" + (maxMem / megs) + " MiB)");
        System.out.println("Free Memory: " + freeMem + " (" + (freeMem / megs) + " MiB)");
    }
}
This will report the "Max Memory" based upon default heap allocation. So you still would need to play with -Xmx (on HotSpot). I found that running on Windows 7 Enterprise 64-bit, my 32-bit HotSpot JVM can allocate up to 1577MiB:
[C:scratch]> java -Xmx1600M MaxMemory
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
[C:scratch]> java -Xmx1590M MaxMemory
Total Memory: 2031616 (1.9375 MiB)
Max Memory: 1654456320 (1577.8125 MiB)
Free Memory: 1840872 (1.75559234619 MiB)
[C:scratch]>
Whereas with a 64-bit JVM on the same OS, of course it's much higher (about 3TiB)
[C:scratch]> java -Xmx3560G MaxMemory
Error occurred during initialization of VM
Could not reserve enough space for object heap
[C:scratch]> java -Xmx3550G MaxMemory
Total Memory: 94240768 (89.875 MiB)
Max Memory: 3388252028928 (3184151.84297 MiB)
Free Memory: 93747752 (89.4048233032 MiB)
[C:scratch]>
As others have already mentioned, it depends on the OS.
For 32-bit Windows: it'll be <2GB (Windows internals book says 2GB for user processes)
For 32-bit BSD / Linux: <3GB (from the Devil Book)
For 32-bit MacOS X: <4GB (from Mac OS X internals book)
Not sure about 32-bit Solaris, but the code above has been tested in this answer.
For a 64-bit host OS, if the JVM is 32-bit, it'll still depend, most likely like above as demonstrated.
-- UPDATE 20110905: I just wanted to point out some other observations / details:
The hardware that I ran this on was 64-bit with 6GB of actual RAM installed. The operating system was Windows 7 Enterprise, 64-bit
The actual amount of Runtime.MaxMemory that can be allocated also depends on the operating system's working set. I once ran this while I also had VirtualBox running and found I could not successfully start the HotSpot JVM with -Xmx1590M and had to go smaller. This also implies that you may get more than 1590M depending upon your working set size at the time (though I still maintain it'll be under 2GiB for 32-bit because of Windows' design)
32-bit JVMs which expect to have a single large chunk of memory and use raw pointers cannot use more than 4 Gb (since that is the 32 bit limit which also applies to pointers). This includes Sun and - I'm pretty sure - also IBM implementations. I do not know if e.g. JRockit or others have a large memory option with their 32-bit implementations.
If you expect to be hitting this limit you should strongly consider starting a parallel track validating a 64-bit JVM for your production environment so you have that ready for when the 32-bit environment breaks down. Otherwise you will have to do that work under pressure, which is never nice.
Edit 2014-05-15: Oracle FAQ:
The maximum theoretical heap limit for the 32-bit JVM is 4G. Due to various additional constraints such as available swap, kernel address space usage, memory fragmentation, and VM overhead, in practice the limit can be much lower. On most modern 32-bit Windows systems the maximum heap size will range from 1.4G to 1.6G. On 32-bit Solaris kernels the address space is limited to 2G. On 64-bit operating systems running the 32-bit VM, the max heap size can be higher, approaching 4G on many Solaris systems.
(http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_heap_32bit)
You don't specify which OS.
Under Windows (for my application - a long running risk management application) we observed that we could go no further than 1280MB on Windows 32bit. I doubt that running a 32bit JVM under 64bit would make any difference.
We ported the app to Linux and we are running a 32bit JVM on 64bit hardware and have had a 2.2GB VM running pretty easily.
The biggest problem you may have is GC depending on what you are using memory for.
From 4.1.2 Heap Sizing:
"For a 32-bit process model, the maximum virtual address size of the
process is typically 4 GB, though some operating systems limit this to
2 GB or 3 GB. The maximum heap size is typically -Xmx3800m (1600m) for
2 GB limits), though the actual limitation is application dependent.
For 64-bit process models, the maximum is essentially unlimited."
Found a pretty good answer here: Java maximum memory on Windows XP.
We recently had some experience with this. We have ported from Solaris (x86-64 Version 5.10) to Linux (RedHat x86-64) recently and have realized that we have less memory available for a 32 bit JVM process on Linux than Solaris.
For Solaris this almost comes around to 4GB (http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_heap_32bit).
We ran our app with -Xms2560m -Xmx2560m -XX:MaxPermSize=512m -XX:PermSize=512m with no issues on Solaris for past couple of years. Tried to move it to linux and we had issues with random out of memory errors on start up. We could only get it to consistently start up on -Xms2300 -Xmx2300. Then we were advised of this by support.
A 32-bit process on Linux has a maximum addressable address space of 3 GB (3072 MB), whereas on Solaris it is the full 4 GB (4096 MB).
The limitations of a 32-bit JVM on a 64-bit OS will be exactly the same as the limitations of a 32-bit JVM on a 32-bit OS. After all, the 32-bit JVM will be running in a 32-bit virtual machine (in the virtualization sense), so it won't know that it's running on a 64-bit OS/machine.
The one advantage to running a 32-bit JVM on a 64-bit OS versus a 32-bit OS is that you can have more physical memory, and therefore will encounter swapping/paging less frequently. This advantage is only really fully realized when you have multiple processes, however.
As to why a 32-bit JVM is used instead of a 64-bit one, the reason is not technical but rather administrative/bureaucratic ...
When I was working for BEA, we found that the average application actually ran slower in a 64-bit JVM than it did in a 32-bit JVM. In some cases the performance hit was as high as 25%. So, unless your application really needs all that extra memory, you were better off setting up more 32-bit servers.
As I recall, the three most common technical justifications for using a 64-bit that BEA professional services personnel ran into were:
The application was manipulating multiple massive images,
The application was doing massive number crunching,
The application had a memory leak, the customer was the prime on a government contract, and they didn't want to take the time and expense of tracking down the memory leak. (Using a massive memory heap would increase the MTBF and the prime would still get paid.)
The JRockit JVM, currently owned by Oracle, supports non-contiguous heap usage, allowing the 32-bit JVM to access more than 3.8 GB of memory when running on a 64-bit Windows OS (2.8 GB when running on a 32-bit OS).
http://blogs.oracle.com/jrockit/entry/how_to_get_almost_3_gb_heap_on_windows
The JVM can be freely downloaded (registration required) at
http://www.oracle.com/technetwork/middleware/jrockit/downloads/index.html
Here is some testing under Solaris and Linux 64-bit
Solaris 10 - SPARC - T5220 machine with 32 GB RAM (and about 9 GB free)
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3750m MaxMemory
Error occurred during initialization of VM
Could not reserve space for ObjectStartArray
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3700m MaxMemory
Total Memory: 518520832 (494.5 MiB)
Max Memory: 3451912192 (3292.0 MiB)
Free Memory: 515815488 (491.91998291015625 MiB)
Current PID is: 28274
Waiting for user to press Enter to finish ...
$ java -version
java version "1.6.0_30"
Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
Java HotSpot(TM) Server VM (build 20.5-b03, mixed mode)
$ which java
/usr/bin/java
$ file /usr/bin/java
/usr/bin/java: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped, no debugging information available
$ prstat -p 28274
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
28274 user1 670M 32M sleep 59 0 0:00:00 0.0% java/35
BTW: Apparently Java does not allocate much actual memory at startup. It seemed to take only about 100 MB per instance started (I started 10).
Solaris 10 - x86 - VMWare VM with 8 GB RAM (about 3 GB free*)
The 3 GB free RAM is not really true. There is a large chunk of RAM that ZFS caches use, but I don't have root access to check how much exactly
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3650m MaxMemory
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3600m MaxMemory
Total Memory: 516423680 (492.5 MiB)
Max Memory: 3355443200 (3200.0 MiB)
Free Memory: 513718336 (489.91998291015625 MiB)
Current PID is: 26841
Waiting for user to press Enter to finish ...
$ java -version
java version "1.6.0_41"
Java(TM) SE Runtime Environment (build 1.6.0_41-b02)
Java HotSpot(TM) Server VM (build 20.14-b01, mixed mode)
$ which java
/usr/bin/java
$ file /usr/bin/java
/usr/bin/java: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, no debugging information available
$ prstat -p 26841
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
26841 user1 665M 22M sleep 59 0 0:00:00 0.0% java/12
RedHat 5.5 - x86 - VMWare VM with 4 GB RAM (about 3.8 GB used - 200 MB in buffers and 3.1 GB in caches, so about 3 GB free)
$ alias java='$HOME/jre/jre1.6.0_34/bin/java'
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3500m MaxMemory
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3450m MaxMemory
Total Memory: 514523136 (490.6875 MiB)
Max Memory: 3215654912 (3066.6875 MiB)
Free Memory: 511838768 (488.1274871826172 MiB)
Current PID is: 21879
Waiting for user to press Enter to finish ...
$ java -version
java version "1.6.0_34"
Java(TM) SE Runtime Environment (build 1.6.0_34-b04)
Java HotSpot(TM) Server VM (build 20.9-b04, mixed mode)
$ file $HOME/jre/jre1.6.0_34/bin/java
/home/user1/jre/jre1.6.0_34/bin/java: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
$ cat /proc/21879/status | grep ^Vm
VmPeak: 3882796 kB
VmSize: 3882796 kB
VmLck: 0 kB
VmHWM: 12520 kB
VmRSS: 12520 kB
VmData: 3867424 kB
VmStk: 88 kB
VmExe: 40 kB
VmLib: 14804 kB
VmPTE: 96 kB
Same machine using JRE 7
$ alias java='$HOME/jre/jre1.7.0_21/bin/java'
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3500m MaxMemory
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
$ java -XX:PermSize=128M -XX:MaxPermSize=256M -Xms512m -Xmx3450m MaxMemory
Total Memory: 514523136 (490.6875 MiB)
Max Memory: 3215654912 (3066.6875 MiB)
Free Memory: 511838672 (488.1273956298828 MiB)
Current PID is: 23026
Waiting for user to press Enter to finish ...
$ java -version
java version "1.7.0_21"
Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
Java HotSpot(TM) Server VM (build 23.21-b01, mixed mode)
$ file $HOME/jre/jre1.7.0_21/bin/java
/home/user1/jre/jre1.7.0_21/bin/java: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped
$ cat /proc/23026/status | grep ^Vm
VmPeak: 4040288 kB
VmSize: 4040288 kB
VmLck: 0 kB
VmHWM: 13468 kB
VmRSS: 13468 kB
VmData: 4024800 kB
VmStk: 88 kB
VmExe: 4 kB
VmLib: 10044 kB
VmPTE: 112 kB
Should be a lot better
For a 32-bit JVM running on a 64-bit host, I imagine what's left over for the heap will be whatever unfragmented virtual space is available after the JVM, its own DLLs, and any OS 32-bit compatibility stuff have been loaded. As a wild guess I would think 3 GB should be possible, but how much better that is depends on how well you are doing in 32-bit-host-land.
Also, even if you could make a giant 3 GB heap, you might not want to, as this could make GC pauses troublesome. Some people just run more JVMs to use the extra memory rather than one giant one. I imagine the JVMs are being tuned right now to work better with giant heaps.
It's a little hard to know exactly how much better you can do. I guess your 32-bit situation can easily be determined by experiment. It's certainly hard to predict abstractly, as a lot of things factor into it, particularly because the virtual space available on 32-bit hosts is rather constrained. The heap needs to exist in contiguous virtual memory, so fragmentation of the address space by DLLs and internal use of the address space by the OS kernel will determine the range of possible allocations.
The OS will use some of the address space for mapping hardware devices and for its own dynamic allocations. While this memory is not mapped into the Java process's address space, the OS kernel can't use it and your address space at the same time, so it limits the size of any program's virtual space.
Loading DLLs depends on the implementation and release of the JVM. Loading the OS kernel depends on a huge number of things: the release, the hardware, how much it has mapped so far since the last reboot, who knows...
In summary
I bet you get 1-2 GB in 32-bit-land, and about 3 in 64-bit, so an overall improvement of about 2x.
On Solaris the limit has been about 3.5 GB since Solaris 2.5. (about 10 years ago)
I was having the same problems with the JVM that App Inventor for Android Blocks Editor uses. It sets the heap at 925m max. This is not enough but I couldn't set it more than about 1200m, depending on various random factors on my machine.
I downloaded Nightly, the beta 64-bit browser from Firefox, and also JAVA 7 64 bit version.
I haven't yet found my new heap limit, but I just opened a JVM with a heap size of 5900m. No problem!
I am running Win 7 64 bit Ultimate on a machine with 24gb RAM.
I have tried setting the heap size up to 2200M on a 32-bit Linux machine and the JVM worked fine. The JVM didn't start when I set it to 2300M.
This is heavy tuning, but you can get a 3 GB heap.
http://www.microsofttranslator.com/bv.aspx?from=&to=en&a=http://forall.ru-board.com/egor23/online/FAQ/Virtual_Memory/Limits_Virtual_Memory.html
One more point here for the HotSpot 32-bit JVM: the native heap capacity = 4 GB - Java heap - PermGen.
It can get especially tricky for a 32-bit JVM, since the Java heap and the native heap are in a race: the bigger your Java heap, the smaller the native heap. Attempting to set up a large heap for a 32-bit VM, e.g. 2.5 GB+, increases the risk of a native OutOfMemoryError, depending on your application's footprint, number of threads, etc.
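As a rough worked example using that rule of thumb: with the -Xmx2560m -XX:MaxPermSize=512m settings quoted earlier in this thread, the address space left for everything native (thread stacks, JIT code cache, loaded libraries, direct buffers) would be roughly 4096 - 2560 - 512 = 1024 MB, and on Windows or Linux the real figure is smaller still because the process does not get the full 4 GB to begin with.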
Theoretically 4 GB, but in practice (for the IBM JVM):
Win 2k8 64, IBM Websphere Application Server 8.5.5 32bit
C:\IBM\WebSphere\AppServer\bin>managesdk.bat -listAvailable -verbose
CWSDK1003I: Available SDKs:
CWSDK1005I: SDK name: 1.6_32
- com.ibm.websphere.sdk.version.1.6_32=1.6
- com.ibm.websphere.sdk.bits.1.6_32=32
- com.ibm.websphere.sdk.location.1.6_32=${WAS_INSTALL_ROOT}/java
- com.ibm.websphere.sdk.platform.1.6_32=windows
- com.ibm.websphere.sdk.architecture.1.6_32=x86_32
- com.ibm.websphere.sdk.nativeLibPath.1.6_32=${WAS_INSTALL_ROOT}/lib/native/win/x86_32/
CWSDK1001I: Successfully performed the requested managesdk task.
C:\IBM\WebSphere\AppServer\java\bin>java -Xmx2036 MaxMemory
JVMJ9GC017E -Xmx too small, must be at least 1 Mbyte
JVMJ9VM015W Initialization error for library j9gc26(2): Failed to initialize
Could not create the Java virtual machine.
C:\IBM\WebSphere\AppServer\java\bin>java -Xmx2047M MaxMemory
Total Memory: 4194304 (4.0 MiB)
Max Memory: 2146435072 (2047.0 MiB)
Free Memory: 3064536 (2.9225692749023438 MiB)
C:\IBM\WebSphere\AppServer\java\bin>java -Xmx2048M MaxMemory
JVMJ9VM015W Initialization error for library j9gc26(2): Failed to instantiate heap; 2G requested
Could not create the Java virtual machine.
RHEL 6.4 64, IBM Websphere Application Server 8.5.5 32bit
[bin]./java -Xmx3791M MaxMemory
Total Memory: 4194304 (4.0 MiB)
Max Memory: 3975151616 (3791.0 MiB)
Free Memory: 3232992 (3.083221435546875 MiB)
[root@nagios1p bin]# ./java -Xmx3793M MaxMemory
Total Memory: 4194304 (4.0 MiB)
Max Memory: 3977248768 (3793.0 MiB)
Free Memory: 3232992 (3.083221435546875 MiB)
[bin]# /opt/IBM/WebSphere/AppServer/bin/managesdk.sh -listAvailable -verbose
CWSDK1003I: Available SDKs :
CWSDK1005I: SDK name: 1.6_32
- com.ibm.websphere.sdk.version.1.6_32=1.6
- com.ibm.websphere.sdk.bits.1.6_32=32
- com.ibm.websphere.sdk.location.1.6_32=${WAS_INSTALL_ROOT}/java
- com.ibm.websphere.sdk.platform.1.6_32=linux
- com.ibm.websphere.sdk.architecture.1.6_32=x86_32
-com.ibm.websphere.sdk.nativeLibPath.1.6_32=${WAS_INSTALL_ROOT}/lib/native/linux/x86_32/
CWSDK1001I: Successfully performed the requested managesdk task.
The limitation also comes from the fact that for a 32 bit VM, the heap itself has to start at address zero if you want all those 4GB.
Think about it, if you want to reference something via:
0000....0001
i.e.: a reference that has this particular bits representation, it means you are trying to access the very first memory from the heap. For that to be possible, the heap has to start at address zero. But that never happens, it starts at some offset from zero:
| .... .... {heap_start .... heap_end} ... |
--> (this can't be referenced) <--
Because the heap never starts at address zero in an OS, quite a few values of a 32-bit reference are never used, and as such the heap that can be referenced is smaller.
