jvm issue at startup - java

I can set the maximum heap size to 1000 MB, but no more; if I set it any higher, the JVM throws the following error.
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
My question is: why does the JVM check for the maximum memory at startup?
Thanks in advance.

The Sun JVM needs a contiguous area of memory for its heap. Using the tool vmmap from the Sysinternals suite you can examine the memory layout of the Java process exactly. To do that, write a simple Java program like this:
public class MemoryLayout {
    public static void main(String[] args) throws java.io.IOException {
        System.in.read();
    }
}
Compile that program and run it using the large heap settings
javac MemoryLayout.java
java -Xmx1000m -Xms1000m MemoryLayout
Then, start vmmap, select the java process and look for the yellow memory region whose size is larger than 1000000k. This is the JVM heap. Look further below, and you will eventually find a purple row indicating that there is a DLL file mapped. This DLL file prevents your JVM heap from growing bigger.
If you know what you are doing, you can then rebase that DLL, so it will be loaded at a different address. Microsoft provides a tool called rebase.exe as part of the Microsoft Platform SDK (I have version 5.2.3790.1830).

There are two command line parameters that directly control the size of the (normal) heap:
-Xmx<nnn> sets the maximum heap size
-Xms<nnn> sets the initial heap size
In both cases <nnn> is a number of bytes, with a k or m on the end to indicate kilobytes and megabytes respectively. The initial size gives the heap size allocated when the JVM starts, and the maximum size puts a limit on how big it can grow. (But the JVM also allocates memory for buffers, the "permgen" heap, stacks and other things ... in addition to the normal heap.)
It is not clear what options you are actually giving. (A value of 1000 doesn't make any sense. The -Xmx size has to be more than 2 megabytes and the -Xms size has to be more than 1 megabyte; see this page.)
There are advantages and disadvantages in making the initial heap size smaller than the maximum heap size; e.g. -Xms100m -Xmx1000m. But there is no point making the maximum heap size larger than the amount of virtual memory your machine can allocate to the JVM.
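A quick way to see what limits the JVM actually applied is to query the Runtime API from inside the program. This is a minimal sketch (the class name is made up); maxMemory() reflects -Xmx, and totalMemory() is the currently committed heap:

```java
public class HeapLimits {
    public static void main(String[] args) {
        long mb = 1024 * 1024;
        Runtime rt = Runtime.getRuntime();
        // maxMemory() corresponds to -Xmx; totalMemory() is the committed heap
        System.out.println("max heap:       " + rt.maxMemory() / mb + " MB");
        System.out.println("committed heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("free in heap:   " + rt.freeMemory() / mb + " MB");
    }
}
```

Running it with, say, `java -Xms100m -Xmx1000m HeapLimits` shows whether the flags were accepted and how much of the heap was committed up front.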

why jvm looks for the max memory at startup.
It wants to make sure that it can eventually allocate the maximum amount which you said it could have.
Why do you need to set a higher maximum than your machine actually supports?
Answer: It would make JVM configuration easier, if you could just set it to basically unlimited, especially if you deployed to different machines. This was possible in earlier versions, but for the current Sun JVM, you have to figure out a "proper" value for every machine. Hopefully, there will be more clever/automatic memory settings in the future.

As elaborated here, for implementation reasons (essentially it makes performance faster, and that is their priority) the JVM requires contiguous memory addressing, so it has to establish at startup that such a region is available; otherwise it might not be available later.
The fact of the matter is that the JVM in many ways is a server-side oriented technology. That is where Java is popular so that is what gets the development attention.

If you use a 64-bit JVM on a 64-bit OS you won't have this problem. This is only a problem on 32-bit OSes.

Related

Multiple JVMs with sum of -Xms greater than host's RAM

Note: I am new to Java (I am a Python dev; the idea of a JVM is alien to me).
Say you have a server with 8 cores and 160 GB of RAM.
If you run a Java program with -Xms100g, it would not throw any errors.
What if you run two or more Java programs (multiple JVMs), each with -Xms100g?
If memory permits, is it acceptable to run multiple JVMs on the same host?
Any references would be appreciated!
No (sane) OS will ever give you actual physical memory up front to back your needs; ever heard of swap space? And this is true for Python too: memory management is no different, since Python also operates on virtual memory, right?
If you really want your VM's memory to be backed by physical memory, there is a flag for that: -XX:+AlwaysPreTouch. But using it means that all the memory your VM uses has to be zeroed ("touched"), and that means a slower start, especially for such big amounts of memory.
-Xmx and -Xms are really just start-up flags that tell the JVM the initial memory you want and the maximum memory you want (usually they have the same value), but that is still virtual memory being asked from the OS.
-Xms defines the minimum mandatory requirement for the JVM. If that much memory is available at launch time, the JVM will come up without any issues. The same is true for multiple JVM launches.
In short, as long as the minimum memory requirement is fulfilled, the JVM will launch successfully.
http://blog.paulgu.com/java/6-common-errors-in-setting-java-heap-size/

How do I start a JVM with unlimited memory?

How do I start a JVM with no maximum heap restriction, so that it can take as much memory as it needs?
I searched if there is such an option, but I only seem to find the -Xmx and -Xms options.
EDIT:
Let's say I have a server with 4GB of RAM and it only runs my application. And, let's say I have another server with 32GB of RAM. I don't want to start my application with a 4GB memory limit, because the second machine should be able to handle more objects.
-XX:MaxRAMFraction=1 will auto-configure the max heap size to 100% of your physical RAM, or to the limits imposed by cgroups if UseCGroupMemoryLimitForHeap is set.
OpenJDK 10 will also support a percentage-based option, MaxRAMPercentage, allowing more fine-grained selection (JDK-8186248). This is important for leaving some spare capacity for non-heap data structures, to avoid swapping.
You can't (or at least, I don't know a JVM implementation that supports it). The JVM needs to know at start-up how much memory it can allocate, to ensure that it can reserve a contiguous range in virtual memory. This allows, among other things, simpler reasoning in memory management.
If virtual memory would be expanded at runtime, this could lead to fragmented virtual memory ranges, making tracking and referencing memory harder.
However, recent Java versions have introduced options like -XX:MaxRAMPercentage=n, which allows you to specify the percentage of memory to allocate to the Java heap instead of an absolute value in bytes. For example, -XX:MaxRAMPercentage=80 will allocate 80% of the available memory to the Java heap (the default is 25%).
The -XX:MaxRAMPercentage only works for systems with more than 200MB of memory (otherwise you need to use -XX:MinRAMPercentage, default 50%).
You can also use -XX:InitialRAMPercentage to specify the initial memory allocated to Java (MaxRAMPercentage is similar to -Xmx, InitialRAMPercentage is similar to -Xms).
The JVM runs on a physical computer that has limited memory, so it cannot have unlimited memory; memory is limited by definition. You can, however, supply a very large limit with the -Xmx option.
The question, however, is why you need unlimited memory, and an even better question is how much memory you really need.
On Linux, you can use the free and awk commands to calculate an inline value like this:
JAVA_OPT_MAX_MEM=$(free -m | awk '/Mem:/ {printf "-Xmx%dm", 0.80*$2}')
Example result (on a machine with 3950m of total memory):
JAVA_OPT_MAX_MEM=-Xmx3160m
The calculated option is 80% of the total reported memory.
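The same 80% calculation can be done from Java itself, assuming the HotSpot-specific com.sun.management extension is available (the class name here is made up):

```java
import java.lang.management.ManagementFactory;

public class MaxHeapOption {
    public static void main(String[] args) {
        // HotSpot-specific extension of the standard OperatingSystemMXBean
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        long totalMb = os.getTotalPhysicalMemorySize() / (1024 * 1024);
        // Emit an -Xmx option sized at 80% of physical RAM, like the awk one-liner
        System.out.println("-Xmx" + (totalMb * 80 / 100) + "m");
    }
}
```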

JPype / Java - Initialize with, or get, remaining heap space

We have software written in Python, which uses JPype to call Java, which performs various resource heavy calculations / report building. We originally assigned 800mb of heap space when starting the JVM. The java side is fully multithreaded and will work with whatever resources are available to it.
jvmArgs = ["-Djava.class.path=" + classpath, "-Xmx800M"]
jpype.startJVM(u"java\\jre8\\bin\\client\\jvm.dll", *jvmArgs)
This worked well until we tested on Windows XP for our legacy clients. The new machines are Win 7 64-bit with 4GB of RAM, whereas the old ones are Win XP 32-bit with only 2GB of RAM.
The issue is that JPype causes our application to ungracefully and silently crash if we allocate too much memory. A try catch doesn't even get triggered on the statement above.
I'm wondering if there's a way to use java from command line to determine how much memory we can allocate on a computer. We can check if it's 32-bit or 64-bit which helps, but we need to make sure they aren't running other programs taking up heap space on the JVM. If they are, our application will crash.
Reader's Digest: We'd like to allocate 500mb of heap space when initializing the JVM, but can't be sure of how much space is currently being used. If we allocate too much, the entire application silently crashes.
We use the following
JPype: 0.5.4.2
Python: 2.7
Java: 1.8 or 1.7 (64-bit or 32-bit)
Thanks.
The memory consumed by the JVM consists of 2 main areas:
Heap memory
Non-heap memory: Metaspace, native method stacks, the pc register, direct byte buffers, sockets, JNI-allocated memory, thread stacks and more
While the maximum size that will be used for the heap memory is known and configurable, the size of the non-heap memory cannot be fully controlled.
The size of the native memory used by the JVM will be affected by the number of threads you use, the number of classes being loaded, and the use of buffers (I/O).
You can limit the size of the Metaspace by setting -XX:MaxMetaspaceSize. You can control the amount of memory used for thread stacks by limiting the number of threads and setting the thread stack size (-Xss).
Assuming you do not have native memory leaks, the number of classes being loaded is stable (no excessive use of dynamic proxies and bytecode generation), and the number of threads being used is known, you can estimate how much memory your application will require by monitoring the overall memory used by the JVM over a period of time. When you do, make sure the entire heap is allocated when the JVM starts.
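Such monitoring can be done with the standard MemoryMXBean. A minimal sketch (the class name is made up) that samples heap and non-heap usage a few times:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryWatch {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        for (int i = 0; i < 3; i++) {
            MemoryUsage heap = mem.getHeapMemoryUsage();       // -Xmx territory
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // Metaspace etc.
            System.out.printf("heap used=%d KB, non-heap used=%d KB%n",
                    heap.getUsed() / 1024, nonHeap.getUsed() / 1024);
            Thread.sleep(1000);
        }
    }
}
```

In a long-running test you would sample far less often and over hours, not seconds, but the API calls are the same.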

JVM Initialization error [duplicate]

I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
java -Xmx1400m ...
Today I tried the same option on a new Windows XP machine using Java 1.5_16 and 1.6.0_07 and got the error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely you'll have a fragmented address space. Things that are likely to cut into your address space, aside from the usual stuff, include security software, CBT software, spyware and other forms of malware. Likely causes of the variance are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2GB of the 4GB 32-bit space).
You could try going through the DLL bindings in your JVM process and look at rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have much more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
This has to do with contiguous memory.
Here's some info I found online for somebody asking that before, supposedly from a "VM god":
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.

Usually we don't have trouble getting modest contiguous regions (up to about 1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the problem is mostly that there are some libraries that get loaded before the JVM starts up that break up the address space. Using the /3GB switch won't rebase those libraries, so they are still a problem for us.

We know how to make chunked heaps, but there would be some overhead to using them. We have more requests for faster storage management than we do for larger heaps in the 32-bit JVM. If you really want large heaps, switch to the 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.
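The card-mark arithmetic described in the quote can be sketched in Java itself, if we model heap addresses as plain longs (everything here is a hypothetical illustration, not actual JVM code):

```java
public class CardMarkSketch {
    static final int CARD_SHIFT = 9; // 2^9 = 512 bytes of heap per card

    // Mark the card covering a store at storeAddress, given the heap base.
    static void markCard(byte[] cardTable, long heapStart, long storeAddress) {
        int index = (int) ((storeAddress - heapStart) >>> CARD_SHIFT);
        cardTable[index] = 1; // dirty: a reference was stored in this card
    }

    public static void main(String[] args) {
        long heapStart = 0x1000_0000L;
        byte[] cards = new byte[1024];
        markCard(cards, heapStart, heapStart + 1500); // offset 1500 -> card 2
        System.out.println("card 2 dirty? " + (cards[2] == 1));
    }
}
```

Because the card index is a scaled offset from the heap start, the scheme only works if the heap is one contiguous region, which is exactly the constraint the quote explains.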
The Java heap size limits for Windows are:
maximum possible heap size on 32-bit Java: 1.8 GB
recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
This doesn't help you get a bigger Java heap, but now you know you can't go beyond these values.
Oracle JRockit, which can handle a non-contiguous heap, can have a Java heap size of 2.85 GB on Windows 2003/XP with the /3GB switch. It seems that fragmentation can have quite an impact on how large a Java heap can be.
The JVM needs contiguous memory, and depending on what else is running, what was running before, and how Windows has managed memory, you may be able to get up to 1.4GB of contiguous memory. I think 64-bit Windows will allow larger heaps.
Sun's JVM needs contiguous memory. So the maximum amount of available memory is dictated by memory fragmentation. Driver DLLs in particular tend to fragment memory when they load at predefined base addresses. So your hardware and its drivers determine how much memory you can get.
Two sources for this with statements from Sun engineers: forum blog
Maybe another JVM? Have you tried Harmony? I think they planned to allow non-contiguous memory.
I think it has more to do with how Windows is configured as hinted by this response:
Java -Xmx Option
Some more testing: I was able to allocate 1300MB on an old Windows XP machine with only 768MB physical RAM (plus virtual memory). On my 2GB RAM machine I can only get 1220MB. On various other corporate machines (with older Windows XP) I was able to get 1400MB. The machine with a 1220MB limit is pretty new (just purchased from Dell), so maybe it has newer (and more bloated) Windows and DLLs (it's running Windows XP Pro Version 2002 SP2).
I got this error message when running a java program from a (limited memory) virtuozzo VPS. I had not specified any memory arguments, and found I had to explicitly set a small amount as the default must have been too high. E.g. -Xmx32m (obviously needs to be tuned depending on the program you run).
Just putting this here in case anyone else gets the above error message without specifying a large amount of memory like the questioner did.
Sun's JDK/JRE needs a contiguous block of memory if you allocate a huge heap.
The OS and initial apps tend to allocate bits and pieces during loading, which fragments the available RAM. If a contiguous block is NOT available, the Sun JDK cannot use it. JRockit from BEA (acquired by Oracle) can allocate memory in pieces.
Everyone seems to be answering about contiguous memory, but have neglected to acknowledge a more pressing issue.
Even with 100% contiguous memory allocation, you can't have a 2 GiB heap size on a 32-bit Windows OS (by default). This is because 32-bit Windows processes cannot address more than 2 GiB of space.
The Java process will contain perm gen (pre Java 8), stack size per thread, JVM / library overhead (which pretty much increases with each build) all in addition to the heap.
Furthermore, JVM flags and their default values change between versions. Just run the following and you'll get some idea:
java -XX:+PrintFlagsFinal
Lots of the options affect how memory is divided in and out of the heap, leaving you with more or less of that 2 GiB to play with...
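Individual flags can also be read at runtime through the HotSpot diagnostic MXBean, which avoids grepping the full PrintFlagsFinal dump (the class name is made up; this requires a HotSpot JVM):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class FlagCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hs = ManagementFactory
                .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Print the effective values of a few memory-related flags
        for (String name : new String[] {"MaxHeapSize", "ThreadStackSize"}) {
            System.out.println(name + " = " + hs.getVMOption(name).getValue());
        }
    }
}
```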
To reuse portions of this answer of mine (about Tomcat, but applies to any Java process):
The Windows OS
limits the memory allocation of a 32-bit process to 2 GiB in total (by
default).
[You will only be able] to allocate around 1.5 GiB heap
space because there is also other memory allocated to the process
(the JVM / library overhead, perm gen space etc.).
Why does 32-bit Windows impose a 2 GB process address space limit, but
64-bit Windows impose a 4GB limit?
Other modern operating systems [cough Linux] allow 32-bit processes to
use all (or most) of the 4 GiB addressable space.
That said, 64-bit Windows OS's can be configured to increase the limit
of 32-bit processes to 4 GiB (3 GiB on 32-bit):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
Here is how to increase the paging size:
right-click on My Computer -> Properties -> Advanced
in the Performance section click Settings
click the Advanced tab
in the Virtual memory section, click Change. It will show your current paging size.
Select a drive where HDD space is available.
Provide the initial size and max size, e.g. initial size 0 MB and max size 4000 MB (as much as you will require).
There are numerous ways to change the heap size, e.g.:
File -> Settings -> Build, Execution, Deployment -> Compiler: here you will find the heap size.
File -> Settings -> Build, Execution, Deployment -> Compiler -> Android: here you will also find the heap size. You can refer to this for an Android project if you are facing the same issue.
What worked for me:
Set the appropriate JAVA_HOME path in case your Java got updated.
Create a new system variable: Computer -> Properties -> Advanced settings -> create new system variable
name: _JAVA_OPTIONS, value: -Xmx750m
FYI: you can find the default VM options in IntelliJ via Help -> Edit Custom VM Options; in this file you see the min and max heap size.
First, using a page file when you have 4 GB of RAM is useless. 32-bit Windows can't access more than 4GB (actually less, because of memory holes), so the page file is not used.
Second, the address space is split in two: half for the kernel, half for user mode. If you need more RAM for your applications, use the /3GB option in boot.ini and make sure java.exe is marked as "large address aware" (google for more info).
Third, I think you can't allocate the full 2 GB of address space because Java uses some memory internally (for threads, the JIT compiler, VM initialization, etc.). Use the /3GB switch for more.

Why is my JVM's total memory usage more than 30 times greater than its Xmx value?

I am running a Java application with a maximum heap size of 128 MB (-Xmx128M). It is running to successful completion with no OutOfMemoryError, or any other unhandled exception. Therefore, I am assuming that its actual heap size did stay within the declared limit of 128 MB.
However, when observing the process for this Java application, I am seeing a peak total memory usage of 4,188,548 KB (~4 GB). This is a growth of more than 30 times the controlled maximum size of the heap. Although I understand that this value includes virtual memory allocated that may be significantly greater than the actual physical memory used, it affects hard limits such as those imposed by Sun Grid Engine, and therefore it is meaningful.
How exactly is this possible? I understand that the total memory consumed by the JVM includes quite a bit more than the size of the heap, but I do not understand how it could need several GB of extra memory beyond what the application actually needs to create its objects and perform its computation.
I am using Sun Java 1.6.0.31, on a 64-bit RHEL Linux distribution.
There are several memory sinks besides the Java heap controlled by -Xmx:
Thread stacks
PermGen space
direct ByteBuffers and mapped ByteBuffers
memory allocated by native code / libraries
Without knowing the details of your system, I would guess that something uses mapped ByteBuffers.
But you could dig into the issue by examining the output of the pmap command. It lists all memory regions of the process, together with the file each region is mapped to (if the region is mapped, of course).
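A direct ByteBuffer is easy to demonstrate: the sketch below (hypothetical class name) allocates 64 MB that never counts against the -Xmx-controlled heap but does show up in the process's total memory, e.g. in pmap output:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 64 MB allocated outside the Java heap: invisible to -Xmx accounting,
        // but visible in the process's total memory (e.g. in pmap output).
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        buf.put(0, (byte) 1); // touch a byte so at least one page is committed
        System.out.println("direct buffer capacity: " + buf.capacity() + " bytes");
        System.out.println("max heap (-Xmx):        "
                + Runtime.getRuntime().maxMemory() + " bytes");
    }
}
```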
