Windows memory management and Java

I'm running a Windows 2016 (x64) server with 32GB RAM. According to Resource Monitor the memory map looks like this:
1 MB Reserved, 17376 MB In Use, 96 MB Modified, 4113 MB Standby, 11016 MB Free. Summary:
15280 MB Available,
4210 MB Cached,
32767 MB Total,
32768 MB Installed
I have a Java (64-bit JVM) service that I want to run with 8GB of memory:
java -Xms8192m -Xmx8192m -XX:MaxMetaspaceSize=128m ...
which results in
Error occurred during initialization of VM
Could not reserve enough space for object heap
I know that a 32-bit OS and 32-bit JVM would limit the usable heap, but I verified both are 64-bit. I read that on 32-bit Windows / JVM, the heap has to be contiguous. But here I had hoped to be able to allocate even 15 GB for the heap, as over 15 GB are 'Available' (available for whom / what?).
Page file size is automatically managed, and currently at 7680MB.
I'd be thankful for an explanation of why Windows refuses to hand out the memory (or why Java cannot make use of it), and what my options are (apart from resizing the host or using something like 4 GB, which works but is insufficient for the service).
I have tried rebooting the server, but when it's this service's turn to start, other services have already "worked" the memory quite a bit.
Edit: I noticed that Resource Monitor has a graph called 'Commit Charge' which is over 90%. Task Manager has a 'Committed' line which (today) lists 32.9/40.6 GB. Reading up on commit charge explains the term, and yes, I have already seen the low-virtual-memory popups. It seems that, for a reason unknown to me, a very high commit charge has built up and prevents the 8 GB Java service from starting. This puts even more emphasis on the question: what does '15 GB Available' memory mean, and to whom is it available, if not to a process?
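The distinction can be put into numbers. 'Available' counts physical pages (standby + free) that could back memory, but a new allocation is first charged against the commit limit (RAM + page file), and the JVM must commit the whole -Xms up front. A back-of-envelope sketch with the figures from the question (the committed value is read off the Task Manager line, and the page file may have grown beyond 7680 MB, so treat these as approximations):

```shell
# Commit limit ≈ RAM + current page file; -Xms8192m must fit in the headroom.
ram_mb=32768; pagefile_mb=7680; committed_mb=33690   # ~32.9 GB already committed
limit_mb=$((ram_mb + pagefile_mb))                   # ~40448 MB commit limit
headroom_mb=$((limit_mb - committed_mb))             # ~6758 MB commit headroom
echo "headroom: ${headroom_mb} MB"                   # less than the 8192 MB requested
```

With these numbers the headroom is below 8192 MB, so the reservation fails even though plenty of physical memory is 'Available'.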

Related

Java Memory Management - java web start "Error: could not create the java virtual machine"

I have defined a jnlp file with initial-heap-size="512m" max-heap-size="1024m" on a machine that has 16 GB with 12 GB available. The JVM running is a 32-bit JVM because of native libraries. I understand that I must have 1 GB of contiguous memory available to allocate that max. If I reduce the max-heap-size to 768, then it runs as normal, and sometimes I don't need to reduce it.
Two questions:
Why is the machine checking max-heap-size initially before the JVM starts up? Are there assertions that are being performed?
Why would I not be able to allocate the full 1 GB from the get-go if I have 12 GB available - assuming that there is a contiguous 1 GB block available?
If you can't use the 64-bit JVM...
If you are on Windows using a 32-bit JVM, you need to research "large address aware" (a JVM compiled/linked with the /LARGEADDRESSAWARE option). It will allow a larger memory footprint with 32-bit. You can set the bit on a particular executable.
Drawbacks of using /LARGEADDRESSAWARE for 32 bit Windows executables?
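For reference, the LARGEADDRESSAWARE bit can be set with the editbin tool from the Visual C++ build tools; this is only a sketch (Windows-only, back up the original file, and note that patching java.exe has to be redone after every JVM update and may not be supported by your JVM vendor):

```shell
# Sketch: set the LARGEADDRESSAWARE bit on a 32-bit executable (Windows).
editbin /LARGEADDRESSAWARE java.exe
# Verify: dumpbin reports "Application can handle large (>2GB) addresses".
dumpbin /headers java.exe | findstr /i "large"
```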
If on Linux, look at your system's hard and soft limits. They may be limiting the max size of your process. Also, you may be subject to process control groups (cgroups).
For ulimits, on Linux/UNIX try
ulimit -a
Specifically look at the ulimit -m and -v settings. If those are unlimited, you may be experiencing another type of control mechanism.
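A sketch of the checks described above (the cgroup path is an assumption; it varies by distribution and cgroup version):

```shell
# Per-process caps that can stop a large heap allocation on Linux:
ulimit -v        # max virtual address space in KB ("unlimited" = no cap here)
ulimit -m        # max resident set size (often not enforced by modern kernels)
# If both are unlimited, look for cgroup memory limits (path varies by system):
cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null || true
```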
A partial, educated guess for:
Why would I not be able to allocate the full 1 GB from the get-go if I have 12 GB available - assuming that there is a contiguous 1 GB block available?
You wrote that the JVM running is a 32-bit JVM because of native libraries. Note that those native libraries are then part of your process, and even if they don't allocate huge amounts of memory, they may nevertheless fragment the 2 GB virtual address space (which you're limited to without LARGEADDRESSAWARE) so that you may not have a contiguous 1 GB block left. You might want to study the native libraries' base addresses and rebasing. There's a free tool called VMMap which is great for studying these issues.

Analyze/track down potential native memory leak in JVM

We're running an application on Linux using Java 1.6 (OpenJDK as well as Oracle JDK). The JVM itself has a maximum of 3.5 GB heap and 512 MB permgen space. However, after running for a while, top reports the process is using about 8 GB of virtual memory, and smem -s swap p reports about 3.5 GB being swapped.
After running a bigger import of thousands of image files on one server, almost no swap space is left, and calls to native applications (in our case Im4java calls to ImageMagick) fail because the OS cannot allocate memory for those applications.
In another case the swap space filled over the course of several weeks resulting in the OS killing the JVM due to being out of swap space.
I understand that the JVM will need more than 4 GB of memory for heap (max 3.5 GB), permgen (max 512 MB), code cache, loaded libraries, JNI frames etc.
The problem I'm having is how to find out what is actually using how much of the memory. If the JVM was out of heap memory, I'd get a dump which I could analyze, but in our case it's the OS memory that is eaten up and thus the JVM doesn't generate a dump.
I know there's jrcmd for JRockit, but unfortunately we can't just switch the JVM.
There also seem to be a couple of libraries that allow tracking native memory usage, but most of those seem to need native code to be recompiled; besides Im4java (which AFAIK just runs a native process; we don't use DLL/SO integration here) and the JVM, there's no other native code involved that we know of.
Besides that, we can't use a library/tool that might have a huge impact on performance or stability in order to track memory usage on a production system over a long period (several weeks).
So the question is:
How can we get information on what the JVM is actually needing all that memory for, ideally with some detailed information?
You may find references to "zlib/gzip" (PDF handling or HTTP encoding since Java 7), "java2d" or "jai" when replacing the JVM's memory allocator (with jemalloc or tcmalloc).
But to really diagnose a native memory leak, JIT code symbol mapping and recent Linux profiling tools are required: perf, perf-map-agent and bcc.
Please refer to the details in the related answer https://stackoverflow.com/a/52767721/737790
Many thanks to Brendan Gregg
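On JVMs newer than the Java 6 in the question (JDK 8 and later), Native Memory Tracking gives exactly this per-category breakdown of the JVM's own native allocations. A sketch (app.jar is a placeholder; NMT adds roughly 5-10% overhead, which has to be weighed against the production-stability constraint above):

```shell
# Start the JVM with Native Memory Tracking enabled (JDK 8+):
java -XX:NativeMemoryTracking=summary -jar app.jar &
# Per-category breakdown: heap, class, thread stacks, code cache, GC, internal...
jcmd $! VM.native_memory summary
# Take a baseline now, then diff later to see which category grows:
jcmd $! VM.native_memory baseline
jcmd $! VM.native_memory summary.diff
```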

JVM Initialization error [duplicate]

I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
java -Xmx1400m ...
Today I tried the same option on a new Windows XP machine using Java 1.5_16 and 1.6.0_07 and got the error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely you'll have a fragmented address space. Things that are likely to cut into your address space, aside from the usual stuff, include security software, CBT software, spyware and other forms of malware. Likely causes of the variances are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2 GB of the 4 GB 32-bit space).
You could try going through the DLL bindings in your JVM process and look at rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have much more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
This has to do with contiguous memory.
Here's some info I found online for somebody asking that before, supposedly from a "VM god":
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.
Usually we don't have trouble getting modest contiguous regions (up to about 1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the problem is mostly that there are some libraries that get loaded before the JVM starts up that break up the address space. Using the /3GB switch won't rebase those libraries, so they are still a problem for us.
We know how to make chunked heaps, but there would be some overhead to using them. We have more requests for faster storage management than we do for larger heaps in the 32-bit JVM. If you really want large heaps, switch to the 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.
The Java heap size limits for Windows are:
maximum possible heap size on 32-bit Java: 1.8 GB
recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
This doesn't help you getting a bigger Java heap, but now you know you can't go beyond these values.
Oracle JRockit, which can handle a non-contiguous heap, can have a Java heap size of 2.85 GB on Windows 2003/XP with the /3GB switch. It seems that fragmentation can have quite an impact on how large a Java heap can be.
The JVM needs contiguous memory, and depending on what else is running, what was running before, and how Windows has managed memory, you may be able to get up to 1.4 GB of contiguous memory. I think 64-bit Windows will allow larger heaps.
Sun's JVM needs contiguous memory. So the maximal amount of available memory is dictated by memory fragmentation. Drivers' DLLs especially tend to fragment the memory when loading at some predefined base address. So your hardware and its drivers determine how much memory you can get.
Two sources for this with statements from Sun engineers: a forum thread and a blog post.
Maybe try another JVM? Have you tried Harmony? I think they planned to allow non-contiguous memory.
I think it has more to do with how Windows is configured as hinted by this response:
Java -Xmx Option
Some more testing: I was able to allocate 1300 MB on an old Windows XP machine with only 768 MB physical RAM (plus virtual memory). On my 2 GB RAM machine I can only get 1220 MB. On various other corporate machines (with older Windows XP) I was able to get 1400 MB. The machine with the 1220 MB limit is pretty new (just purchased from Dell), so maybe it has newer (and more bloated) Windows and DLLs (it's running Windows XP Pro Version 2002 SP2).
I got this error message when running a Java program from a (limited-memory) Virtuozzo VPS. I had not specified any memory arguments, and found I had to explicitly set a small amount, as the default must have been too high. E.g. -Xmx32m (obviously this needs to be tuned depending on the program you run).
Just putting this here in case anyone else gets the above error message without specifying a large amount of memory like the questioner did.
Sun's JDK/JRE needs a contiguous amount of memory if you allocate a huge block.
The OS and initial apps tend to allocate bits and pieces during loading, which fragments the available RAM. If a contiguous block is NOT available, the Sun JDK cannot use it. JRockit from BEA (acquired by Oracle) can allocate memory from pieces.
Everyone seems to be answering about contiguous memory, but have neglected to acknowledge a more pressing issue.
Even with 100% contiguous memory allocation, you can't have a 2 GiB heap size on a 32-bit Windows OS (*by default). This is because 32-bit Windows processes cannot address more than 2 GiB of space.
The Java process will contain perm gen (pre Java 8), stack size per thread, JVM / library overhead (which pretty much increases with each build) all in addition to the heap.
Furthermore, JVM flags and their default values change between versions. Just run the following and you'll get some idea:
java -XX:+PrintFlagsFinal -version
Lots of the options affect memory division in and out of the heap. Leaving you with more or less of that 2 GiB to play with...
To reuse portions of this answer of mine (about Tomcat, but applies to any Java process):
The Windows OS limits the memory allocation of a 32-bit process to 2 GiB in total (by default).
[You will only be able] to allocate around 1.5 GiB heap space because there is also other memory allocated to the process (the JVM / library overhead, perm gen space etc.).
Why does 32-bit Windows impose a 2 GB process address space limit, but 64-bit Windows impose a 4 GB limit?
Other modern operating systems [cough Linux] allow 32-bit processes to use all (or most) of the 4 GiB addressable space.
That said, 64-bit Windows OSs can be configured to increase the limit of 32-bit processes to 4 GiB (3 GiB on 32-bit):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
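That 2 GiB ceiling can be budgeted roughly. Every number below except the 2048 MB total is an illustrative assumption, but the arithmetic shows why ~1.5 GiB is the usual practical -Xmx on default 32-bit Windows:

```shell
# Rough user-address-space budget for a 32-bit JVM on default Windows:
total_mb=2048       # user-mode address space (2 GiB by default)
dlls_mb=250         # JVM + system DLLs scattered through the space (assumed)
stacks_mb=100       # thread stacks (assumed)
permgen_mb=128      # perm gen, pre Java 8 (assumed)
codecache_mb=48     # JIT code cache (assumed)
heap_mb=$((total_mb - dlls_mb - stacks_mb - permgen_mb - codecache_mb))
echo "roughly ${heap_mb} MB left for a contiguous -Xmx"
```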
Here is how to increase the paging size:
Right-click on My Computer -> Properties -> Advanced.
In the Performance section, click Settings.
Click the Advanced tab.
In the Virtual memory section, click Change. It will show your current paging size.
Select a drive where HDD space is available.
Provide the initial size and max size, e.g. initial size 0 MB and max size 4000 MB (as much as you will require).
There are numerous ways to change the heap size, e.g.:
File -> Settings -> Build, Execution, Deployment -> Compiler: here you will find the heap size.
File -> Settings -> Build, Execution, Deployment -> Compiler -> Android: here you will also find the heap size. You can refer to this for an Android project if you are facing the same issue.
What worked for me was:
Set the appropriate JAVA_HOME path in case your Java got updated.
Create a new system variable: Computer -> Properties -> Advanced settings -> create new system variable
name: _JAVA_OPTIONS value: -Xmx750m
FYI: you can find the default VM options in IntelliJ under Help -> Edit Custom VM Options; in this file you can see the min and max heap size.
First, using a page file when you have 4 GB of RAM is useless. Windows can't access more than 4 GB (actually, less, because of memory holes) so the page file is not used.
Second, the address space is split in two: half for the kernel, half for user mode. If you need more RAM for your applications, use the /3GB option in boot.ini (and make sure java.exe is marked as "large address aware"; google for more info).
Third, I think you can't allocate the full 2 GB of address space because Java uses some memory internally (for threads, the JIT compiler, VM initialization, etc.). Use the /3GB switch for more.

How to improve the amount of memory used by JBoss?

I have a Java EE application running on jboss-5.0.0.GA. The application uses BIRT report tool to generate several reports.
The server has 4 cores at 2.4 GHz and 8 GB of RAM.
The startup script uses the following options:
-Xms2g -Xmx2g -XX:MaxPermSize=512m
The application has reached some stability with this configuration; some time ago I had a lot of crashes because the memory was totally full.
Right now, the application is not crashing, but the memory is always fully used.
Example of top command:
Mem: 7927100k total, 7874824k used, 52276k free
The java process shows a use of 2.6g, and this is the only application running on this server.
What can I do to ensure an amount of free memory?
What can I do to try to find a memory leak?
Any other suggestion?
TIA
Based on the answer by mezzie:
If you are using Linux, what the kernel does with the memory is different from how Windows works. In Linux, it will try to use up all the memory. After it uses everything, it will then recycle the memory for further use. This is not a memory leak. We also have JBoss/Tomcat on our Linux server and we did research on this issue a while back.
I found more information about this,
https://serverfault.com/questions/9442/why-does-red-hat-linux-report-less-free-memory-on-the-system-than-is-actually-ava
http://lwn.net/Articles/329458/
And indeed, half the memory is cached:
total used free shared buffers cached
Mem: 7741 7690 50 0 143 4469
If you are using Linux, what the kernel does with the memory is different from how Windows works. In Linux, it will try to use up all the memory. After it uses everything, it will then recycle the memory for further use. This is not a memory leak. We also have JBoss/Tomcat on our Linux server and we did research on this issue a while back.
I bet those are operating-system memory values, not Java memory values. Java uses all the memory up to -Xmx and then starts to garbage collect, to vastly oversimplify. Use jconsole to see what the real Java memory usage is.
To make it simple, the JVM's maximum memory use is equal to MaxPermGen (permanently used while your JVM is running; it contains the class definitions, so it should not grow with the load of your server) + Xmx (max size of the object heap, which contains all instances of the objects currently live in the JVM) + Xss (thread stack space, depending on the number of threads running in your JVM, which can usually be limited for a server) + direct memory space (set by -XX:MaxDirectMemorySize=xxxx).
So do the math. If you want to be sure you have free memory left, you will have to limit MaxPermGen, Xmx, and the number of threads allowed on your server.
The risk is that, if the load on your server grows, you can get an OutOfMemoryError...
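That math can be sketched for the flags in this question; the thread count, stack size, and direct-memory cap below are assumed values for illustration, not measured ones:

```shell
# Worst-case footprint ≈ heap + perm gen + thread stacks + direct memory:
xmx_mb=2048          # -Xmx2g
permgen_mb=512       # -XX:MaxPermSize=512m
threads=200          # assumed
xss_kb=512           # -Xss512k (assumed)
direct_mb=64         # -XX:MaxDirectMemorySize=64m (assumed)
total_mb=$((xmx_mb + permgen_mb + threads * xss_kb / 1024 + direct_mb))
echo "worst-case JVM footprint ≈ ${total_mb} MB"
```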

IBM JRE 1.5 will not startup with the requested 1.5G memory

IBM JRE 5.0 on Windows, when given -Xmx1536m on a laptop with 2GB memory, refuses to start up: error message below. With -Xmx1000m it does start.
Also, it starts fine with -Xmx1536m on other servers and even laptops, so I think that there is something more than just inadequate memory.
Also, when started from within Eclipse (albeit, using the JRE in the IBM 5 JDK in this case) with the same memory parameter, it runs fine.
Any idea what is going on here?
JVMJ9VM015W Initialization error for library j9gc23(2): Failed to instantiate heap. 1536M requested
Could not create the Java virtual machine
Edit:
Does anyone know about the "3GB switch" and whether it is relevant here (beyond the obvious fact that this is roughly a memory-limitation problem)? How can I tell if it is enabled, and what is the most straightforward way to turn it on?
According to IBM DeveloperWorks:
Cause
The system does not have the necessary resources to satisfy the maximum default heap value required to run the JVM.
To resolve, here is what it says:
Resolving the problem
If you receive this error message when starting the JVM, free memory by stopping other applications that might be consuming system resources.
Your JVM doesn't have enough memory resources to create a maximum heap space of 1536 MB. Just make sure that you have enough memory to accommodate it.
Also, I believe that in Windows, the maximum heap space is about 1000 MB? I'm not sure if that's solid, but in Linux/AIX, any Xmx of more than 1 GB works fine.
The JVM requires that it be able to allocate its memory as a single contiguous block. If you are on a 32-bit system, the maximum available is about 1280 MB, more or less. To get more you must run a 64-bit JVM on a 64-bit OS.
You may be able to get a little more by starting the JVM immediately after rebooting.
As to starting OK on other systems, are those 32 or 64-bit?
Pretty much the maximum you are guaranteed to get on a Windows platform is 1450 MB. Sometimes Windows/java.exe maps DLLs to addresses in the 1.5-2.0 GB range. This doesn't change even if you use the /3GB trick (on an OS that supports it). You have to manually rebase the DLLs to push them higher towards the 2 GB (or 3 GB) boundary. It's a real pain in the ass, and I've done it before, but the best I've ever been able to get, with or without /3GB, is 1.8 GB on 32-bit Windows.
Best to be done with it and migrate to a 64-bit OS. They're prevalent nowadays.
I had the same issue in an IBM Engineering Lifecycle Management installation:
Problem: JVMJ9VM015W Initialization error for library j9gc26(2): Failed to instantiate heap; Could not create the Java virtual machine.
Solution: this solved my issue. If you don't have 16 GB of RAM, don't change the Jazz server startup file. If you only have 8 GB of RAM, do not increase the memory sizes in the server startup file beyond:
set JAVA_OPTS=%JAVA_OPTS% -Xmx4G
set JAVA_OPTS=%JAVA_OPTS% -Xms4G
set JAVA_OPTS=%JAVA_OPTS% -Xmn1G
