JPype / Java - Initialize with, or get, remaining heap space - java

We have software written in Python, which uses JPype to call Java, which performs various resource heavy calculations / report building. We originally assigned 800mb of heap space when starting the JVM. The java side is fully multithreaded and will work with whatever resources are available to it.
jvmArgs = ["-Djava.class.path=" + classpath, "-Xmx800M"]
jpype.startJVM(u"java\\jre8\\bin\\client\\jvm.dll", *jvmArgs)
This worked well until we tested on Windows XP for our legacy clients. The new machines are Win 7 64-bit with 4GB of RAM, whereas the old ones are Win XP 32-bit with only 2GB of RAM.
The issue is that JPype causes our application to ungracefully and silently crash if we allocate too much memory. A try catch doesn't even get triggered on the statement above.
I'm wondering if there's a way to use java from command line to determine how much memory we can allocate on a computer. We can check if it's 32-bit or 64-bit which helps, but we need to make sure they aren't running other programs taking up heap space on the JVM. If they are, our application will crash.
Reader's Digest: We'd like to allocate 500mb of heap space when initializing the JVM, but can't be sure of how much space is currently being used. If we allocate too much, the entire application silently crashes.
We use the following
JPype: 0.5.4.2
Python: 2.7
Java: 1.8 or 1.7 (64-bit or 32-bit)
Thanks.
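One way to get at the "how much can we allocate" question from the command line is to probe with a disposable JVM before JPype starts the real one: "java -Xmx<n>m -version" exits non-zero when the heap cannot be reserved. The HeapProbe class below is only a sketch of that idea; the class name and probe sizes are made up, and you should probe with the same JVM binary you intend to load.

import java.io.InputStream;

public class HeapProbe {
    // Try to start a throwaway JVM with the requested heap.
    // If the heap cannot be reserved, the probe JVM fails to start
    // and exits non-zero, instead of silently crashing our process.
    static boolean canReserve(int heapMb) throws Exception {
        Process p = new ProcessBuilder("java", "-Xmx" + heapMb + "m", "-version")
                .redirectErrorStream(true)
                .start();
        InputStream out = p.getInputStream();
        while (out.read() != -1) { /* drain output so the child never blocks */ }
        return p.waitFor() == 0;  // 0 means the heap was reserved successfully
    }

    public static void main(String[] args) throws Exception {
        // Probe sizes are illustrative: step down until a size works.
        for (int mb = 800; mb >= 100; mb -= 100) {
            if (canReserve(mb)) {
                System.out.println(mb + "m is safe to pass as -Xmx");
                break;
            }
        }
    }
}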

The memory consumed by the JVM consists of two main areas:
Heap memory
Non-heap memory - Metaspace, native method stacks, the pc register, direct byte buffers, sockets, JNI-allocated memory, thread stacks and more
While the maximum size that will be used for the heap memory is known and configurable, the size of the non-heap memory cannot be fully controlled.
The size of the native memory used by the JVM will be affected by the number of threads you use, the number of classes being loaded and the use of buffers (I/O).
You can limit the size of the metaspace by setting -XX:MaxMetaspaceSize. You can control the amount of memory used for thread stacks by limiting the number of threads and setting the thread stack size (-Xss).
Assuming you have no native memory leaks, the number of classes being loaded is stable (no excessive use of dynamic proxies or bytecode generation) and the number of threads being used is known, you can estimate how much memory your application will require by monitoring the overall memory used by the JVM process over a period of time. When you do, make sure the entire heap is allocated when the JVM starts, i.e. set -Xms equal to -Xmx, as in the example below.
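For illustration only (the sizes and the MyApp entry point are placeholders, not recommendations), such a fully pre-allocated, capped configuration might look like:
java -Xms512m -Xmx512m -XX:MaxMetaspaceSize=128m -Xss512k MyApp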

Related

Analyze/track down potential native memory leak in JVM

We're running an application on Linux using Java 1.6 (OpenJDK as well as Oracle JDK). The JVM itself has a maximum of 3.5 GB heap and 512 MB permgen space. However, after running for a while, top reports the process is using about 8 GB of virtual memory and smem -s swap p reports about 3.5 GB being swapped.
After running a bigger import of thousands of image files on one server, almost no swap space is left and calls to native applications (in our case Im4java calls to ImageMagick) fail because the OS cannot allocate memory for those applications.
In another case the swap space filled over the course of several weeks resulting in the OS killing the JVM due to being out of swap space.
I understand that the JVM will need more than 4 GB of memory for heap (max 3.5 GB), permgen (max 512 MB), code cache, loaded libraries, JNI frames etc.
The problem I'm having is how to find out what is actually using how much of the memory. If the JVM was out of heap memory, I'd get a dump which I could analyze, but in our case it's the OS memory that is eaten up and thus the JVM doesn't generate a dump.
I know there's jrcmd for JRockit, but unfortunately we can't just switch the JVM.
There also seem to be a couple of libraries that allow tracking of native memory usage, but most of those seem to require the native code to be recompiled - and besides Im4java (which AFAIK just runs a native process; we don't use DLL/SO integration here) and the JVM, there's no other native code involved that we know of.
Besides that, we can't use a library/tool that might have a huge impact on performance or stability in order to track memory usage on a production system over a long period (several weeks).
So the question is:
How can we get information on what the JVM is actually needing all that memory for, ideally with some detailed information?
You may find references to "zlib/gzip" (PDF handling or HTTP encoding since Java 7), "java2d" or "jai" when replacing the memory allocator (with jemalloc or tcmalloc) in the JVM.
But to really diagnose a native memory leak, JIT code symbol mapping and recent Linux profiling tools are required: perf, perf-map-agent and bcc.
Please refer to details in related answer https://stackoverflow.com/a/52767721/737790
Many thanks to Brendan Gregg
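For reference, a sketch of how the allocator swap mentioned above is typically done on Linux. The library path and MALLOC_CONF values are illustrative, app.jar is a placeholder, and jemalloc must be built with profiling enabled for the prof options to take effect:
LD_PRELOAD=/usr/lib/libjemalloc.so MALLOC_CONF=prof:true,lg_prof_interval:30 java -jar app.jar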

JVM Initialization error [duplicate]

I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
java -Xmx1400m ...
Today I tried the same option on a new Windows XP machine using Java 1.5_16 and 1.6.0_07 and got the error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely you'll have a fragmented address space. Things that are likely to cut into your address space aside from the usual stuff include security software, CBT software, spyware and other forms of malware. Likely causes of the variances are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2GB of the 4GB 32-bit space).
You could try going through your DLL bindings in your JVM process and look at rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have much more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
This has to do with contiguous memory.
Here's some info I found online for somebody asking that before, supposedly from a "VM god":
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.
Usually we don't have trouble getting modest contiguous regions (up to about 1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the problem is mostly that there are some libraries that get loaded before the JVM starts up that break up the address space. Using the /3GB switch won't rebase those libraries, so they are still a problem for us.
We know how to make chunked heaps, but there would be some overhead to using them. We have more requests for faster storage management than we do for larger heaps in the 32-bit JVM. If you really want large heaps, switch to the 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.
The Java heap size limits for Windows are:
maximum possible heap size on 32-bit Java: 1.8 GB
recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
This doesn't help you get a bigger Java heap, but now you know you can't go beyond these values.
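You can check a given value on a particular machine with a throwaway run; the JVM exits immediately, and if the heap cannot be reserved you get the "Could not reserve enough space for object heap" error quoted above:
java -Xmx1536m -version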
Oracle JRockit, which can handle a non-contiguous heap, can have a Java heap size of 2.85 GB on Windows 2003/XP with the /3GB switch. It seems that fragmentation can have quite an impact on how large a Java heap can be.
The JVM needs contiguous memory, and depending on what else is running, what was running before, and how Windows has managed memory, you may be able to get up to 1.4GB of contiguous memory. I think 64-bit Windows will allow larger heaps.
Sun's JVM needs contiguous memory, so the maximal amount of available memory is dictated by memory fragmentation. Driver DLLs in particular tend to fragment the memory when loading at their predefined base addresses. So your hardware and its drivers determine how much memory you can get.
Two sources for this with statements from Sun engineers: forum blog
Maybe another JVM? Have you tried Harmony? I think they planned to allow non-contiguous memory.
I think it has more to do with how Windows is configured as hinted by this response:
Java -Xmx Option
Some more testing: I was able to allocate 1300MB on an old Windows XP machine with only 768MB physical RAM (plus virtual memory). On my 2GB RAM machine I can only get 1220MB. On various other corporate machines (with older Windows XP) I was able to get 1400MB. The machine with a 1220MB limit is pretty new (just purchased from Dell), so maybe it has newer (and more bloated) Windows and DLLs (it's running Windows XP Pro Version 2002 SP2).
I got this error message when running a java program from a (limited memory) virtuozzo VPS. I had not specified any memory arguments, and found I had to explicitly set a small amount as the default must have been too high. E.g. -Xmx32m (obviously needs to be tuned depending on the program you run).
Just putting this here in case anyone else gets the above error message without specifying a large amount of memory like the questioner did.
Sun's JDK/JRE needs a contiguous amount of memory if you allocate a huge block.
The OS and initial apps tend to allocate bits and pieces of memory during loading, which fragments the available RAM. If a contiguous block is NOT available, the Sun JDK cannot use it. JRockit from BEA (acquired by Oracle) can allocate memory from pieces.
Everyone seems to be answering about contiguous memory, but have neglected to acknowledge a more pressing issue.
Even with 100% contiguous memory allocation, you can't have a 2 GiB heap size on a 32-bit Windows OS (by default). This is because 32-bit Windows processes cannot address more than 2 GiB of space.
The Java process will contain perm gen (pre-Java 8), stack space per thread, JVM / library overhead (which pretty much increases with each build), all in addition to the heap.
Furthermore, JVM flags and their default values change between versions. Just run the following and you'll get some idea:
java -XX:+PrintFlagsFinal -version
Lots of the options affect how memory is divided in and out of the heap, leaving you with more or less of that 2 GiB to play with...
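For example, to pick out just the heap-related flags on Windows (findstr plays the role of grep here):
java -XX:+PrintFlagsFinal -version | findstr HeapSize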
To reuse portions of this answer of mine (about Tomcat, but applies to any Java process):
The Windows OS limits the memory allocation of a 32-bit process to 2 GiB in total (by default).
[You will only be able] to allocate around 1.5 GiB heap space because there is also other memory allocated to the process (the JVM / library overhead, perm gen space etc.).
Why does 32-bit Windows impose a 2 GB process address space limit, but 64-bit Windows impose a 4 GB limit?
Other modern operating systems [cough Linux] allow 32-bit processes to use all (or most) of the 4 GiB addressable space.
That said, 64-bit Windows OS's can be configured to increase the limit of 32-bit processes to 4 GiB (3 GiB on 32-bit):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
Here is how to increase the paging size:
Right-click on My Computer -> Properties -> Advanced.
In the Performance section, click Settings.
Click the Advanced tab.
In the Virtual memory section, click Change. It will show your current paging size.
Select a drive where HDD space is available.
Provide the initial size and max size, e.g. initial size 0 MB and max size 4000 MB (as much as you will require).
There are numerous ways to change the heap size, e.g.:
File -> Settings -> Build, Execution, Deployment -> Compiler: here you will find the heap size setting.
File -> Settings -> Build, Execution, Deployment -> Compiler -> Android: here you will also find a heap size setting. You can refer to this one for an Android project if you are facing the same issue.
What worked for me was:
Set the appropriate JAVA_HOME path in case your Java got updated.
Create a new system variable: Computer -> Properties -> Advanced settings -> create new system variable
name: _JAVA_OPTIONS value: -Xmx750m
FYI: you can find the default VM options in IntelliJ under Help -> Edit Custom VM Options; in that file you will see the min and max heap sizes.
First, using a page file when you have 4 GB of RAM is useless. Windows can't access more than 4 GB (actually, less because of memory holes) so the page file is not used.
Second, the address space is split in two: half for the kernel, half for user mode. If you need more RAM for your applications, use the /3GB option in boot.ini and make sure java.exe is marked as "large address aware" (google for more info).
Third, I think you can't allocate the full 2 GB of address space because Java wastes some memory internally (for threads, the JIT compiler, VM initialization, etc.). Use the /3GB switch for more.
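If you want to mark an executable as large address aware yourself, editbin from the Visual C++ toolchain can set the flag. A sketch only: work on a copy, since patching java.exe is unsupported:
editbin /LARGEADDRESSAWARE java.exe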

vm size (task manager) vs. heap size java application

I want to find a memory leak in a Java 1.5 application. I use JProfiler for profiling.
I see using the Windows Task Manager that the VM size for my application is about 790000KB (increased from approx. 300000KB). In the profiler I see that the allocated heap is 266MB (also increasing).
Probably it's a rookie question but, what else can occupy so much memory besides the heap so that it goes to approx 700MB vm size (or private bytes size)?
I mention that there are approx. 1200 threads running, which can occupy, according to an answer from here, quite some memory, but I think there is still some way to go to 700MB. By the way, how can I see how much memory the thread stacks occupy?
Thanks.
The JVM can use a lot of virtual memory which may not use resident memory. On startup it allocates the heap and maps in its shared libraries. Classes which are loaded use perm gen space. An application can use direct memory, which can be as large as the heap maximum. As each thread is created, a stack is allocated for it. In each case, until this memory is used it might not be allocated to the application, i.e. not use physical memory. As the application warms up, more of the virtual memory can become physical memory.
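To put a rough upper bound on what those 1200 thread stacks cost, you can multiply the live thread count by the configured stack size. A minimal sketch; the 256 KB figure is an assumed -Xss and varies by JVM, platform and your own settings:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class StackEstimate {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int count = threads.getThreadCount();  // live threads in this JVM
        long stackKb = 256;                    // assumed -Xss per thread, in KB
        System.out.println(count + " threads * " + stackKb + " KB = "
                + (count * stackKb / 1024) + " MB reserved for stacks");
    }
}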
If you believe your JVM is not running efficiently, the first thing I would try is Java 6 which has had many fixes and improvements since the last release of Java 5.0.

jvm issue at startup

I can set the max memory to 1000 and no more than that; if I set the memory higher, it throws the following error.
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
My question is, why does the JVM look for the max memory at startup?
Thanks in advance.
The Sun JVM needs a contiguous area of memory for its heap. Using the tool vmmap from the Sysinternals suite you can examine the memory layout of the Java process exactly. To do that, write a simple Java program like this:
public class MemoryLayout {
    public static void main(String[] args) throws java.io.IOException {
        System.in.read();
    }
}
Compile that program and run it using the large heap settings
javac MemoryLayout.java
java -Xmx1000m -Xms1000m MemoryLayout
Then, start vmmap, select the java process and look for the yellow memory region whose size is larger than 1000000k. This is the JVM heap. Look further below, and you will eventually find a purple row indicating that there is a DLL file mapped. This DLL file prevents your JVM heap from growing bigger.
If you know what you are doing, you can then rebase that DLL, so it will be loaded at a different address. Microsoft provides a tool called rebase.exe as part of the Microsoft Platform SDK (I have version 5.2.3790.1830).
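Usage is roughly as follows; the base address and DLL name are purely illustrative, so pick a free region based on what vmmap shows and back the file up first:
rebase -b 0x60000000 offending.dll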
There are two command line parameters that directly control the size of the (normal) heap:
-Xmx<nnn> sets the maximum heap size
-Xms<nnn> sets the initial heap size
In both cases <nnn> is a number of bytes, with a k or m on the end to indicate kilobytes and megabytes respectively. The initial size gives the heap size allocated when the JVM starts, and the maximum size puts a limit on how big it can grow. (But the JVM also allocates memory for buffers, the "permgen" heap, stacks and other things ... in addition to the normal heap.)
It is not clear what options you are actually giving. (A value of 1000 doesn't make any sense. The -Xmx size has to be more than 2 megabytes and the -Xms size has to be more than 1 megabyte; see this page.)
There are advantages and disadvantages in making the initial heap size smaller than the maximum heap size; e.g. -Xms100m -Xmx1000m. But there is no point making the maximum heap size larger than the amount of virtual memory your machine can allocate to the JVM.
why jvm looks for the max memory at startup.
It wants to make sure that it can eventually allocate the maximum amount which you said it could have.
Why do you need to set a higher maximum than your machine actually supports?
Answer: It would make JVM configuration easier, if you could just set it to basically unlimited, especially if you deployed to different machines. This was possible in earlier versions, but for the current Sun JVM, you have to figure out a "proper" value for every machine. Hopefully, there will be more clever/automatic memory settings in the future.
As elaborated here, for implementation reasons (basically, it makes performance faster, and that is their priority) the JVM requires contiguous memory addressing, so it has to establish at startup that it has that; otherwise it might not be available later.
The fact of the matter is that the JVM in many ways is a server-side oriented technology. That is where Java is popular so that is what gets the development attention.
If you use a 64-bit JVM on a 64-bit OS you won't have this problem. This is only a problem on 32-bit OSes.

Java memory usage with native processes

What is the best way to tune a server application written in Java that uses a native C++ library?
The environment is a 32-bit Windows machine with 4GB of RAM. The JDK is Sun 1.5.0_12.
The Java process is given 1024MB of memory (-Xmx) at startup but I often see OutOfMemoryErrors due to lack of heap space. If the memory is increased to 1200MB, the OutOfMemoryErrors occur due to lack of swap space. How is the memory shared between the JVM and the native process?
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
I had lots of trouble with that setting (Java on 32-bit systems, Windows and others), and they were all solved by reserving just under 1GB of RAM for the JVM.
Otherwise, as stated, the actual occupied memory in the system for that process would be over 2GB; at that point I was having 'silent deaths' of the process - no errors, no warnings, just the process terminating very quietly.
I got more stability and performance running several JVM (each with under 1GB RAM) on the same system.
I found some info on JNI memory management here, and here's the JVM JNI section on memory management.
Well, having a 3GB user space instead of a 2GB user space should help, but if you're having problems running out of swap space at 2GB, I think 3GB is just going to make it worse. How big is your page file? Is it maxed out?
You can get a better idea of your heap allocation by hooking up jconsole to your JVM.
How is the memory shared between the JVM and the native process?
Sun's JVM's garbage collector is mark-and-sweep, with options to enable concurrent and incremental GC.
Well, more accurately, it's staged, and the above only applies to tenured (long-lived) objects. For young objects, GC is still done with a stop-and-copy collector, which is much better for working with short-lived objects (and all typical Java programs create many short-lived objects).
A copying collector walks over all elements in the heap, copying them to a new heap if they are referenced, and then discards the former heap. Thus 1M of live objects requires up to 2M of real memory: if every object is alive, there will be two copies of everything during garbage collection.
So the JVM requires far more system memory than is available to the code running within the VM, because there is a substantial overhead to management and garbage collection.
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
The /3GB switch allows the user virtual address space to be 3GB, but only for executables whose headers are marked with IMAGE_FILE_LARGE_ADDRESS_AWARE. As far as I am aware, Sun's java.exe is not. I don't have a Windows system here, so I can't verify.
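For anyone who does have a Windows box handy, dumpbin from Visual Studio can check the flag; the exact wording varies by toolchain version, but large-address-aware binaries report something like "Application can handle large (>2GB) addresses":
dumpbin /headers java.exe | findstr /i "large"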
You haven't explained your problem well enough, unfortunately. The real question is --- why is the Java process growing so much. Do you have a memory leak? Do you have a real reason to have that much data in the JVM?
Is the C++ library allocating its own memory from the C stack, or is it allocating memory from the Java object space, or is it doing something else entirely?
