I tried to compare my Java web app's behaviour on 32-bit Windows and 64-bit Linux.
When I view the memory usage via jconsole, I see very different memory usage graphs.
On Windows the app never touches 512 MB.
However, when I run on 64-bit Linux with a 64-bit VM, the memory keeps increasing gradually, reaches a peak of about 1000 MB quite quickly, and I also get an OOME related to "GC overhead limit exceeded". On Linux, whenever I trigger a manual GC, usage drops to less than 100 MB.
It's as if the GC doesn't run as well as it does on Windows.
On Windows the app runs better even under more load.
How do I find the reason for this?
I am using JDK 1.6.0_13,
min heap 512 MB and max heap 1024 MB.
EDIT:
Are you using the same JVM versions on both Windows and Linux?
Yes, 1.6.0_13.
Are you using the same garbage collectors on both systems?
Looking in jconsole, I see that the garbage collectors are different.
Are you using the same web containers on both systems?
Yes, Tomcat.
Does your webapp rely on native libraries?
Not sure. I use Tomcat + Spring + Hibernate + JSF.
Are there other differences in the configuration of your webapp on the two platforms?
No
What exactly was the error message associated with the OOME?
java.lang.OutOfMemoryError: GC overhead limit exceeded
How long does it take for your webapp to start misbehaving / reporting errors on Linux?
The difference in usage pattern shows up after I leave it running for, say, 3 hours. The error appears after a day or two, since by then the average memory usage sits around the 900 MB mark.
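Since jconsole already shows different collectors on the two machines, a useful first step is to log exactly which collectors each JVM selected. A minimal sketch using the standard java.lang.management API (run it with the same flags as the webapp on both hosts; HotSpot picks different defaults for "client" vs "server" class machines, so a 64-bit Linux server and 32-bit Windows often differ out of the box):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcProbe {
    public static void main(String[] args) {
        // Lists the collectors this JVM actually picked, with their stats.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", timeMs=" + gc.getCollectionTime());
        }
    }
}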
A 64bit JVM will naturally use a lot more memory than a 32bit JVM, that's to be expected (the internal pointers are twice the size, after all). You can't keep the same heap settings when moving from 32bit to 64bit and expect the same behaviour.
If your app runs happily in 512 MB on a 32-bit JVM, there is no reason whatsoever to use a 64-bit JVM. The only rationale for doing that is to take advantage of giant heap sizes.
Remember, it's perfectly valid to run a 32-bit JVM on a 64-bit operating system. The two are not related.
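To confirm which data model each installation is actually running, you can query the JVM's system properties. A quick sketch (note that sun.arch.data.model is a Sun/HotSpot-specific property, so it may be absent on other vendors' JVMs):

public class JvmBitness {
    public static void main(String[] args) {
        // "32" or "64" on Sun JVMs; null on JVMs that don't define it.
        System.out.println("data model: " + System.getProperty("sun.arch.data.model"));
        System.out.println("os.arch:    " + System.getProperty("os.arch"));
        System.out.println("vm name:    " + System.getProperty("java.vm.name"));
    }
}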
There are too many unknowns to be able to explain this:
Are you using the same JVM versions on both Windows and Linux?
Are you using the same garbage collectors on both systems?
Are you using the same web containers on both systems?
Does your webapp rely on native libraries?
Are there other differences in the configuration of your webapp on the two platforms?
What exactly was the error message associated with the OOME?
How long does it take for your webapp to start misbehaving / reporting errors on Linux?
Also, I agree with @skaffman ... don't use a 64-bit JVM unless your application really requires it.
Related
We're running an application on Linux using Java 1.6 (OpenJDK as well as Oracle JDK). The JVM itself has a maximum of 3.5 GB heap and 512 MB permgen space. However, after running for a while, top reports the process is using about 8 GB of virtual memory, and smem -s swap p reports about 3.5 GB being swapped.
After running a bigger import of thousands of image files on one server, almost no swap space is left, and calls to native applications (in our case im4java calls to ImageMagick) fail due to the OS failing to allocate memory for those applications.
In another case the swap space filled over the course of several weeks resulting in the OS killing the JVM due to being out of swap space.
I understand that the JVM will need more than 4 GB of memory for heap (max 3.5 GB), permgen (max 512 MB), code cache, loaded libraries, JNI frames etc.
The problem I'm having is finding out what is actually using how much of the memory. If the JVM were out of heap memory, I'd get a dump which I could analyze, but in our case it's the OS memory that is eaten up, and thus the JVM doesn't generate a dump.
I know there's jrcmd for JRockit, but unfortunately we can't just switch the JVM.
There also seem to be a couple of libraries that allow tracking native memory usage, but most of those seem to require the native code to be recompiled; and besides im4java (which, AFAIK, just runs a native process; we don't use DLL/SO integration here) and the JVM, there's no other native code involved that we know of.
Besides that, we can't use a library/tool that might have a huge impact on performance or stability in order to track memory usage on a production system over a long period (several weeks).
So the question is:
How can we get information on what the JVM is actually needing all that memory for, ideally with some detailed information?
You may find references to "zlib/gzip" (PDF handling, or HTTP encoding since Java 7), "java2d" or "jai" when replacing the JVM's memory allocator (with jemalloc or tcmalloc).
But to really diagnose a native memory leak, JIT code symbol mapping and recent Linux profiling tools are required: perf, perf-map-agent and bcc.
Please refer to the details in the related answer: https://stackoverflow.com/a/52767721/737790
Many thanks to Brendan Gregg
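As a baseline before reaching for perf, it can help to log how much of the footprint the JVM itself can account for. A minimal sketch using the standard MemoryMXBean (this will not show native allocations made outside the JVM's bookkeeping, which is precisely the part that leaks here, but it bounds the Java-side share):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

public class JvmMemoryReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap:     " + mem.getHeapMemoryUsage());
        System.out.println("non-heap: " + mem.getNonHeapMemoryUsage());
        // Per-pool breakdown: eden, survivor, old gen, perm gen, code cache, ...
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + ": " + pool.getUsage());
        }
    }
}

Whatever top reports beyond heap + non-heap + thread stacks is then native: allocator fragmentation, mapped files, or leaks in JNI or child-process plumbing.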
I'm running two JBoss 5.1 servers, on Linux and Solaris machines, with similar JVM (Xms and Xmx) configurations. But when I check the memory usage at server start:
Linux machine: 2.1 GB memory usage (RES)
Solaris machine: 500 MB memory usage
The memory used by the JBoss process on Linux is above 1 GB from the start (even before any class loading begins). When I take a heap dump on Linux, its size is only around 700 MB.
What could be causing such a difference in memory?
A lot of things could make the difference, and there is not enough information here to know what. For example, are they both 64-bit OS's and 64-bit JVMs? What about the behavior of malloc - that's up to the OS. Just because a process asks for N bytes of memory doesn't mean it immediately gets that much memory - memory allocators can be very clever. Then there is the question of whether it's actually an apples-to-apples measurement in terms of how the OS reports it.
"Memory usage" means a lot of things. Are we talking about the Java heap (if you take heap dumps of both VMs after startup and an identical priming bit of work, are they the same size or different?), or that plus class data, etc.? You also have hotspot in the picture, compiling Java bytecode into native code that will be different between the two OS's (maybe very different sizes if your Solaris box is a Sparc machine)
The most likely thing is 64-bit vs. 32-bit, but it's impossible to say. You might use some native profiling tools on each to see what calls are allocating memory - that would start to clarify things.
Unless it's causing a problem, it's probably not something to worry about - but healthy curiosity is a good thing.
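To make that heap-dump comparison apples-to-apples, you can trigger a dump programmatically at the same point in the startup sequence on both machines. A sketch using the HotSpot-specific diagnostic bean (assumes a Sun/HotSpot JVM on both boxes; the file name is arbitrary, and jmap -dump gives you the same thing from the command line):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live=true dumps only reachable objects, like jmap -dump:live.
        bean.dumpHeap("startup.hprof", true);
    }
}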
I have a Solaris sparc (64-bit) server, which has 16 GB of memory. There are a lot of small Java processes running on it, but today I got the "Could not reserve enough space for object heap" error when trying to launch a new one. I was surprised, since there was still more than 4GB free on the server. The new process was able to successfully launch after some of the other processes were shut down; the system had definitely hit a ceiling of some kind.
After searching the web for an explanation, I began to wonder if it was somehow related to the fact that I'm using the 32-bit JVM (none of the java processes on this server require very much memory).
I believe the default max memory pool is 64MB, and I was running close to 64 of these processes. So that would be 4GB all told ... right at the 32-bit limit. But I don't understand why or how any of these processes would be affected by the others. If I'm right, then in order to run more of these processes I'll either have to tune the max heap to be lower than the default, or else switch to using the 64-bit JVM (which may mean raising the max heap to be higher than the default for these processes). I'm not opposed to either of these, but I don't want to waste time and it's still a shot in the dark right now.
Can anyone explain why it might work this way? Or am I completely mistaken?
If I am right about the explanation, then there is probably documentation on this: I'd very much like to find it. (I'm running Sun's JDK 6 update 17 if that matters.)
Edit: I was completely mistaken. The answers below confirmed my gut instinct that there's no reason why I shouldn't be able to run as many JVMs as I can hold. A little while later I got an error on the same server trying to run a non-java process: "fork: not enough space". So there's some other limit I'm encountering that is not java-specific. I'll have to figure out what it is (no, it's not swap space). Over to serverfault I go, most likely.
"I believe the default max memory pool is 64MB, and I was running close to 64 of these processes. So that would be 4GB all told ... right at the 32-bit limit."
No. The 32bit limit is per process (at least on a 64bit OS). But the default maximum heap is not fixed at 64MB:
initial heap size: larger of 1/64th of the machine's physical memory or some reasonable minimum.
maximum heap size: smaller of 1/4th of the physical memory or 1GB.
Note: The boundaries and fractions given for the heap size are correct for J2SE 5.0. They are likely to be different in subsequent releases as computers get more powerful.
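You can check what the ergonomics actually chose on a given box with a one-liner (a sketch; run it with no -Xmx flag, using the same java binary your processes use):

public class DefaultHeap {
    public static void main(String[] args) {
        // maxMemory() reports the heap ceiling this JVM settled on.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("default max heap: " + maxMb + " MB");
    }
}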
I suspect the memory is fragmented. See also "Tools to view/solve Windows XP memory fragmentation" for confirmation that memory fragmentation can cause such errors.
Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS CentOS 5.x (64bit)
VPS server with 384 MB of memory
JAVA_OPTS: tried many combinations, including the following
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I created a blank Grails application, i.e. simply by running the command grails create-app, and then WARed it.
I am running Tomcat on a VPS Server
When I simply start the Tomcat server, with no apps deployed, the free memory is about 236 MB and the used memory is about 156 MB.
When I deploy my "blank" application, the memory consumption spikes to 360 MB, and finally the Tomcat instance is killed as soon as it uses up all the free memory.
As you have seen, my app is as light as it can be.
Not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed down to this scenario which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine.
The memory consumption of the Java process shot from 32 MB to 107 MB. But it did not crash, and it remained within acceptable limits.
So the hunt for an answer continues... I wonder if something is wrong with my Linux box. Not sure what, though...
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple blank app should work on my configuration.
It is a false economy to try to run a long-running Java-based application in the minimum possible memory. The garbage collector, and hence the application, will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out with a memory profiler, or by examining the GC logging (a small sampler sketch follows this list).
Then set the Java heap size to be (say) twice the measured working set size or more. (This is the point I was trying to make above.)
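For the second step, here is a rough sampler you could drop into the webapp. A sketch only: the heap occupancy right after a full GC approximates the live working set, and System.gc() is just a hint, though HotSpot normally honors it unless -XX:+DisableExplicitGC is set.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class WorkingSetSampler {
    public static void main(String[] args) throws InterruptedException {
        // Sample the post-GC heap a few times; in a real deployment you would
        // run this from a background thread inside the webapp while under load.
        for (int i = 0; i < 10; i++) {
            System.gc();
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.println("live heap ~ " + heap.getUsed() / (1024 * 1024) + " MB");
            Thread.sleep(60000);
        }
    }
}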
Actually, there is another possible cause for your problems. Even though you are telling Java to use heaps of a given size, it may be that it is unable to do this. When the JVM requests memory from the OS, there are a couple of situations where the OS will refuse.
If the machine (real or virtual) that you are running the OS on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap and permgen numbers together. There is the memory used by the executable + DLLs, memory used for I/O buffers, and possibly other stuff.
384MB is pretty small. I'm running a small Grails app in a 512MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200MB. I'm running a 32-bit Linux and JDK though, no sense wasting all that memory in 64-bit pointers if you don't have access to much memory anyway.
Can you try deploying a Tomcat monitoring webapp, e.g. psiprobe, and see where the memory is being used?
IBM JRE 5.0 on Windows, when given -Xmx1536m on a laptop with 2GB memory, refuses to start up: error message below. With -Xmx1000m it does start.
Also, it starts fine with -Xmx1536m on other servers and even laptops, so I think there is more to this than just inadequate memory.
Also, when started from within Eclipse (albeit using the JRE in the IBM 5 JDK in this case) with the same memory parameter, it runs fine.
Any idea what is going on here?
JVMJ9VM015W Initialization error for library j9gc23(2): Failed to instantiate heap. 1536M requested
Could not create the Java virtual machine
Edit:
Does anyone know about the "3GB switch" and whether it is relevant here (beyond the obvious fact that this is a memory limitation problem)? How can I tell if it is enabled, and what is the most straightforward way to turn it on?
According to IBM DeveloperWorks:
Cause
The system does not have the necessary resources to satisfy the maximum default heap value required to run the JVM.
To resolve it, here is what the article says:
Resolving the problem
If you receive this error message when starting the JVM, free memory by stopping other applications that might be consuming system resources.
Your JVM doesn't have enough memory resources to create a maximum heap of 1536 MB. Just make sure that you have enough memory to accommodate it.
Also, I believe that on Windows the maximum heap space is around 1000 MB? I'm not sure that's solid, but on Linux/AIX any -Xmx above 1 GB works fine.
The JVM requires that it be able to allocate its heap as a single contiguous block of address space. If you are on a 32-bit system, the maximum available is about 1280 MB, more or less. To get more you must run a 64-bit JVM on a 64-bit OS.
You may be able to get a little more by starting the JVM immediately after rebooting.
As to starting OK on other systems, are those 32 or 64-bit?
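You can measure the ceiling empirically on any given box with a small probe that binary-searches for the largest -Xmx at which a trivial JVM still starts. A sketch; it assumes java is on the PATH and that 256 MB always works, so adjust the bounds for your environment:

import java.io.InputStream;

public class MaxHeapProbe {
    public static void main(String[] args) throws Exception {
        // Binary-search the largest -Xmx (in MB) that "java -version" accepts.
        int lo = 256, hi = 4096;
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (starts(mid)) { lo = mid; } else { hi = mid - 1; }
        }
        System.out.println("largest working -Xmx: " + lo + "m");
    }

    private static boolean starts(int mb) throws Exception {
        Process p = new ProcessBuilder("java", "-Xmx" + mb + "m", "-version")
                .redirectErrorStream(true)
                .start();
        InputStream in = p.getInputStream();
        while (in.read() != -1) { /* drain so the child cannot block on output */ }
        return p.waitFor() == 0;
    }
}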
Pretty much the maximum you are guaranteed to get on a Windows platform is 1450 MB. Sometimes Windows/java.exe maps DLLs to addresses in the 1.5-2.0 GB range. This doesn't change even if you use the /3GB trick (or you have an OS that supports it). You have to manually rebase the DLLs to push them up towards the 2 GB (or 3 GB) boundary. It's a real pain in the ass, and I've done it before, but the best I've ever been able to get, with or without /3GB, is 1.8 GB on 32-bit Windows.
Best to be done with it and migrate to a 64-bit OS. They're prevalent nowadays.
I had the same issue in an IBM Engineering Lifecycle installation:
Problem: JVMJ9VM015W Initialization error for library j9gc26(2): Failed to instantiate heap; Could not create the Java virtual machine.
Solution: the settings below solved my issue. If you don't have 16 GB of RAM, don't raise the memory sizes in the Jazz server startup file; with 8 GB of RAM, do not increase them beyond:
set JAVA_OPTS=%JAVA_OPTS% -Xmx4G
set JAVA_OPTS=%JAVA_OPTS% -Xms4G
set JAVA_OPTS=%JAVA_OPTS% -Xmn1G