When running the same Java process (a jar) under Windows and Linux (Debian), the Linux process uses a lot more memory (12 MB vs. 36 MB) just from starting up. Even when I try to limit the heap size with -Xmx/-Xms/etc., it stays the same; nothing I try seems to help and the process always takes 36 MB. What explains this difference between Linux and Windows, and how can I reduce the memory usage?
EDIT:
I measure memory with the Windows Task Manager and the Linux top command.
The JVMs are the same and both systems are 32-bit.
I recommend using a profiler such as VisualVM to get a more granular view of what's going on.
One question I would ask to help me understand the problem better is:
Does my Java application's memory profile look dramatically different on the two platforms? You can answer this by running with -Xloggc:<file> and viewing the output in a GC visualizer such as HPjmeter. Try to look at a sample with a statistically significant amount of data, perhaps 1,000 or 10,000 GC data points. If the answer is no, I would be tempted to attribute the difference you see to the JVM's heap allocation requirements at start-up. As 'nos' pointed out, pinpointing the difference can be notoriously hard. When you specified the -Xmx value on Linux, did the memory utilization exceed your -Xmx value?
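For what it's worth, a minimal sketch of the kind of run I have in mind; the class, file name and heap size are placeholders I made up, the flags are the usual HotSpot ones:

    // Toy allocator to generate some GC activity for comparison on both platforms.
    // Launch it with GC logging enabled, e.g. (file name and -Xmx value arbitrary):
    //   java -Xmx64m -Xloggc:gc.log -XX:+PrintGCDetails AllocChurn
    import java.util.ArrayList;
    import java.util.List;

    public class AllocChurn {
        public static void main(String[] args) {
            List<byte[]> retained = new ArrayList<byte[]>();
            for (int i = 0; i < 100000; i++) {
                byte[] chunk = new byte[1024];            // mostly short-lived garbage
                if (i % 100 == 0) {
                    retained.add(chunk);                  // keep a little so the heap grows
                }
            }
            System.out.println("retained " + retained.size() + " KB");
        }
    }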
top is probably counting shared memory (e.g. mapped libraries) as well.
I am facing a problem with a Java application I built in JavaFX. It consumes only 2-3% CPU and around 50 to 80 MB of memory on Windows. But on Mac the same application initially starts with 50 MB of memory, continuously climbs to 1 GB, and uses over 90% of the CPU. I found this by checking the Mac's Activity Monitor. When I use a Java profiler to look for memory leaks, the profiler shows memory usage similar to Windows (not more than 100 MB).
I am confused by this behaviour on the Mac.
Has anyone encountered this problem before, or am I doing something wrong with my application?
Lots of things are possible, but I suspect this: depending on the machine's memory size and CPU count, the JVM may run in server mode, which changes how memory is managed. Use the -server option to force server mode on both machines and compare again.
You can also take heap dumps (jmap -dump) to see what is taking up so much memory, and thread dumps (kill -3) to see what is taking up so much CPU.
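If taking a heap dump from inside the app is easier than running jmap on the Mac, here is a hedged sketch using the HotSpot-specific diagnostic MXBean (Sun/Oracle JVMs only; the output file name is arbitrary):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {
        // Writes an hprof-format heap dump that a profiler or jhat can open.
        public static void dump(String path) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(path, true); // true = only live objects, like jmap -dump:live
        }

        public static void main(String[] args) throws Exception {
            dump("heap.hprof");
            System.out.println("wrote heap.hprof");
        }
    }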
One application I have to deal with regularly launches shell helpers using ProcessBuilder. For reasons untold, it still runs on a 32-bit JVM (Sun 1.6.0.25) even though the underlying OS is 64-bit (RHEL 5.x, for what it's worth).
This application is memory-happy, so the heap size is set to its maximum of 3 GB and the permgen to 128 MB.
However... at random moments, shell helpers fail to launch, not with an OutOfMemoryError but with ENOMEM. The only cause I can see for this is a lack of address space.
Well, sure, but at that same moment the memory is not really under pressure, and top reports that the actual memory usage of the JVM, even its virtual set size, is not even 3 GB...
Looking at what can be seen of the code of Process, I see that the core method is called forkAndExec(), which is pretty much self-explanatory... From what I know of both syscalls, it just shouldn't fail. But it does. And not always.
Why?
Edit: it should be noted that Neo4j is used. It seems to use FileChannel a lot; could that be the cause of the lack of address space?
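For reference, a minimal sketch of the kind of launch that fails (the command is hypothetical; the point is that process creation goes through forkAndExec() on the Sun JVM):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class ShellHelper {
        public static String run(String... cmd) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(cmd);
            pb.redirectErrorStream(true);                 // merge stderr into stdout
            Process p = pb.start();                       // this is where ENOMEM surfaces
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            StringBuilder result = new StringBuilder();
            String line;
            while ((line = out.readLine()) != null) {
                result.append(line).append('\n');
            }
            p.waitFor();
            return result.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.print(run("sh", "-c", "echo hello")); // hypothetical helper command
        }
    }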
I would decrease the heap size. The amount of heap actually used could be leaving less and less address space for the forked process to run in (it inherits resources from its parent).
It is highly likely that just upgrading to a 64-bit JVM would fix the problem. Can you try the 64-bit Java 6 update 30 instead, just to see whether it helps? Whether it does or not, it should tell you more about the cause (and then you can decide if it's worth switching).
I think that you are being bitten by Linux memory overcommit limits killing your process launches. That blog post suggests a sysctl variable that you can tune.
I tried to compare my Java web app's behaviour on 32-bit Windows and 64-bit Linux.
When I view the memory usage via JConsole I see very different graphs of memory usage.
On Windows the app never touches 512 MB.
However, when I run on 64-bit Linux with a 64-bit VM, the memory keeps increasing gradually, reaches a peak of about 1000 MB very quickly, and I also get an OOME related to "GC overhead limit exceeded". On Linux, whenever I manually run a GC, it drops to less than 100 MB.
It's as if the GC does not run as well as it does on Windows.
On Windows the app runs better even with more load.
How do I find the reason behind this?
I am using JDK 1.6.0_13,
min heap 512 MB and max heap 1024 MB.
EDIT:
Are you using the same JVM versions on both Windows and Linux?
Yes, 1.6.0_13.
Are you using the same garbage collectors on both systems?
I looked in JConsole and I see that the GCs are different.
Are you using the same web containers on both systems?
Yes, Tomcat.
Does your webapp rely on native libraries?
Not sure. I use Tomcat + Spring + Hibernate + JSF.
Are there other differences in the configuration of your webapp on the two platforms?
No.
What exactly was the error message associated with the OOME?
java.lang.OutOfMemoryError: GC overhead limit exceeded
How long does it take for your webapp to start misbehaving / reporting errors on Linux?
The difference in usage pattern shows after I leave it running for, say, 3 hours. The error appears after a day or two, since by then the average memory usage is around the 900 MB mark.
A 64-bit JVM will naturally use a lot more memory than a 32-bit JVM; that's to be expected (the internal pointers are twice the size, after all). You can't keep the same heap settings when moving from 32-bit to 64-bit and expect the same behaviour.
If your app runs happily in 512 MB on a 32-bit JVM, there is no reason whatsoever to use a 64-bit JVM. The only rationale for doing so is to take advantage of giant heap sizes.
Remember, it's perfectly valid to run a 32-bit JVM on a 64-bit operating system. The two are not related.
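As a quick sanity check that the JVM you think you are running is the one actually in use, a small sketch (sun.arch.data.model is a Sun/HotSpot-specific property, so other vendors' VMs may not report it):

    public class WhichJvm {
        public static void main(String[] args) {
            // Prints the version, VM flavour and data model (32 vs. 64 bit).
            System.out.println("java.version        = " + System.getProperty("java.version"));
            System.out.println("java.vm.name        = " + System.getProperty("java.vm.name"));
            System.out.println("os.arch             = " + System.getProperty("os.arch"));
            System.out.println("sun.arch.data.model = " + System.getProperty("sun.arch.data.model"));
        }
    }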
There are too many unknowns to be able to explain this:
Are you using the same JVM versions on both Windows and Linux?
Are you using the same garbage collectors on both systems?
Are you using the same web containers on both systems?
Does your webapp rely on native libraries?
Are there other differences in the configuration of your webapp on the two platforms?
What exactly was the error message associated with the OOME?
How long does it take for your webapp to start misbehaving / reporting errors on Linux?
Also, I agree with @skaffman: don't use a 64-bit JVM unless your application really requires it.
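To settle the "same garbage collectors?" question on both machines without staring at JConsole, a hedged sketch that prints the active collectors at startup via the standard management API:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class WhichCollectors {
        public static void main(String[] args) {
            // Lists the collectors this JVM instance is actually using.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName());
            }
            // On HotSpot you would typically see e.g. "Copy"/"MarkSweepCompact" for the
            // serial collectors or "PS Scavenge"/"PS MarkSweep" for the parallel ones.
        }
    }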
I have a Solaris SPARC (64-bit) server with 16 GB of memory. There are a lot of small Java processes running on it, but today I got the "Could not reserve enough space for object heap" error when trying to launch a new one. I was surprised, since there was still more than 4 GB free on the server. The new process launched successfully after some of the other processes were shut down; the system had definitely hit a ceiling of some kind.
After searching the web for an explanation, I began to wonder if it was somehow related to the fact that I'm using the 32-bit JVM (none of the Java processes on this server require very much memory).
I believe the default max memory pool is 64 MB, and I was running close to 64 of these processes. So that would be 4 GB all told... right at the 32-bit limit. But I don't understand why or how any of these processes would be affected by the others. If I'm right, then in order to run more of these processes I'll either have to tune the max heap to be lower than the default, or else switch to using the 64-bit JVM (which may mean raising the max heap to be higher than the default for these processes). I'm not opposed to either of these, but I don't want to waste time and it's still a shot in the dark right now.
Can anyone explain why it might work this way? Or am I completely mistaken?
If I am right about the explanation, then there is probably documentation on this: I'd very much like to find it. (I'm running Sun's JDK 6 update 17 if that matters.)
Edit: I was completely mistaken. The answers below confirmed my gut instinct that there's no reason why I shouldn't be able to run as many JVMs as the machine can hold. A little while later I got an error on the same server trying to run a non-Java process: "fork: not enough space". So there's some other limit I'm encountering that is not Java-specific. I'll have to figure out what it is (no, it's not swap space). Over to Server Fault I go, most likely.
"I believe the default max memory pool is 64MB, and I was running close to 64 of these processes. So that would be 4GB all told ... right at the 32-bit limit."
No. The 32-bit limit is per process (at least on a 64-bit OS). But the default maximum heap is not fixed at 64 MB:
initial heap size: Larger of 1/64th of the machine's physical memory on the machine or some reasonable minimum.
maximum heap size: Smaller of 1/4th of the physical memory or 1GB.
Note: The boundaries and fractions given for the heap size are correct for J2SE 5.0. They are likely to be different in subsequent releases as computers get more powerful.
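To see what each process actually gets by default on the Solaris box, a small sketch you can run with and without an explicit -Xmx (each JVM reports only its own, per-process limits):

    public class DefaultHeap {
        public static void main(String[] args) {
            // Reports this JVM's heap ceiling and current heap usage.
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap   (bytes) = " + rt.maxMemory());
            System.out.println("total heap (bytes) = " + rt.totalMemory());
            System.out.println("free heap  (bytes) = " + rt.freeMemory());
        }
    }

Comparing the output of a plain run against one started with, say, -Xmx32m should show whether the default really lands near 1/4 of physical memory on that machine.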
I suspect the memory is fragmented. See also "Tools to view/solve Windows XP memory fragmentation" for confirmation that memory fragmentation can cause such errors.
We ship Java applications that run on Linux, AIX and HP-UX (PA-RISC). We seem to struggle to get acceptable levels of performance on HP-UX from applications that work just fine in the other two environments. This is true of both execution time and memory consumption.
Although I've yet to find a definitive article on "why", I believe that measuring memory consumption using "top" is a crude approach, due to things like shared code giving misleading results. However, it's about all we have to go on at a customer site where memory consumption on HP-UX has become an issue. It only became an issue this time when we moved from Java 1.4 to Java 1.5 (on HP-UX 11.23 PA-RISC). By "an issue", I mean that the machine ceased to create new processes because we had exhausted all 16 GB of physical memory.
By measuring "before" and "after" total "free memory" we are trying to gauge how much has been consumed by a Java application. I wrote a quick app that stores 10,000 random 64 bit strings in an ArrayList and tried this approach to measuring consumption on Linux and HP-Ux under Java 1.4 and Java 1.5.
The results:
HP Java 1.4: ~60 MB
HP Java 1.5: ~150 MB
Linux Java 1.4: ~24 MB
Linux Java 1.5: ~16 MB
Can anyone explain why these results might arise? Is this some idiosyncrasy of the way "top" measures free memory? Does Java 1.5 on HP really consume 2.5 times more memory than Java 1.4?
Thanks.
The JVMs might just have different default parameters. The heap will grow to the size that you have configured to let it. The default on the Sun VM is a certain percentage of the RAM in the machine, which is to say that Java will, by default, use more memory if you run it on a machine with more memory.
I'd be really surprised if the HP-UX VM hadn't had lots of tuning for this sort of thing by HP. I'd suggest you fiddle with the parameters on both and figure out the smallest max heap size you can use without hurting performance or throughput.
I don't have an HP box right now to test my hypothesis. However, if I were you, I would use a profiler like JConsole (which comes with the JDK) or YourKit to measure what is happening.
That said, it appears that you started measuring after you saw something amiss, so I'm not discounting that it's happening -- just pointing you at what I'd have done in the same situation.
First, it's not clear what you measured with the "10,000 random 64-bit strings" test. You are supposed to start the application, measure its bootstrap memory footprint, and then run your test. It could easily be that Java 1.5 acquires more heap right after start-up (due to heap manager settings, for instance).
Second, we do run Java apps under 1.4, 1.5 and 1.6 under HP-UX, and they don't demonstrate any special memory requirements. We have Itanium hardware, though.
Third, why do you use top? Why not just print Runtime.getRuntime().totalMemory()? (See the sketch at the end of this answer.)
Fourth, by adding values to an ArrayList you create memory fragmentation: the ArrayList has to grow its internal storage now and then. Depending on GC settings and the ArrayList.ensureCapacity() implementation, the amount of not-yet-collected memory may differ dramatically between 1.4 and 1.5.
Essentially, instead of figuring out the cause of the problem you have run an ad hoc test that gives you no useful information. You should run a profiler on the application to figure out where the memory is going.
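Along the lines of the third point, a hedged sketch of measuring the test allocation from inside the JVM instead of via top (the 10,000-string figure comes from the question; System.gc() is only a hint, so treat the numbers as rough):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class StringFootprint {
        private static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            Random rnd = new Random();
            List<String> strings = new ArrayList<String>(10000); // pre-size to avoid resize garbage
            System.gc();
            long before = usedHeap();
            for (int i = 0; i < 10000; i++) {
                strings.add(Long.toBinaryString(rnd.nextLong()));  // one "random 64-bit string"
            }
            System.gc();
            long after = usedHeap();
            System.out.println("approx. retained by the test: " + (after - before) + " bytes");
        }
    }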
You might also want to look at the problem you are trying to solve... I don't imagine there are many problems that eat 16 GB of memory and aren't due for a good round of optimization.
Are you launching multiple VMs? Are you reading large datasets into memory and not discarding them quickly enough? And so on.