How many JVMs can we have on one machine? - java

I have a class that runs forever (it does nothing, just loops and sleeps), called NeverReturn. I try to run it using the following command on Windows XP 32-bit:
java -Xms1200M NeverReturn
With that command I can create only 4 Java instances at the same time; the 5th and any subsequent java command fails to create a JVM.
If I change the command to -Xms600M, I can create 8 Java instances. The 9th fails.
Could anyone explain that? I'm using Sun JDK 1.6 update 23 and JDK 1.5 update 22.
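For reference, a minimal class matching that description (not the actual NeverReturn source, which isn't shown in the question) would be:

public class NeverReturn {
    public static void main(String[] args) throws InterruptedException {
        // Do nothing, just loop and sleep so the JVM stays alive forever.
        while (true) {
            Thread.sleep(1000L);
        }
    }
}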

If you have four instances of the JVM each using 1200M of memory, that gives you 4800M of memory allocated.
If you have eight instances of the JVM each using up to 600M of memory, that gives you 4800M of memory as well.
If I had to guess, it looks like the problem is that you're trying to promise more memory to the JVM instances than exists on your system. Dropping the amount you promise to each instance should give a corresponding increase in the number of instances you can run.

Simple answer:
as many JVMs as you want, as long as your machine can provide the necessary resources (read: memory).
If you meant to ask how many JDKs/JREs you can have on one machine (different JDK/JRE versions), the answer is: there is no constraint.
So you can have many JDKs as well; I'm not sure about the Windows installers, though. You can always use a plain unpacked copy of the JDK instead of the installer.
Hope this helps.

For every instance of the virtual machine launched this way, 600 MB of memory is dedicated to it, meaning that if you only had 1 GB of memory, you could successfully launch only one instance that is allowed to consume 600 MB. By the sounds of it, you had roughly 4.8 GB of memory available at the time: enough for 8 instances of 600 MB each, but not for a 9th.

Related

Java memory usage: Can someone explain the difference between memory reported by jconsole, ps, and prstat?

I'm investigating some memory bloat in a Java project and am confounded by the different statistics reported by different tools (we are using Java 8 on Solaris 10).
jconsole gives me three numbers:
Committed: the amount reserved for this process by the OS
Used: the amount actually being used by this process
Max: the amount available to the process (in our case it is limited to 128MB via Java command line option -Xmx128m).
For my project, jconsole reports 119.5MB max, 61.9MB committed, 35.5MB used.
The OS tools report something totally different:
ps -o vsz,rss and prstat -s rss and pmap -x all report that this process is using around 310MB virtual, 260MB physical
So my questions are:
Why does the OS report that I'm using around 5x as much as jconsole says is "committed" to my process?
Which of these measurements is actually accurate? (By "accurate" I mean: if I have 12GB of memory, can I run 40 of these (@ 300MB) before I hit an OutOfMemoryError, or can I run 200 of them (@ 60MB)? Yes, I know I can't use all 12GB of memory, and yes, I understand that virtual memory exists; I'm just using that number to illuminate the question better.)
This question goes quite deep. I'm just going to mention 3 of the many reasons:
VMs
Shared libraries
Stacks and permgen
VMs
Java is like a virtual mini computer. Imagine you ran an emulator on your computer that emulates an old Macintosh, for example. The emulator app has a config screen where you set how much RAM is in the virtual computer. If you pick 1GB and start the emulator, your OS is going to say the 'Old Mac Emulator' application is taking 1GB, even though inside the virtual machine, that virtual old Mac might be reporting 800MB of 1GB free.
A JVM is the same thing. The JVM has its own memory management. As far as the OS is concerned, java.exe is an app that takes 1GB. As far as the JVM is concerned, there's 400MB available on the heap right now.
A JVM is slightly more convoluted, in that the total amount of memory a JVM 'claims' from the OS can fluctuate. Out of the box, a JVM will generally not ask for the maximum right away; it will ask for more over time, run the garbage collector, or do a combination of the two: Heap full? Garbage collect. That only freed up maybe 20% or so? Ask the OS for more. -Xms and -Xmx control this; set them to the same value, and the JVM will ask for that much memory on startup and never ask for more. In general a JVM will never relinquish any memory it has claimed.
JVMs, still, are primarily aimed at server deployments, where you want the RAM dedicated to your VM to be constant. There's little point in having each app take whatever it wants whenever it wants, generally. This is in contrast to desktop apps, where you tend to have a ton of apps running and, given that a human is 'operating' them, generally only one app has particularly significant RAM requirements at a time.
This explains jconsole, which is akin to reporting the free memory inside the virtual old mac app: It's reporting on the state of the heap as the JVM sees it.
Whereas ps -o and friends are memory introspection tools at the OS level, and they just see the JVM as a big black box.
Which one is actually accurate
They both are. From their perspective, they are correct.
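If you want to see the same three numbers jconsole shows (used, committed, max) from inside the JVM itself, the standard java.lang.management API exposes them; a minimal sketch (the class name is made up):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapNumbers {
    public static void main(String[] args) {
        // The same heap figures jconsole reports: used, committed, and max.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("used      = " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("committed = " + heap.getCommitted() / (1024 * 1024) + " MB");
        System.out.println("max       = " + heap.getMax() / (1024 * 1024) + " MB");
    }
}

ps and friends will still report a larger number for the same process, for the reasons below.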
Shared libraries
OSes are highly complex beasts these days. To put things in Java terms, you can have a single JVM that is concurrently handling 100 simultaneous incoming https calls. One could want to see a breakdown of how much memory each of the currently 100 running 'handlers' is taking up. Okay... so under which of them do we 'file' the memory load of String, the class itself? Not any particular instance of String, but the code: the instructions for how .toLowerCase() runs are in memory too, someplace. The web framework needs it, so does the core JVM, and so does probably every single last one of those 100 concurrent handlers. So how do we 'bookkeep' this?
In other words, the memory load on an entire system cannot be strictly divided up as 'that memory is 100% part of that app, and this memory is 100% part of this app'. Shared libraries make that difficult.
The JVM is technically capable of rendering UIs, processing images, opening files both using the synchronous as well as the asynchronous API, and even the random access API if your OS offers a separate access library for it, sending network requests in async mode, in sync mode, and more. In effect, a JVM will immediately tell the OS: I can do allllll these things.
In my experience/recollection, most OSes report the total memory load of a single application as the sum of the memory it needs itself plus, in full, all the memory of every (shared) library it loads.
That means ps and friends overreport JVMs considerably: The JVM loads in a ton of libraries. This doesn't actually cost RAM (The OS also loaded these libraries, the JVM doesn't use any large DLLs/.SO/.JNILIB files of its own, just hooks up the ones the OS provides, pretty much all of them), but is often 'bookkept' as such. You know this is happening if this trivial app:
class Test {
    public static void main(String[] args) throws Exception {
        System.out.println("Hello!");
        Thread.sleep(100000L);
    }
}
already takes ~60MB or more.
I mean, if I have 12GB of memory, can I run 40 of these (@ 300MB)
That shared-library effect means each VM's memory load according to ps and friends is over-inflated by however much those shared libraries 'cost', because each JVM shares them; the OS only loads them once, not 40 times.
Stacks and permgen
The 'heap', which is where newly created objects go, is the largest chunk of any JVM's memory load. It's also generally the only one that JVM introspection tools like jconsole show you. However, it's not the only memory a JVM needs. There's a small slice it needs for its core self (the 'C code', so to speak). Each active thread also has a stack, and each stack needs memory: by default the size you pass with -Xss, times the number of concurrent threads. But that's not a certainty: you can construct a new thread with an alternate stack size (check the constructors of j.l.Thread). There used to be 'permgen', which is where class code lived. Modern JVM versions got rid of it; in general, newer JVM versions try to do more and more on the heap instead of in magic hard-to-introspect areas like permgen.
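For the stack part, the relevant constructor is the four-argument one on java.lang.Thread; a small sketch (the sizes here are arbitrary, and the VM treats stackSize only as a hint it may round or ignore):

public class StackSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            try {
                Thread.sleep(10_000L); // park so the stack stays allocated for a while
            } catch (InterruptedException ignored) {
            }
        };
        // Thread(ThreadGroup, Runnable, String, long stackSize): the last argument
        // requests a stack of roughly that many bytes for this one thread,
        // instead of the -Xss default.
        Thread smallStack = new Thread(null, work, "small-stack", 256 * 1024);
        smallStack.start();
        smallStack.join();
    }
}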
I mean, if I have 12GB of memory, can I run 40 of these (@ 300MB) before I hit an OutOfMemoryError?
Run all 40 at once, and always specify both -Xms and -Xmx, setting them to equal sizes. Assuming all those 40 JVMs are relatively stable in terms of how many concurrent threads they ever run, if you're ever going to run into memory issues, it'll happen immediately: because -Xms and -Xmx are equal, you've removed the dynamism from the situation. All JVMs pretty much instantly claim all the memory they will ever claim, so it either works or it doesn't. (Stacks mess with the cleanliness of this somewhat, hence the caveat about stable-ish thread counts.)
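A sketch of that experiment using ProcessBuilder; com.example.Worker and the 300m figure are placeholders for your actual main class and per-JVM heap size:

import java.util.ArrayList;
import java.util.List;

public class LaunchForty {
    public static void main(String[] args) throws Exception {
        List<Process> children = new ArrayList<>();
        for (int i = 0; i < 40; i++) {
            // Equal -Xms/-Xmx so each child claims its full heap immediately.
            ProcessBuilder pb = new ProcessBuilder(
                    "java", "-Xms300m", "-Xmx300m", "com.example.Worker");
            pb.inheritIO(); // show each child's stdout/stderr in this console
            children.add(pb.start());
        }
        for (Process p : children) {
            p.waitFor(); // if memory is going to run out, it happens at startup
        }
    }
}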

Java Heap Space Error - How To Increase Maximum Amount Able To Be Reserved? [Cannot Exceed 1505M]

I am running 64-bit Windows 7 with 4GB of RAM and 32-bit Java. I am trying to run a graph search algorithm in Eclipse. I commented absolutely everything out except for a simple println("Hello World"). After a lot of tinkering, I found that I cannot reserve more than 1505M-1507M (it varies within that range; I've no idea why). That is to say, I set the following as my JVM arguments:
-Xms1505M
I read online that I should be able to reserve a maximum of 2G. A quick ctrl-alt-del check showed that I have 2400M available and 1200M cached. Here is where things get strange: as a stupid experiment, I opened 50 tabs in Google Chrome so that I had 400M available memory and 450M cached. I ran my Eclipse program with the flag above and it still ran. I had reserved 1500M of non-existent RAM.
Someone please help! This program is for a grade and I've been stuck on this for hours.
An operating system with virtual memory can perform strange tricks, and the memory-usage statistics may not always tell you what you think they are. Some of the memory may be swapped out to disk, which sounds like what you're describing here, but some of the memory that's listed for each program is actually shared (e.g., copies of system libraries that are used by each program, but only one copy is loaded in memory).
The more fundamental question is why your graph algorithm is taking up such an inordinate amount of memory; unless you're trying to work on the global Internet routing table, you're probably implementing the algorithm incorrectly.
-Xms sets the minimum heap; in your case you need to change the max heap using -Xmx.
There are other posts here on SO that discuss -Xms vs -Xmx; here is one of them
A 32-bit Windows program runs in an emulated 32-bit environment which is designed to work just like Windows XP for compatibility. This means it also has the same limitations as 32-bit Windows, and you cannot have a heap larger than 1.2 - 1.4 GB, depending on what you have run before.
The simplest solution is: don't use 32-bit Java. 64-bit Java will run better/faster unless you are forced to use a 32-bit DLL. In that case I suggest you have one JVM running in 32-bit mode and communicate with it (RMI/messaging/shared memory) from a 64-bit program which does all the real work.
I read online that I should be able to reserve a maximum of 2G.
That was never possible with 32-bit Java on Windows. The problem is that the heap has to be contiguous and use whatever memory is left after all the shared libraries have been loaded.
A quick ctrl-alt-del check showed that I have 2400M available and 1200 cached.
Time to get some more memory I think. I wouldn't buy a laptop with less than 8 GB if you want to use memory seriously and I wouldn't buy a PC with less than 32 GB.
Here is where things get strange: As a stupid experiment, I opened 50 tabs of on google chrome such that I had 400 available memory, 450 cached. I ran my eclipse program with the flag above and it still ran. I reserved 1500M of non-existent RAM.
The OS has access to more memory, it just won't let you use it in a 32-bit emulation. You can have 32 GB of main memory and a 32-bit JVM still will not be able to allocate more.

Running 2 instances of JBoss on 1 machine. Getting "not enough space" error running native command

We have a 64-bit JBoss instance that deploys an Axis web service, which is just a front end to run a native executable command. When the web service is called, it executes this native command. The 64-bit instance runs with 3GB of memory.
We recently introduced a 2nd instance of JBoss running on the same physical machine. It runs in 32 bit mode, because it has to run some JNI 32 bit code. This second instance of JBoss is bound to ports-01 so that it runs on 8180 (basically +100 of the default JBoss ports). This instance runs with 512MB of memory.
Since introducing this second instance of JBoss, we are receiving "not enough space" error messages when the 64-bit instance tries to execute the native command. It's an IOException from Java, from the unix forkAndExec call. Everything I read says this has something to do with swap file size. Using the unix top command, it looks like the swap size never changes, and it is 3GB. When we run the 64-bit instance first, there seem to be no issues, but if the 32-bit instance starts first, we get this error. I'm wondering if the two instances are competing for resources, or if we really are running out of swap space. I'm also not sure whether JBoss uses swap space and how much, or whether Java handles that.
I guess I'm looking for any ideas or suggestions for a solution about this problem. The main pattern I seem to see is that if the 64 bit instance starts first, the native executable works fine, but if the 32 bit instance starts first, it has issues.
The OS handles swap space; Java has no idea about these things. Having any part of a Java process pushed out to swap is a very bad idea in any case.
I would make sure there is plenty of main memory left over once these two programs are running (not just the heaps, but the total memory used by both processes).
It turned out to be a swap space issue after all. We had 8GB of memory and 4GB of swap. One server was using 800MB of swap and the other 3.8GB, which put us over our limit.
Instead of using the top command in unix, we had to use swap -s to view the available swap space, and that was more accurate.
We created a temporary swap file with a command like mkfile 10240M /opt/myswapfile. Then we added it to the swap area on the server with a command swap -a /opt/myswapfile.
Now they seem to be working fine together.

How to get information about a different java process (another JVM instance running different processes) from one JVM process?

I'm writing unit tests to find just how much memory instances of MyClass take up.
I can't seem to do anything about the random, chaotic garbage collector that tells me my MyClass[] myinstances = new MyClass[10000]; takes up negative memory, so I decided to just start up 2 new JVM processes which simply run a class containing a main method that instantiates a huge array and hangs.
I know I can start up a new JVM with Runtime.getRuntime().exec("java my.package.SomeClassWithMainMethod");
So my question is: how do I get the info about the amount of memory taken up by the JVM I started?
Thx, you guys rule.
You can use JVisualVM or JConsole; they have been included in the JDK since 1.6.
I think you are better off using a profiler; jvisualvm is probably good enough and comes with your Java distribution. See this question: How to determine the actual memory usage of a data structure
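If a rough, programmatic number is enough, another option (not mentioned in the answers above) is to have the spawned JVM report its own heap usage on stdout and read that from the parent. A minimal sketch; the class names are made up, both classes are assumed to be on the default classpath, and Runtime's figures cover only the heap, not the whole process:

// File ArrayHog.java: allocates the array, reports its own heap usage, then hangs.
public class ArrayHog {
    public static void main(String[] args) throws Exception {
        Object[] instances = new Object[10000]; // stand-in for MyClass[]
        for (int i = 0; i < instances.length; i++) {
            instances[i] = new Object();
        }
        Runtime rt = Runtime.getRuntime();
        System.out.println("heap-used-bytes=" + (rt.totalMemory() - rt.freeMemory()));
        Thread.sleep(Long.MAX_VALUE); // hang, as described in the question
    }
}

// File Launcher.java: starts the child JVM and reads the line it printed.
public class Launcher {
    public static void main(String[] args) throws Exception {
        Process child = new ProcessBuilder("java", "ArrayHog").start();
        try (java.io.BufferedReader in = new java.io.BufferedReader(
                new java.io.InputStreamReader(child.getInputStream()))) {
            System.out.println("child says: " + in.readLine());
        }
        child.destroy();
    }
}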

Java memory usage on Linux

I'm running a handful of Java application servers that are all running the latest versions of Tomcat 6 and Sun's Java 6 on top of CentOS 5.5 Linux. Each server runs multiple instances of Tomcat.
I'm setting the -Xmx450m -XX:MaxPermSize=192m parameters to control how large the heap and permgen will grow. These settings apply to all the Tomcat instances across all of the Java Application servers, totaling about 70 Tomcat instances.
Here is the typical memory usage of one of those Tomcat instances as reported by Psi-probe:
Eden = 13M
Survivor = 1.5M
Perm Gen = 122M
Code Cache = 19M
Old Gen = 390M
Total = 537M
CentOS however is reporting RAM usage for this particular process at 707M (according to RSS) which leaves 170M of RAM unaccounted for.
I am aware that the JVM itself and some of its dependency libraries must be loaded into memory, so I decided to fire up pmap -d to find out their memory footprint.
According to my calculations that accounts for about 17M.
Next there is the Java thread stack, which is 320k per thread on the 32 bit JVM for Linux.
Again, I use Psi-probe to count the number of threads on that particular JVM; the total is 129 threads. So 129 × 320k ≈ 41M.
I've read that NIO uses memory outside of the heap, but we don't use NIO in our applications.
So here I've calculated everything that comes to (my) mind. And I've only accounted for 60M of the "missing" 170M.
What am I missing?
Try using the incremental garbage collector, using the -Xincgc command line option.
It's a little more aggressive in its overall GC effort, and has a special happy little anomaly: it actually hands back some of its unused memory to the OS, unlike the default and other GC choices!
This makes the JVM consume a lot less memory, which is especially good if you're running multiple JVMs on one machine, at the expense of some performance (though you might not notice it). The incremental GC is a little secret, it seems, because no one ever brings it up... it's been there for eons (the 90's even).
Arnar, during JVM initialization the JVM allocates memory (via mmap or malloc) of the size specified by -Xmx and MaxPermSize, so the JVM will allocate 450 + 192 = 642m of heap and permgen space for the application at the start of the process. So the Java heap space for the application is not 537m but 642m. If you now redo the calculation, it will account for your missing memory. Hope it helps.
Java allocates as much virtual memory as it might need up front; however, the resident size reflects how much you actually use. Note: many of the libraries and threads have their own overheads, and while you don't use direct memory yourself, that doesn't mean none of the underlying systems do. e.g. if you use NIO, it will use some direct memory even if you use heap ByteBuffers.
Lastly, 100 MB is worth about £8. It may be that it's not worth spending too much time worrying about it.
Not a direct answer, but, have you also considered hosting multiple sites within the same Tomcat instance? This could save you some memory at the expense of some additional configuration.
Arnar, the JVM also mmaps all jar files in use, which goes through NIO and contributes to the RSS. I don't believe those are accounted for in any of your measurements above. Do you by chance have a significant number of large jar files? If so, the pages used for those could be your missing memory.
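If you want to cross-check Psi-probe's per-pool figures and the thread count from inside one of those Tomcat JVMs, the standard java.lang.management API exposes both; a small sketch (it only sees what the JVM itself accounts for, not the extra RSS discussed above):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolReport {
    public static void main(String[] args) {
        // Per-pool usage: Eden, Survivor, Old Gen, Perm Gen, Code Cache, etc.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            if (usage != null) {
                System.out.println(pool.getName() + " = "
                        + usage.getUsed() / (1024 * 1024) + " MB used");
            }
        }
        // Live thread count, for the threads-times-Xss stack estimate.
        System.out.println("threads = "
                + ManagementFactory.getThreadMXBean().getThreadCount());
    }
}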
