Limit the amount of RAM the JVM will be allocated - java

I am running TWS from Interactive Brokers in Parallels on a Mac. When I use the cloud-based link or the stand-alone application, TWS takes up 99% of the available CPU. Is there a way I can limit the amount of RAM allocated to the JVM?
I have 4 GB of memory allocated to the Parallels VM. TWS is taking up about 433K of memory.
I added -Xmx300M -Xms300M to the command line for starting TWS, but this did nothing. When I start it up, it still consumes 99% of the CPU and has 400K of memory allocated.
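For reference, here is a minimal sketch (the class name is made up, not part of TWS) that prints the heap limits the running JVM actually received; the same Runtime calls work in any Java program, so they can confirm whether -Xmx/-Xms values on a command line are really reaching the JVM:
// HeapCheck.java - prints the heap limits the running JVM actually received.
// Run it, for example, as: java -Xms300m -Xmx300m HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap MB:   " + rt.maxMemory() / (1024 * 1024));
        System.out.println("total heap MB: " + rt.totalMemory() / (1024 * 1024));
        System.out.println("free heap MB:  " + rt.freeMemory() / (1024 * 1024));
    }
}
Note that -Xmx only caps the Java heap; it does not limit CPU usage.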

I found the problem to be with Parallels. I was using Parallels 8. When I used TWS on a stand-alone Windows machine, the CPU usage never exceeded 50%.
I created a new VM with Parallels 8, and the CPU usage was still 99%.
I upgraded to Parallels 11, created a new VM, and installed TWS. Now it uses less than 10% of the CPU.

Related

Windows memory management and java

I'm running a Windows 2016 (x64) server with 32GB RAM. According to Resource Monitor the memory map looks like this:
1MB Reserved, 17376MB In Use, 96MB Modified, 4113MB Standby, 11016MB Free. Summary:
15280MB Available,
4210MB Cached,
32767MB Total,
32768MB Installed
I have a java (64-bit JVM) service that I want to run on 8GB of memory:
java -Xms8192m -Xmx8192m -XX:MaxMetaspaceSize=128m ...
which results in
Error occurred during initialization of VM
Could not reserve enough space for object heap
I know that a 32-bit OS and a 32-bit JVM would limit the usable heap, but I verified both are 64-bit. I read that on 32-bit Windows / JVM the heap has to be contiguous. But here I had hoped to be able to allocate even 15GB for the heap, as over 15GB are 'Available' (available for whom / what?).
Page file size is automatically managed, and currently at 7680MB.
I'd be thankful for an explanation of why Windows refuses to hand out the memory (or why Java cannot make use of it), and what my options are (apart from resizing the host or using something like 4GB, which works but is insufficient for the service).
I have tried rebooting the server, but when it's this service's turn to start, other services have already "worked" the memory quite a bit.
Edit: I noticed that Resource Monitor has a graph called 'Commit Charge' which is over 90%. Task Manager has a 'Committed' line which (today) lists 32,9/40,6 GB. Commit charge explains the term, and yes, I've already seen the virtual-memory popups it mentions. It seems that, for a reason unknown to me, a very high commit charge has built up and prevents the 8 GB Java process from starting. This puts even more emphasis on the question: what does '15 GB Available' memory mean, and to whom is it available, if not to a process?
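For what it's worth, the figure that usually matters for 'Could not reserve enough space for object heap' is the commit limit (physical RAM plus page file) versus the current commit charge, not the 'Available' figure, because -Xms8192m asks Windows to commit the full 8 GB up front. Here is a small sketch, assuming a HotSpot/OpenJDK JVM where the com.sun.management extension of OperatingSystemMXBean is available, that prints those figures from inside a (smaller) JVM:
import java.lang.management.ManagementFactory;

public class CommitInfo {
    public static void main(String[] args) {
        // The cast to the com.sun.management interface is an assumption about the
        // runtime (HotSpot/OpenJDK); it exposes physical-memory and swap figures.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        long mb = 1024L * 1024L;
        System.out.println("physical total MB: " + os.getTotalPhysicalMemorySize() / mb);
        System.out.println("physical free  MB: " + os.getFreePhysicalMemorySize() / mb);
        System.out.println("swap total     MB: " + os.getTotalSwapSpaceSize() / mb);
        System.out.println("swap free      MB: " + os.getFreeSwapSpaceSize() / mb);
        System.out.println("committed virtual MB (this JVM): "
                + os.getCommittedVirtualMemorySize() / mb);
    }
}
'Available' counts physical pages that can be handed out without touching the page file (standby plus free); a new 8 GB commitment still has to fit under the commit limit, which a 90%+ commit charge suggests it no longer does.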

Ubuntu kernel killing java process even though it is not out of memory

I have an EC2 instance with 48 cores, 192 GB of memory and Ubuntu 18.04. I am running a Java application on it with max memory set to 128 GB. Every so often the Java application gets killed by the Linux kernel. I connected JVisualVM, and the GC logs also say that the Java VM is using at most about 50 GB of heap. So why is Linux killing the Java application? There is nothing else running on this machine, just the application. I tried running dmesg and what I see is:
[166098.587603] Out of memory: Kill process 10273 (java) score 992 or sacrifice child
[166098.591428] Killed process 10273 (java) total-vm:287522172kB, anon-rss:191924060kB, file-rss:0kB, shmem-rss:0kB
[166104.034642] oom_reaper: reaped process 10273 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
The key thing to look at is anon-rss:191924060kB. RSS is the Resident set size, which the Wikipedia article defines as
the portion of memory occupied by a process that is held in main memory
Putting the commas in, 191,924,060kB is just short of 192 GB. Of that, 50 GB is the portion of Java's heap -- the space that Java uses for objects allocated at run-time -- that's actually in use. The rest includes the JVM runtime, any libraries your program might be using, and of course your program itself.
The total virtual memory occupied by your program is 287.5GB; that presumably includes the other 78GB of the 128GB heap you've allocated.
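To watch this from inside the application, here is a rough sketch (Linux-specific, since it reads /proc/self/status; the class name is only for illustration) that compares what the JVM thinks it is using with the RSS figure the OOM killer acts on:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssVsHeap {
    public static void main(String[] args) throws Exception {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        long mb = 1024L * 1024L;
        // Heap as the JVM sees it: 'used' is data currently on the heap,
        // 'committed' is what the JVM has actually claimed from the OS for it.
        System.out.println("heap used MB:      " + mem.getHeapMemoryUsage().getUsed() / mb);
        System.out.println("heap committed MB: " + mem.getHeapMemoryUsage().getCommitted() / mb);
        System.out.println("non-heap used MB:  " + mem.getNonHeapMemoryUsage().getUsed() / mb);
        // Process as the kernel sees it: VmRSS corresponds to the anon-rss/file-rss
        // totals that show up in the OOM-killer log lines above.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS") || line.startsWith("VmSize")) {
                System.out.println(line);
            }
        }
    }
}
If committed heap plus non-heap is far below VmRSS, the remainder is native memory: direct buffers, thread stacks, JNI libraries, the GC's own bookkeeping, and so on.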

Big difference between JVM process size and Memory Heap size

I am developing a Java Swing application on Windows 8.1 64-bit with 4 GB RAM, using JDK 8u20 64-bit.
The problem appears when I launch the application with the NetBeans profiler using the Monitor option.
When the first JFrame is loaded, the application's memory heap is around 18 MB and the JVM process size is around 50 MB (image1).
Then, when I launch the other JFrame, which contains a JFXPanel with a WebView, the heap jumps to 45 MB and the JVM process jumps to 700 MB very fast (image2), which is very confusing.
Then, when I close the second JFrame, it gets disposed, System.gc() is called and the JVM performs a GC (most of the time); the heap drops to around 20 MB, but the JVM process size never drops (image3).
Why is there such a huge difference between the memory heap (45 MB) and the JVM process size (699 MB)?
Why does the JVM need all that memory, and how can I reduce that amount?
I am launching the app with these VM options:
-Xms10m -Xmx60m -Xss192k
-XX:+UseG1GC -XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=10 -XX:PermSize=20m
-XX:MaxPermSize=32m
EDIT :- I just read the question at that link (JVM memory usage out of control) and he has the same problem, but the situation is different: his heap size is around 33% of the total JVM process memory size, which in my case is less than 7%; he is doing multiple jobs simultaneously (a Tomcat webapp), which I am not (a Java Swing application); and he didn't launch his application with the same VM arguments I did.
UPDATE :-
after the first JFrame is launched (image1)
after the second JFrame is launched (image2)
after the second JFrame is closed (image3)
EDIT 2 :-
I just tried the same application with the same VM arguments above and added
-client
-XX:+UseCompressedOops
and used JDK 8u25 32-bit because, as mentioned in this answer https://stackoverflow.com/a/15471505/4231826, the 64-bit version doesn't include a client folder in the JRE and will ignore the -client argument.
The result is that the total process memory jumped to 540 MB when the second JFrame was open, and the heap sizes (at the three points) were almost the same numbers as in the 64-bit version. Does this confirm that this is a problem related to the JVM (the same heap sizes and a 260 MB difference in total process sizes)?
Virtual memory allocation is mostly irrelevant (see this answer for an explanation) and very different from actual memory usage. The JVM is not designed to limit virtual memory allocation; see the question about limiting virtual memory usage.
The end user may see a lot of virtual memory usage in the task manager, but that is mostly meaningless. The different numbers for memory usage shown in the Windows Task Manager are explained in this article. In summary: in the Windows Task Manager, look at "Memory (Private Working Set)" and "Page Fault Delta" (the relevance of the latter is explained in this answer).
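If you want to see where the non-heap memory actually goes, two hedged options: on a HotSpot JDK 8 JVM (as used here) you can start the application with -XX:NativeMemoryTracking=summary and then run jcmd <pid> VM.native_memory summary, which breaks the footprint into heap, class metadata, thread stacks, code cache, GC and internal buckets; and from inside the application, a small sketch along these lines lists the JVM's non-heap pools and the direct/mapped buffer pools (it will not show the native memory the JavaFX WebView's WebKit engine allocates, which is likely the bulk of the 700 MB here):
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class NonHeapPools {
    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // JVM-managed non-heap pools: Metaspace, Compressed Class Space, Code Cache, ...
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.println(pool.getName()
                        + " used MB: " + pool.getUsage().getUsed() / mb
                        + ", committed MB: " + pool.getUsage().getCommitted() / mb);
            }
        }
        // Direct and mapped ByteBuffers also live outside the heap.
        for (BufferPoolMXBean buf : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println("buffer pool '" + buf.getName()
                    + "' count: " + buf.getCount()
                    + ", memory used MB: " + buf.getMemoryUsed() / mb);
        }
    }
}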

How to control the amount of memory used by JBoss?

I have a Java EE application running on jboss-5.0.0.GA. The application uses the BIRT reporting tool to generate several reports.
The server has 4 cores at 2.4 GHz and 8 GB of RAM.
The startup script uses the following options:
-Xms2g -Xmx2g -XX:MaxPermSize=512m
The application has reached some stability with this configuration; some time ago I had a lot of crashes because memory was completely full.
Right now the application is not crashing, but memory is always fully used.
Example of top command:
Mem: 7927100k total, 7874824k used, 52276k free
The Java process shows a usage of 2.6g, and this is the only application running on this server.
What can I do to ensure an amount of free memory?
What can I do to try to find a memory leak?
Any other suggestion?
TIA
Based on the answer by mezzie:
If you are using Linux, what the kernel does with the memory is different from how Windows works. In Linux, it will try to use up all the memory. After it uses everything, it will then recycle the memory for further use. This is not a memory leak. We also have JBoss/Tomcat on our Linux server and we did research on this issue a while back.
I found more information about this:
https://serverfault.com/questions/9442/why-does-red-hat-linux-report-less-free-memory-on-the-system-than-is-actually-ava
http://lwn.net/Articles/329458/
And indeed, half of the memory is cached:
total used free shared buffers cached
Mem: 7741 7690 50 0 143 4469
If you are using Linux, what the kernel does with the memory is different from how Windows works. In Linux, it will try to use up all the memory. After it uses everything, it will then recycle the memory for further use. This is not a memory leak. We also have JBoss/Tomcat on our Linux server and we did research on this issue a while back.
I bet those are operating system memory values, not Java memory values. Java uses all the memory up to -Xmx and then starts to garbage collect, to vastly oversimplify. Use JConsole to see what the real Java memory usage is.
To make it simple, the JVM's maximum amount of memory used is roughly equal to MaxPermGen (used permanently while your JVM is running; it contains the class definitions, so it should not grow with the load on your server) + Xmx (the maximum size of the object heap, which contains all instances of the objects currently live in the JVM) + Xss × the number of threads (thread stack space; the number of threads running in your JVM can usually be limited on a server) + direct memory space (set by -XX:MaxDirectMemorySize=xxxx).
So do the math. If you want to be sure you have free memory left, you will have to limit MaxPermGen, Xmx and the number of threads allowed on your server.
The risk is that, if the load on your server grows, you can get an OutOfMemoryError...
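As a rough worked example with the settings from this question (the thread count and direct-buffer figure below are illustrative assumptions, not measured values):
2048 MB (-Xmx) + 512 MB (-XX:MaxPermSize) + ~200 MB of thread stacks (e.g. ~200 threads at ~1 MB each) + ~64 MB of direct buffers ≈ 2.8 GB
which is roughly in line with the 2.6g that top reports for the Java process; most of the remaining 'used' memory on the host is the kernel's page cache, as the free output above shows (4469 MB cached).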

Sun JVM Committed Virtual Memory High Consumption

We have a production Tomcat (6.0.18) server which runs with the following settings:
-server -Xms7000M -Xmx7000M -Xss128k -XX:+UseFastAccessorMethods
-XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=7009
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=/opt/apache-tomcat-6.0.18/conf/logging.properties
-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n
-Djava.endorsed.dirs=/opt/apache-tomcat-6.0.18/endorsed
-classpath :/opt/apache-tomcat-6.0.18/bin/bootstrap.jar
java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)
After the server has been running for some time, we see (via JConsole) the following memory consumption:
Current heap size: 3 034 233 kbytes
Maximum heap size: 6 504 832 kbytes
Committed memory:  6 504 832 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 128, Total time spent = 16 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 1 791, Total time spent = 17 minutes
Operating System: Linux 2.6.26-2-amd64
Architecture: amd64
Number of processors: 2
Committed virtual memory: 9 148 856 kbytes
Total physical memory:  8 199 684 kbytes
Free physical memory:     48 060 kbytes
Total swap space: 19 800 072 kbytes
Free swap space: 15 910 212 kbytes
The question is: why do we have so much committed virtual memory? Note that the max heap size is ~7 GB (as expected, since Xmx=7G).
top shows the following:
31413 root 18 -2 8970m 7.1g 39m S 90 90.3 351:17.87 java
Why does the JVM need an additional 2 GB(!) of virtual memory? Can I get a non-heap memory distribution just like in JRockit? http://blogs.oracle.com/jrockit/2009/02/why_is_my_jvm_process_larger_t.html
Edit 1: Perm is 36M.
It seems that this problem was caused by the very high number of page faults the JVM experienced. Most likely, when Sun's JVM experiences a lot of page faults it starts to allocate additional virtual memory (I still don't know why), which may in turn increase I/O pressure even more, and so on. As a result we got very high virtual memory consumption and periodic hangs (up to 30 minutes) on full GC.
Three things helped us get stable operation in production:
Decreasing the tendency of the Linux kernel to swap (for a description, see What Is the Linux Kernel Parameter vm.swappiness?) helped a lot. We have vm.swappiness=20 on all Linux servers which run heavy background JVM tasks.
Decreasing the maximum heap size (-Xmx) to prevent excessive pressure on the OS itself. We now use a 9 GB value on 12 GB machines.
And last but very important: code profiling and optimization of memory-allocation bottlenecks to eliminate allocation bursts as much as possible.
That's all. Now servers work very well.
-Xms7000M -Xmx7000M
That, to me, is saying to the JVM: "allocate 7 GB as the initial heap size, with a maximum of 7 GB".
So the process will always appear as 7 GB to the OS, as that's what the JVM has asked for via the Xms flag.
What it's actually using internally is what is being reported as the heap size of a few hundred MB. Normally you set a high Xms when you are preventing slowdowns due to excessive garbage collection. When the JVM hits a (JVM-defined) percentage of memory in use it'll do a quick garbage collection. If this fails to free up memory it'll try a detailed collection. Finally, if this fails and the max memory defined by Xmx hasn't been reached, it'll ask the OS for more memory. All this takes time and can really be noticeable on a production server; doing this in advance saves it from happening.
You might want to try hooking up JConsole to your JVM and looking at the memory allocation... Maybe your perm space is taking this extra 2 GB... The heap is only a portion of what your VM needs to be alive...
I'm not familiar with JConsole, but are you sure the JVM is using the extra 2 GB? It looks to me like it's the OS or other processes that bring the total up to 9 GB.
Also, a common explanation for a JVM using significantly more virtual memory than the -Xmx parameter allows is that you have memory-mapped files (MappedByteBuffer) or use a library that uses MappedByteBuffer.
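For illustration, here is a minimal sketch (the file path and class name are arbitrary, not taken from the question) showing how a MappedByteBuffer inflates committed virtual memory without using the heap:
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedDemo {
    public static void main(String[] args) throws Exception {
        long oneGb = 1024L * 1024L * 1024L;
        RandomAccessFile raf = new RandomAccessFile("/tmp/mapped-demo.bin", "rw");
        FileChannel ch = raf.getChannel();
        // Mapping grows the file to 1 GB and adds ~1 GB to the process's virtual
        // size (top's VIRT column) while allocating nothing on the Java heap.
        MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, oneGb);
        System.out.println("mapped bytes: " + buf.capacity());
        System.out.println("heap used MB: "
                + (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory())
                / (1024 * 1024));
        Thread.sleep(60000); // keep the mapping alive so it can be seen in top or pmap
        ch.close();
        raf.close();
    }
}
Running pmap -x <pid> against the Tomcat process would show such mappings, along with thread stacks and loaded libraries, as individual entries.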
