We have a production Tomcat (6.0.18) server which runs with the following settings:
-server -Xms7000M -Xmx7000M -Xss128k -XX:+UseFastAccessorMethods
-XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=7009
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=/opt/apache-tomcat-6.0.18/conf/logging.properties
-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n
-Djava.endorsed.dirs=/opt/apache-tomcat-6.0.18/endorsed
-classpath :/opt/apache-tomcat-6.0.18/bin/bootstrap.jar
java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)
After the server has been running for a while, JConsole reports the following memory consumption:
Current heap size: 3 034 233 kbytes
Maximum heap size: 6 504 832 kbytes
Committed memory: 6 504 832 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 128, Total time spent = 16 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 1 791, Total time spent = 17 minutes
Operating System: Linux 2.6.26-2-amd64
Architecture: amd64
Number of processors: 2
Committed virtual memory: 9 148 856 kbytes
Total physical memory: 8 199 684 kbytes
Free physical memory: 48 060 kbytes
Total swap space: 19 800 072 kbytes
Free swap space: 15 910 212 kbytes
The question is: why do we have so much committed virtual memory? Note that the max heap size is ~7 GB (as expected, since -Xmx=7000M).
top shows the following:
31413 root 18 -2 8970m 7.1g 39m S 90 90.3 351:17.87 java
Why does the JVM need an additional 2 GB of virtual memory? Can I get a breakdown of non-heap memory, like JRockit provides? See http://blogs.oracle.com/jrockit/2009/02/why_is_my_jvm_process_larger_t.html
Edit 1: Perm is 36M.
It seems this problem was caused by the very high number of page faults the JVM was experiencing. Most likely, when Sun's JVM experiences a lot of page faults it starts to allocate additional virtual memory (we still don't know why), which may in turn increase I/O pressure even further, and so on. As a result we got very high virtual memory consumption and periodic hangs (up to 30 minutes) on full GC.
Three things helped us achieve stable operation in production:
Decreasing the Linux kernel's tendency to swap (for a description see "What Is the Linux Kernel Parameter vm.swappiness?") helped a lot. We set vm.swappiness=20 on all Linux servers that run heavy background JVM tasks; see the command sketch after this list.
Decreasing the maximum heap size (-Xmx) to avoid putting excessive pressure on the OS itself. We now use a 9 GB heap on 12 GB machines.
And last but very important: profiling the code and optimizing memory-allocation hot spots to eliminate allocation bursts as much as possible.
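For reference, a minimal sketch of one way to set the swappiness value (assuming a sysctl-based Linux setup; exact file locations vary by distribution):

sysctl vm.swappiness                          # show the current value (the default is usually 60)
sysctl -w vm.swappiness=20                    # apply the new value immediately
echo "vm.swappiness=20" >> /etc/sysctl.conf   # keep the setting across reboots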
That's all. Now servers work very well.
-Xms7000M -Xmx7000M
That to me is saying to the JVM "allocate 7gb as an initial heap size with a maximum of 7gb".
So the process will always be 7gb to the OS as that's what the JVM has asked for via the Xms flag.
What it's actually using internally is what is reported as the heap size of a few hundred MB. Normally you set a high Xms when you want to prevent slowdowns due to excessive garbage collection. When the JVM hits a (JVM-defined) percentage of memory in use, it does a quick garbage collection. If this fails to free up memory, it tries a more thorough collection. Finally, if this fails and the maximum memory defined by Xmx hasn't been reached, it asks the OS for more memory. All of this takes time and can really be noticeable on a production server; asking for the memory in advance avoids it.
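To illustrate the distinction (a minimal sketch, not from the original answer), the committed, used and maximum heap values can be read from inside the JVM with the standard Runtime API:

// Minimal sketch: committed vs. used vs. max heap as seen from inside the JVM.
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long committed = rt.totalMemory();          // heap reserved from the OS (driven by -Xms and later growth)
        long used = committed - rt.freeMemory();    // heap actually occupied by objects
        long max = rt.maxMemory();                  // upper bound set by -Xmx
        System.out.printf("used=%d MB committed=%d MB max=%d MB%n",
                used >> 20, committed >> 20, max >> 20);
    }
}

With -Xms equal to -Xmx, the committed value stays at the maximum from startup, which is exactly the behavior described above.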
You might want to hook JConsole up to your JVM and look at the memory allocation. Maybe your Perm space is taking up this extra 2 GB. The heap is only a portion of what your VM needs to stay alive.
I'm not familiar with jconsole, but are you sure the JVM is using the extra 2Gb? It looks to me like it's the OS or other processes that bring the total up to 9Gb.
Also, a common explanation for a JVM using significantly more virtual memory than the -Xmx parameter allows is memory-mapped files (MappedByteBuffer), or a library that uses MappedByteBuffer.
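As a hedged illustration (the file name data.bin is invented for the example), a MappedByteBuffer grows the process's virtual size reported by top, but not the Java heap:

// Minimal sketch: a memory-mapped file counts against the process's virtual size, not against -Xmx.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapDemo {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("data.bin", "r");
        FileChannel ch = raf.getChannel();
        // The mapped region shows up in top/pmap as extra virtual memory of the java process,
        // while the Java heap usage stays unchanged.
        MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        System.out.println("Mapped " + buf.capacity() + " bytes outside the Java heap");
        ch.close();
        raf.close();
    }
}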
Related
I have 20 Spring Boot (2.3) embedded Tomcat applications running on a Linux machine with 8 GB of RAM. All of them are Java 1.8 apps. The machine was running out of memory, and as a result Linux started killing some of my app processes.
Using Linux top and Spring Boot Admin, I noticed that the max heap size was set to 2 GB:
java -XX:+PrintFlagsFinal -version | grep HeapSize
As a result, each of the 20 apps is trying to get 2 GB of heap (1/4 of physical memory). Using Spring Boot Admin I could see that only ~128 MB is actually being used, so I reduced the max heap size to 512 MB via java -Xmx512m ... Now Spring Boot Admin shows:
1.33 GB is allocated to non-heap space but only 121 MB is being used. Why is so much being allocated to non-heap space? How can I reduce it?
Update
According to top each Java process is taking around 2.4GB (VIRT):
KiB Mem : 8177060 total, 347920 free, 7127736 used, 701404 buff/cache
KiB Swap: 1128444 total, 1119032 free, 9412 used. 848848 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2547 admin 20 0 2.418g 0.372g 0.012g S 0.0 4.8 27:14.43 java
.
.
.
Update 2
I ran jcmd 7505 VM.native_memory for one of the processes and it reported:
7505:
Native Memory Tracking:
Total: reserved=1438547KB, committed=296227KB
- Java Heap (reserved=524288KB, committed=123808KB)
(mmap: reserved=524288KB, committed=123808KB)
- Class (reserved=596663KB, committed=83423KB)
(classes #15363)
(malloc=2743KB #21177)
(mmap: reserved=593920KB, committed=80680KB)
- Thread (reserved=33210KB, committed=33210KB)
(thread #32)
(stack: reserved=31868KB, committed=31868KB)
(malloc=102KB #157)
(arena=1240KB #62)
- Code (reserved=254424KB, committed=27120KB)
(malloc=4824KB #8265)
(mmap: reserved=249600KB, committed=22296KB)
- GC (reserved=1742KB, committed=446KB)
(malloc=30KB #305)
(mmap: reserved=1712KB, committed=416KB)
- Compiler (reserved=1315KB, committed=1315KB)
(malloc=60KB #277)
(arena=1255KB #9)
- Internal (reserved=2695KB, committed=2695KB)
(malloc=2663KB #19903)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=20245KB, committed=20245KB)
(malloc=16817KB #167011)
(arena=3428KB #1)
- Native Memory Tracking (reserved=3407KB, committed=3407KB)
(malloc=9KB #110)
(tracking overhead=3398KB)
- Arena Chunk (reserved=558KB, committed=558KB)
(malloc=558KB)
First of all: no, 1.33 GB is not actually allocated. In the screenshot you have 127 MB of non-heap memory allocated; the 1.33 GB is the maximum limit.
I see your metaspace is about 80 MB, which should not pose a problem. The rest of the memory can be made up of a lot of things: compressed class space, code cache, native buffers, etc.
To get the detailed view of what is eating up the offheap memory, you can query the MBean java.lang:type=MemoryPool,name=*, for example via VisualVM with an MBean plugin.
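If JMX access is awkward, the same pool information is available in-process through the standard java.lang.management API. A minimal sketch (not part of the original answer):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolReport {
    public static void main(String[] args) {
        // Lists heap and non-heap pools (Metaspace, Code Cache, Compressed Class Space, ...).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-30s type=%-16s used=%dKB committed=%dKB max=%dKB%n",
                    pool.getName(), pool.getType(),
                    u.getUsed() >> 10, u.getCommitted() >> 10,
                    u.getMax() < 0 ? -1 : u.getMax() >> 10);  // max is -1 when the pool is unbounded
        }
    }
}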
However, your apps may simply be eating too much native memory. For example many I/O buffers from Netty may be the culprit (used up by the java.nio.DirectByteBuffer). If that's the culprit, you can for example limit the caching of the DirectByteBuffers with the flag -Djdk.nio.maxCachedBufferSize, or place a limit with -XX:MaxDirectMemorySize.
For a definitive answer as to what exactly is eating your RAM, you'd have to create a heap dump and analyze it.
So, to answer your question "Why is so much being allocated to non-heap space? How can I reduce it?": there is not actually a lot allocated to non-heap space. Most of it is native buffers for I/O and JVM internals, and there is no universal switch or flag to limit all the different caches and pools at once.
Now to address the elephant in the room: I think your real issue stems from simply having very little RAM. You've said you are running 20 JVM instances limited to 512 MB of heap each on an 8 GB machine. That is unsustainable: 20 x 512 MB = 10 GB of heap, which is more than you can accommodate with 8 GB of total RAM, and that is before you even count the off-heap/native memory. You need to either provide more hardware resources, decrease the JVM count, or further decrease the heap/metaspace and other limits (which I strongly advise against).
In addition to what has already been stated, here's a very good article about Metaspace in the JVM, which by default reserves about 1 GB (though it may not actually use that much). So that's another thing you can tune using the flag -XX:MaxMetaspaceSize if you have many small apps and want to decrease the amount of memory used/reserved.
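Putting the flags discussed above together, a hedged launch-line sketch (app.jar and the specific values are placeholders, not taken from the question):

java -Xms256m -Xmx512m -XX:MaxMetaspaceSize=128m -XX:MaxDirectMemorySize=64m -Xss512k -jar app.jar

If cached direct buffers turn out to be the culprit, -Djdk.nio.maxCachedBufferSize=<bytes> can be added as well.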
If I deploy to my local machine, the application runs 4 times faster than when it is deployed to our Sun application server. I'm not getting any memory errors, and it does not matter how many sessions I have going; it just seems like every request waits before it runs. If I plug my machine into the same port as the server it still runs faster, so I'm guessing it's a WebLogic setting.
My startup specifics:
weblogic.Server
Virtual Machine: Java HotSpot(TM) 64-Bit Server VM version 14.3-b01
Vendor: Sun Microsystems Inc.
Name:
Uptime: 15 days 21 hours 18 minutes
Process CPU time: 16 days 16 hours 19 minutes
JIT compiler: HotSpot 64-Bit Server Compiler
Total compile time: 12 minutes
Live threads: 69
Peak: 71
Daemon threads: 68
Total threads started: 51,573
Current classes loaded: 27,654
Total classes loaded: 33,709
Total classes unloaded: 6,055
Current heap size: 1,324,827 kbytes
Maximum heap size: 1,867,776 kbytes
Committed memory: 1,730,240 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 153, Total time spent = 37 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 21,402, Total time spent = 40 minutes
Operating System: SunOS 5.10
Architecture: sparcv9
Number of processors: 48
Committed virtual memory: 3,013,032 kbytes
Total physical memory: 66,879,488 kbytes
Free physical memory: 21,949,936 kbytes
Total swap space: 20,932,024 kbytes
Free swap space: 20,932,024 kbytes
VM arguments:
-Dweblogic.nodemanager.ServiceEnabled=true
-Dweblogic.security.SSL.ignoreHostnameVerification=false
-Dweblogic.ReverseDNSAllowed=false
-Xms1024m -Xmx2048m
-XX:PermSize=256m -XX:MaxPermSize=512m
Try doing a memory analysis. One way is to take a heap dump during a transaction and perform an analysis of it.
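One way to capture such a dump is with the standard jmap tool (replace <pid> with the server's process id):

jmap -dump:live,format=b,file=heap.hprof <pid>

The resulting heap.hprof file can then be opened in any heap analyzer.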
Using a tool like JRockit Mission Control or Oracle VisualVM to perform the analysis is the best way to go about it. These tools let you see what happens in your JVM in real time: you connect to the JVM running your application and watch information like heap usage and GC activity. They have many more features and are definitely something a developer must have.
EDIT:
I have updated the link for mission control. Thanks Klara!
I have a Java (1.6u25) process running on Linux (CentOS 6.3 x64) with JAVA_OPTS="-server -Xms128M -Xmx256M -Xss256K -XX:PermSize=32M -XX:MaxPermSize=32M -XX:MaxDirectMemorySize=128M -XX:+UseAdaptiveSizePolicy -XX:MaxDirectMemorySize=128M -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:GCTimeRatio=39 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log". The Java app uses the Thrift 0.8.0 library.
Running the top command every day, I see the Java process's RES value keep increasing (from 80 MB to 1.2 GB one month after starting the app), but the JVM heap size stays around 100 to 200 MB, and the GC log shows about 1-2 PSYoungGen collections per minute and 1-2 old-generation collections per day, with no sign of a memory leak.
So why does the memory used by the Java process keep increasing and greatly exceed the JVM settings? I thought the memory actually used by the Java process would equal Xmx (256 MB) + MaxPermSize (32 MB) + MaxDirectMemorySize (128 MB) + the JVM's own overhead = about 416 MB?
Related info: Virtual Memory Usage from Java under Linux, too much memory used
I suggest you look at pmap output for the process; this will give you a breakdown of native memory usage (see the example commands after this list). Memory you don't have much control over includes:
The total stack space used by threads.
The size of memory-mapped files.
The size of shared libraries.
Native library memory usage, e.g. socket buffers (if you have enough sockets open).
Some combination of these accounts for the difference.
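A hedged example of how to inspect this on Linux (assuming the procps pmap tool; <pid> is the Java process id):

pmap -x <pid>             # per-mapping breakdown; large anonymous blocks are native allocations
pmap -x <pid> | tail -1   # the total line, for comparison with top's RES/VIRT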
Following are the snapshots I took after executing "Perform GC" from jvisualvm.
The first image shows heap stats and the second one shows perm gen stats. What I can't understand is this: when I triggered the GC, the utilized heap size decreased (as expected), but the allocated size of the permanent generation increased (though the utilized perm gen size remained the same). What could be the explanation for such behavior?
JVM arguments used
-Xbootclasspath/p:../xyz.jar
-Xbootclasspath/a:../abc.jar
-Djava.endorsed.dirs=../resolver
-Djava.library.path=../framework
-Djavax.management.builder.initial=JBeanServerBuilder
-Djavax.net.ssl.trustStore=../certs
-Dorg.mortbay.log.class=JettyLogger
-Xms128m
-Xmx256m
-Xdebug
-Xnoagent
-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=2000
Note: I have changed the names (e.g. xyz.jar) for proprietary reasons.
JVM Info:
JVM: Java HotSpot(TM) 64-Bit Server VM (23.6-b04, mixed mode)
Java: version 1.7.0_11, vendor Oracle Corporation
Java Home: /home/aniket/jdk1.7.0_11/jre
JVM Flags: <none>
How memory is allocated among the Heap/Perm/Eden/Young/S1/S2 etc. spaces depends on the underlying GC algorithm.
Memory allocation to the above spaces is not defined by absolute-value parameters; it is defined as ratios of the total heap/perm available to the JVM.
These two points suggest that when the heap size changes, the allocations to all spaces are re-evaluated to maintain the defined ratios.
The link below will be really useful:
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
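As a hedged illustration of that ratio-based sizing, HotSpot exposes flags such as -XX:NewRatio and -XX:SurvivorRatio (the values and app.jar below are only examples, not taken from the question):

java -Xms128m -Xmx256m -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:+PrintGCDetails -jar app.jar

With these example values the young generation is one third of the heap and each survivor space is one tenth of the young generation, so the absolute sizes of all spaces are recomputed whenever the heap grows or shrinks.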
Since nobody else has offered an answer, I'll make my comment an answer, although it is a little vague:
I'd guess that the GC re-evaluates the various sizes it has to control as part of the collection run, so it might decide that it is a little tight on the perm gen side of things and increase it.
Just a quick question on the memory usage of the play framework.
I have a production instance which appears to use 680768 kB of memory. Most of it is located in swap.
The (virtual) server has about 750 MB of RAM, but it also runs the MySQL server and 12 Apache virtual hosts. Sometimes it becomes temporarily unresponsive (or very slow) for short periods.
I guess it is because of the swapping (it is not the CPU).
Does the framework need that much memory?
I could limit the memory usage with a JVM parameter such as -Xmx256m, but what value should I put in, and why does it use so much memory?
This is the usage by Play! before and after start:
Java: ~~~~~
Version: 1.6.0_26
Home: /usr/lib/jvm/java-6-sun-1.6.0.26/jre
Max memory: 194641920
Free memory: 11813896
Total memory: 30588928
Available processors: 2

After restart:
Java: ~~~~~
Version: 1.6.0_26
Home: /usr/lib/jvm/java-6-sun-1.6.0.26/jre
Max memory: 194641920
Free memory: 9893688
Total memory: 21946368
Available processors: 2
I am assuming that the 680768 kB of memory you are reporting comes from an OS tool like ps or Task Manager. The total amount of memory used by the JVM is not what is causing the temporary freezing of the app. The likely cause of the pause is that the JVM garbage collector is running a full GC, which suspends all threads in the JVM while the full GC runs (unless you have a concurrent GC configured).
You should run the JVM that runs the Play framework with -verbose:gc -XX:+PrintGCDetails to see what the GC is doing.
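For example (a sketch; these are the classic HotSpot flags for Java 6, and myapp.jar is a placeholder for however you start Play):

java -Xmx256m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar myapp.jar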
Your question "Does the Play Framework need that much memory?" cannot be answered in general, because the amount of memory used depends on what your application does on a per-request basis. The JVM will also let free heap run down and then do a GC cycle to clean up the heap; a well-behaved JVM app shows a sawtooth pattern on the GC graph.
I don't know which JVM you are using; if you are using the HotSpot VM, read the JVM tuning guide: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html. You generally need to understand the following GC concepts for the guide to make sense:
Mark and Sweep Garbage Collection
Mark, Sweep and Compact Garbage Collection
Copy collector
Generational Garbage Collection
Parallel Garbage Collection
Concurrent Garbage Collection
http://www.amazon.com/Garbage-Collection-Handbook-Management-Algorithms/dp/1420082795/ is probably a good book on this subject
A couple of free tools that ship with the HotSpot JVM are jconsole and jvisualvm. jvisualvm has a nice plugin called VisualGC, which is great for learning how the HotSpot VM manages memory.
It depends on a lot of things, but yes, Java needs some memory for native allocations, the heap and the non-heap memory spaces.
The Play status says that your heap currently holds only 30588928 bytes, but the JVM may grow it up to the 194641920-byte maximum. You can try starting with -Xmx64m to limit heap allocation.
You would then save about 128 MB of RAM, but Java also allocates memory for the JVM itself, so the footprint of the process will be more than 64 MB; it depends on your platform, but it will be at least 200-250 MB.
Try limiting your heap to 64 MB, but 750 MB may not be enough to run both the JVM and MySQL.
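If I remember correctly, Play 1.x lets you pass such JVM options through conf/application.conf; treat this as an assumption and check the documentation for your version:

jvm.memory=-Xms64m -Xmx64m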
Keep in mind that you should avoid swapping with Java, because the heap is allocated as one contiguous block, so you end up swapping the entire heap in and out.