Can I do -Xmx1792m on a Linux box?
If my /proc/meminfo looks like this:
MemTotal:       10145678 kB
MemFree:          256128 kB
Cached:          9216534 kB
SwapTotal:       2097124 kB
SwapFree:        2045263 kB
Buffers:          243208 kB
Active:          3283536 kB
Inactive:        6224084 kB
VmallocTotal: 34359738367 kB
VmallocUsed:      303168 kB
VmallocChunk: 34359423100 kB
It is perfectly fine to run a 1.5 GB Java heap on a 10 GB box; why do you think otherwise? On Linux you need to add Cached and Buffers to MemFree to see how much memory is really unused. If you use the free command, it does that calculation for you. Make sure to leave some room for buffers and cache, of course.
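A quick sanity check (a sketch; free is part of procps and reads the same /proc/meminfo):
$ free -m    # "free" already folds Buffers and Cached into what is effectively available
$ awk '/^MemFree|^Buffers|^Cached/ {sum += $2} END {printf "%.0f MB effectively free\n", sum/1024}' /proc/meminfo
With the numbers above that is 256128 + 243208 + 9216534 kB, roughly 9.3 GB, far more than the 1792 MB requested.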
Your command is valid, but it's a bad idea. You only have 256128 kB of free memory, that's 256 MB. 256 < 1792.
Try ending some processes to free up RAM. You've got plenty; it's just currently all in use.
Related
I am running different Java containers in Kubernetes with OpenJDK 11.0.8, Payara Micro, and WildFly 20. Resources are defined as follows:
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"
OpenJDK is running with its default memory settings, which means a heap ratio of 25% of the container limit.
So I assume that 1Gi is the upper limit of memory the container can consume. But after some days this upper limit is exceeded, as the container's memory consumption increases slowly but steadily. In particular, it is the non-heap memory that is increasing, not the heap.
So I have two questions: why is the non-heap memory increasing, and why is the container/JDK/GC ignoring the container memory limit?
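For reference, the heap ratio and container awareness can be inspected or overridden from the command line (a sketch; the flags exist in OpenJDK 11, and app.jar is just a placeholder):
# show the RAM-percentage defaults the JVM resolved against the cgroup limit
java -XX:+PrintFlagsFinal -version | grep -E 'MaxRAMPercentage|InitialRAMPercentage|UseContainerSupport'
# the heap cap could be raised explicitly, e.g. to 50% of the container limit
java -XX:MaxRAMPercentage=50.0 -jar app.jar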
Example Measurement
This is some example data about the pod and the memory consumption to illustrate the situation.
POD information (I skipped irrelevant data here):
$ kubectl describe pod app-5df66f48b8-4djbs -n my-namespace
Containers:
  app:
    ...
    State:          Running
      Started:      Sun, 27 Sep 2020 13:06:44 +0200
    ...
    Limits:
      memory:  1Gi
    Requests:
      memory:  512Mi
    ...
QoS Class:       Burstable
Check Memory usage with kubectl top:
$ kubectl top pod app-5df66f48b8-4djbs -n my-namespace
NAME                   CPU(cores)   MEMORY(bytes)
app-587894cd8c-dpldq   72m          1218Mi
Checking memory limit inside the pod:
[root@app-5df66f48b8-4djbs jboss]# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
1073741824
VM flags inside the pod/JVM:
[root@app-5df66f48b8-4djbs jboss]# jinfo -flags <PID>
VM Flags:
-XX:CICompilerCount=2 -XX:InitialHeapSize=16777216 -XX:MaxHeapSize=268435456 -XX:MaxNewSize=89456640 -XX:MinHeapDeltaBytes=196608 -XX:NewSize=5570560 -XX:NonNMethodCodeHeapSize=5825164 -XX:NonProfiledCodeHeapSize=122916538 -XX:OldSize=11206656 -XX:ProfiledCodeHeapSize=122916538 -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseSerialGC
Heap info inside the pod/JVM:
[root@app-5df66f48b8-4djbs jboss]# jcmd <PID> GC.heap_info
68:
def new generation total 78656K, used 59229K [0x00000000f0000000, 0x00000000f5550000, 0x00000000f5550000)
eden space 69952K, 77% used [0x00000000f0000000, 0x00000000f34bfa10, 0x00000000f4450000)
from space 8704K, 59% used [0x00000000f4cd0000, 0x00000000f51e7ba8, 0x00000000f5550000)
to space 8704K, 0% used [0x00000000f4450000, 0x00000000f4450000, 0x00000000f4cd0000)
tenured generation total 174784K, used 151511K [0x00000000f5550000, 0x0000000100000000, 0x0000000100000000)
the space 174784K, 86% used [0x00000000f5550000, 0x00000000fe945e58, 0x00000000fe946000, 0x0000000100000000)
Metaspace used 122497K, capacity 134911K, committed 135784K, reserved 1165312K
class space used 15455K, capacity 19491K, committed 19712K, reserved 1048576K
VM Metaspace Info:
$ jcmd 68 VM.metaspace
68:
Total Usage - 1732 loaders, 24910 classes (1180 shared):
Non-Class: 5060 chunks, 113.14 MB capacity, 104.91 MB ( 93%) used, 7.92 MB ( 7%) free, 5.86 KB ( <1%) waste, 316.25 KB ( <1%) overhead, deallocated: 4874 blocks with 1.38 MB
Class: 2773 chunks, 19.04 MB capacity, 15.11 MB ( 79%) used, 3.77 MB ( 20%) free, 256 bytes ( <1%) waste, 173.31 KB ( <1%) overhead, deallocated: 1040 blocks with 412.14 KB
Both: 7833 chunks, 132.18 MB capacity, 120.01 MB ( 91%) used, 11.69 MB ( 9%) free, 6.11 KB ( <1%) waste, 489.56 KB ( <1%) overhead, deallocated: 5914 blocks with 1.78 MB
Virtual space:
Non-class space: 114.00 MB reserved, 113.60 MB (>99%) committed
Class space: 1.00 GB reserved, 19.25 MB ( 2%) committed
Both: 1.11 GB reserved, 132.85 MB ( 12%) committed
Chunk freelists:
Non-Class:
specialized chunks: 43, capacity 43.00 KB
small chunks: 92, capacity 368.00 KB
medium chunks: (none)
humongous chunks: (none)
Total: 135, capacity=411.00 KB
Class:
specialized chunks: 18, capacity 18.00 KB
small chunks: 64, capacity 128.00 KB
medium chunks: (none)
humongous chunks: (none)
Total: 82, capacity=146.00 KB
Waste (percentages refer to total committed size 132.85 MB):
Committed unused: 128.00 KB ( <1%)
Waste in chunks in use: 6.11 KB ( <1%)
Free in chunks in use: 11.69 MB ( 9%)
Overhead in chunks in use: 489.56 KB ( <1%)
In free chunks: 557.00 KB ( <1%)
Deallocated from chunks in use: 1.78 MB ( 1%) (5914 blocks)
-total-: 14.62 MB ( 11%)
MaxMetaspaceSize: unlimited
CompressedClassSpaceSize: 1.00 GB
InitialBootClassLoaderMetaspaceSize: 4.00 MB
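(The metaspace dump above only covers class metadata. For attributing the rest of the non-heap memory - thread stacks, code cache, GC structures, symbols - JVM Native Memory Tracking is the usual tool; a sketch below, noting that NMT must be enabled when the JVM starts, which was not the case for the measurements above, and app.jar is a placeholder.)
# start the JVM with Native Memory Tracking enabled (adds a small overhead)
java -XX:NativeMemoryTracking=summary -jar app.jar
# then ask the running JVM for a per-area breakdown
jcmd <PID> VM.native_memory summary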
Answer
The reason for the high values was incorrect metric data coming from kube-prometheus. After uninstalling kube-prometheus and installing metrics-server instead, all data is displayed correctly by kubectl top and now matches the values from docker stats. I do not know why kube-prometheus computed wrong data; in fact it was reporting double the actual values for all memory metrics. I will investigate this issue.
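Independent of the metrics pipeline, a pod's real usage can be cross-checked directly (a sketch; the cgroup v1 paths are the same ones shown above):
# inside the container: current usage and limit straight from the cgroup
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# on the node: what the container runtime reports
docker stats --no-stream
# and what the metrics pipeline reports
kubectl top pod app-5df66f48b8-4djbs -n my-namespace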
Sometimes I was getting a bad gateway error from a server when running a Jenkins job, and recently 'sometimes' became 'always'. We managed to find that it was most likely caused by:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x000000008c900000, 113246208, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 113246208 bytes for committing reserved memory.
An error report file with more information is saved as:
//hs_err_pid33.log
[ timer expired, abort... ]
Aborted (core dumped)
When I run free -m I get:
              total        used        free      shared  buff/cache   available
Mem:           7687        5983         209          83        1494        1248
Swap:             0           0           0
When I run less /proc/meminfo I get:
MemTotal: 7872324 kB
MemFree: 210708 kB
MemAvailable: 1281020 kB
Buffers: 346224 kB
Cached: 754116 kB
SwapCached: 0 kB
Active: 6580964 kB
Inactive: 526484 kB
Active(anon): 6012832 kB
Inactive(anon): 82220 kB
Active(file): 568132 kB
Inactive(file): 444264 kB
Unevictable: 3652 kB
Mlocked: 3652 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 284 kB
Writeback: 0 kB
AnonPages: 6010792 kB
Mapped: 102896 kB
Shmem: 85520 kB
Slab: 436552 kB
SReclaimable: 362696 kB
SUnreclaim: 73856 kB
KernelStack: 14336 kB
PageTables: 22640 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3936160 kB
Committed_AS: 9303016 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 419816 kB
DirectMap2M: 7661568 kB
DirectMap1G: 0 kB
The job is configured with JAVA_OPTS: '-server -Xmx512m -Dlogging.config=/config/log/logback.xml'. Increasing -Xmx512m to -Xmx1024m did not help. Also, several Jenkins builds are triggered one after another.
I would be thankful for tips on what to try / check to fix this.
Edit: when I run top -o RES I got:
Tasks: 183 total, 1 running, 182 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.8 us, 0.5 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 7872324 total, 242744 free, 6122952 used, 1506628 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 1282492 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16119 root 20 0 4766892 1.029g 16536 S 0.0 13.7 3:18.00 java
16095 root 20 0 4754704 1.014g 16640 S 0.0 13.5 2:26.41 java
16022 root 20 0 3595076 590968 16580 S 0.0 7.5 2:00.92 java
15912 root 20 0 3083232 504068 16024 S 0.0 6.4 3:08.03 java
16067 root 20 0 3106944 483148 16732 S 0.0 6.1 1:44.71 java
16037 root 20 0 3039324 464488 16692 S 0.0 5.9 1:37.74 java
16082 root 20 0 3111044 464216 16656 S 0.0 5.9 1:43.01 java
16032 root 20 0 3026936 427452 16040 S 0.0 5.4 1:31.43 java
15926 root 20 0 3118228 419600 16608 S 0.0 5.3 2:45.84 java
15967 root 20 0 3016596 357356 16472 S 0.0 4.5 1:20.40 java
1113 root 20 0 2520832 121108 1376 S 0.0 1.5 605:52.95 dockerd
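Two things seem worth checking here (a sketch, to be run as root; the 2G swap size is arbitrary): the meminfo above shows Committed_AS (~9.3 GB) already above CommitLimit (~3.9 GB) and no swap configured. If vm.overcommit_memory is set to 2 (strict accounting), that alone would explain the mmap failure; even if it is not, adding some swap gives the kernel headroom for commit spikes while the Jenkins builds overlap.
# how is overcommit configured? (2 = strict accounting against CommitLimit)
cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
# add a swap file to raise CommitLimit and give headroom (size is arbitrary here)
fallocate -l 2G /swapfile && chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile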
I created a service which has 11 scheduled jobs in total. Three of them are scheduled by a cron expression (two every 15 minutes and the last one every minute). These three tasks only monitor the service (checking Ehcache and the RAM used by the JVM). All the other scheduled tasks are annotated with the 'fixedDelay' attribute, so a new run should only be started once the last one has finished and x time has passed, right?
With http://ask.xmodulo.com/number-of-threads-process-linux.html I found out that I can check the number of threads per process by executing
cat /proc/PID/status
This resulted in the following
Name: jsvc
Umask: 0022
State: S (sleeping)
Tgid: 17263
Ngid: 0
Pid: 17263
PPid: 17260
TracerPid: 0
Uid: 99 99 99 99
Gid: 99 99 99 99
FDSize: 8192
Groups: 99 11332 16600 34691 50780 52730 52823 53043 54173
NStgid: 17263
NSpid: 17263
NSpgid: 17260
NSsid: 17260
VmPeak: 35247540 kB
VmSize: 35232620 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 5679220 kB
VmRSS: 5663344 kB
RssAnon: 5660016 kB
RssFile: 3328 kB
RssShmem: 0 kB
VmData: 32106616 kB
VmStk: 1012 kB
VmExe: 44 kB
VmLib: 16648 kB
VmPTE: 50908 kB
VmPMD: 128 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 19922
SigQ: 0/64039
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000004
SigIgn: 0000000000000000
SigCgt: 2000000181005ecf
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Speculation_Store_Bypass: vulnerable
Cpus_allowed: 7fff
Cpus_allowed_list: 0-14
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 5986
nonvoluntary_ctxt_switches: 26
So my first question is: what is the 'Threads' number telling me? Are there 19922 threads including the ones that have already ended, or are these only the currently active threads?
I also wondered why all of these threads are currently in the SLEEPING state...
I made a graph (#1) which displays the current number of threads for this process, and I can see that the number is not only increasing.
So why is this number so wiggly?
Should the subdirectory of a thread be deleted after the thread has finished?
And what about threads in the "SLEEPING" state - are they finished? Because as far as I can tell there is nothing for them to wait for...
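A quick way to see the actual scheduler state of every thread in the process (a sketch, using the PID from above; thread names only show up here if the JVM sets native thread names):
# histogram of thread states for PID 17263
grep -h '^State:' /proc/17263/task/*/status | sort | uniq -c
# which thread names dominate (names are truncated to 15 characters by the kernel)
grep -h '^Name:' /proc/17263/task/*/status | sort | uniq -c | sort -rn | head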
So, I found out that
the "Threads" number excludes every finished thread - so all of these threads are running or waiting for something.
That is also the reason why this number wiggles like that.
After checking my source code again, I found out that some ExecutorService objects were not correctly shut down, so I corrected that and got the following graph (which looks better!)
So if somebody else has similar issues, this is what I did:
Log in to the machine where the application is running
Get the PID of the process by running
ps aux | grep -i 'NAME' (replace NAME with the correct name of the application)
Get the number of running/waiting threads by executing cat /proc/[PID]/status
Create the graph data with for x in {1..100000}; do echo $(date) - $(find /proc/[PID]/task -mindepth 1 -maxdepth 1 | wc -l); sleep 1; done >> thread_counter.csv (the -mindepth 1 keeps the task directory itself out of the count)
I need to check whether a Java process is consuming more paging space on Linux and AIX.
Virtual memory size
To get just the virtual memory size you can read /proc/self/maps on Linux, which lists all the address ranges in use. Take the difference between the end and start of each range and you will know how much virtual memory is being used for what.
If you want more detail, such as the resident size, you can read /proc/self/smaps instead.
This gives fine-grained detail on every mapping, including how much is private, dirty, swapped, etc. For example:
00400000-004f4000 r-xp 00000000 08:01 12058626 /bin/bash
Size: 976 kB
Rss: 888 kB
Pss: 177 kB
Shared_Clean: 888 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 0 kB
Referenced: 888 kB
Anonymous: 0 kB
AnonHugePages: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd ex mr mw me dw sd
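Both files lend themselves to quick awk summaries (a sketch; <PID> is a placeholder, and strtonum requires GNU awk):
# total virtual address space mapped, summed from /proc/<PID>/maps
awk -F'[- ]' '{ sum += strtonum("0x"$2) - strtonum("0x"$1) } END { printf "%.1f MB mapped\n", sum/1048576 }' /proc/<PID>/maps
# total swap used by the process, summed from /proc/<PID>/smaps
awk '/^Swap:/ { sum += $2 } END { print sum " kB swapped" }' /proc/<PID>/smaps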
To list how much swap each process is using (Linux):
for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done
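To see the biggest swap consumers first, the same output can be piped through sort (a sketch building on the one-liner above):
for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done | sort -k 2 -n -r | head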
I am using ElasticBeanstalk to host my Grails app.
As of now I am using an m1.small instance, which has 1.7 GiB of main memory. My question is: what is the max heap and max PermGen I can allocate on this instance? As of now my configuration looks like below:
Initial JVM heap size: 512m
JVM command line options: -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+UseConcMarkSweepGC
Maximum JVM heap size: 512m
Maximum JVM permanent generation size: 256m
Any suggestions for selecting optimal numbers so that I can give Tomcat as much memory as possible while still leaving enough for the OS itself?
Rephrasing the question: what is the maximum, out of the 1.7 GiB, that I can allocate to something other than the OS (Tomcat in this case)?
You really need to profile your application stack.
It's easy to increase the size of the heap above the amount of physical memory by adding swap, which increases the available virtual memory.
For performance you want to find the right fit between instance and application.
Run your application stack in a Vagrant instance first and measure it. Adapt until you get consistent performance; note that m1.small instances may be slowed down by other tenants on the same physical host.
That said, you can run top, and sort on Virtual memory.
Example: (Compare VIRT, RES, and SHR columns)
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1220 root 20 0 308m 1316 868 S 0.0 0.3 0:00.43 VBoxService
1126 root 20 0 243m 1576 1040 S 0.0 0.3 0:00.02 rsyslogd
1324 root 20 0 114m 1380 764 S 0.0 0.3 0:00.84 crond
1966 vagrant 20 0 105m 1880 1536 S 0.0 0.4 0:00.12 bash
1962 root 20 0 95788 3740 2832 S 0.0 0.7 0:00.18 sshd
1965 vagrant 20 0 95788 1748 836 S 0.0 0.3 0:00.11 sshd
1323 postfix 20 0 78868 3280 2448 S 0.0 0.7 0:00.00 qmgr
1322 postfix 20 0 78800 3232 2408 S 0.0 0.6 0:00.00 pickup
1314 root 20 0 78720 3252 2400 S 0.0 0.6 0:00.01 master
1238 root 20 0 64116 1180 512 S 0.0 0.2 0:00.80 sshd
This is on a rather small vagrant instance:
$ cat /proc/meminfo
MemTotal: 502412 kB
MemFree: 350976 kB
Buffers: 27148 kB
Cached: 45668 kB
SwapCached: 0 kB
Active: 45240 kB
Inactive: 39616 kB
Active(anon): 12164 kB
Inactive(anon): 44 kB
Active(file): 33076 kB
Inactive(file): 39572 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048568 kB
SwapFree: 1048568 kB
Dirty: 4 kB
Writeback: 0 kB
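As a rough starting point only (a sketch under the assumption that Tomcat is the only major process on the instance; the real numbers should come from measuring as described above): on 1.7 GiB you generally want heap + PermGen + JVM overhead (thread stacks, code cache) to stay around 1.2-1.3 GiB, so that 400-500 MiB is left for the OS and page cache.
# hypothetical JAVA_OPTS for a 1.7 GiB m1.small; tune after measuring
JAVA_OPTS="-Xms768m -Xmx768m -XX:MaxPermSize=256m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"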