I am deploying a Spring Boot application on Kubernetes using Jib. When the service starts, memory usage is around 300 MB, but it grows to 1.3 GB over time. How can I avoid this increase when there is no traffic? The application is up and running, and the API gateways are not yet open to users, yet memory keeps incrementing over time.
Kubernetes deployment configuration:
# Source: services/charts/login/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: login
    app.kubernetes.io/version: 1.16.0
  name: login
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/name: login
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/name: login
    spec:
      containers:
        - env:
            - name: APP_NAME
              value: login-release-name
            - name: JAVA_TOOL_OPTIONS
              value: -Dspring.profiles.active=prod
          image: dockerregistry.com/login:1.0.0
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  - sh
                  - -c
                  - sleep 10
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
          name: login
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
          resources:
            limits:
              cpu: 2000m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 1Gi
      imagePullSecrets:
        - name: regcred
      terminationGracePeriodSeconds: 60
Spring Boot configuration for Kubernetes:
server.port=8080
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=45s
server.tomcat.accept-count=100
server.tomcat.max-connections=8000
server.tomcat.connection-timeout=10000
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=10
spring.datasource.url=jdbc:postgresql://${DB_HOST:#{"postgres"}}/postgres
spring.datasource.username=${DB_USER:#{"postgres"}}
spring.datasource.password=${DB_PASSWORD:#{"na"}}
spring.datasource.type=org.springframework.jdbc.datasource.DriverManagerDataSource
spring.datasource.driver-class-name=org.postgresql.Driver
Do we need to configure anything to keep memory usage within the 1 GB limit? As it is now, Kubernetes will kill the pod if it goes beyond 1 GB.
I am creating the image using Jib:
mvn compile com.google.cloud.tools:jib-maven-plugin:3.3.0:dockerBuild -Dimage=login -DskipTests
To limit the max heap size for your Spring Boot app, add a max memory argument to JAVA_TOOL_OPTIONS (e.g. -Xmx1024M).
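For example, a minimal sketch of that change in the deployment's container env above (the -Xmx value here is illustrative; it leaves part of the 1Gi container limit as headroom for non-heap memory):

- name: JAVA_TOOL_OPTIONS
  value: -Dspring.profiles.active=prod -Xmx768M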
To see why memory consumption grows, use VisualVM (https://visualvm.github.io/download.html) to connect to your process, take a heap dump, and analyze it.
After a long analysis, I found that there was no memory leak in the application.
It was due to the container base image, so I switched to openjdk:8-jdk-alpine using the -Djib.from.image= option in Jib.
First I added some JVM options to set the JVM heap, but it behaved the same:
-XX:+UseG1GC -Xms100M -Xmx1G (this switches garbage collection to G1 and sets the max heap to 1 GB)
Using VisualVM, as suggested by @Mihai Pasca, I analyzed the memory usage in the container.
Updated Jib command:
mvn compile com.google.cloud.tools:jib-maven-plugin:3.3.0:dockerBuild -Djib.from.image=openjdk:8-jdk-alpine -Dimage=$imageName -DskipTests -Djib.container.environment=JAVA_TOOL_OPTIONS="-XX:+UseG1GC -Xms100M -Xmx1G -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=9010 -Djava.rmi.server.hostname=localhost" -Djib.container.ports=8080,9010
Then I connected to port 9010 from VisualVM using the Add JMX Connection option.
I found no excessive usage, and the GC works fine. Memory usage by the container is also fine and no longer grows continuously: it goes up and comes back down when idle, whereas previously it grew continuously. I have now started the same image in Kubernetes, and it is working fine.
Related
How do you set up a docker-compose file for a Spring application that has client-side load balancing with Ribbon? Say my application.properties file specifies server.port=8000. I need to create 3 additional copies of the service that run on ports other than 8000 (exposed or not). How do you achieve this without generating different images or using an orchestration tool?
This may solve your problem.
version: '3.5'
services:
  myapp:
    ports:
      - "8000"
      # port 8000 is mapped to a random host port number
    # deploy:
    #   mode: replicated
    #   replicas: 3
docker-compose up -d --scale myapp=3 myapp
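Alternatively (a sketch, assuming you deploy to a Docker Swarm with docker stack deploy, which honors the deploy key commented out above), the replica count can live in the compose file itself; the image name here is hypothetical:

version: '3.5'
services:
  myapp:
    image: myapp:latest  # hypothetical image name
    ports:
      - "8000"  # container port 8000, mapped to a random host port per replica
    deploy:
      mode: replicated
      replicas: 3

Then run docker stack deploy -c docker-compose.yml mystack to start all three replicas.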
I am trying to get the pod name inside the Tomcat container at startup. I have already exposed the pod's full name as an environment variable using the Kubernetes Downward API, as below in the search.yaml file (only a portion of the file is attached).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: dev
  labels:
    app: search
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search
  template:
    metadata:
      labels:
        app: search
    spec:
      hostname: search-host
      imagePullSecrets:
        - name: regcred
      containers:
        - name: search-container
          image: docker.test.net/search:develop-202104070845
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: SERVER_GROUP
              value: "SEARCH"
            - name: MIN_TOMCAT_MEMORY
              value: "512M"
            - name: MAX_TOMCAT_MEMORY
              value: "5596M"
            - name: DOCKER_TIME_ZONE
              value: "Asia/Colombo"
            - name: AVAILABILITY_ZONE
              value: "CMB"
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "2048Mi"
              cpu: "1"
            limits:
              memory: "2048Mi"
              cpu: "2"
After the pod is running, this environment variable is available at the Docker level.
Pod details
NAME READY STATUS RESTARTS AGE
search-56c9544d58-bqrxv 1/1 Running 0 4s
Pod environment variable for pod name
POD_NAME=search-56c9544d58-bqrxv
When I accessed this as below in the Tomcat container's Java code, in a jar called BootsTrap.jar, it returned null.
String dockerPodName = System.getenv( "POD_NAME" );
Is it because the pod is not up and running before the Tomcat container is initialized, or is accessing the environment variable in Java incorrect, or is there another way of accessing the pod's environment variables through Java?
You are setting MY_POD_NAME as the environment variable, but doing the lookup for POD_NAME. Use the same name in the Java code and in the deployment.
Note: Your YAML as posted seems to have wrong indentation; I assume that is just a copy-paste artifact. If that is not the case, it could lead to rejected changes to the Deployment, since invalid YAML is refused.
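For reference, a minimal sketch of a matching pair, standardizing on the name POD_NAME on both sides:

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

and in the Java code: String dockerPodName = System.getenv( "POD_NAME" );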
I am trying to deploy my JHipster (v5.5.0) project onto Kubernetes (v1.16.3), but the pod keeps failing on an attempt to read the database. I assume it's because the /var/lib/h2/data directory is not allowing the user read/write access.
Here is my YAML that creates the deployment/pod. I have another YAML that creates the PV and PVC.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portal
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    spec:
      containers:
        - name: portal
          # for argoCD
          image: xxxx.xxxx.com/appliances/kubernetes/portal:temp-test-17
          envFrom:
            - configMapRef:
                name: portal-config
          # NOTE: ports.protocol used to be set to TCP in the okd yaml. TCP is the default, so we no longer set it.
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          # resources:
          #   requests:
          #     cpu: 1
          #     memory: 1Gi
          #   limits:
          #     memory: 4Gi
          resources:
            limits:
              cpu: "0.5"
              memory: "2048Mi"
            requests:
              cpu: "0.1"
              memory: "64Mi"
          # an image pull policy of IfNotPresent is useful when docker is less available, but requires updating tags more often during development. Originally this was "Always".
          imagePullPolicy: IfNotPresent
          volumeMounts:
            # NOTE: if we use SSL, we will need a 'keystores' volume mount
            # - mountPath: /var/run/secrets/java.io/keystores
            #   name: keystore-volume
            # Volume mount for the database files. Stored on a PV so we can upgrade without removing stored DB data.
            - mountPath: /var/lib/h2/data
              name: portal-db-vol01
          # DEBUG USE ONLY - run as root with elevated permissions.
          securityContext:
            # allowPrivilegeEscalation: true
            # capabilities: {}
            # privileged: false
            runAsNonRoot: true
            runAsUser: 950
      imagePullSecrets:
        - name: regcred-nexus
      # Monitor for future clean deployment: make sure it doesn't create 2 PVCs
      volumes:
        - name: xxx-db-vol01
          persistentVolumeClaim:
            claimName: xxxx-db-pvc-volume01
      # terminationGracePeriodSeconds - we allow 30 seconds for DB cleanup
      terminationGracePeriodSeconds: 30
Here's the error:
Caused by: java.lang.IllegalStateException: Could not open file nio:/var/lib/h2/data/odp.mv.db [1.4.200/1]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:950)
at org.h2.mvstore.FileStore.open(FileStore.java:179)
at org.h2.mvstore.MVStore.<init>(MVStore.java:381)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:3579)
at org.h2.mvstore.db.MVTableEngine$Store.open(MVTableEngine.java:170)
... 51 common frames omitted
Caused by: java.io.FileNotFoundException: /var/lib/h2/data/odp.mv.db (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:124)
at org.h2.store.fs.FileNio.<init>(FilePathNio.java:43)
at org.h2.store.fs.FilePathNio.open(FilePathNio.java:23)
at org.h2.mvstore.FileStore.open(FileStore.java:153)
... 54 common frames omitted
And here's something I threw into the Dockerfile to force this to work, but it seems like the .../h2/data directory is still not getting the right permissions:
USER root
RUN mkdir -p /var/lib/h2/data
RUN chmod -R 777 /var/lib/h2
RUN useradd -d /home/user -m -s /bin/bash user
USER user
Change your /var/lib/h2/data to something like /usr/share/..., and make sure your image has more permissions on /var/lib (such as 777)
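Another option (a sketch, assuming the volume's filesystem supports group ownership changes) is to keep /var/lib/h2/data and let Kubernetes make the mounted volume writable for the non-root user via a pod-level fsGroup:

securityContext:
  runAsNonRoot: true
  runAsUser: 950
  fsGroup: 950  # the kubelet makes the volume group-owned and group-writable for this GID

With fsGroup set, the PV mounted at /var/lib/h2/data is writable by user 950 without any chmod 777 in the Dockerfile.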
I am new to Kubernetes. I recently set up a Kubernetes cluster with 1 master and 1 node.
I am able to start a Docker container by running sudo docker run <docker-image> on my node machine.
But I failed to start the Docker container as a pod using a Kubernetes YAML file, by running sudo kubectl create -f deployment.yml.
I described the pod and saw this error message:
Last State: Terminated
Reason: ContainerCannotRun
Message: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"HOSTNAME\": executable file not found in $PATH": unknown
Exit Code: 128
The Docker container is supposed to start a Java executable.
This is my deployment file:
kind: Service
apiVersion: v1
metadata:
  name: service1-service
spec:
  selector:
    app: service1
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 26666
      # Port to forward to inside the pod
      targetPort: 26666
      # Port accessible outside cluster
      nodePort: 26666
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service1-deployment
spec:
  selector:
    matchLabels:
      app: service1
  replicas: 1
  template:
    metadata:
      labels:
        app: service1
    spec:
      containers:
        - name: service1
          image: service1-docker-image
          imagePullPolicy: Never
          ports:
            - containerPort: 26666
          # args: ["HOSTNAME", "KUBERNETES_PORT"]
In this deployment file, I try to create an nginx service and one Java web application service.
Is it because I defined the wrong apiVersion and kind?
Any help would be appreciated.
Look at this error: exec: \"HOSTNAME\": executable file not found in $PATH.
I had a similar error when the container could not locate the binary for the Docker CMD, because I had given it the wrong path. Check the path to the file, and that should do the trick.
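In this deployment the commented-out args line points at the cause: without a command, args replaces the image's CMD, so Kubernetes tried to execute the literal string "HOSTNAME" as a binary. A sketch of passing such a value as an argument instead (the java entrypoint and jar path are hypothetical):

containers:
  - name: service1
    image: service1-docker-image
    imagePullPolicy: Never
    command: ["java", "-jar", "/service1.jar"]  # hypothetical entrypoint
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    args: ["$(POD_NAME)"]  # $(VAR) is expanded from the env list above
    ports:
      - containerPort: 26666

Note that $(VAR) expansion only covers variables defined in the container's env list; values like KUBERNETES_PORT are better read from the environment at runtime.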
I need to create a Multibranch Jenkins job to deploy a .war file to Tomcat running on Kubernetes. Basically, I need the following:
A way to install Tomcat on the Kubernetes platform.
Deploy my war file on this newly installed Tomcat.
I need to make use of a Dockerfile to make this happen.
PS: I am very new to Kubernetes and Docker and need basic details as well. I tried finding tutorials but couldn't find a satisfactory article.
Any help will be highly appreciated.
Docker part
You can use the official tomcat Docker image.
In your Dockerfile, just copy your war file into the /usr/local/tomcat/webapps/ directory:
FROM tomcat
COPY app.war /usr/local/tomcat/webapps/
Build it:
docker build --no-cache -t <REGISTRY>/<IMAGE>:<TAG> .
Once your image is built, push it into a Docker registry of your choice.
docker push <REGISTRY>/<IMAGE>:<TAG>
Kubernetes part
1) Here is a simple Kubernetes Deployment for your tomcat image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: <REGISTRY>/<IMAGE>:<TAG>
          ports:
            - containerPort: 8080
This Deployment definition will create a pod based on your tomcat image.
Put it in a yml file and execute kubectl create -f yourfile.yml to create it.
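To check that the pod came up (the label selector matches the Deployment above):

kubectl get pods -l app=tomcat
kubectl rollout status deployment/tomcat-deployment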
2) Create a Service:
kind: Service
apiVersion: v1
metadata:
  name: tomcat-service
spec:
  selector:
    app: tomcat
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
You can now access your pod inside the cluster with http://tomcat-service.your-namespace/app (because your war is called app.war)
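To try it from your workstation without an Ingress (a quick sketch using port forwarding):

kubectl port-forward service/tomcat-service 8080:80

Then open http://localhost:8080/app locally.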
3) If you have an Ingress controller, you can create an Ingress resource to expose the application outside the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /app
            backend:
              serviceName: tomcat-service
              servicePort: 80
Now access the application using http://ingress-controller-ip/app