Spring Cloud Kubernetes reload timing issue - Java

I stumbled upon a very subtle issue trying to implement ConfigMap property source live-reloading for my app deployed to K8S.
Here are a few config snippets from my current project:
application.yaml
spring:
  application:
    name: myapp
  cloud:
    kubernetes:
      config:
        enabled: true
        name: myapp
        namespace: myapp
        sources:
          - namespace: myapp
            name: myapp-configmap
      reload:
        enabled: true
        mode: event
        strategy: refresh
    refresh:
      refreshable:
        - com.myapp.PropertiesConfig
      extra-refreshable:
        - javax.sql.DataSource
myapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: myapp
  name: myapp
  namespace: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      name: myapp-backend
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        name: myapp-backend
    spec:
      serviceAccountName: myapp-config-reader
      volumes:
        - name: myapp-configmap
          configMap:
            name: myapp-configmap
      containers:
        - name: myapp
          image: eu.gcr.io/myproject/myapp:latest
          volumeMounts:
            - name: myapp-configmap
              mountPath: /config
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: myapp-configmap
          env:
            - name: DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-db-credentials
                  key: password
myapp-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-configmap
  namespace: myapp
data:
  SPRING_PROFILES_ACTIVE: dtest
  application.yml: |-
    reload.message: 1
PropertiesConfig.java
@Data
@Configuration
@ConfigurationProperties(prefix = "reload")
public class PropertiesConfig {
    private String message;
}
I'm using the following dependencies in my Maven POM:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.4.RELEASE</version>
</parent>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-kubernetes-config</artifactId>
    <version>1.1.1.RELEASE</version>
</dependency>
I can successfully deploy myapp to my K8S cluster.
I have a scheduled task printing propertiesConfig.getMessage() every 10 seconds.
Therefore, I see a series of "1" in my log as myapp starts.
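For context, the task looks roughly like this (a minimal sketch; the class name and log format are illustrative, not the exact project code, and it assumes @EnableScheduling is declared on some configuration class):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ReloadMessagePrinter {

    private final PropertiesConfig propertiesConfig;

    public ReloadMessagePrinter(PropertiesConfig propertiesConfig) {
        this.propertiesConfig = propertiesConfig;
    }

    // Logs the current value of reload.message every 10 seconds
    @Scheduled(fixedRate = 10_000)
    public void print() {
        System.out.println("reload.message = " + propertiesConfig.getMessage());
    }
}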
Right after that, I change the reload.message property in my ConfigMap to "2". What happens?
- In less than one second the 'event' is triggered, and k8s calls my /actuator/refresh Spring Boot endpoint;
- I still see "1" in the log, because of the following:
- /config/application.yml (the mounted volume) takes ~10s to update; only then can I see reload.message=2 in there;
- the 'refresh' happened just a few seconds earlier, when the volume was not yet updated!
In addition, I tried other combos: mode polling, strategy restart_context, and so on.
But... I definitely do want event+refresh! It's the required solution for our use case.
My questions:
- Can I set some kind of "delay" for the refresh event, to give the volume the time it needs to sync with the ConfigMap?
- Can I configure my ConfigMap properties in the deployment without using volumeMounts at all? (If I remove the ConfigMap volume now, Spring simply doesn't pick up the properties from the ConfigMap.)

The application.yml file in the project and the one described in the configmap conflict, somehow.
I fixed that by renaming the one in the project to bootstrap.yml
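For reference, a minimal sketch of what the renamed bootstrap.yml could contain, reusing the values from the question (spring-cloud-kubernetes-config resolves the ConfigMap property source during the bootstrap phase, via the Kubernetes API):

spring:
  application:
    name: myapp
  cloud:
    kubernetes:
      config:
        enabled: true
        name: myapp
        namespace: myapp
        sources:
          - namespace: myapp
            name: myapp-configmap
      reload:
        enabled: true
        mode: event
        strategy: refresh

When the properties come from this API-based property source rather than from the mounted /config/application.yml, the ~10s volume sync delay described above should no longer decide what a refresh sees; whether that holds in a given setup depends on which property source wins.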

Related

Java Spring active profile in Kubernetes cluster

I want to start a Java Spring app with an active profile.
I build the Docker image in GitLab CI/CD using the Maven wrapper:
./mvnw compile jib:build -Dimage=image/sms-service:1
After that I deploy the app in K8S.
Now I want to run it with an active profile. What is the best way? How can I define in K8S which profile to run?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sms-service
  namespace: sms-service
spec:
  selector:
    matchLabels:
      app: sms-service
  replicas: 4 # tells deployment to run 4 pods matching the template
  template:
    metadata:
      labels:
        app: sms-service
    spec:
      containers:
        - name: sms-service
          image: image/sms-service:1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: sms-service
Set the SPRING_PROFILES_ACTIVE environment variable to the profile(s) you want to run.
You can set it in the deployment YAML or at build time in your image, but it is usually better to add it to the deployment.
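For example, a minimal sketch of setting the variable directly on the container in the Deployment (name and image taken from the question; the profile value is illustrative):

spec:
  template:
    spec:
      containers:
        - name: sms-service
          image: image/sms-service:1
          env:
            - name: SPRING_PROFILES_ACTIVE  # standard Spring Boot variable
              value: "prod"                 # comma-separate multiple profiles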
Alternatively, create a new file named configmap.yaml under the k8s config folder and add the following lines:
apiVersion: v1
kind: ConfigMap
metadata:
  name: blabla
  namespace: bla
data:
  application.yaml: |
    spring:
      profiles:
        active: prod # here goes the profile
This makes the configuration available when the container starts, provided the ConfigMap is actually mounted into the container or loaded by the application (for example via Spring Cloud Kubernetes).

Kubernetes deployment error with Java/Micronaut: ERR_CONNECTION_REFUSED

I am trying to deploy an app with 3 services: a frontend (Angular), backend 1 (Java/Micronaut), and backend 2 (Java/Micronaut).
My frontend works properly, but the Java apps are not working.
Sometimes I have observed one start about 20 minutes after deploying the Java app, but this time it does not work even after 1 hour.
The deployment, pod, and service are all in a running state in Kubernetes, but when I try to hit the URL I get ERR_CONNECTION_REFUSED:
deployment.yaml for java app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentication-deploy
  labels:
    name: authentication-deploy
    app: supply-chain-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: authentication-pod
      app: supply-chain-app
  template:
    metadata:
      name: authentication-pod
      labels:
        name: authentication-pod
        app: supply-chain-app
    spec:
      containers:
        - name: authentication
          image: cawishika/authentication-service:1.1
          ports:
            - containerPort: 80
service.yaml for java app
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  labels:
    name: authentication-service
    app: supply-chain-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30006
  selector:
    name: authentication-pod
    app: supply-chain-app
Dockerfile
FROM adoptopenjdk/openjdk11:latest
EXPOSE 8002
ADD target/authentication-service-0.1.jar authentication-service-0.1.jar
ENTRYPOINT ["java", "-jar", "/authentication-service-0.1.jar"]
kubectl logs podname
Your Dockerfile is exposing port 8002 (EXPOSE 8002), but your app is started on port 8080.
Additionally, your Kubernetes configuration is pointing to port 80 of your pod.
You should set it so that all three configurations use the same port.
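For illustration, one consistent combination, assuming the app really listens on 8080 (which the startup logs suggest), would be:

# Dockerfile
EXPOSE 8080

# deployment.yaml (container spec)
ports:
  - containerPort: 8080

# service.yaml
ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # must match the port the app listens on
    nodePort: 30006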

Not able to access a placeholder that is added as a Secret when the Secret is mounted as a volume rather than as an environment variable

My password placeholder in application.yaml in the Spring Boot project:
password: ${DB_PASSWORD}
My secret file:
apiVersion: v1
data:
DB_PASSWORD: QXBwX3NhXzA1X2pzZHVlbmRfMzIx
kind: Secret
type: Opaque
metadata:
name: test-secret
My Deployment config file part:
spec:
  containers:
    - envFrom:
        - configMapRef:
            name: gb-svc-rpt-dtld-cc
      image: >-
        artifactory.global.standardchartered.com/colt/gb-svc-reports-dataloader-cc/gb-svc-reports-dataloader-cc-develop@sha256:c8b7e210c18556155d8314eb41965fac57c1c9560078e3f14bf7407dbde564fb
      imagePullPolicy: Always
      name: gb-svc-rpt-dtld-cc
      ports:
        - containerPort: 8819
          protocol: TCP
      volumeMounts:
        - mountPath: /etc/secret
          name: secret-test
  volumes:
    - name: secret-test
      secret:
        defaultMode: 420
        secretName: test-secret
I'm able to see the secrets under the /etc/secret path as well. But the placeholder is not resolved, and I get the following error at server startup:
Could not resolve placeholder 'DB_PASSWORD' in value "${DB_PASSWORD}"
Note: the same code works if I add the secret as an environment variable in the deployment config.
As I understand from your question, you want the secret value to reach your pod as an environment variable. In Kubernetes, secrets can be mounted as a volume (which you did in the attached code) or exposed as environment variables (which is what the placeholder resolution needs here).
For that you should use:
spec:
  containers:
    - env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: DB_PASSWORD
              name: test-secret
      image: "fedora:29"
      name: my-app
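If the Secret has to stay mounted as a volume, one alternative worth mentioning (an assumption: it requires Spring Boot 2.4 or newer, which may not match this project) is a config tree import, which turns each mounted file into a property named after the file:

# application.yaml
spring:
  config:
    import: "optional:configtree:/etc/secret/"

# With the Secret mounted at /etc/secret, the file /etc/secret/DB_PASSWORD
# becomes the property DB_PASSWORD, so ${DB_PASSWORD} can resolve at startup.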

Kubernetes get pod's full name inside tomcat container in Java

I tried to get the pod name inside the Tomcat container at startup. I already exposed the pod's full name as an environment variable using the Kubernetes Downward API, as shown below in the search.yaml file (only a portion of the file is attached).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: dev
  labels:
    app: search
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search
  template:
    metadata:
      labels:
        app: search
    spec:
      hostname: search-host
      imagePullSecrets:
        - name: regcred
      containers:
        - name: search-container
          image: docker.test.net/search:develop-202104070845
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "2048Mi"
              cpu: "1"
            limits:
              memory: "2048Mi"
              cpu: "2"
          env:
            - name: SERVER_GROUP
              value: "SEARCH"
            - name: MIN_TOMCAT_MEMORY
              value: "512M"
            - name: MAX_TOMCAT_MEMORY
              value: "5596M"
            - name: DOCKER_TIME_ZONE
              value: "Asia/Colombo"
            - name: AVAILABILITY_ZONE
              value: "CMB"
After running the pod, this environment variable is available at the Docker level.
Pod details
NAME                      READY   STATUS    RESTARTS   AGE
search-56c9544d58-bqrxv   1/1     Running   0          4s
Pod environment variable for pod name
POD_NAME=search-56c9544d58-bqrxv
When I accessed it as below in the Tomcat container's Java code, in a jar called BootsTrap.jar, it returned null.
String dockerPodName = System.getenv( "POD_NAME" );
Is it because the pod is not up and running before the Tomcat container is initialized, is this way of accessing the environment variable in Java incorrect, or is there another way of accessing the pod's environment variable through Java?
You are setting MY_POD_NAME as the environment variable but doing the lookup for POD_NAME. Use the same name in the Java code and the deployment.
Note: Your YAML seems to have wrong indentation, I assume that this is just a copy-paste artifact. If that is not the case, it could lead to rejected changes of the deployment since the YAML is invalid.
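A trivial sketch of the corrected lookup, matching the name declared in the Deployment:

// The Deployment exposes the pod name as MY_POD_NAME via the Downward API,
// so the Java code has to read exactly that name.
String podName = System.getenv("MY_POD_NAME");
if (podName == null) {
    // Name mismatch, or the variable is not set in the Deployment
    podName = "unknown";
}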

Dockerized Spring Boot app not using mounted Kubernetes ConfigMap (application.properties)

I have a problem wherein my dockerized Spring Boot application is not using the application.properties I stored in a ConfigMap.
However, I can see and confirm that my ConfigMap has been mounted properly in the right directory of my Spring Boot app when I enter the pod's shell.
Note that I have a default application.properties, which Kubernetes mounts over / overwrites later on.
It seems that Spring Boot uses the first application.properties, and when K8S overwrites it, it apparently doesn't pick up the new one.
It seems that, apparently, what happens is:
1. The .jar file runs inside the dockerized Spring Boot app.
2. It uses the first/default application.properties file at runtime.
3. Kubernetes proceeds to mount the ConfigMap.
4. The mount / overwrite succeeds, but how will Spring Boot use this one, since it's already running?
Here is the Dockerfile of my Spring Boot / Docker image for reference:
FROM maven:3.5.4-jdk-8-alpine
# Copy whole source code to the docker image
# Note of .dockerignore, this ensures that folders such as `target` is not copied
WORKDIR /usr/src/myproject
COPY . /usr/src/myproject/
RUN mvn clean package -DskipTests
WORKDIR /usr/src/my-project-app
RUN cp /usr/src/myproject/target/*.jar ./my-project-app.jar
EXPOSE 8080
CMD ["java", "-jar", "my-project-app.jar"]
Here's my Kubernetes deployment .yaml file for reference:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-api
  namespace: my-cluster
  labels:
    app: my-project-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: my-project-api
    spec:
      containers:
        - name: my-project-api
          image: "my-project:latest"
          imagePullPolicy: Always
          env:
            .
            .
            .
          volumeMounts:
            - name: my-project-config
              mountPath: /usr/src/my-project/my-project-service/src/main/resources/config/application.properties
          ports:
            - containerPort: 8080
              name: my-project-api
              protocol: TCP
      volumes:
        # Name of the volume
        - name: my-project-config
          # Get a ConfigMap with this name and attach to this volume
          configMap:
            name: my-project-config
And my configMap for reference:
kind: ConfigMap
apiVersion: v1
data:
  application.properties: |-
    # This comment means that this is coming from k8s ConfigMap. Nice!
    server.port=8999
    .
    .
    .
    .
metadata:
  name: my-project-config
  namespace: my-cluster
Any help is greatly appreciated... Thank you so much.. :)
The thing is that the /src/main/resources/application.properties your application uses is the one inside the jar file by default. If you open your jar, you should see it there.
That being said, your expectation of mounting over the /src/main/resources directory your jar was built from is, unfortunately, not going to be fulfilled.
These are the docs you should be looking at.
I won't go into much detail, as it's explained pretty well in the docs, but I will say that you are better off explicitly declaring your config location, so that new people on the project know right off the bat where the config is coming from.
You can do something like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-api
  labels:
    app: my-project-api
spec:
  selector:
    matchLabels:
      app: my-project-api
  template:
    metadata:
      labels:
        app: my-project-api
    spec:
      containers:
        - name: my-project-api
          image: "my-project:latest"
          imagePullPolicy: Always
          env:
            - name: JAVA_OPTS
              value: "-Dspring.config.location=/opt/config"
            .
            .
            .
          volumeMounts:
            - name: my-project-config
              mountPath: /opt/config
          ports:
            - containerPort: 8080
      volumes:
        - name: my-project-config
          configMap:
            name: my-project-config
Hope that helps,
Cheers!
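One caveat on the JAVA_OPTS approach, offered as an assumption rather than a correction: JAVA_OPTS only takes effect if the container's entrypoint actually passes it to the JVM, and the Dockerfile in the question starts the app with a plain java -jar command. A variant that does not depend on the entrypoint is Spring Boot's own environment variable:

env:
  - name: SPRING_CONFIG_LOCATION   # read by Spring Boot at startup
    value: "file:/opt/config/application.properties"

(spring.config.additional-location can be used instead if the defaults packaged in the jar should still apply.)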
I did it slightly differently. I made sure I mounted application.properties at config/. Below is my example of the mounted application.properties (the commands below show the values in the pod, i.e. after kubectl exec -it into the pod):
/ # pwd
/
/ # cat config/application.properties
logback.access.enabled=false
management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
Basically, the trick is based on the link in the above answer. Below is an excerpt from that link, which does say application.properties will be picked up from config/. So I made sure my environment-specific (dev, test, prod) ConfigMap was mounted at config/. Do note that the list below has precedence (per the link, locations higher in the list override lower items):
1. A /config subdirectory of the current directory
2. The current directory
3. A classpath /config package
4. The classpath root
Below is the ConfigMap definition (just the data section pasted):
data:
  application.properties: |+
    logback.access.enabled={{.Values.logacbkAccessEnabled}}
    management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
And you can also see from the actuator/env endpoint that the Spring Boot app did pick up those values:
{
  "name": "Config resource 'file [config/application.properties]' via location 'optional:file:./config/'",
  "properties": {
    "logback.access.enabled": {
      "value": "false",
      "origin": "URL [file:config/application.properties] - 1:24"
    },
    "management.endpoints.web.exposure.include": {
      "value": "health, loggers, beans, configprops, env",
      "origin": "URL [file:config/application.properties] - 2:43"
    }
  }
},
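For completeness, a rough sketch of the mount that produces the layout above (volume and ConfigMap names reused from the earlier example; it assumes the container's working directory is /, as the pwd output shows, so config/ resolves to /config):

spec:
  containers:
    - name: my-project-api
      image: "my-project:latest"
      volumeMounts:
        - name: my-project-config
          mountPath: /config   # Spring Boot then finds ./config/application.properties
  volumes:
    - name: my-project-config
      configMap:
        name: my-project-config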
