Invalid env value with Quarkus RestClient property when creating a Kubernetes deployment - java

Following the Quarkus Rest Client tutorial I need to add something similar to this to the application.properties file:
country-api/mp-rest/url=https://restcountries.eu/rest
With Docker it works and I can pass the property value by parameter:
docker run -it --privileged --rm --env country-api/mp-rest/url="https://restcountries.eu/rest" mydockerhost/my-project:SNAPSHOT
The YAML file for Kubernetes looks like this:
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  labels:
    app.kubernetes.io/name: "my-project"
    app.kubernetes.io/version: "SNAPSHOT"
  name: "my-project-deployment"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: "my-project"
      app.kubernetes.io/version: "SNAPSHOT"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "my-project"
        app.kubernetes.io/version: "SNAPSHOT"
    spec:
      containers:
      - env:
        - name: "country-api/mp-rest/url"
          value: "https://restcountries.eu/rest"
However, the following error occurs when executing kubectl apply -f my-project.yaml:
The Deployment "my-project-deployment" is invalid:
* spec.template.spec.containers[0].env[1].name: Invalid value: "country-api/mp-rest/url": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*')
Quarkus version: 1.3.1.Final

You can use environment variables in application.properties so you could do something like:
country-api/mp-rest/url=${MY_SERVICE_URL}
and define MY_SERVICE_URL in your YAML file.
Also, MicroProfile Config has a way to work around your issue. Using COUNTRY_API_MP_REST_URL as an environment variable should work (uppercase everything, replace anything non-alphanumeric with _).
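The mapping rule mentioned above can be sketched in plain Java (a toy illustration of the rule, not the library's actual implementation; the class name is made up):

```java
// Toy sketch of MicroProfile Config's property-to-env-var mapping:
// replace every non-alphanumeric character with '_' and upper-case the result.
public class EnvNameMapper {
    static String toEnvName(String propertyName) {
        return propertyName.replaceAll("[^a-zA-Z0-9]", "_").toUpperCase();
    }

    public static void main(String[] args) {
        // Prints the env-var name that Kubernetes will accept.
        System.out.println(toEnvName("country-api/mp-rest/url"));
    }
}
```

So the deployment YAML can use the mapped name while application.properties keeps the original key.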

Related

Java Spring Active profile in kubernetes Cluster

I want to start a Java Spring app with an active profile.
I build the Docker image in GitLab CI/CD using the Maven wrapper:
./mvnw compile jib:build -Dimage=image/sms-service:1
After that I deploy the app in k8s.
Now I want to run it with an active profile. What is the best way? How can I define in k8s which profile to run?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sms-service
  namespace: sms-service
spec:
  selector:
    matchLabels:
      app: sms-service
  replicas: 4 # tells deployment to run 4 pods matching the template
  template:
    metadata:
      labels:
        app: sms-service
    spec:
      containers:
      - name: sms-service
        image: image/sms-service:1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: sms-service
Set the SPRING_PROFILES_ACTIVE environment variable to the profile(s) you want to run.
You can set it in the deployment YAML or at build time in your image, but it is usually better to add it to the deployment.
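For the deployment route, a minimal sketch of the container spec (the image and names are taken from the question; the profile value is illustrative):

```yaml
# Pod template fragment: activate the "prod" profile via an env var.
spec:
  containers:
  - name: sms-service
    image: image/sms-service:1
    env:
    - name: SPRING_PROFILES_ACTIVE
      value: "prod"
```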
Create a new file named configmap.yaml under the k8s config folder and add the following lines:
apiVersion: v1
kind: ConfigMap
metadata:
  name: blabla
  namespace: bla
data:
  application.yaml: |
    spring:
      profiles:
        active: prod # the profile goes here
This tells Kubernetes to make this configuration available to the container at startup (once the ConfigMap is mounted into it).
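A ConfigMap on its own is only data; a hedged sketch of mounting it so Spring Boot can see it (the mount path is an assumption based on Spring Boot's default config/ lookup, and the names match the ConfigMap above):

```yaml
# Pod template fragment: mount the ConfigMap above as /config/application.yaml.
spec:
  containers:
  - name: sms-service
    image: image/sms-service:1
    volumeMounts:
    - name: app-config
      mountPath: /config
  volumes:
  - name: app-config
    configMap:
      name: blabla
```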

Kubernetes get pod's full name inside tomcat container in Java

Tried to get the pod name inside the Tomcat container at startup. I have already exposed the pod's full name as an environment variable using the Kubernetes Downward API, as below, in the search.yaml file (only a portion of the file is attached).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: dev
  labels:
    app: search
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search
  template:
    metadata:
      labels:
        app: search
    spec:
      hostname: search-host
      imagePullSecrets:
      - name: regcred
      containers:
      - name: search-container
        image: docker.test.net/search:develop-202104070845
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "2048Mi"
            cpu: "1"
          limits:
            memory: "2048Mi"
            cpu: "2"
        env:
        - name: SERVER_GROUP
          value: "SEARCH"
        - name: MIN_TOMCAT_MEMORY
          value: "512M"
        - name: MAX_TOMCAT_MEMORY
          value: "5596M"
        - name: DOCKER_TIME_ZONE
          value: "Asia/Colombo"
        - name: AVAILABILITY_ZONE
          value: "CMB"
After running the pod, this environment variable is available at the Docker level.
Pod details
NAME READY STATUS RESTARTS AGE
search-56c9544d58-bqrxv 1/1 Running 0 4s
Pod environment variable for pod name
POD_NAME=search-56c9544d58-bqrxv
When accessed as below in the Tomcat container's Java code (in a jar called BootsTrap.jar), it returned null.
String dockerPodName = System.getenv( "POD_NAME" );
Is it because the pod is not up and running before the Tomcat container is initialized? Is accessing the environment variable in Java incorrect, or is there another way of accessing a pod's environment variable through Java?
You are setting MY_POD_NAME as environment variable, but do the lookup for POD_NAME. Use the same name in the Java code and the deployment.
Note: Your YAML seems to have wrong indentation, I assume that this is just a copy-paste artifact. If that is not the case, it could lead to rejected changes of the deployment since the YAML is invalid.
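The fix can be sketched in plain Java (the helper and fallback value are illustrative, not from the question):

```java
import java.util.Map;

// Look up the pod name using the exact variable name from the deployment
// YAML (MY_POD_NAME here), with a fallback when it is not set.
public class PodInfo {
    static String podName(Map<String, String> env) {
        return env.getOrDefault("MY_POD_NAME", "unknown-pod");
    }

    public static void main(String[] args) {
        System.out.println("Running in pod: " + podName(System.getenv()));
    }
}
```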

Dockerized Spring Boot app not using mounted Kubernetes ConfigMap (application.properties)

I have a problem where my dockerized Spring Boot application is not using the application.properties I stored in a ConfigMap.
However, I can see and confirm that my ConfigMap has been mounted properly in the right directory of my Spring Boot app when I enter the pod's shell.
Note that I have an application.properties by default, which Kubernetes mounts over / overwrites later on.
It seems that Spring Boot uses the first application.properties, and when k8s overwrites it, apparently, it doesn't use the new one.
Apparently, what happens is:
run the .jar file inside the dockerized Spring Boot app
use the first/default application.properties file at runtime
Kubernetes proceeds to mount the ConfigMap
mount / overwrite succeeds, but how will Spring Boot use this one since it's already running?
Here is the Dockerfile of my Spring Boot / Docker image for reference:
FROM maven:3.5.4-jdk-8-alpine
# Copy whole source code to the docker image
# Note of .dockerignore, this ensures that folders such as `target` is not copied
WORKDIR /usr/src/myproject
COPY . /usr/src/myproject/
RUN mvn clean package -DskipTests
WORKDIR /usr/src/my-project-app
RUN cp /usr/src/myproject/target/*.jar ./my-project-app.jar
EXPOSE 8080
CMD ["java", "-jar", "my-project-app.jar"]
Here's my Kubernetes deployment .yaml file for reference:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-api
  namespace: my-cluster
  labels:
    app: my-project-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: my-project-api
    spec:
      containers:
      - name: my-project-api
        image: "my-project:latest"
        imagePullPolicy: Always
        env:
        .
        .
        .
        volumeMounts:
        - name: my-project-config
          mountPath: /usr/src/my-project/my-project-service/src/main/resources/config/application.properties
        ports:
        - containerPort: 8080
          name: my-project-api
          protocol: TCP
      volumes:
      # Name of the volume
      - name: my-project-config
        # Get a ConfigMap with this name and attach to this volume
        configMap:
          name: my-project-config
And my configMap for reference:
kind: ConfigMap
apiVersion: v1
data:
  application.properties: |-
    # This comment means that this is coming from k8s ConfigMap. Nice!
    server.port=8999
    .
    .
    .
    .
metadata:
  name: my-project-config
  namespace: my-cluster
Any help is greatly appreciated... Thank you so much.. :)
The thing is that the /src/main/resources/application.properties your application uses is the one inside the jar file by default. If you open your jar, you should see it there.
That being said, your expectation that mounting over /src/main/resources will change what the already-packaged jar reads is, unfortunately, not going to be fulfilled.
These are the docs you should be looking at.
I won't go into much detail as it's explained pretty well in the docs, but I will say that you are better off explicitly declaring your config location so that new people on the project know right off the bat where the config is coming from.
You can do something like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-api
  labels:
    app: my-project-api
spec:
  selector:
    matchLabels:
      app: my-project-api
  template:
    metadata:
      labels:
        app: my-project-api
    spec:
      containers:
      - name: my-project-api
        image: "my-project:latest"
        imagePullPolicy: Always
        env:
        - name: JAVA_OPTS
          value: "-Dspring.config.location=/opt/config"
        .
        .
        .
        volumeMounts:
        - name: my-project-config
          mountPath: /opt/config
        ports:
        - containerPort: 8080
      volumes:
      - name: my-project-config
        configMap:
          name: my-project-config
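One caveat worth noting (an assumption based on the Dockerfile in the question, not part of the answer itself): an exec-form CMD like ["java", "-jar", ...] does not expand environment variables, so JAVA_OPTS would be silently ignored. A shell-form wrapper is one way around that:

```dockerfile
# Shell-form CMD so $JAVA_OPTS (e.g. -Dspring.config.location=/opt/config
# from the deployment YAML) is actually passed to the JVM.
CMD ["sh", "-c", "java $JAVA_OPTS -jar my-project-app.jar"]
```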
Hope that helps,
Cheers!
I did it slightly differently. I made sure I mounted application.properties at config/, i.e., below is my mounted application.properties (the commands below show the values in the pod, i.e., after kubectl exec -it into the pod):
/ # pwd
/
/ # cat config/application.properties
logback.access.enabled=false
management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
Basically, the trick is based on the link in the above answer. Below is an excerpt from the link, which does say application.properties will be picked up from config/. So, I made sure my environment-specific (dev, test, prod) ConfigMap was mounted at config/. Do note there is precedence for the below list (per the link, locations higher in the list override lower items):
A /config subdir of the current directory.
The current directory
A classpath /config package
The classpath root
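The precedence list above can be illustrated with a toy resolver (purely illustrative; Spring's actual property resolution is far richer):

```java
import java.util.List;
import java.util.Map;

// Toy model of the precedence list: walk locations from highest to
// lowest precedence and return the first value found for a key.
public class ConfigPrecedence {
    static String resolve(String key, List<Map<String, String>> locations) {
        for (Map<String, String> location : locations) {
            if (location.containsKey(key)) {
                return location.get(key);
            }
        }
        return null; // key defined nowhere
    }

    public static void main(String[] args) {
        Map<String, String> configSubdir = Map.of("server.port", "8999");  // ./config/
        Map<String, String> classpathRoot = Map.of("server.port", "8080"); // jar default
        // The ./config/ value wins over the classpath default.
        System.out.println(resolve("server.port", List.of(configSubdir, classpathRoot)));
    }
}
```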
Below is the ConfigMap definition (just the data section):
data:
  application.properties: |+
    logback.access.enabled={{.Values.logacbkAccessEnabled}}
    management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
And you can also see from the actuator/env endpoint that the Spring Boot app did pick up those values:
{
  "name": "Config resource 'file [config/application.properties]' via location 'optional:file:./config/'",
  "properties": {
    "logback.access.enabled": {
      "value": "false",
      "origin": "URL [file:config/application.properties] - 1:24"
    },
    "management.endpoints.web.exposure.include": {
      "value": "health, loggers, beans, configprops, env",
      "origin": "URL [file:config/application.properties] - 2:43"
    }
  }
},

Single docker image thats being deployed as 2 different services thru kubernetes and helm. Change context path of app

We have a single Docker image that is deployed as 2 different services through Kubernetes and Helm, with names like "ServiceA" and "ServiceB". At deploy time we need to set the context path of Tomcat to something different, like /ServiceA and /ServiceB. How can this be done? Is there anything that can be set directly in the YAML?
ex: it looks like below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "fullname" . }}-bg
  {{- include "labels" . }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "name" . }}-bg
      app.kubernetes.io/instance: {{ .Release.Name }}
  strategy:
    type: Recreate
    rollingUpdate: null
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "name" . }}-bg
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .image.repository }}:{{ .image.tag }}"
        imagePullPolicy: {{ .image.pullPolicy }}
        env:
        - name: SERVICE_NAME
          value: "ServiceB"
        - name: background.jobs.enabled
          value: "true"
        envFrom:
        - configMapRef:
            name: {{ include "commmonBaseName" . }}-configmap
There are a few approaches to setting up the context path of an app.
From the app itself: This depends on the language/framework/runtime your application uses. For example, if it's a traditional Java web application that runs on Tomcat, it would be served by default from the context path derived from the name of the .war file you put in the webapp directory. Or, if it is a Spring Boot 2.x app, you could set it up with the Spring Boot property server.servlet.context-path, which can also be passed via an environment variable, specifically SERVER_SERVLET_CONTEXT_PATH. So, to give an example, in the container in your deployment pod spec:
env:
- name: SERVER_SERVLET_CONTEXT_PATH
  value: "/ServiceB"
However, these kinds of app-specific settings are most of the time not needed in Kubernetes, since you can handle those concerns on the outer layers.
Using Ingress objects: If you have an Ingress controller running and properly configured, you can create an Ingress that will manage path-prefix stripping and other HTTP layer-7 concerns. This means you can leave your application itself as-is (serving from the root context /) but configure the context path from the Ingress. An example is the following, assuming you use the Nginx Ingress Controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-a-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: service-a.yourdomain.com
    http:
      paths:
      - path: /ServiceA/(.*)
        backend:
          serviceName: service-a
          servicePort: service-a-port
Note the capture group (.*) in the path, and $1 in the rewrite target - it will rewrite the request paths like /ServiceA/something to /something before forwarding the packet to your backend.
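The rewrite can be illustrated with a quick Java regex check (a toy demonstration of the pattern's effect, not how the Ingress controller implements it):

```java
// Mimic the Ingress rule: path /ServiceA/(.*) with rewrite target /$1,
// i.e. strip the /ServiceA prefix before forwarding.
public class RewriteDemo {
    static String rewrite(String path) {
        return path.replaceFirst("^/ServiceA/(.*)", "/$1");
    }

    public static void main(String[] args) {
        System.out.println(rewrite("/ServiceA/api/v1/users")); // /api/v1/users
    }
}
```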
See this page to learn more about ingresses.
You can also use HTTP router software such as Skipper to handle all this HTTP traffic configuration in-cluster.
If you use a service mesh solution such as Istio, it gives you many ways to manage the traffic inside the mesh.

reference variables from pom.xml to deployment.yaml

I want to use Maven pom.xml variables in a Kubernetes deployment.yaml file. The variables I want to reference are ${project.artifactId} and ${project.version}, which are pulled from
pom.xml
<groupId>my-project</groupId>
<artifactId>my-project</artifactId>
<version>1.0.0-SNAPSHOT</version>
and this is what I want to achieve:
deployment.yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: my-project
    image: ${project.artifactId}:${project.version}
With this attempt I get an InvalidImageName error.
Please advise on a better way of doing this.
I would say it's an issue with your deployment.yaml content.
I used it (with the nginx image) on K8s and got the error below:
error: error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "", Namespace: "default"
Object: &{map["apiVersion":"v1" "kind":"Pod" "spec":map["containers":[map["name":"test" "image":"nginx"]]] "metadata":map["namespace":"default" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]}
from server for: "pod.yaml": resource name may not be empty
In your current file you have named only the container. You have to specify your Pod name using metadata.name. In the metadata section you can also specify the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
In addition, keep in mind that kind: Pod and kind: Deployment are two different things (I am a bit confused by your YAML file name).
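As for the ${project.artifactId}/${project.version} substitution itself, one common approach (an assumption; it is not covered by the answer above) is to let Maven resource filtering rewrite the YAML at build time. A hedged pom.xml sketch, with illustrative paths:

```xml
<!-- Copy src/main/k8s/deployment.yaml to target/k8s with filtering on, -->
<!-- so ${project.artifactId} and ${project.version} are replaced at build time. -->
<build>
  <resources>
    <resource>
      <directory>src/main/k8s</directory>
      <filtering>true</filtering>
      <targetPath>${project.build.directory}/k8s</targetPath>
    </resource>
  </resources>
</build>
```

You would then kubectl apply the filtered file from target/k8s rather than the source one.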
