Modifying container cacerts at runtime - java

I have a containerized JVM application (multiple, actually) running on Kubernetes that needs to trust additional custom/private CAs which are not known beforehand (the application will be deployed in multiple unrelated data centers that each have their own PKI).
The cacerts in the base image are not writable at runtime.
Currently I see these options:
1. Do not provide an option to modify cacerts; force the DCs to manage and inject their own cacerts files via container volumes.
2. Make the cacerts file writable at runtime and modify cacerts in the entrypoint.
3. Do not use JDK TLS; set the truststore at the "client" level (e.g. CXF).
...?
Assuming the DCs will not run only JVM apps, they will not want to manage cacerts themselves, because cacerts is specific to the JVM and they would then potentially need to do the same for every technology. Thus I do not really like option 1.
The second option seems quite pragmatic, but I suspect that making cacerts writable at runtime is bad practice because an attacker could modify configuration they should not be able to.
The third option has its limitations and intricacies because you need to make each and every client configurable. (In the case of CXF, for example, fetching the initial WSDL file does not seem to be covered by CXF but by the JVM...) This means that if your client is not (properly) configurable, this approach does not work.
Thus I am back at option 2.
My questions would be:
Is it bad practice to have cacerts writable at runtime?
Is there an option I missed that allows injecting (arbitrary) additional CAs into cacerts without making it writeable at runtime?
Note: I have asked about this on Security Stack Exchange, too, but got no response there.

Have you considered a combination of an init container and an emptyDir volume that is mounted with readOnly: true in the application container?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      initContainers:
        - name: cacerts
          image: cacerts-importer
          volumeMounts:
            - name: cacerts
              mountPath: /media/cacerts
            - name: cacert-imports
              mountPath: /media/cacert-imports
      containers:
        - name: app
          image: app:v1
          volumeMounts:
            - name: cacerts
              mountPath: /media/cacerts
              readOnly: true
      volumes:
        - name: cacerts
          emptyDir: {}
        - name: cacert-imports
          configMap:
            name: cacert-imports
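The cacerts-importer image above is not a real published image; as a rough sketch of what it could do in Java (assuming the custom CAs are provided as *.pem keys in the cacert-imports ConfigMap and the default changeit truststore password is used):

import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.*;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

// Hypothetical importer run by the "cacerts-importer" init container:
// it copies the base image's cacerts into the shared emptyDir volume and
// adds every PEM certificate found under /media/cacert-imports.
public class CacertsImporter {

    public static void main(String[] args) throws Exception {
        Path baseCacerts = Paths.get(System.getProperty("java.home"), "lib", "security", "cacerts");
        Path importDir = Paths.get("/media/cacert-imports"); // ConfigMap mount
        Path target = Paths.get("/media/cacerts/cacerts");   // emptyDir shared with the app
        char[] password = "changeit".toCharArray();          // default cacerts password (assumption)

        // Load the truststore shipped with the base image.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = Files.newInputStream(baseCacerts)) {
            trustStore.load(in, password);
        }

        // Import every custom CA certificate provided by the data center.
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        try (DirectoryStream<Path> pems = Files.newDirectoryStream(importDir, "*.pem")) {
            for (Path pem : pems) {
                try (InputStream in = Files.newInputStream(pem)) {
                    Certificate ca = factory.generateCertificate(in);
                    trustStore.setCertificateEntry(pem.getFileName().toString(), ca);
                }
            }
        }

        // Write the merged truststore to the volume that is read-only for the app.
        try (OutputStream out = Files.newOutputStream(target)) {
            trustStore.store(out, password);
        }
    }
}

The application container can then point the JVM at the merged store, e.g. with -Djavax.net.ssl.trustStore=/media/cacerts/cacerts, while the cacerts volume itself stays read-only for the application.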

Related

How to read Kubernetes secret values from a volume mount in Spring Boot

My code is below
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: default
type: Opaque
data:
  secret.db_user: |
    dGVzdA==
  secret.db_password: |
    dGVzdA==
I then mount this as a volume in the deployment section. Now I want to read this secret and map it to spring.datasource.username and spring.datasource.password without env configuration in the deployment section. The read should happen from Java code. How can I do that?
You can mount the secrets into a pod either as environment variables or as files. For me the best option remains mounting them as environment variables (https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables), but every case is different.
You can use subPath to mount a single secret key into a specific directory without overwriting its whole content (i.e. only that file is mounted into the existing directory, alongside the other existing files).
volumeMounts:
  - mountPath: "/var/my-app/db_creds"
    subPath: db_creds
    name: db-creds
    readOnly: true
volumes:
  - name: db-creds
    secret:
      secretName: test-secret
(or you can mount both username and password into two separate files)
If you want to read a file with Spring (here via environment variables, but you can do the same in your application.properties, you get the point), here is an example with my truststore, which is a file (the ${SSL_TRUSTSTORE} environment variable contains the path to the truststore):
SPRING_KAFKA_SSL_TRUSTSTORE_LOCATION: "file://${SSL_TRUSTSTORE}"
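Since the question explicitly asks to read the mounted files from Java code and map them to spring.datasource.* properties, here is a minimal sketch using Spring Boot's EnvironmentPostProcessor (class name, package and file paths are assumptions; adjust them to wherever the secret files are actually mounted):

package com.example;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;

// Hypothetical post-processor that maps secret files mounted by Kubernetes
// onto Spring datasource properties, without any env configuration in the Deployment.
public class SecretFilePostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        Map<String, Object> props = new HashMap<>();
        props.put("spring.datasource.username", read(Paths.get("/var/my-app/db_creds/secret.db_user")));
        props.put("spring.datasource.password", read(Paths.get("/var/my-app/db_creds/secret.db_password")));
        // Register with highest priority so it wins over application.properties defaults.
        environment.getPropertySources().addFirst(new MapPropertySource("k8s-secrets", props));
    }

    private String read(Path path) {
        try {
            return new String(Files.readAllBytes(path)).trim();
        } catch (IOException e) {
            throw new IllegalStateException("Cannot read mounted secret " + path, e);
        }
    }
}

For this to run before the datasource is created, it has to be registered in META-INF/spring.factories under the key org.springframework.boot.env.EnvironmentPostProcessor with the fully qualified class name (here com.example.SecretFilePostProcessor).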

Quarkus - configure logging with kubernetes cluster

According to the official resource, logging configuration relies on the application.properties file.
Now I need several configurations depending on the cluster in use (let's say we have the typical dev, staging and production environments, so dev should have a DEBUG level and production at least INFO).
At first I thought of using Kubernetes ConfigMaps, but I can't see any connection with Quarkus logging.
How can I solve this issue?
EDIT:
This is my ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-cm-config-map
  namespace: default
  uid: d992d86f-c247-471d-8e31-53e9a1858b76
  resourceVersion: '8484'
  creationTimestamp: '2021-04-22T13:12:43Z'
  managedFields:
    - manager: kubectl-create
      operation: Update
      apiVersion: v1
      time: '2021-04-22T13:12:43Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:data':
          .: {}
          'f:myenv': {}
          'f:myname': {}
    - manager: kubectl-edit
      operation: Update
      apiVersion: v1
      time: '2021-04-22T16:52:18Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:data':
          'f:log.file.level': {}
    - manager: dashboard
      operation: Update
      apiVersion: v1
      time: '2021-04-23T08:03:06Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:data':
          'f:quarkus.log.file.level': {}
data:
  log.file.level: DEBUG
  myenv: cl1
  myname: cluster1
  quarkus.log.file.level: DEBUG
EDIT 2:
This is my ConfigMap (via the command kubectl edit cm):
apiVersion: v1
data:
  QUARKUS_LOG_FILE_ENABLE: "true"
  QUARKUS_LOG_FILE_FORMAT: '%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n'
  QUARKUS_LOG_FILE_LEVEL: ERROR
  QUARKUS_LOG_FILE_PATH: /tmp/kube-cm.log
  myenv: cl1
  myname: cluster 1b
kind: ConfigMap
metadata:
  creationTimestamp: "2021-04-22T13:12:43Z"
  name: kube-cm-config-map
  namespace: default
  resourceVersion: "39810"
  uid: d992d86f-c247-471d-8e31-53e9a1858b76
If you are using Kubernetes resource YAML to deploy your app, use the snippet below to push your custom ConfigMap as environment variables to your application (https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables):
spec:
  containers:
    - name:
      image:
      envFrom:
        - configMapRef:
            name: kube-cm-config-map
Use a different ConfigMap for each environment but with the same name. If your environments (dev/qa/etc.) are Kubernetes namespaces, then it is very easy to set up: just duplicate the ConfigMap in each namespace and change the log level value in each one.
Also, change the naming convention of your ConfigMap keys from quarkus.log.file.level to QUARKUS_LOG_FILE_LEVEL.
See https://quarkus.io/guides/config-reference#environment_variables
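For illustration, a minimal sketch (assuming the standard MicroProfile/SmallRye Config environment-variable mapping, and that this runs inside the Quarkus application): a value injected as QUARKUS_LOG_FILE_LEVEL via envFrom is visible under its dotted property name.

import org.eclipse.microprofile.config.ConfigProvider;

// QUARKUS_LOG_FILE_LEVEL (from the ConfigMap, via envFrom) is resolved by
// MicroProfile Config as the property quarkus.log.file.level.
public class LogLevelCheck {

    static String currentFileLogLevel() {
        return ConfigProvider.getConfig()
                .getOptionalValue("quarkus.log.file.level", String.class)
                .orElse("not set");
    }
}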
OK, solved by editing the ConfigMap in the following way:
data:
  QUARKUS_LOG_FILE_ENABLE: "true"
  QUARKUS_LOG_FILE_FORMAT: '%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n'
  QUARKUS_LOG_FILE_LEVEL: ERROR
  QUARKUS_LOG_FILE_PATH: /tmp/kube-cm.log
Then I set the following in the Quarkus application.properties:
quarkus.kubernetes.env.configmaps=kube-cm-config-map
There are two ways to use a ConfigMap in Quarkus to read runtime configuration.
The first is to let Quarkus query the API server using the quarkus-kubernetes-config extension, which is described here.
The second is to configure the Kubernetes Deployment to turn ConfigMap values into environment variables for the Pod. This can be done with the quarkus-kubernetes extension, which is described here.
So you would add the proper Quarkus logging configuration (i.e. a key-value pair) to the ConfigMap and then use one of the above methods to consume it at runtime.

Dockerized Spring Boot app not using mounted Kubernetes ConfigMap (application.properties)

I have a problem where my dockerized Spring Boot application is not using the application.properties I stored in a ConfigMap.
However, I can see and confirm that my ConfigMap has been mounted properly in the right directory of my Spring Boot app when I enter the pod's shell.
Note that I have a default application.properties, which Kubernetes mounts over / overwrites later on.
It seems that Spring Boot uses the first application.properties, and when k8s overwrites it, apparently it doesn't pick up the new one.
Apparently, what happens is:
1. The .jar file runs inside the dockerized Spring Boot container.
2. It uses the first/default application.properties file at runtime.
3. Kubernetes proceeds to mount the ConfigMap.
4. The mount / overwrite succeeds, but how will Spring Boot use this one since it's already running?
Here is the Dockerfile of my Spring Boot / Docker image for reference:
FROM maven:3.5.4-jdk-8-alpine
# Copy whole source code to the docker image
# Note of .dockerignore, this ensures that folders such as `target` is not copied
WORKDIR /usr/src/myproject
COPY . /usr/src/myproject/
RUN mvn clean package -DskipTests
WORKDIR /usr/src/my-project-app
RUN cp /usr/src/myproject/target/*.jar ./my-project-app.jar
EXPOSE 8080
CMD ["java", "-jar", "my-project-app.jar"]
Here's my Kubernetes deployment .yaml file for reference:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-api
  namespace: my-cluster
  labels:
    app: my-project-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: my-project-api
    spec:
      containers:
        - name: my-project-api
          image: "my-project:latest"
          imagePullPolicy: Always
          env:
            .
            .
            .
          volumeMounts:
            - name: my-project-config
              mountPath: /usr/src/my-project/my-project-service/src/main/resources/config/application.properties
          ports:
            - containerPort: 8080
              name: my-project-api
              protocol: TCP
      volumes:
        # Name of the volume
        - name: my-project-config
          # Get a ConfigMap with this name and attach to this volume
          configMap:
            name: my-project-config
And my configMap for reference:
kind: ConfigMap
apiVersion: v1
data:
  application.properties: |-
    # This comment means that this is coming from k8s ConfigMap. Nice!
    server.port=8999
    .
    .
    .
    .
metadata:
  name: my-project-config
  namespace: my-cluster
Any help is greatly appreciated... Thank you so much.. :)
The thing is that the /src/main/resources/application.properties your application uses is, by default, the one inside the jar file. If you open your jar, you should see it there.
That being said, your expectation of mounting a /src/main/resources directory where your jar is will unfortunately not be fulfilled.
These are the docs you should be looking at.
I won't go into much detail as it's explained pretty well in the docs, but I will say that you are better off explicitly declaring your config location so that new people on the project know right off the bat where the config is coming from.
You can do something like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-api
  labels:
    app: my-project-api
spec:
  selector:
    matchLabels:
      app: my-project-api
  template:
    metadata:
      labels:
        app: my-project-api
    spec:
      containers:
        - name: my-project-api
          image: "my-project:latest"
          imagePullPolicy: Always
          env:
            - name: JAVA_OPTS
              value: "-Dspring.config.location=/opt/config"
          .
          .
          .
          volumeMounts:
            - name: my-project-config
              mountPath: /opt/config
          ports:
            - containerPort: 8080
      volumes:
        - name: my-project-config
          configMap:
            name: my-project-config
Hope that helps,
Cheers!
I did it slightly differently: I made sure application.properties is mounted at config/. Below is my example of the mounted application.properties (the commands below show the values in the pod, i.e. after kubectl exec -it into the pod):
/ # pwd
/
/ # cat config/application.properties
logback.access.enabled=false
management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
Basically, the trick is based on the link in the answer above. Below is an excerpt from that link, which says application.properties will be picked up from config/. So I made sure my environment-specific (dev, test, prod) ConfigMap was mounted at config/. Do note that there is precedence in the list below (per the link: locations higher in the list override lower items):
1. A /config subdir of the current directory
2. The current directory
3. A classpath /config package
4. The classpath root
Below is the ConfigMap definition (just the data section pasted):
data:
  application.properties: |+
    logback.access.enabled={{.Values.logacbkAccessEnabled}}
    management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
And you can also see from the actuator/env endpoint that the Spring Boot app did pick up those values:
{
  "name": "Config resource 'file [config/application.properties]' via location 'optional:file:./config/'",
  "properties": {
    "logback.access.enabled": {
      "value": "false",
      "origin": "URL [file:config/application.properties] - 1:24"
    },
    "management.endpoints.web.exposure.include": {
      "value": "health, loggers, beans, configprops, env",
      "origin": "URL [file:config/application.properties] - 2:43"
    }
  }
},

Single Docker image that's being deployed as 2 different services through Kubernetes and Helm. Change context path of app

We have a single Docker image that's being deployed as 2 different services through Kubernetes and Helm, with names like "ServiceA" and "ServiceB". At the point the deploy happens, we need to set the context path of Tomcat to something different, like /ServiceA and /ServiceB. How can this be done? Is there anything that can be set directly in the YAML?
For example, it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "fullname" . }}-bg
  {{- include "labels" . }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "name" . }}-bg
      app.kubernetes.io/instance: {{ .Release.Name }}
  strategy:
    type: Recreate
    rollingUpdate: null
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "name" . }}-bg
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .image.repository }}:{{ .image.tag }}"
          imagePullPolicy: {{ .image.pullPolicy }}
          env:
            - name: SERVICE_NAME
              value: "ServiceB"
            - name: background.jobs.enabled
              value: "true"
          envFrom:
            - configMapRef:
                name: {{ include "commmonBaseName" . }}-configmap
        -
There are a few approaches to setting up the context path of an app.
From the app itself: This depends on the language/framework/runtime your application uses. For example, if it's a traditional Java web application that runs on Tomcat, it would by default be served from the context path derived from the name of the .war file you put in the webapp directory. Or, if it is a Spring Boot 2.x app, you could set it up with the Spring Boot property server.servlet.context-path, which can also be passed via an environment variable, specifically SERVER_SERVLET_CONTEXT_PATH. So, to give an example, in the container in your Deployment pod spec:
env:
  - name: SERVER_SERVLET_CONTEXT_PATH
    value: "/ServiceB"
However, this kind of app-specific setting is most of the time not needed in Kubernetes, since you can handle those concerns at the outer layers.
Using Ingress objects: If you have an Ingress controller running and properly configured, you can create an Ingress that will manage path prefix stripping and other HTTP Layer 7 concerns. This means you can leave your application itself as-is (e.g. serving from the root context /) but configure the context path in the Ingress. An example is the following, assuming you use the Nginx Ingress Controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-a-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: service-a.yourdomain.com
      http:
        paths:
          - path: /ServiceA/(.*)
            backend:
              serviceName: service-a
              servicePort: service-a-port
Note the capture group (.*) in the path, and $1 in the rewrite target - it will rewrite the request paths like /ServiceA/something to /something before forwarding the packet to your backend.
See this page to learn more about Ingresses.
You can use HTTP router software such as Skipper to handle all of this HTTP traffic configuration in-cluster.
If you use a service mesh solution such as Istio, they give you many ways to manage the traffic inside the mesh.

Programmatically get the name of the pod that a container belongs to in Kubernetes?

Is there a way to programmatically get the name of the pod that a container belongs to in Kubernetes? If so, how? I'm using fabric8's Java client, but curl or something similar will be fine as well.
Note that I don't want to find the pod using a specific label since then (I assume) I may not always find the right pod if it's scaled with a replication controller.
You can tell Kubernetes to put the pod name in an environment variable of your choice using the downward API.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
  restartPolicy: Never
The pod name is written to /etc/hostname, so it's possible to read it from there. In Java (which I'm using) you can also get the hostname (and thus the name of the pod) by calling System.getenv("HOSTNAME").
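Combining both hints, a small sketch (MY_POD_NAME refers to the downward-API variable from the manifest above; the fallbacks are assumptions for pods that do not define it):

import java.nio.file.Files;
import java.nio.file.Paths;

// Prefer the downward-API variable, fall back to the hostname,
// which Kubernetes sets to the pod name.
public class PodName {

    static String podName() throws Exception {
        String name = System.getenv("MY_POD_NAME");
        if (name == null || name.isEmpty()) {
            name = System.getenv("HOSTNAME");
        }
        if (name == null || name.isEmpty()) {
            name = new String(Files.readAllBytes(Paths.get("/etc/hostname"))).trim();
        }
        return name;
    }
}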
