We want to build a Spring Boot-based project using Maven. We found the Maven Task on the Tekton Hub and already have a running Pipeline. A shortened version of our pipeline.yml looks like this:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
spec:
  params:
    - name: SOURCE_URL
      type: string
      description: A git repo url where the source code resides.
    - name: SOURCE_REVISION
      description: The branch, tag or SHA to checkout.
      default: ""
  workspaces:
    - name: maven-settings
    - name: source-workspace
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: "$(params.SOURCE_URL)"
        - name: revision
          value: "$(params.SOURCE_REVISION)"
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
    - name: maven
      taskRef:
        name: maven
      runAfter:
        - fetch-repository
      params:
        - name: GOALS
          value:
            - package
      workspaces:
        - name: source
          workspace: source-workspace
        - name: maven-settings
          workspace: maven-settings
And a PipelineRun is defined as:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: maven-settings
      emptyDir: {}
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: source-pvc
  params:
    - name: SOURCE_URL
      value: https://gitlab.com/jonashackt/microservice-api-spring-boot
    - name: SOURCE_REVISION
      value: 3c4131f8566ef157244881bacc474543ef96755d
The source-pvc PersistentVolumeClaim is defined as:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
Our project builds fine, but the Task downloads all of the project's Maven dependencies again for every new PipelineRun:
The Tekton Hub's Maven Task https://hub.tekton.dev/tekton/task/maven doesn't seem to support using a cache. How can we cache nevertheless?
There's an easy way to accomplish caching using the Tekton Hub's Maven Task. Instead of binding the maven-settings workspace to an empty directory via emptyDir: {}, create a new subPath inside your already defined source-pvc PersistentVolumeClaim, and link the persistentVolumeClaim the same way you already linked it for the source-workspace. Your PipelineRun then looks like this:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: maven-settings
      subPath: maven-repo-cache
      persistentVolumeClaim:
        claimName: source-pvc
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: source-pvc
  params:
    - name: SOURCE_URL
      value: https://gitlab.com/jonashackt/microservice-api-spring-boot
    - name: SOURCE_REVISION
      value: 3c4131f8566ef157244881bacc474543ef96755d
Now the new subPath is already available via the maven-settings workspace inside the Tekton Hub's Maven Task (which doesn't implement a dedicated cache workspace right now). We only need to tell Maven to use $(workspaces.maven-settings.path) as the local repository.
Therefore we add -Dmaven.repo.local=$(workspaces.maven-settings.path) to the GOALS parameter of the maven Task like this:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
spec:
  params:
    - name: SOURCE_URL
      type: string
      description: A git repo url where the source code resides.
    - name: SOURCE_REVISION
      description: The branch, tag or SHA to checkout.
      default: ""
  workspaces:
    - name: maven-settings
    - name: source-workspace
  tasks:
    - name: fetch-repository # This task fetches the repository, using the `git-clone` task you installed
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: "$(params.SOURCE_URL)"
        - name: revision
          value: "$(params.SOURCE_REVISION)"
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
    - name: maven
      taskRef:
        name: maven
      runAfter:
        - fetch-repository
      params:
        - name: GOALS
          value:
            - -Dmaven.repo.local=$(workspaces.maven-settings.path)
            - verify
      workspaces:
        - name: source
          workspace: source-workspace
        - name: maven-settings
          workspace: maven-settings
Now, after the first pipeline execution, every subsequent run should re-use the Maven repository inside the maven-settings workspace. This also prevents the log from being polluted with Maven download statements and speeds up the pipeline, depending on the number of dependencies:
Our simple example builds more than twice as fast.
Related
My password placeholder in application.yaml in my Spring Boot project:
password: ${DB_PASSWORD}
My secret file:
apiVersion: v1
data:
  DB_PASSWORD: QXBwX3NhXzA1X2pzZHVlbmRfMzIx
kind: Secret
type: Opaque
metadata:
  name: test-secret
My Deployment config file part:
spec:
  containers:
    - envFrom:
        - configMapRef:
            name: gb-svc-rpt-dtld-cc
      image: >-
        artifactory.global.standardchartered.com/colt/gb-svc-reports-dataloader-cc/gb-svc-reports-dataloader-cc-develop#sha256:c8b7e210c18556155d8314eb41965fac57c1c9560078e3f14bf7407dbde564fb
      imagePullPolicy: Always
      name: gb-svc-rpt-dtld-cc
      ports:
        - containerPort: 8819
          protocol: TCP
      volumeMounts:
        - mountPath: /etc/secret
          name: secret-test
  volumes:
    - name: secret-test
      secret:
        defaultMode: 420
        secretName: test-secret
I'm able to see the secrets added under the /etc/secret path, but they are not picked up by the placeholders, and I get an error during server startup.
Could not resolve placeholder 'DB_PASSWORD' in value "${DB_PASSWORD}"
Note: The same code works if I add the secret as an environment variable in the deployment config.
As I understand from your question, you are trying to expose a secret to a pod as an environment variable. In Kubernetes, secrets can be mounted as a volume (which you did in the attached code) or exposed as environment variables (as you would like to do).
For that you should use:
spec:
  containers:
    - env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: DB_PASSWORD
              name: test-secret
      image: "fedora:29"
      name: my-app
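Once DB_PASSWORD arrives as an environment variable, Spring resolves the ${DB_PASSWORD} placeholder against the process environment, which is why the env-variable route works while a file mounted under /etc/secret does not. As a toy illustration of what that resolution does (a hypothetical PlaceholderResolver, not Spring's actual implementation):

```java
import java.util.Map;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderResolver {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    // Substitutes each ${NAME} with lookup.apply("NAME"); throws if a name
    // cannot be resolved, mirroring Spring's "Could not resolve placeholder" error.
    static String resolve(String value, Function<String, String> lookup) {
        Matcher m = PLACEHOLDER.matcher(value);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String replacement = lookup.apply(m.group(1));
            if (replacement == null) {
                throw new IllegalArgumentException(
                        "Could not resolve placeholder '" + m.group(1) + "'");
            }
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // In the real app the lookup would be System::getenv; a fixed map keeps this runnable.
        System.out.println(resolve("password: ${DB_PASSWORD}",
                Map.of("DB_PASSWORD", "changeit")::get));
    }
}
```

A mounted secret file never enters this lookup chain, which is exactly why the placeholder stayed unresolved in the question above.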
I'm working on a library to read secrets from a given directory, which I got up and running easily with Docker Swarm by using the /run/secrets directory as the defined place to read secrets from. I'd like to do the same for a Kubernetes deployment, but looking online I see many guides that advise using various Kubernetes APIs and libraries. Is it possible to simply read from disk as it is with Docker Swarm? If so, what is the directory these are stored in?
Please read the documentation
I see 2 practical ways to access the k8s secrets:
Mount the secret as a file
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
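From inside the container, each key of the mounted secret appears as a plain file under the mountPath, so reading it needs no Kubernetes API at all. A minimal sketch, assuming the mount above and a key named username in mysecret (the SecretReader class and the key name are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SecretReader {
    // Each secret key is materialized as a file named after the key;
    // trim() drops a possible trailing newline in the secret value.
    static String read(String dir, String key) throws IOException {
        return Files.readString(Path.of(dir, key)).trim();
    }

    public static void main(String[] args) throws IOException {
        Path secretFile = Path.of("/etc/foo", "username");
        if (Files.exists(secretFile)) {
            System.out.println(read("/etc/foo", "username"));
        }
    }
}
```

Docker Swarm's /run/secrets convention maps directly onto this: only the directory differs, and in Kubernetes you choose it yourself via mountPath.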
Expose the secret as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
I tried to get the pod name inside the Tomcat container at startup. I already exposed the pod's full name as an environment variable using the Kubernetes Downward API, as below, in the search.yaml file (only a portion of the file is attached).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: dev
  labels:
    app: search
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search
  template:
    metadata:
      labels:
        app: search
    spec:
      hostname: search-host
      imagePullSecrets:
        - name: regcred
      containers:
        - name: search-container
          image: docker.test.net/search:develop-202104070845
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "2048Mi"
              cpu: "1"
            limits:
              memory: "2048Mi"
              cpu: "2"
          env:
            - name: SERVER_GROUP
              value: "SEARCH"
            - name: MIN_TOMCAT_MEMORY
              value: "512M"
            - name: MAX_TOMCAT_MEMORY
              value: "5596M"
            - name: DOCKER_TIME_ZONE
              value: "Asia/Colombo"
            - name: AVAILABILITY_ZONE
              value: "CMB"
After running the pod, this environment variable is available at the Docker level.
Pod details
NAME READY STATUS RESTARTS AGE
search-56c9544d58-bqrxv 1/1 Running 0 4s
Pod environment variable for pod name
POD_NAME=search-56c9544d58-bqrxv
When I access this as below in the Tomcat container's Java code, in a jar called BootsTrap.jar, it returns null.
String dockerPodName = System.getenv( "POD_NAME" );
Is it because the pod is not up and running before the Tomcat container is initialized, is accessing the environment variable in Java incorrect, or is there another way of accessing the pod's environment variables through Java?
You are setting MY_POD_NAME as environment variable, but do the lookup for POD_NAME. Use the same name in the Java code and the deployment.
Note: Your YAML seems to have wrong indentation, I assume that this is just a copy-paste artifact. If that is not the case, it could lead to rejected changes of the deployment since the YAML is invalid.
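With the names aligned, the lookup on the Java side is a one-liner; a small defensive sketch (the PodName class and the "unknown" fallback are illustrative):

```java
public class PodName {
    // The argument to getenv must match the `name:` under env in the Deployment
    // (MY_POD_NAME here), not the fieldPath it is populated from.
    static String podName() {
        String name = System.getenv("MY_POD_NAME");
        return name != null ? name : "unknown";
    }

    public static void main(String[] args) {
        System.out.println("Running in pod: " + podName());
    }
}
```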
I stumbled upon a very subtle issue trying to implement ConfigMap property source live-reloading for my app deployed to K8S.
Here are a few config snippets from my current project:
application.yaml
spring:
  application:
    name: myapp
  cloud:
    kubernetes:
      config:
        enabled: true
        name: myapp
        namespace: myapp
        sources:
          - namespace: myapp
            name: myapp-configmap
      reload:
        enabled: true
        mode: event
        strategy: refresh
    refresh:
      refreshable:
        - com.myapp.PropertiesConfig
      extra-refreshable:
        - javax.sql.DataSource
myapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: myapp
  name: myapp
  namespace: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      name: myapp-backend
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        name: myapp-backend
    spec:
      serviceAccountName: myapp-config-reader
      volumes:
        - name: myapp-configmap
          configMap:
            name: myapp-configmap
      containers:
        - name: myapp
          image: eu.gcr.io/myproject/myapp:latest
          volumeMounts:
            - name: myapp-configmap
              mountPath: /config
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: myapp-configmap
          env:
            - name: DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-db-credentials
                  key: password
myapp-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-configmap
  namespace: myapp
data:
  SPRING_PROFILES_ACTIVE: dtest
  application.yml: |-
    reload.message: 1
PropertiesConfig.java
@Data
@Configuration
@ConfigurationProperties(prefix = "reload")
public class PropertiesConfig {
    private String message;
}
I'm using the following dependencies, in my maven POM:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.4.RELEASE</version>
</parent>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-kubernetes-config</artifactId>
    <version>1.1.1.RELEASE</version>
</dependency>
I can successfully deploy myapp to my K8S cluster.
I have a scheduled task printing propertiesConfig.getMessage() every 10 seconds.
Therefore, I see a series of "1" in my log as myapp starts.
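The scheduled printer described above can be sketched in plain Java (the real app would use Spring's @Scheduled; the MessagePrinter class and its Supplier are illustrative stand-ins for the bean wiring):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class MessagePrinter {
    private final Supplier<String> messageSource;

    public MessagePrinter(Supplier<String> messageSource) {
        this.messageSource = messageSource;
    }

    // One tick of the scheduled task: fetch the current property value and log it.
    public String tick() {
        String msg = messageSource.get();
        System.out.println("reload.message = " + msg);
        return msg;
    }

    public static void main(String[] args) throws InterruptedException {
        // In the deployed app the supplier would be propertiesConfig::getMessage.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        MessagePrinter printer = new MessagePrinter(() -> "1");
        scheduler.scheduleAtFixedRate(printer::tick, 0, 10, TimeUnit.SECONDS);
        Thread.sleep(100); // let one tick run, then shut down
        scheduler.shutdown();
    }
}
```

Each tick only sees whatever value the refreshed bean holds at that moment, which is what makes the reload timing issue below visible in the log.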
Right after, I change my reload.message ConfigMap property to "2". What happens?
in less than one second the 'event' is triggered, and k8s calls my /actuator/refresh Spring Boot endpoint;
I still see "1" in the log, because...
/config/application.yml (the mounted volume) takes ~10s to update; only then can I see reload.message=2 in there
the 'refresh' happened just a few seconds earlier, when the volume was not yet updated!
In addition, I tried other combos: mode polling, strategy restart_context, and so on.
But... I definitely do want event+refresh! It's the required solution for our use case.
My questions:
can I set some kind of "delay" for the refresh event, in order to give the volume the time it needs to sync with the ConfigMap?
can I configure my ConfigMap inside the deployment without using volumeMounts at all? (if I remove the ConfigMap volume now, Spring simply doesn't pick up the properties from the ConfigMap)
The application.yml file in the project and the one described in the ConfigMap somehow conflict.
I fixed that by renaming the one in the project to bootstrap.yml.
I am trying to run Elasticsearch on Kubernetes following
https://github.com/pires/kubernetes-elasticsearch-cluster
The YAML file that I am using to deploy on the cluster looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: es5-master
...
spec:
...
  spec:
    initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
    containers:
      - name: es5-master
        securityContext:
          privileged: false
          capabilities:
            add:
              - IPC_LOCK
              - SYS_RESOURCE
        image: quay.io/pires/docker-elasticsearch-kubernetes:5.6.0
        imagePullPolicy: Always
        env:
          - name: NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: "CLUSTER_NAME"
            value: "myes5db"
          - name: "NUMBER_OF_MASTERS"
            value: "2"
          - name: NODE_MASTER
            value: "true"
          - name: NODE_INGEST
            value: "false"
          - name: NODE_DATA
            value: "false"
          - name: HTTP_ENABLE
            value: "false"
          - name: "ES_JAVA_OPTS"
            value: "-Xms256m -Xmx256m"
          - name: "NETWORK_HOST"
            value: "_eth0_"
        ports:
          - containerPort: 9300
            name: transport
            protocol: TCP
        livenessProbe:
          tcpSocket:
            port: 9300
        volumeMounts:
          - name: storage
            mountPath: /data
    volumes:
      - emptyDir:
          medium: ""
        name: "storage"
The error that I am getting is:
java.io.IOException: Invalid string; unexpected character: 253 hex: fd
at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:372) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:362) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:352) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext.readHeaders(ThreadContext.java:186) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1372) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.6.0.jar:5.6.0]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
...
I am already running ES version 1.7, and that's why I named this new one elasticsearch5. I hope this naming is not the cause of the problem.
I initially didn't have eth0 for NETWORK_HOST; after reviewing the Troubleshooting part of the readme doc, I added it in, but now I'm getting the "253 hex: fd" error.
Other network host values didn't work.
I really appreciate any ideas regarding this.
I faced this issue when I tried hitting a lower version of Elasticsearch with higher-version Elasticsearch dependencies compiled in my IntelliJ IDEA: I had Elasticsearch 1.5 running on my machine and was trying to hit it with Elasticsearch 6.5.1 client dependencies. I hope this helps.
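A quick way to confirm which server version is actually listening, before debugging transport-level errors, is the root HTTP endpoint, whose JSON response includes version.number (note the manifest above sets HTTP_ENABLE to "false" on the master, so this would need HTTP enabled on some node or a port-forward). A hedged sketch; the EsVersionCheck class and its naive parseVersion helper are illustrative, not part of any ES client:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

public class EsVersionCheck {
    // Extracts version.number from the JSON returned by GET / on an ES node,
    // e.g. {"name":"...","version":{"number":"5.6.0",...}} -> "5.6.0"
    static String parseVersion(String rootJson) {
        int key = rootJson.indexOf("\"number\"");
        if (key < 0) throw new IllegalArgumentException("no version.number field");
        int start = rootJson.indexOf('"', rootJson.indexOf(':', key) + 1) + 1;
        return rootJson.substring(start, rootJson.indexOf('"', start));
    }

    public static void main(String[] args) {
        // Assumes HTTP is enabled and reachable, e.g. via `kubectl port-forward` to 9200.
        try (InputStream in = new URL("http://localhost:9200/").openStream()) {
            System.out.println("server version: " + parseVersion(new String(in.readAllBytes())));
        } catch (IOException e) {
            System.out.println("could not reach node: " + e.getMessage());
        }
    }
}
```

If the version printed here differs in major version from the client dependencies on the classpath, the "Invalid string; unexpected character" transport error above is the expected symptom.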