I am trying to run Elasticsearch on Kubernetes, following
https://github.com/pires/kubernetes-elasticsearch-cluster
The YAML file I am using to deploy to the cluster looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: es5-master
  ...
spec:
  ...
  spec:
    initContainers:
    - name: init-sysctl
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["sysctl", "-w", "vm.max_map_count=262144"]
      securityContext:
        privileged: true
    containers:
    - name: es5-master
      securityContext:
        privileged: false
        capabilities:
          add:
            - IPC_LOCK
            - SYS_RESOURCE
      image: quay.io/pires/docker-elasticsearch-kubernetes:5.6.0
      imagePullPolicy: Always
      env:
      - name: NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: "CLUSTER_NAME"
        value: "myes5db"
      - name: "NUMBER_OF_MASTERS"
        value: "2"
      - name: NODE_MASTER
        value: "true"
      - name: NODE_INGEST
        value: "false"
      - name: NODE_DATA
        value: "false"
      - name: HTTP_ENABLE
        value: "false"
      - name: "ES_JAVA_OPTS"
        value: "-Xms256m -Xmx256m"
      - name: "NETWORK_HOST"
        value: "_eth0_"
      ports:
      - containerPort: 9300
        name: transport
        protocol: TCP
      livenessProbe:
        tcpSocket:
          port: 9300
      volumeMounts:
      - name: storage
        mountPath: /data
    volumes:
    - emptyDir:
        medium: ""
      name: "storage"
The error that I am getting is:
java.io.IOException: Invalid string; unexpected character: 253 hex: fd
at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:372) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:362) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:352) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext.readHeaders(ThreadContext.java:186) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1372) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.6.0.jar:5.6.0]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at
I am already running Elasticsearch version 1.7, which is why I named this new deployment elasticsearch5. I hope the naming is not the cause of the problem.
I initially didn't set NETWORK_HOST to _eth0_; after reviewing the Troubleshooting part of the README, I added it, but now I am getting the 253 hex: fd error.
Other network host values didn't work either.
I really appreciate any ideas on this.
I faced this issue when I tried hitting a lower version of Elasticsearch with higher-version Elasticsearch dependencies compiled in my IntelliJ IDEA project. I had Elasticsearch 1.5 running on my machine and was trying to hit it using Elasticsearch 6.5.1 dependencies from IntelliJ IDEA. The binary transport protocol is not compatible across major versions, which produces exactly this kind of invalid-stream error. I hope this helps.
My password placeholder in application.yaml in my Spring Boot project:
password: ${DB_PASSWORD}
My secret file:
apiVersion: v1
data:
  DB_PASSWORD: QXBwX3NhXzA1X2pzZHVlbmRfMzIx
kind: Secret
type: Opaque
metadata:
  name: test-secret
The relevant part of my Deployment config file:
spec:
  containers:
    - envFrom:
        - configMapRef:
            name: gb-svc-rpt-dtld-cc
      image: >-
        artifactory.global.standardchartered.com/colt/gb-svc-reports-dataloader-cc/gb-svc-reports-dataloader-cc-develop@sha256:c8b7e210c18556155d8314eb41965fac57c1c9560078e3f14bf7407dbde564fb
      imagePullPolicy: Always
      name: gb-svc-rpt-dtld-cc
      ports:
        - containerPort: 8819
          protocol: TCP
      volumeMounts:
        - mountPath: /etc/secret
          name: secret-test
  volumes:
    - name: secret-test
      secret:
        defaultMode: 420
        secretName: test-secret
I can see the secrets mounted under the /etc/secret path as well, but the placeholder is not being resolved and I get the following error during server startup:
Could not resolve placeholder 'DB_PASSWORD' in value "${DB_PASSWORD}"
Note: the same code works if I add the secret as an environment variable in the deployment config.
As I understand your question, you want the secret available to your application as an environment variable. In Kubernetes, secrets can be mounted as a volume (which you did in the attached code) or exposed as environment variables (which is what you need here, since Spring resolves ${...} placeholders against environment variables and properties, not against arbitrary mounted files).
For that you should use:
spec:
  containers:
    - env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: DB_PASSWORD
              name: test-secret
      image: "fedora:29"
      name: my-app
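If you would rather keep the volume mount, newer Spring Boot versions can read mounted secrets directly via a config tree import. A minimal sketch, assuming Spring Boot 2.4+ and the secret still mounted at /etc/secret as in the question:
# application.yaml; sketch, assumes Spring Boot 2.4+ and the secret volume mounted at /etc/secret
spring:
  config:
    import: "configtree:/etc/secret/"

# Each file under /etc/secret becomes a property named after the file,
# so the DB_PASSWORD file resolves the ${DB_PASSWORD} placeholder.
password: ${DB_PASSWORD}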
We want to build a Spring Boot-based project using Maven. We found the Maven Task on the Tekton Hub and already have a running Pipeline. In a shortened version, our pipeline.yml looks like this:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
spec:
  params:
    - name: SOURCE_URL
      type: string
      description: A git repo url where the source code resides.
    - name: SOURCE_REVISION
      description: The branch, tag or SHA to checkout.
      default: ""
  workspaces:
    - name: maven-settings
    - name: source-workspace
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: "$(params.SOURCE_URL)"
        - name: revision
          value: "$(params.SOURCE_REVISION)"
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
    - name: maven
      taskRef:
        name: maven
      runAfter:
        - fetch-repository
      params:
        - name: GOALS
          value:
            - package
      workspaces:
        - name: source
          workspace: source-workspace
        - name: maven-settings
          workspace: maven-settings
And a PipelineRun is defined as:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: maven-settings
      emptyDir: {}
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: source-pvc
  params:
    - name: SOURCE_URL
      value: https://gitlab.com/jonashackt/microservice-api-spring-boot
    - name: SOURCE_REVISION
      value: 3c4131f8566ef157244881bacc474543ef96755d
The source-pvc PersistentVolumeClaim is defined as:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
Our project builds fine, but the Task downloads all of the project's Maven dependencies again every time we start another PipelineRun.
The Tekton Hub's Maven Task https://hub.tekton.dev/tekton/task/maven doesn't seem to support using a cache. How can we cache nevertheless?
There's an easy way to accomplish caching with the Tekton Hub's Maven Task. Instead of binding the maven-settings workspace to an empty directory with emptyDir: {}, create a new subPath inside your already defined source-pvc PersistentVolumeClaim, and link the persistentVolumeClaim the same way you already linked it for the source-workspace. Your PipelineRun then looks like this:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: maven-settings
      subPath: maven-repo-cache
      persistentVolumeClaim:
        claimName: source-pvc
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: source-pvc
  params:
    - name: SOURCE_URL
      value: https://gitlab.com/jonashackt/microservice-api-spring-boot
    - name: SOURCE_REVISION
      value: 3c4131f8566ef157244881bacc474543ef96755d
Now the new subPath is available via the maven-settings workspace inside the Tekton Hub's Maven Task (which doesn't implement a dedicated cache workspace right now). We only need to tell Maven to use the path $(workspaces.maven-settings.path) as its local repository.
Therefore we add -Dmaven.repo.local=$(workspaces.maven-settings.path) as a value to the GOALS parameter of the maven Task, like this:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
spec:
  params:
    - name: SOURCE_URL
      type: string
      description: A git repo url where the source code resides.
    - name: SOURCE_REVISION
      description: The branch, tag or SHA to checkout.
      default: ""
  workspaces:
    - name: maven-settings
    - name: source-workspace
  tasks:
    - name: fetch-repository # This task fetches a repository from github, using the `git-clone` task you installed
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: "$(params.SOURCE_URL)"
        - name: revision
          value: "$(params.SOURCE_REVISION)"
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
    - name: maven
      taskRef:
        name: maven
      runAfter:
        - fetch-repository
      params:
        - name: GOALS
          value:
            - -Dmaven.repo.local=$(workspaces.maven-settings.path)
            - verify
      workspaces:
        - name: source
          workspace: source-workspace
        - name: maven-settings
          workspace: maven-settings
Now, after the first pipeline execution, every subsequent run should reuse the Maven repository inside the maven-settings workspace. This also prevents the log from being polluted with Maven download statements and speeds up the pipeline, depending on the number of dependencies.
Our simple example builds more than twice as fast.
I'm working on a library that reads secrets from a given directory. I got it up and running easily with Docker Swarm by using the /run/secrets directory as the defined place to read secrets from. I'd like to do the same for a Kubernetes deployment, but the guides I find online advise using various Kubernetes APIs and libraries. Is it possible to simply read from disk, as it is with Docker Swarm? If so, what is the directory these secrets are stored in?
Please read the documentation.
I see two practical ways to access Kubernetes secrets:
1. Mount the secret as a file
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
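With this mount, each key of the secret shows up as a file under the mountPath, so your library can read them from disk exactly as it does with /run/secrets in Swarm. For completeness, a minimal sketch of a Secret these examples could mount; the names and values here are assumed:
# Hypothetical Secret matching the examples above; stringData avoids manual base64 encoding
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  username: admin   # becomes the file /etc/foo/username
  password: s3cr3t  # becomes the file /etc/foo/password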
2. Expose the secret as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
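As a shorthand, envFrom can inject every key of a secret as environment variables in one go; a small sketch, assuming the same mysecret as above:
apiVersion: v1
kind: Pod
metadata:
  name: secret-envfrom-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      envFrom:
        - secretRef:
            name: mysecret # every key in the secret becomes an env var of the same name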
I am having trouble debugging a remote Spring Boot application using the VS Code IDE. (I am not using any Dockerfile in development, since I am using the jib-maven plugin with Skaffold to deploy on k8s; I assume this shouldn't cause any issue.)
Below is a snapshot of the k8s YAML file (I have deleted a few parts and replaced them with dots):
apiVersion: v1
kind: Service
metadata:
  annotations:
    .
    .
    .
    .
    .
    .
    .
    .
  name: ABC-service
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: grpc
      port: 50051
      targetPort: 50051
  selector:
    app: ABC-service
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ABC-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ABC-service
  template:
    metadata:
      labels:
        app: ABC-service
    spec:
      containers:
        - ports:
            - containerPort: 8080
            - containerPort: 50051
            - containerPort: 50005
          env:
            - name: SECURESTORE_SVC_HOST
              value: securestore-svc
            - name: SECURESTORE_SVC_PORT
              value: "50051"
            - name: IAM_SVC_HOST
              value: iam-service-svc
            - name: IAM_SVC_PORT
              value: "50051"
          name: ABC-service
          imagePullPolicy: Always
          image: ABC/ABC-service
      imagePullSecrets:
        - name: ABC-dev
Once my service is deployed, I perform the steps below to debug it.
First I run the port-forward command in a terminal:
kubectl port-forward ABC-service-c59667c89-z5pzp 8080:8080
Now my service is accessible via localhost:8080/Hello.
Afterwards, I try to connect the VS Code debugger on port 8080 using the following launch.json configuration:
{
    "type": "java",
    "name": "Debug (Attach)",
    "projectName": "MyApplication",
    "request": "attach",
    "hostName": "localhost",
    "port": 8080
}
Here the debugger keeps waiting to connect but never attaches and eventually times out. I have followed many tutorials but am not sure what I am messing up.
In launch.json, add
"console": "internalConsole",
Try this and see if the issue goes away.
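Independent of that setting, a Java debugger can only attach if the JVM itself is listening for JDWP connections, which happens on a dedicated debug port rather than on the HTTP port 8080. A sketch, under the assumption that containerPort 50005 from the Deployment above is intended as the debug port: start the JVM with a JDWP agent, for example via the standard JAVA_TOOL_OPTIONS variable, then port-forward and attach to that port instead.
# Sketch: assumes 50005 is meant as the debug port (it is already exposed above);
# the *: address prefix requires Java 9 or newer
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:50005"
With that in place, run kubectl port-forward ABC-service-c59667c89-z5pzp 50005:50005 and set "port": 50005 in launch.json, and the debugger should attach.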
I am trying to get the pod name inside the Tomcat container at startup. I have already exposed the pod's full name as an environment variable using the Kubernetes Downward API, as below in the search.yaml file (only a portion of the file is attached).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: dev
  labels:
    app: search
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search
  template:
    metadata:
      labels:
        app: search
    spec:
      hostname: search-host
      imagePullSecrets:
        - name: regcred
      containers:
        - name: search-container
          image: docker.test.net/search:develop-202104070845
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "2048Mi"
              cpu: "1"
            limits:
              memory: "2048Mi"
              cpu: "2"
          env:
            - name: SERVER_GROUP
              value: "SEARCH"
            - name: MIN_TOMCAT_MEMORY
              value: "512M"
            - name: MAX_TOMCAT_MEMORY
              value: "5596M"
            - name: DOCKER_TIME_ZONE
              value: "Asia/Colombo"
            - name: AVAILABILITY_ZONE
              value: "CMB"
After the pod starts, this environment variable is available at the Docker level.
Pod details
NAME READY STATUS RESTARTS AGE
search-56c9544d58-bqrxv 1/1 Running 0 4s
Pod environment variable for pod name
POD_NAME=search-56c9544d58-bqrxv
When I accessed it as below, in the Tomcat container's Java code in a jar called BootsTrap.jar, it returned null.
String dockerPodName = System.getenv( "POD_NAME" );
Is it because the pod is not up and running before the Tomcat container is initialized, is the way I access the environment variable in Java incorrect, or is there another way of accessing a pod's environment variable through Java?
You are setting MY_POD_NAME as the environment variable, but looking up POD_NAME. Use the same name in the Java code and the deployment.
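For example, a minimal sketch of the matching pair, renaming the variable on the deployment side so the existing Java lookup works (changing the Java code to read MY_POD_NAME instead would work equally well):
# Deployment: expose the Downward API field under the name the Java code expects
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
Environment variables are injected before the container's process starts, so once the names match, System.getenv("POD_NAME") returns the pod name; there is no startup race as speculated in the question.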
Note: your YAML seems to have wrong indentation; I assume this is just a copy-paste artifact. If that is not the case, it could lead to the deployment being rejected, since the YAML would be invalid.