kubernetes failed to start pod due to ContainerCannotRun - java

I am new to Kubernetes. I recently set up a Kubernetes cluster with 1 master and 1 node.
I am able to start a docker container by running
sudo docker run <docker-image> on my node machine.
But I failed to start the same docker container as a pod using a Kubernetes yml file,
by running sudo kubectl create -f deployment.yml.
I described the pod and saw this error message:
Last State:     Terminated
  Reason:       ContainerCannotRun
  Message:      OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"HOSTNAME\": executable file not found in $PATH": unknown
  Exit Code:    128
The docker container is supposed to start a Java executable.
This is my deployment file:
kind: Service
apiVersion: v1
metadata:
  name: service1-service
spec:
  selector:
    app: service1
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 26666
      # Port to forward to inside the pod
      targetPort: 26666
      # Port accessible outside cluster
      nodePort: 26666
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service1-deployment
spec:
  selector:
    matchLabels:
      app: service1
  replicas: 1
  template:
    metadata:
      labels:
        app: service1
    spec:
      containers:
        - name: service1
          image: service1-docker-image
          imagePullPolicy: Never
          ports:
            - containerPort: 26666
          # args: ["HOSTNAME", "KUBERNETES_PORT"]
In this deployment file, I try to create an nginx service and one Java web application service.
Is it because I defined the wrong apiVersion and kind?
Any help would be appreciated.

Look at this error: exec: \"HOSTNAME\": executable file not found in $PATH
I had a similar error because the container could not locate the binary named in the Docker CMD; I had given it the wrong path. Check the path to the file and that should do the trick.
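For this error specifically: in a Kubernetes container spec, args replaces the image's CMD, so an entry like args: ["HOSTNAME", "KUBERNETES_PORT"] makes the runtime try to execute HOSTNAME as a binary, which is exactly the message shown. Kubernetes already injects HOSTNAME and KUBERNETES_PORT into the container's environment, so the Java app can read them directly; if you need to pass extra values, use env rather than args. A minimal sketch (the variable name and value are illustrative):

spec:
  containers:
    - name: service1
      image: service1-docker-image
      imagePullPolicy: Never
      # no args here, so the image's own ENTRYPOINT/CMD (the java command) runs
      env:
        - name: SERVICE1_GREETING   # hypothetical variable read by the app
          value: "hello"            # illustrative value
      ports:
        - containerPort: 26666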

Related

Java Spring Active profile in kubernetes Cluster

I want to start a Java Spring app with an active profile.
I build the Docker image in GitLab CI/CD using the Maven wrapper:
./mvnw compile jib:build -Dimage=image/sms-service:1
After that I deploy the app in k8s.
Now I want to run it with an active profile. What is the best way? How can I tell k8s to run a specific profile?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sms-service
  namespace: sms-service
spec:
  selector:
    matchLabels:
      app: sms-service
  replicas: 4 # tells deployment to run 4 pods matching the template
  template:
    metadata:
      labels:
        app: sms-service
    spec:
      containers:
        - name: sms-service
          image: image/sms-service:1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: sms-service
Set the SPRING_PROFILES_ACTIVE environment variable to the profile(s) you want to run.
You can set it in the deployment YAML or at build time in your image, but it is usually better to add it to the deployment.
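For example, in the container spec of the deployment above, a minimal sketch (prod is a placeholder for whichever profile you want active):

spec:
  containers:
    - name: sms-service
      image: image/sms-service:1
      imagePullPolicy: Always
      env:
        - name: SPRING_PROFILES_ACTIVE
          value: "prod"   # comma-separate to activate several profiles
      ports:
        - containerPort: 8080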
Alternatively, create a new file named configmap.yaml under the k8s config folder and add the following lines:
apiVersion: v1
kind: ConfigMap
metadata:
  name: blabla
  namespace: bla
data:
  application.yaml: |
    spring:
      profiles:
        active: prod # here goes the profile
This tells Kubernetes to set this configuration when starting the container.
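Note that a ConfigMap on its own is not visible to the application; it still has to be mounted into the pod, and Spring Boot has to look where it is mounted. A sketch of the deployment side, assuming the app is started with -Dspring.config.additional-location=/config/ (the volume name and mount path are assumptions):

spec:
  containers:
    - name: sms-service
      image: image/sms-service:1
      volumeMounts:
        - name: app-config       # hypothetical volume name
          mountPath: /config
  volumes:
    - name: app-config
      configMap:
        name: blabla             # the ConfigMap defined above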

Kubernetes deployment error with Java/Micronaut: ERR_CONNECTION_REFUSED

I am trying to deploy an app that has 3 services: a frontend (Angular), backend 1 (Java/Micronaut), and backend 2 (Java/Micronaut).
My frontend works properly, but the Java apps are not working.
Sometimes I have seen a Java app start 20 minutes after being deployed, but this time it does not work even after 1 hour.
The deployment, pod, and service are all in a running state in Kubernetes, but when I try to hit the URL I get ERR_CONNECTION_REFUSED.
deployment.yaml for the Java app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentication-deploy
  labels:
    name: authentication-deploy
    app: supply-chain-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: authentication-pod
      app: supply-chain-app
  template:
    metadata:
      name: authentication-pod
      labels:
        name: authentication-pod
        app: supply-chain-app
    spec:
      containers:
        - name: authentication
          image: cawishika/authentication-service:1.1
          ports:
            - containerPort: 80
service.yaml for the Java app:
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  labels:
    name: authentication-service
    app: supply-chain-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30006
  selector:
    name: authentication-pod
    app: supply-chain-app
Dockerfile:
FROM adoptopenjdk/openjdk11:latest
EXPOSE 8002
ADD target/authentication-service-0.1.jar authentication-service-0.1.jar
ENTRYPOINT ["java", "-jar", "/authentication-service-0.1.jar"]
kubectl logs podname
Your Dockerfile is exposing port 8002 (EXPOSE 8002), but your app is started on port 8080.
Additionally, your Kubernetes configuration is pointing to port 80 of your pod.
You should set it so that all three configurations use the same port.
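A minimal sketch of the three places brought into line, assuming the Micronaut app listens on its default port 8080:

# Dockerfile
EXPOSE 8080

# deployment.yaml, container spec
ports:
  - containerPort: 8080

# service.yaml
ports:
  - port: 80          # the Service's own port inside the cluster
    targetPort: 8080  # must match the port the app actually listens on
    nodePort: 30006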

How to do Java Remote debugging with kubernetes pod? [duplicate]

I am trying to remote debug the application in attached mode with host 192.168.99.100 and port 5005, but it tells me that it is unable to open the debugger port. The IP is 192.168.99.100 (the cluster is hosted locally via minikube).
Output of kubectl describe service catalogservice:
Name:                     catalogservice
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=catalogservice
Type:                     NodePort
IP:                       10.98.238.198
Port:                     web  31003/TCP
TargetPort:               8080/TCP
NodePort:                 web  31003/TCP
Endpoints:                172.17.0.6:8080
Port:                     debug  5005/TCP
TargetPort:               5005/TCP
NodePort:                 debug  32003/TCP
Endpoints:                172.17.0.6:5005
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
This is the pod's service.yml:
apiVersion: v1
kind: Service
metadata:
  name: catalogservice
spec:
  type: NodePort
  selector:
    app: catalogservice
  ports:
    - name: web
      protocol: TCP
      port: 31003
      nodePort: 31003
      targetPort: 8080
    - name: debug
      protocol: TCP
      port: 5005
      nodePort: 32003
      targetPort: 5005
And here I expose the container's ports:
spec:
  containers:
    - name: catalogservice
      image: elps/myimage
      ports:
        - containerPort: 8080
          name: app
        - containerPort: 5005
          name: debug
The way I build the image:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8082
ADD /target/catalogservice-0.0.1-SNAPSHOT.jar catalogservice-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n", "-jar", "catalogservice-0.0.1-SNAPSHOT.jar"]
When I execute nmap -p 5005 192.168.99.100 I receive:
PORT     STATE  SERVICE
5005/tcp closed avt-profile-2
When I execute nmap -p 32003 192.168.99.100 I receive:
PORT      STATE  SERVICE
32003/tcp closed unknown
When I execute nmap -p 31003 192.168.99.100 I receive:
PORT      STATE  SERVICE
31003/tcp open   unknown
When I execute kubectl get services I receive:
NAME             TYPE      CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
catalogservice   NodePort  10.108.195.102   <none>        31003:31003/TCP,5005:32003/TCP   14m
minikube service customerservice --url returns
http://192.168.99.100:32004
As an alternative to using a NodePort in a Service, you could also use kubectl port-forward to access the debug port in your Pod.
Since Kubernetes v1.10, kubectl port-forward allows using a resource name, such as a pod name, to select a matching pod to forward to.
You need to expose the debug port in the Deployment yaml for the Pod:
spec:
  containers:
    ...
    ports:
      ...
      - containerPort: 5005
Then get the name of your Pod via:
kubectl get pods
and then add a port-forward to that Pod:
kubectl port-forward podname 5005:5005
In IntelliJ you will be able to connect to
Host: localhost
Port: 5005
Alternatively, you can use the Cloud Code IntelliJ plugin.
Also, if you use Fabric8, it provides the fabric8:debug goal.
There was a slip in the yaml you first posted as:
  - containerPort: 5050
    name: debug
Should be:
  - containerPort: 5005
    name: debug
You also need to use the external port 32003 when configuring the IntelliJ debugger. With those changes it should work.
You may also want to think about how to make it more flexible. In the past, when I've done this, I've used a different form of the docker start command that lets you turn remote debugging on and off via an environment variable called REMOTE_DEBUG, which for you would be:
CMD if [ "x$REMOTE_DEBUG" = "xfalse" ] ; then java $JAVA_OPTS -jar catalogservice-0.0.1-SNAPSHOT.jar ; else java $JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n -jar catalogservice-0.0.1-SNAPSHOT.jar ; fi
You'll probably find you also want to set the env var $JAVA_OPTS to limit JVM memory use, to avoid issues in k8s.
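With that CMD in place, debugging can then be toggled per environment from the deployment spec, for example (both values are illustrative):

env:
  - name: REMOTE_DEBUG
    value: "true"       # set to "false" to start without the debug agent
  - name: JAVA_OPTS
    value: "-Xmx256m"   # example heap cap to keep the JVM within the pod's limits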

Dockerized Spring Boot app not using mounted Kubernetes ConfigMap (application.properties)

I have a problem wherein my dockerized Spring Boot application is not using the application.properties I stored in a ConfigMap.
However, I can see and confirm that my ConfigMap has been mounted properly in the right directory of my Spring Boot app when I enter the pod's shell.
Note that I have an application.properties by default, which Kubernetes mounts over / overwrites later on.
It seems that Spring Boot uses the first application.properties, and when k8s overwrites it, it apparently doesn't use the new one.
What apparently happens is:
1. The .jar file inside the dockerized Spring Boot app starts running.
2. It uses the first/default application.properties file at runtime.
3. Kubernetes proceeds to mount the ConfigMap.
4. The mount / overwrite succeeds, but how will Spring Boot use this one since it's already running?
Here is the Dockerfile of my Spring Boot / Docker image for reference:
FROM maven:3.5.4-jdk-8-alpine
# Copy whole source code to the docker image
# Note of .dockerignore, this ensures that folders such as `target` is not copied
WORKDIR /usr/src/myproject
COPY . /usr/src/myproject/
RUN mvn clean package -DskipTests
WORKDIR /usr/src/my-project-app
RUN cp /usr/src/myproject/target/*.jar ./my-project-app.jar
EXPOSE 8080
CMD ["java", "-jar", "my-project-app.jar"]
Here's my Kubernetes deployment .yaml file for reference:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-api
  namespace: my-cluster
  labels:
    app: my-project-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: my-project-api
    spec:
      containers:
        - name: my-project-api
          image: "my-project:latest"
          imagePullPolicy: Always
          env:
            .
            .
            .
          volumeMounts:
            - name: my-project-config
              mountPath: /usr/src/my-project/my-project-service/src/main/resources/config/application.properties
          ports:
            - containerPort: 8080
              name: my-project-api
              protocol: TCP
      volumes:
        # Name of the volume
        - name: my-project-config
          # Get a ConfigMap with this name and attach to this volume
          configMap:
            name: my-project-config
And my configMap for reference:
kind: ConfigMap
apiVersion: v1
data:
  application.properties: |-
    # This comment means that this is coming from k8s ConfigMap. Nice!
    server.port=8999
    .
    .
    .
    .
metadata:
  name: my-project-config
  namespace: my-cluster
Any help is greatly appreciated... Thank you so much.. :)
The thing is that the /src/main/resources/application.properties your application uses is the one that is inside the jar file by default. If you open your jar, you should see it there.
That being said, your expectation that mounting over a /src/main/resources directory next to your jar will have any effect is, unfortunately, not going to be fulfilled.
These are the docs you should be looking at.
I won't go into much detail as it's explained pretty well in the docs, but I will say that you are better off explicitly declaring your config location, so that new people on the project know right off the bat where the config comes from.
You can do something like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-project-api
labels:
app: my-project-api
spec:
selector:
matchLabels:
app: my-project-api
template:
metadata:
labels:
app: my-project-api
spec:
containers:
- name: my-project-api
image: "my-project:latest"
imagePullPolicy: Always
env:
- name: JAVA_OPTS
value: "-Dspring.config.location=/opt/config"
.
.
.
volumeMounts:
- name: my-project-config
mountPath: /opt/config
ports:
- containerPort: 8080
volumes:
- name: my-project-config
configMap:
name: my-project-config
Hope that helps,
Cheers!
I did it slightly differently. I made sure I mounted application.properties at config/. Below is my mounted application.properties as an example (the commands below show the values in the pod, i.e. after kubectl exec -it into the pod):
/ # pwd
/
/ # cat config/application.properties
logback.access.enabled=false
management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
Basically, the trick is based on the link in the above answer. Below is an excerpt from that link, which does say application.properties will be picked up from config/. So I made sure my environment-specific (dev, test, prod) ConfigMap was mounted at config/. Do note that the list below has precedence (per the link: locations higher in the list override lower items):
1. A /config subdir of the current directory.
2. The current directory.
3. A classpath /config package.
4. The classpath root.
Below is the ConfigMap definition (just the data section):
data:
  application.properties: |+
    logback.access.enabled={{.Values.logacbkAccessEnabled}}
    management.endpoints.web.exposure.include=health, loggers, beans, configprops, env
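And here is a sketch of the corresponding mount in the deployment (the names are hypothetical; since the pod's working directory is /, mounting at /config is what makes the config/ location above work):

containers:
  - name: my-app                 # hypothetical container name
    volumeMounts:
      - name: app-config
        mountPath: /config       # resolves to ./config relative to the working dir /
volumes:
  - name: app-config
    configMap:
      name: my-app-config        # hypothetical ConfigMap name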
And you can also see from the actuator/env endpoint that the Spring Boot app did pick up those values:
{
  "name": "Config resource 'file [config/application.properties]' via location 'optional:file:./config/'",
  "properties": {
    "logback.access.enabled": {
      "value": "false",
      "origin": "URL [file:config/application.properties] - 1:24"
    },
    "management.endpoints.web.exposure.include": {
      "value": "health, loggers, beans, configprops, env",
      "origin": "URL [file:config/application.properties] - 2:43"
    }
  }
},
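If you want to check this yourself, one option (assuming the env actuator endpoint is exposed as above and jq is installed on your machine) is to port-forward and list the loaded property sources:

kubectl port-forward <pod-name> 8080:8080
curl -s localhost:8080/actuator/env | jq '.propertySources[].name'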

Deploy WAR in Tomcat on Kubernetes

I need to create a multibranch Jenkins job to deploy a .war file in Tomcat, which should run on Kubernetes. Basically, I need the following:
1. A way to install Tomcat on the Kubernetes platform.
2. A way to deploy my war file on this newly installed Tomcat.
3. I need to make use of a Dockerfile to make this happen.
PS: I am very new to Kubernetes and Docker and need the basic details as well. I tried finding tutorials but couldn't find a satisfactory article.
Any help will be highly appreciated.
Docker part
You can use the official tomcat docker image.
In your Dockerfile, just copy your war file into the /usr/local/tomcat/webapps/ directory:
FROM tomcat
COPY app.war /usr/local/tomcat/webapps/
Build it:
docker build --no-cache -t <REGISTRY>/<IMAGE>:<TAG> .
Once your image is built, push it into a Docker registry of your choice.
docker push <REGISTRY>/<IMAGE>:<TAG>
Kubernetes part
1) Here is a simple Kubernetes Deployment for your tomcat image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: <REGISTRY>/<IMAGE>:<TAG>
          ports:
            - containerPort: 8080
This Deployment definition will create a pod based on your tomcat image.
Put it in a yml file and execute kubectl create -f yourfile.yml to create it.
2) Create a Service :
kind: Service
apiVersion: v1
metadata:
  name: tomcat-service
spec:
  selector:
    app: tomcat
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
You can now access your pod inside the cluster at http://tomcat-service.your-namespace/app (because your war is called app.war).
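To verify this from inside the cluster, one option is a throwaway curl pod (a sketch; curlimages/curl is just one image that ships curl):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://tomcat-service.your-namespace.svc.cluster.local/app/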
3) If you have an Ingress controller, you can create an Ingress resource to expose the application outside the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /app
            backend:
              serviceName: tomcat-service
              servicePort: 80
Now access the application using http://ingress-controller-ip/app
