I have this problem that's driving me insane. I have two deployment and two service YAML files created by kompose convert from a docker-compose file. The app I'm trying to run in Google Cloud is a Spring Boot web app with a MariaDB backend. After I apply the four YAMLs with kubectl, I expose the frontend deployment (on port 8081) by running kubectl expose.
TL;DR for anyone coming to this question via search:
OP's service was a ClusterIP, not a LoadBalancer. Changing it to a LoadBalancer still did not fix the issue. Checking the pod's logs showed the code was unable to connect to the DB, so the app never actually started up successfully.
Output from OP:
kubectl get svc -n default
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes      ClusterIP      10.72.0.1      <none>          443/TCP        147m
load-balancer   LoadBalancer   10.72.15.246   34.69.204.138   80:30870/TCP   86s
mysqldb         ClusterIP      10.72.3.186    <none>          3308/TCP       3m20s
web-app         ClusterIP      10.72.13.41    <none>          8081/TCP       3m19s
Please share those in your question, making sure to add "```" on the line before and after each yaml to preserve the formatting.
Now, a few points.
You specified you have service yamls. If you have a yaml that describes a service such as a LoadBalancer, you shouldn't need to kubectl expose afterwards, as your yaml should have done that for you.
Assuming the service named "load-balancer" is the one you have created via your yamls, the IP:port combination you should be using is 34.69.204.138:80. What IP have you been trying to access? Are you trying to access this IP and port? Or a different one?
UPDATE
Based on the pasted yamls, I see this:
In your docker-compose yaml:
web-app:
  build: .
  image: mihaialexandruteodor/featherwriter
  ports:
    - "8081:8081"
  expose:
    - "8081"
This is exposing port 8081 and connecting it to the underlying container.
This is reflected in the service yaml:
apiVersion: v1
kind: Service
metadata:
  ...
  name: web-app
spec:
  ports:
    - name: "8081"
      port: 8081
      targetPort: 8081
  selector:
    io.kompose.service: web-app
status:
  loadBalancer: {}
However, I do not see a service called "web-app" in your listing. It's possible, therefore, that you deployed it into a different namespace.
Try kubectl get svc --all-namespaces and see where the "web-app" service is. Find the IP from that; the port should be 8081, and you can then use x.x.x.x:8081 to access the service.
UPDATE 2
The web-app service is of type ClusterIP (documentation), which cannot be accessed from outside the cluster. You need to change the service to the LoadBalancer type, or use port-forwarding.
To make the service a LoadBalancer, change the service yaml as follows (documentation here):
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.1 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: mysqldb
  name: mysqldb
spec:
  ports:
    - name: "3308"
      port: 3308
      targetPort: 3306
  selector:
    io.kompose.service: mysqldb
  type: LoadBalancer
This will provision a service that will have an external IP you can use.
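Once you apply the modified yaml, the cloud provider takes a minute or two to provision the load balancer, during which the EXTERNAL-IP column shows <pending>. A quick way to watch for it (a sketch; the file name is whatever kompose generated for this service, and the default namespace is assumed, as in your listing):
kubectl apply -f mysqldb-service.yaml
kubectl get svc mysqldb -n default -w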
Alternatively, use port forwarding to connect a local port to the port the service is listening on:
kubectl port-forward -n {namespace} svc/web-app 8081:8081
Then you can use localhost:8081 to connect to your service. This option does not require an externally-accessible endpoint, but you will need to run the port forward command (and have it active) each time you want to access the service via the localhost endpoint.
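With the port-forward active in one terminal, a quick check from another terminal might look like this (the path is just an example; use whichever endpoint your app actually serves):
curl http://localhost:8081/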
If you want to access the service from somewhere that is neither your local machine nor inside the cluster, you will need to use a LoadBalancer service type.
UPDATE 3
Right, I can't build that Dockerfile as I do not have the src folder, but I can run the image mihaialexandruteodor/featherwriter and see that it is indeed listening on 8081:
Tomcat initialized with port(s): 8081 (http)
so the next thing is to check whether there are any issues with the pod itself. First, check the pod status:
kubectl get pods -n {namespace}
The pod should be called web-app-xxxxx where xxxxx is a random sequence of letters and numbers.
Is the web-app pod running? Does it have a non-zero restart count, like some of the pods in my prometheus namespace:
$ kubectl get pods -n prometheus
NAME                                                      READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0    2/2     Running   0          3h51m
prometheus-grafana-66cb8bcf4f-428d8                       3/3     Running   0          3h51m
prometheus-kube-prometheus-operator-749fc8899b-dnvft      1/1     Running   0          3h51m
prometheus-kube-state-metrics-77698656df-btq4k            1/1     Running   20         3h51m
prometheus-prometheus-kube-prometheus-prometheus-0        2/2     Running   0          3h51m
prometheus-prometheus-node-exporter-jj9z5                 1/1     Running   30         3h51m
prometheus-prometheus-node-exporter-lbk6p                 1/1     Running   0          3h51m
prometheus-prometheus-node-exporter-vqfhk                 1/1     Running   20         3h51m
Next, get the logs from the pod like so:
kubectl logs -n {namespace} web-app-xxxxx
See if you can find any errors.
My hunch, given that everything is wired through on 8081 and Tomcat is indeed running on 8081, is that the Spring app is crashing repeatedly: Kubernetes restarts it, the app fails again, and this repeats over and over until the pod lands in a CrashLoopBackOff state, where Kubernetes delays each restart by a progressively longer period.
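If the pod does turn out to be in CrashLoopBackOff, these two commands usually reveal why it keeps dying (a sketch; substitute your namespace and the real pod name):
# recent events: failed probes, image pull errors, OOM kills, etc.
kubectl describe pod -n {namespace} web-app-xxxxx
# logs from the previous, crashed container instance
kubectl logs -n {namespace} web-app-xxxxx --previous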
Related
I created a microservices infrastructure on Kubernetes (version: 1.20.9-gke.1001) on Google Cloud Platform using the Spring Cloud.
First I created the following deployments: Eureka (service discovery), Zuul (API Gateway), Zipkin (Distributed tracing system), User Service and Auth Service.
Then I created the following services: eureka-service of type “Cluster IP”, which allows other pods to connect to Eureka; zipkin-service of type “Cluster IP”, which allows other pods to connect to Zipkin; and loadbalancer-service of type “External load balancer”, which is connected to Zuul.
Finally, I tried to create an Ingress using the attached yaml file, but for every request I tried to execute I received the following error: “response 404 (backend NotFound), service rules for the path non-existent”. However, if I invoke the APIs using the external IP of the loadbalancer-service, the backend works correctly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: project.test.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: loadbalancer-service
                port:
                  number: 8765
Do you have any idea why the Ingress is not working?
Also, I need to expose the services over HTTPS; could you kindly explain how to use an existing security certificate in the Ingress?
Thanks. This is my first experience with Kubernetes, and of course any advice on how to improve the infrastructure is welcome.
I am a newbie to Kubernetes and spent a long time configuring my application to be hosted on a Kubernetes cluster on AWS EKS.
Status quo: I am pretty sure that the service of type LoadBalancer is up and running. It has its pod and all the stuff running. The application is a simple Java application that takes input. You can try it by pulling the image from Docker Hub via:
docker run -i ardulat/mckinsey
Question: how can I run the Java application (not Spring, not REST) that is being hosted on Kubernetes cluster?
Already tried:
curl -v <EXTERNAL-IP>:<PORT>, which outputs:
* Trying 3.134.148.191...
* TCP_NODELAY set
* Connected to a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com (3.134.148.191) port 8080 (#0)
> GET / HTTP/1.1
> Host: a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com:8080
> User-Agent: curl/7.63.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com left intact
curl: (52) Empty reply from server
nc -v <EXTERNAL-IP> <PORT>, which outputs:
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif en0
src 172.20.22.42 port 63865
dst 3.13.128.24 port 8080
rank info not available
TCP aux info available
Connection to a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com port 8080 [tcp/http-alt] succeeded!
Therefore, I assume the connection works and the service is up and running, but that I am connecting to the Java (.jar) application in the wrong way. Do you have any suggestions?
You should change your Dockerfile, replacing CMD with ENTRYPOINT, which is nicely explained here.
I would also recommend reading Define a Command and Arguments for a Container.
CMD sets default command and/or parameters, which can be overwritten from command line when docker container runs.
ENTRYPOINT configures a container that will run as an executable.
Your Dockerfile might look like this:
FROM java:8
WORKDIR /
ADD Anuar.jar Anuar.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","Anuar.jar"]
Your service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: javaservice
  labels:
    app: javaservice
spec:
  type: LoadBalancer
  selector:
    app: javaservice
  ports:
    - protocol: TCP
      port: 8080
      name: http
It also matters which load balancer you use, as on AWS there is the Classic Load Balancer (the default) and the Network Load Balancer. You can read more about it under Internal load balancer and in the AWS documentation for Load Balancing.
Amazon EKS supports the Network Load Balancer and the Classic Load Balancer through the Kubernetes service of type LoadBalancer. The configuration of your load balancer is controlled by annotations that are added to the manifest for your service.
By default, Classic Load Balancers are used for LoadBalancer type services. To use the Network Load Balancer instead, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
For more information about using Network Load Balancer with Kubernetes, see Network Load Balancer support on AWS in the Kubernetes documentation.
By default, services of type LoadBalancer create public-facing load balancers. To use an internal load balancer, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
For internal load balancers, your Amazon EKS cluster must be configured to use at least one private subnet in your VPC. Kubernetes examines the route table for your subnets to identify whether they are public or private. Public subnets have a route directly to the internet using an internet gateway, but private subnets do not.
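As a sketch of where those annotations go (values taken from the quoted documentation; adjust them to your setup), they sit under metadata.annotations of the same Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: javaservice
  annotations:
    # use a Network Load Balancer instead of the default Classic Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # uncomment to provision an internal (non-public) load balancer instead
    # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: javaservice
  ports:
    - protocol: TCP
      port: 8080
      name: http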
I deployed a Spring Boot app on AWS Elastic Beanstalk and am facing a 502 Bad Gateway error. I cannot find anything useful in the logs:
/var/log/nginx/error.log
2019/02/10 02:12:54 [error] 3257#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: ...., server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "...."
I do not have any HTML files for a front end. I just want to deploy the API to share the documentation from Swagger UI.
It's because nginx proxies to port 5000. Adding "server.port=5000" to application.properties fixed the issue.
This happens because the application load balancer by default points to port 80 of the nginx server on the EC2 instance. nginx is configured to forward requests to port 5000 by default, whereas our application server runs on port 8080.
(Screenshots: the default nginx configuration vs. the expected nginx configuration.)
This can be fixed by adding an environment property named PORT and pointing it to 8080.
Go to Configuration > Environment Properties and add the property.
Refer to the AWS document: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html
Another option is to point the application load balancer directly to the application server port (8080) instead of nginx (80).
You can configure 8080 as the process port.
Another way to fix this is to set the port to 5000 in the Spring Boot application using the server.port property.
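For example, a minimal application.properties (assuming the usual src/main/resources location):
# make Spring Boot listen on the port nginx proxies to on Elastic Beanstalk
server.port=5000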
My issue was that my Java version didn't match the platform I was running on Elastic Beanstalk, even though my server.port was set to 5000. My Java version was 11, but my platform was Java 8 on Amazon Linux, so changing the version to 8 in my base pom.xml fixed it.
I'm trying to install Elastic APM with Elasticsearch, Kibana, and the APM server as three services with docker-compose. Now I'm getting confused about how to set the IPs in the apm-server.yml file based on the APM Server Configuration documentation. The file should look like this:
apm-server:
  host: localhost:8200
output:
  elasticsearch:
    hosts: ElasticsearchAddress:9200
I tried to set ElasticsearchAddress to localhost or 127.0.0.1, but I always get errors like
Failed to connect: Get http://127.0.0.1:9200: dial tcp 127.0.0.1:9200: getsockopt: connection refused or Failed to connect: Get http://localhost:9200: dial tcp [::1]:9200: connect: cannot assign requested address. I have also tried several other IPs.
Does anyone know how to configure the APM server correctly, or are there any docker-compose files that do the installation correctly?
Thanks for your help.
If you are starting all the services with a single docker-compose file, the apm-server.yml should have a value like this:
output:
  elasticsearch:
    hosts: elasticsearch:9200
The "hosts: elasticsearch:9200" should be service name of the elasticsearch you mentioned in the docker-compose. Like in the followiing
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
When you bring up containers using Compose, each container has its own networking stack (so they can each talk to themselves on localhost, but they need an IP address or DNS name to talk to a different container!).
Compose by default connects each of the containers to a default network and gives each one a DNS name matching its service name.
If your compose file looks like
services:
  apm:
    image: apm_image
  elasticsearch:
    image: elasticsearch:latest
A process in the apm container could access elasticsearch at http://elasticsearch:9200
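To sanity-check that service-name DNS works between the two containers, something like this should do (a sketch based on the compose file above; it assumes curl is available inside the apm image):
# start both services in the background
docker-compose up -d
# from inside the apm container, reach Elasticsearch by its service name
docker-compose exec apm curl http://elasticsearch:9200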
I have a Tomcat 7.0 webapp running inside a docker container on AWS Elastic Beanstalk (EB) (I followed the tutorial here).
When I browse to my EB URL myapplication.elasticbeanstalk.com, I get a 502 Bad Gateway served by nginx. So it's immediately clear that my port 80 is not forwarding to my container. When I browse to myapplication.elasticbeanstalk.com:8888 (another port I exposed in my Dockerfile), the connection is refused (ERR_CONNECTION_REFUSED). So I SSH'ed into the AWS instance and checked the Docker logs, which show that my Tomcat server has started successfully, yet obviously hasn't processed any requests.
Does anyone have any idea why my port 8888 appears not to be forwarding to my container?
Executing the command (on the AWS instance):
sudo docker ps -a
gives:
CONTAINER ID   IMAGE                              COMMAND             CREATED          STATUS          PORTS                        NAMES
c353e236da7a   aws_beanstalk/current-app:latest   "catalina.sh run"   28 minutes ago   Up 13 minutes   80/tcp, 8080/tcp, 8888/tcp   sharp_leakey
which shows ports 80, 8080, and 8888 as being open on the Docker container.
My Dockerfile is fairly simple:
FROM tomcat:7.0
EXPOSE 8080
EXPOSE 8888
EXPOSE 80
and my Dockerrun.aws.json file is:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myusername/mycontainer-repo"
  },
  "Authentication": {
    "Bucket": "mybucket",
    "Key": "docker/.dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "8888"
    }
  ]
}
Does anyone see where I could be going wrong?
I'm not even sure where to look at this point.
Also, my AWS security group for the instance is open on port 80, 8080, and 8888.
Any advice would be greatly appreciated! I'm at a loss here.
Update 1:
Minor update, although I am still having trouble.
After SSH'ing into my AWS EB instance, I inspected the Docker container to grab the IP of the container:
sudo docker inspect c353e236da7a
which gave me the IP as 172.17.0.6.
Then, again from the AWS instance, I ran a curl command:
curl 172.17.0.6:8080/homepage
which works and returns the HTML of the homepage! However, curl 172.17.0.6:8888/homepage does not work (so I'm not sure what "ContainerPort": "8888" means in the Dockerrun.aws.json file, then).
However, I still have the question: why aren't my :8080 requests being forwarded to the container's Tomcat webserver? As above, myapplication.elasticbeanstalk.com:8080/homepage still gets a connection refused (ERR_CONNECTION_REFUSED).
myapplication.elasticbeanstalk.com
is a load balancer, not your instance. Elastic Beanstalk launches a load balancer to autoscale your instances. Therefore, when you connect to myapplication.elasticbeanstalk.com:8888, you are actually connecting to the load balancer, which has only port 80 open. The load balancer then forwards traffic to an instance listening on port 8080.
You should be able to access your web application by just using the URL without a port: myapplication.elasticbeanstalk.com
The reason this doesn't work is that you told your Docker container to use port 8080, but told Beanstalk to forward to port 8888. Sure, all your ports are open, but Tomcat is only running on port 8080.
The Ports section in Dockerrun.aws.json doesn't tell your app which port to run on; it tells the load balancer which port to forward to.
Ports – (required when you specify the Image key) Lists the ports to expose on the Docker container. AWS Elastic Beanstalk uses ContainerPort value to connect the Docker container to the reverse proxy running on the host.
You can specify multiple container ports, but AWS Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
as seen here.
Or, in other words, the port 8888 that you told Beanstalk to forward to is working correctly, but your app is actually running on port 8080. You should change Dockerrun.aws.json to use port 8080 instead.
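A sketch of the corrected Dockerrun.aws.json, keeping the original image and authentication sections and changing only the container port:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myusername/mycontainer-repo"
  },
  "Authentication": {
    "Bucket": "mybucket",
    "Key": "docker/.dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}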
I fixed this by changing nginx's listen port.
You have to add a .ebextensions directory to the root of your app and put your config file there (in my example it's 00-bypass-nginx-proxy.config):
files:
  "/tmp/change_nginx_port.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # change listen port from 80 to 8761
      sed -i '7s/.*/ listen 8761;/' /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf
      # restart nginx
      service nginx restart
container_commands:
  00setup-nginx:
    command: "/tmp/change_nginx_port.sh"
Your service will now be available on port 8761. Pay attention to the sed part of the script; there is a hardcoded line number that could differ in your environment.