I am deploying a simple web application, divided into three pods: front end, back end, and a PostgreSQL DB. I successfully deployed my front end and back end to Google Kubernetes Engine and they work as expected. But for my PostgreSQL DB server, I used the following yamls. The postgres image is one I built on top of the standard postgres image from DockerHub: I created some tables, inserted some data, and pushed the image to DockerHub. My backend is not able to make a connection to my DB. I think I might need to change my Java connection code; I'm not sure about using localhost. It works without any issue locally in Eclipse JEE with Tomcat.
//my pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-app-pod
  labels:
    name: postgres-app-pod
    app: demo-geo-app
spec:
  containers:
    - name: postgres
      image: myrepo/example:v1
      ports:
        - containerPort: 5432
//my service.yaml
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: db-service
    app: demo-geo-app
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    name: postgres-pod
    app: demo-geo-app
//from my java backend, I access my db server this way.
String dbURL = "jdbc:postgresql://localhost:5432/Location?user=postgres&password=mysecretpassword";
There are two issues to fix:
1. The Service selector key/value pairs must match the Pod's labels.
2. Replace localhost with the PostgreSQL Service's DNS name.
As noted in the comments, you have an error in your connection string: localhost refers to the same pod where your Java code is running. Change it to db, the name you gave the Service in your service.yaml.
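A minimal sketch of both fixes, reusing the names from the posted yamls:

//service.yaml: the selector now matches the pod's labels
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    name: postgres-app-pod
    app: demo-geo-app

//Java: the JDBC URL now points at the Service name instead of localhost
String dbURL = "jdbc:postgresql://db:5432/Location?user=postgres&password=mysecretpassword";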
I also recommend you use a Deployment instead of a bare Pod; but since you are deploying a database, you should use a StatefulSet. Please review the documentation:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
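As a rough sketch only (the volume name, storage size, and headless-service wiring are illustrative assumptions, not taken from your setup), a StatefulSet for this image could look like:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: db   # the Service above; ideally headless for a StatefulSet
  replicas: 1
  selector:
    matchLabels:
      name: postgres-app-pod
      app: demo-geo-app
  template:
    metadata:
      labels:
        name: postgres-app-pod
        app: demo-geo-app
    spec:
      containers:
        - name: postgres
          image: myrepo/example:v1
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: pgdata                       # illustrative name
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi                         # illustrative size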
Also, I recommend you check https://helm.sh; there are a lot of ready-made charts, so you don't have to write the yamls for a service like a database from scratch.
https://github.com/helm/charts/tree/master/stable/postgresql
That chart includes all the necessary yaml, including the PVC provisioning.
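For instance, a hypothetical install (the release name my-postgres is arbitrary; note the linked stable repo has since been deprecated, and this chart now lives in the Bitnami repo):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql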
I am trying to dockerize two Spring Boot applications with docker compose. user_service needs to send a RestTemplate request to product_service in order to get all product information, and the request URL is http://localhost:8080/product. Without being Docker containers, the two applications communicate with no problem; but after I made them Docker containers, sending the request from user_service to product_service gives a connection refused error, even though I added them to the same network. Here is my docker compose file:
version: "3.7"
services:
product_service:
build: /productservice/
restart: always
ports:
- "8080:8080"
networks:
- bridge
user_service:
build: /userservice/
restart: always
ports:
- "7074:7074"
networks:
- bridge
networks:
bridge:
driver: bridge
After a lot of trouble I found the solution. If you have two Spring Boot services communicating with each other through RestTemplate or an HTTP client, you first need to change localhost to whatever your service is named in the docker-compose file; in my case that was http://product_service. Another problem I faced was an error in my URL: I later found out I shouldn't have _ in the hostname, so I changed my service name to product in the compose file. The URL I finally set up for my RestTemplate is therefore http://product.
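A minimal sketch of the corrected call from user_service (returning raw JSON for simplicity; the endpoint path is taken from the question, the class name is an assumption):

import org.springframework.web.client.RestTemplate;

public class ProductClient {
    private final RestTemplate restTemplate = new RestTemplate();

    // "product" is the Compose service name, resolved by Docker's internal DNS;
    // 8080 is the port product_service listens on inside its container.
    public String fetchAllProducts() {
        return restTemplate.getForObject("http://product:8080/product", String.class);
    }
}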
One more thing I should add: if you want your microservices to communicate over HTTPS, you need to set up an SSL certificate; if you use HTTP, you can skip that.
I have this problem that's driving me insane. I have two deployment and two service yaml files created by kompose convert from a docker-compose file. The app that I'm trying to run in Google Cloud is a Spring Boot web app with a MariaDB backend. After I apply the four yamls with kubectl, I expose the frontend deployment (on port 8081) by running
TL;DR for anyone coming to this question via search:
OP's service was a ClusterIP, not a LoadBalancer. Setting it to LoadBalancer still did not fix the issue; checking the pod's logs showed the code was unable to connect to the DB, so it never actually started up successfully.
Output from OP:
kubectl get svc -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.72.0.1 <none> 443/TCP 147m
load-balancer LoadBalancer 10.72.15.246 34.69.204.138 80:30870/TCP 86s
mysqldb ClusterIP 10.72.3.186 <none> 3308/TCP 3m20s
web-app ClusterIP 10.72.13.41 <none> 8081/TCP 3m19s
Please share those in your question, making sure to prepend "```" before and after each yaml to preserve formatting.
Now, a few points.
You specified you have service yamls. If you have a yaml that describes a service such as a LoadBalancer, you shouldn't need to kubectl expose afterwards, as your yaml should have done that for you.
Assuming the service named "load-balancer" is the one you have created via your yamls, the IP:port combination you should be using is 34.69.204.138:80. What IP have you been trying to access? Are you trying to access this IP and port? Or a different one?
UPDATE
Based on the pasted yamls, I see this:
In your docker-compose yaml:
web-app:
  build: .
  image: mihaialexandruteodor/featherwriter
  ports:
    - "8081:8081"
  expose:
    - "8081"
This is exposing port 8081 and connecting it to the underlying container.
This is reflected in the service yaml:
apiVersion: v1
kind: Service
metadata:
  ...
  name: web-app
spec:
  ports:
    - name: "8081"
      port: 8081
      targetPort: 8081
  selector:
    io.kompose.service: web-app
status:
  loadBalancer: {}
However, I do not see a service called "web-app" in your listing. It's possible, therefore, that you deployed it into a different namespace.
Try kubectl get svc --all-namespaces and see where the service "web-app" is. Find the IP from that; the port should be 8081, and you can then do x.x.x.x:8081 to access the service.
UPDATE 2
The web-app service is of type ClusterIP (documentation), which cannot be accessed from outside the cluster. You need to change the service to the LoadBalancer type, or use port-forwarding.
To make the service a LoadBalancer, change the service yaml as follows (documentation here):
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.1 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: mysqldb
  name: mysqldb
spec:
  ports:
    - name: "3308"
      port: 3308
      targetPort: 3306
  selector:
    io.kompose.service: mysqldb
  type: LoadBalancer
This will provision a service that will have an external IP you can use.
Alternatively, use port forwarding to connect a local port with the port being listened on by the service:
kubectl port-forward -n {namespace} svc/web-app 8081:8081
Then you can use localhost:8081 to connect to your service. This option does not require an externally-accessible endpoint, but you will need to run the port forward command (and have it active) each time you want to access the service via the localhost endpoint.
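For example, while the port-forward is active, a request from your local machine should reach the service:

curl http://localhost:8081/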
If you want to be able to access the service from somewhere outside of your cluster, that is not your local machine, and is not within the same cluster, you will need to use a LoadBalancer service type.
UPDATE 3
Right, I can't build that Dockerfile as I do not have the src folder, but I can run the image mihaialexandruteodor/featherwriter and can see it is indeed listening on 8081:
Tomcat initialized with port(s): 8081 (http)
so the next thing is to check whether there are any issues with the pod itself. First check the pod status:
kubectl get pods -n {namespace}
The pod should be called web-app-xxxxx where xxxxx is a random sequence of letters and numbers.
Is the web-app pod running? Does it have a restart counter that is not zero, like some of the pods in my prometheus namespace?
$ kubectl get pods -n prometheus
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 3h51m
prometheus-grafana-66cb8bcf4f-428d8 3/3 Running 0 3h51m
prometheus-kube-prometheus-operator-749fc8899b-dnvft 1/1 Running 0 3h51m
prometheus-kube-state-metrics-77698656df-btq4k 1/1 Running 20 3h51m
prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 0 3h51m
prometheus-prometheus-node-exporter-jj9z5 1/1 Running 30 3h51m
prometheus-prometheus-node-exporter-lbk6p 1/1 Running 0 3h51m
prometheus-prometheus-node-exporter-vqfhk 1/1 Running 20 3h51m
Next, get the logs from the pod like so:
kubectl logs -n {namespace} web-app-xxxxx
See if you can find any errors.
My hunch, given that we've wired everything through on 8081 and Tomcat is indeed running on 8081, is that the Spring app is crashing repeatedly: Kubernetes restarts it, the app fails again, and this cycle repeats until the pod ends up in a CrashLoopBackOff state, where Kubernetes delays each restart by an increasingly longer period.
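If the pod is indeed in CrashLoopBackOff, the logs of the previous (crashed) container instance can be pulled with the --previous flag:

kubectl logs -n {namespace} web-app-xxxxx --previous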
I'm learning Docker. I built my image from the Dockerfile in the counter app, and I am using MySQL as a database. The docker-compose file uses one DB and two containers from the same app image. The MySQL DB has two different schemas. My goal is to use separate app services with different ports (e.g. 9000 and 9001), each with its own schema: when I call localhost:9000/index it shows the first counter, and when I call localhost:9000/index it shows the second counter.
But the problem is that both of them use the first schema, so the result is the same counter. How can I isolate the schemas?
Compose-file ->
version: '3.1'
services:
  mysql:
    image: mysql
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - mysql_data:/var/lib/mysql
  hello-docker:
    image: hello-docker:0.0.2
    restart: always
    environment:
      DB_CONNECTION_IP: mysql
      DB_SCHEMA_NAME: hello-counter
    ports:
      - "9000:9000"
    volumes:
      - mysql_data:/var/lib/mysql
  hello-docker2:
    image: hello-docker:0.0.2
    restart: always
    environment:
      DB_CONNECTION_IP: mysql
      DB_SCHEMA_NAME: hello_counter2
    ports:
      - "9001:9000"
volumes:
  mysql_data:
application.yaml ->
spring:
  datasource:
    url: &connectionUrl jdbc:mysql://${DB_CONNECTION_IP:localhost}:${DB_CONNECTION_PORT:3306}/${DB_SCHEMA_NAME}?allowPublicKeyRetrieval=true&createDatabaseIfNotExist=true&useSSL=false&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=UTF-8
    username: root
    password: password
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    hibernate.ddl-auto: validate
    generate-ddl: true
    show-sql: true
    properties.hibernate.format_sql: true
server:
  port: 9000
It is better to have a separate docker compose file for each application and its database.
And if you want only one docker compose file for both applications, you can define two separate mysql services with different exposed schemas and ports, and refer to each of them from the respective application.
This would be the same pattern as your application, for which you have already defined two services.
In addition, here:
"When I call localhost:9000/index it shows first counter and when I call localhost:9000/index it shows second counter."
You referred to the same application; it seems you mean:
localhost:9000/index
and
localhost:9001/index
The easiest way to accomplish this is to run a separate database per service. You can put these in a single Compose file if you'd like. (Or, as @EskandarAbedini suggests in their answer, you can run a separate Compose file per service, though this can get unwieldy if you have a significant stack based on largely the same code base.)
version: '3.8'
services:
  mysql1:
    image: mysql
    volumes:
      - mysql_data1:/var/lib/mysql
  hello-docker1:
    image: hello-docker:0.0.2
    environment:
      DB_CONNECTION_IP: mysql1
    ports:
      - "9000:9000"
  mysql2:
    image: mysql
    volumes:
      - mysql_data2:/var/lib/mysql
  hello-docker2:
    image: hello-docker:0.0.2
    environment:
      DB_CONNECTION_IP: mysql2
    ports:
      - "9001:9000"
volumes:
  mysql_data1:
  mysql_data2:
Note that both pairs of containers run the same images, but the MySQL containers have separated storage, and the application containers publish different host ports for the same container port. You'd have to separately run tasks like database migration for each container.
In principle nothing stops you from running multiple databases or schemata in a single MySQL container, provided you execute the correct SQL CREATE DATABASE call. You'd need a custom SQL init script to set this up; you couldn't do it with just environment-variable configuration.
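A minimal sketch of that approach (the file name is an assumption; the official mysql image runs any *.sql file mounted under /docker-entrypoint-initdb.d/ on first startup, i.e. only while the data volume is still empty):

# docker-compose.yml fragment: mount the init script into the mysql service
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - mysql_data:/var/lib/mysql

-- init.sql: create both schemas so each app can point at its own
CREATE DATABASE IF NOT EXISTS `hello-counter`;
CREATE DATABASE IF NOT EXISTS hello_counter2;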
I created a microservices infrastructure on Kubernetes (version 1.20.9-gke.1001) on Google Cloud Platform using Spring Cloud.
First I created the following deployments: Eureka (service discovery), Zuul (API Gateway), Zipkin (Distributed tracing system), User Service and Auth Service.
Then I created the following services: eureka-service with “Cluster IP” type which allows other pods to connect to Eureka, zipkin-service with “Cluster IP” type which allows other pods to connect to Zipkin and loadbalancer-service with “External load balancer” type which is connected to the Zuul.
Finally, I tried to create an Ingress using the attached yaml file, but for every request I execute I receive the following error: "response 404 (backend NotFound), service rules for the path non-existent". If I instead invoke the APIs using the external IP of the loadbalancer-service, the backend works correctly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: project.test.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: loadbalancer-service
                port:
                  number: 8765
Do you have any idea why the Ingress is not working?
Also, I need to expose the services over HTTPS; could you kindly explain how to use an existing security certificate with the Ingress?
Thanks, this is my first experience with Kubernetes and of course any advice on how to improve the infrastructure is welcome.
I'm trying to install Elastic APM with Elasticsearch, Kibana, and the APM server as 3 services with docker-compose. Now I'm getting confused about how to set the IPs in the apm-server.yml file per the documentation (APM Server Configuration). The file should look like this:
apm-server:
  host: localhost:8200
output:
  elasticsearch:
    hosts: ElasticsearchAddress:9200
I tried setting ElasticsearchAddress to localhost or 127.0.0.1, but I always get errors like
Failed to connect: Get http://127.0.0.1:9200: dial tcp 127.0.0.1:9200: getsockopt: connection refused or Failed to connect: Get http://localhost:9200: dial tcp [::1]:9200: connect: cannot assign requested address. I also tried several other IPs.
Does anyone know how to configure the APM server correctly, or are there any docker-compose files that do the installation correctly?
Thanks for your help
If you are starting all the services from a single docker-compose file, the apm-server.yml should have a value like this:
output:
  elasticsearch:
    hosts: elasticsearch:9200
The "hosts: elasticsearch:9200" should be service name of the elasticsearch you mentioned in the docker-compose. Like in the followiing
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
When you bring up containers using Compose, each container has its own networking stack (so each container can talk to itself on localhost, but containers need an IP address or DNS name to talk to each other).
Compose by default connects each of the containers to a default network and gives each a DNS name matching its service name.
If your compose file looks like this:
services:
  apm:
    image: apm_image
  elasticsearch:
    image: elasticsearch:latest
A process in the apm container could access elasticsearch at http://elasticsearch:9200
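As a quick check of that DNS resolution (assuming curl is available inside the apm image; service names are from the compose file above):

docker-compose exec apm curl http://elasticsearch:9200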