docker-compose: no connection between containers - java

I have 3 microservices, and I run them with Docker.
Here is the Dockerfile of each of them.
Frontend:
FROM node:alpine
LABEL maintainer="2262288#gmail.com"
WORKDIR /usr/app/front
EXPOSE 3000
COPY ./ ./
RUN npm install
CMD ["npm", "start"]
Backend 1 (back):
FROM openjdk:8-jdk-alpine
LABEL maintainer="2262288#gmail.com"
VOLUME /tmp
EXPOSE 8099
ARG JAR_FILE=build/libs/auth-0.0.3.jar
ADD ${JAR_FILE} digital.jar
ENTRYPOINT ["java","-jar","/digital.jar"]
Backend 2 (message):
FROM openjdk:8-jdk-alpine
LABEL maintainer="2262288#gmail.com"
VOLUME /tmp
EXPOSE 8082
ARG JAR_FILE=build/libs/sender-0.0.1.jar
ADD ${JAR_FILE} sender.jar
ENTRYPOINT ["java","-jar","/sender.jar"]
The frontend sends a REST request to backend 1 (back); then backend 1 sends a REST request to backend 2 (message).
I published the images on Docker Hub and ran them on an external server with docker-compose:
version: '3.7'
services:
  web:
    image: account/front:0.0.1
    restart: on-failure
    ports:
      - 80:3000
  back:
    image: account/back:0.0.3
    restart: on-failure
    ports:
      - 8099:8099
  message:
    image: account/message:0.0.1
    restart: on-failure
    ports:
      - 8082:8082
The backend services start on these ports:
message_1_e8eb3b2d2477 | 2019-09-24 09:34:00.882 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8082 (http) with context path ''
back_1_1982cc6e57f7 | 2019-09-24 09:34:07.403 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8099 (http) with context path ''
As you can see, each service runs on its own port.
Then I send a request along the chain front -> back -> message. back sends the request to message and receives:
java.net.ConnectException: Operation timed out (Connection timed out)
So requests from back never reach the message service.
When I send the request to message directly with Postman, it works.
What's wrong?
UPD.
request from front to back:
http://81.100.122.90:8099/auth/register
body:
{"username":"ksgcf","password":"123","firstName":"John","lastName":"Doe","email":"398456234785#gmail.com"}
request from back to message (IP changed):
String url = "http://81.100.122.90:8082/email";
EmailMessageDto request = new EmailMessageDto(
dto.getEmail(),
"slava_rossii#list.ru",
"Email confirmation",
"Press link: http://dig.lamb.ru/confirm?username="
+ registrationToken.getUsername() + "&token=" + registrationToken.getToken()
);
Also, I see this message when docker-compose runs for the first time:
Creating network "project_default" with the default driver

First, when you use docker-compose, all services are reachable via their names. So you can access message from back like this:
$ docker-compose exec back ping message
PING message (172.24.0.3) 56(84) bytes of data.
64 bytes from message (172.24.0.3): icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from message (172.24.0.3): icmp_seq=2 ttl=64 time=0.068 ms
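In the back service that means calling message by its compose service name instead of the server's public IP. A minimal sketch, assuming back talks to message over HTTP with Spring's RestTemplate (the question only shows the URL string and the DTO, so the surrounding class is illustrative):

import org.springframework.web.client.RestTemplate;

public class EmailClient {
    private final RestTemplate restTemplate = new RestTemplate();

    public void sendConfirmation(EmailMessageDto request) {
        // "message" is the service name from docker-compose.yml and resolves on the
        // compose network; 8082 is the container port, not a port published on the host
        String url = "http://message:8082/email";
        restTemplate.postForObject(url, request, Void.class);
    }
}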
Second, check port bindings. A service has to bind to 0.0.0.0 (not localhost, which is the default for many services and frameworks) to be reachable from other containers over the network. It's the same as with ordinary virtual machines.
You can check port availability with telnet.
As an example, here I'm checking whether PostgreSQL is available on port 5432 from a container called superset:
$ docker-compose exec superset telnet postgres 5432
Trying 172.24.0.3...
Connected to postgres.
Escape character is '^]'.
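If telnet is not installed in the image, a tiny Java check does the same thing. A sketch under the same assumptions (the hostname and port are the compose service name and container port you want to test):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // connect to the "message" service on its container port, with a 2 s timeout
            socket.connect(new InetSocketAddress("message", 8082), 2000);
            System.out.println("message:8082 is reachable");
        }
    }
}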

Related

How to do Java Remote debugging with kubernetes pod? [duplicate]

I'm trying to remote debug the application in attached mode with host 192.168.99.100 and port 5005, but it tells me that it is unable to open the debugger port. The cluster is hosted locally via minikube, and its IP is 192.168.99.100.
Output of kubectl describe service catalogservice
Name: catalogservice
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=catalogservice
Type: NodePort
IP: 10.98.238.198
Port: web 31003/TCP
TargetPort: 8080/TCP
NodePort: web 31003/TCP
Endpoints: 172.17.0.6:8080
Port: debug 5005/TCP
TargetPort: 5005/TCP
NodePort: debug 32003/TCP
Endpoints: 172.17.0.6:5005
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
This is the pod's service.yml:
apiVersion: v1
kind: Service
metadata:
  name: catalogservice
spec:
  type: NodePort
  selector:
    app: catalogservice
  ports:
    - name: web
      protocol: TCP
      port: 31003
      nodePort: 31003
      targetPort: 8080
    - name: debug
      protocol: TCP
      port: 5005
      nodePort: 32003
      targetPort: 5005
And here I expose the container's ports:
spec:
  containers:
    - name: catalogservice
      image: elps/myimage
      ports:
        - containerPort: 8080
          name: app
        - containerPort: 5005
          name: debug
The way I build the image:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8082
ADD /target/catalogservice-0.0.1-SNAPSHOT.jar catalogservice-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n", "-jar", "catalogservice-0.0.1-SNAPSHOT.jar"]
When I execute nmap -p 5005 192.168.99.100 I receive
PORT STATE SERVICE
5005/tcp closed avt-profile-2
When I execute nmap -p 32003 192.168.99.100 I receive
PORT STATE SERVICE
32003/tcp closed unknown
When I execute nmap -p 31003 192.168.99.100 I receive
PORT STATE SERVICE
31003/tcp open unknown
When I execute kubectl get services I receive
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
catalogservice NodePort 10.108.195.102 <none> 31003:31003/TCP,5005:32003/TCP 14m
minikube service customerservice --url returns
http://192.168.99.100:32004
As an alternative to using a NodePort in a Service, you could also use kubectl port-forward to access the debug port in your Pod.
Since Kubernetes v1.10, kubectl port-forward allows using a resource name, such as a pod name, to select a matching pod to forward to.
You need to expose the debug port in the Deployment yaml for the Pod:
spec:
  containers:
    ...
    ports:
      ...
      - containerPort: 5005
Then get the name of your Pod via
kubectl get pods
and then add a port-forwarding to that Pod
kubectl port-forward podname 5005:5005
In IntelliJ you will be able to connect to
Host: localhost
Port: 5005
Alternatively, you can use the Cloud Code Intellij plugin.
Also, if you use Fabric8, it provides the fabric8:debug goal.
There was a slip in the yaml you first posted:
- containerPort: 5050
  name: debug
Should be:
- containerPort: 5005
  name: debug
You also need to use the external port of 32003 when configuring the IntelliJ debugger. With those changes it should work.
You may also want to think about how to make it more flexible. In the past when I've done this I've used a different form for the docker start command that allows you to turn remote debug on and off by an environment variable called REMOTE_DEBUG, which for you would be:
CMD if [ "x$REMOTE_DEBUG" = "xfalse" ] ; then java $JAVA_OPTS -jar catalogservice-0.0.1-SNAPSHOT.jar ; else java $JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n -jar catalogservice-0.0.1-SNAPSHOT.jar ; fi
You'll probably find you want to set the env var $JAVA_OPTS to limit jvm memory use to avoid issues in k8s.
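One extra hint, not taken from the answers above, so treat it as an assumption to verify: since JDK 9 the JDWP agent binds only to localhost when given a bare port number, so with the openjdk:11 image the debug port can look closed from outside the pod even though the agent is running. The wildcard form makes it listen on all interfaces:

ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,address=*:5005,server=y,suspend=n", "-jar", "catalogservice-0.0.1-SNAPSHOT.jar"]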

Docker postgres on local machine: connection to localhost:5432 refused. Check that the hostname and port are correct [duplicate]

I am building my first Springboot 2.0 application. I am trying to put my Springboot application into one docker container and my PostgresDB into another container.
My Dockerfile
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD springboot-api-demo-0.1*.jar app.jar
RUN sh -c 'touch /app.jar'
EXPOSE 9443
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/urandom -jar /app.jar" ]
My docker-compose.yml file
version: "2.1"
services:
springboot-api-demo:
image: "fw/springboot-api-demo"
mem_limit: 1024m
ports:
- "8080:8080"
environment:
- SPRING_PROFILES_ACTIVE=local
- AWS_REGION=local
- ENVIRONMENT=local
- AUTH_ENABLED=false
postgres:
container_name: pgdb
image: postgres:9.6-alpine
environment:
- 'POSTGRES_ROOT_PASSWORD=postgres'
- 'POSTGRES_USER=postgres'
- 'POSTGRES_PASSWORD=postgres'
ports:
- "54321:5432"
I am using Spring Data JPA 2.0 with the config below in my application.properties
spring.datasource.url= jdbc:postgresql://localhost:54321/java_learning
spring.datasource.username=postgres
spring.datasource.password=postgres
I can verify that both of the images are up. Also, from docker logs and docker events, I see that the postgres container is running fine; I can even access it, and I created a DB in it too.
But the Spring Boot container started and then died because it could not connect to postgres, throwing the error below.
Unable to obtain connection from database: The connection attempt failed
Note that my host machine already has Postgres on port 5432; that's why I did a port mapping of 54321:5432 on my postgres container. Here is proof :) -
➜ springboot-api-demo git:(master) ✗ lsof -i:54321
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 44345 shailendra.singh 18u IPv4 0xf62897fbdd69e31d 0t0 TCP *:54321 (LISTEN)
com.docke 44345 shailendra.singh 21u IPv6 0xf62897fbdd119975 0t0 TCP localhost:54321 (LISTEN)
➜ springboot-api-demo git:(master) ✗ lsof -i:5432
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 715 shailendra.singh 5u IPv6 0xf62897fbb43e03b5 0t0 TCP localhost:postgresql (LISTEN)
postgres 715 shailendra.singh 6u IPv4 0xf62897fbbaeea9bd 0t0 TCP localhost:postgresql (LISTEN)
I am not sure what the problem is, but my Spring Boot application is not able to connect to my postgres container, which is running fine with proper credentials.
Try with:
spring.datasource.url=jdbc:postgresql://pgdb:5432/java_learning
The postgres database is not running on localhost; it's running in the other container, which has another IP (not known in advance).
Thankfully, docker-compose automatically creates a network shared among all the containers in the docker-compose.yml (unless explicitly told not to), so you can simply use the service name as a hostname.
Also, the port is wrong for container-to-container access: inside the compose network Postgres listens on its default 5432, not on the published 54321.
You are pointing your application towards localhost, but this is not shared between containers.
To access another container you have to refer to its hostname.
You should use the following datasource url:
spring.datasource.url=jdbc:postgresql://pgdb:5432/java_learning
See this simple tutorial about connecting to a container from another container with docker compose: https://docs.docker.com/compose/gettingstarted/
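To see the container-to-container connectivity in isolation, independent of Spring's auto-configuration, a small JDBC check can be run inside the springboot-api-demo container. This is only a sketch: it assumes the PostgreSQL JDBC driver is on the classpath and reuses the credentials, container name and database from the question.

import java.sql.Connection;
import java.sql.DriverManager;

public class PgCheck {
    public static void main(String[] args) throws Exception {
        // pgdb:5432 = container name plus container port, as seen on the compose network
        String url = "jdbc:postgresql://pgdb:5432/java_learning";
        try (Connection connection = DriverManager.getConnection(url, "postgres", "postgres")) {
            System.out.println("Connected, closed = " + connection.isClosed());
        }
    }
}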
You're missing networking configuration in your docker-compose.yml specification. By using "networks" you can communicate between containers by their service names (using DNS, with the service name as the hostname).
Here is an updated docker-compose.yml:
version: "2.1"
services:
springboot-api-demo:
image: "fw/springboot-api-demo"
mem_limit: 1024m
ports:
- "8080:8080"
environment:
- SPRING_PROFILES_ACTIVE=local
- AWS_REGION=local
- ENVIRONMENT=local
- AUTH_ENABLED=false
networks:
- mynet
postgres:
container_name: pgdb
image: postgres:9.6-alpine
environment:
- 'POSTGRES_ROOT_PASSWORD=postgres'
- 'POSTGRES_USER=postgres'
- 'POSTGRES_PASSWORD=postgres'
ports:
- "54321:5432"
networks:
- mynet
networks:
mynet:
driver: bridge
Your database url should look like spring.datasource.url=jdbc:postgresql://postgres:5432/java_learning (notice that the hostname, postgres, is equal to the service name).
Apart from the solutions above: for me, a JDK 11 Java container with the mentioned configuration (connecting to postgres via IP, localhost or service name, with the postgres container exposed to the LAN) still didn't work. Upgrading to the latest JDK (17 at the moment) fixed it, so consider this as well if you are on JDK 11 and a Java container cannot reach a postgres container.

Error connecting Node.js web client to Java gRPC server

I have a gRPC server written in Java and I'm currently trying to create a web client, with React. However, I can't seem to manage the connection between the envoy proxy to which the client is connecting and the actual server.
I would expect to receive the same message as with the Java client, but I get the error "Http response at 400 or 500 level", receiving an empty response with the web client, while the Java server doesn't even get the request.
The server runs on port 8080, and the envoy proxy is configured on port 9090, which is the one used by the web client.
Dockerfile:
FROM envoyproxy/envoy-dev:latest
COPY ./envoy.yaml /etc/envoy/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l trace --log-path /tmp/envoy_info.log
envoy.yaml:
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
listeners:
- name: listener_0
address:
socket_address: { address: 0.0.0.0, port_value: 9090 }
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/" }
route:
cluster: m_service
cors:
allow_origin:
- "*"
allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
expose_headers: grpc-status,grpc-message
enabled: true
http_filters:
- name: envoy.grpc_web
- name: envoy.cors
- name: envoy.router
clusters:
- name: m_service
connect_timeout: 0.25s
type: logical_dns
http2_protocol_options: {}
lb_policy: round_robin
hosts:
socket_address:
address: localhost
port_value: 8080
The commands I use for building and running the Docker container are docker build -t m-server . and docker run -p 9090:9090 -td m-server /bin/bash, and the proto classes for the front-end are loaded statically.
If there's any more code that'd be useful to post, please let me know. Any advice is appreciated, thank you!
For me the solution was to change the command used to run the container: docker run -p 9090:9090 -td m-server /bin/bash became docker run -d -p 9090:9090 -p 9901:9901 m-server. The main differences were dropping the trailing /bin/bash (which overrode the image's CMD, so Envoy never actually started), using -d instead of -td, and adding the second port mapping for the Envoy admin server.
I am just learning Docker; from what I understood from the documentation, I was running the container in detached mode but with a pseudo-tty allocated, which is meant for foreground use. I had seen that done elsewhere, but the purpose there was slightly different, and at the time I misread it as just a way of keeping the container running, which is not what I needed.
Changing 'localhost' to '0.0.0.0', as suggested in this answer, is also important.
Looks like Envoy is not forwarding the request to your Java server. Envoy has an admin interface (https://www.envoyproxy.io/docs/envoy/latest/operations/admin); that and the Envoy log files should help troubleshoot this.
socket_address:
address: localhost
This is the problem. Your Envoy tries to forward to itself when it runs as a dockerized image, because localhost is not your Docker host machine (where the gRPC server is running) but the running container itself. Use docker compose, port mapping or an external network. Good luck.
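For completeness, this is roughly what the Java upstream looks like that the m_service cluster needs to reach. It is a generic gRPC-Java sketch, not the asker's actual server, and MyServiceImpl is a placeholder for the generated service implementation. ServerBuilder.forPort binds the wildcard address, so the reachability problem is on the Envoy side (the cluster address), not the Java side:

import io.grpc.Server;
import io.grpc.ServerBuilder;

public class GrpcServerMain {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(8080)      // listens on all interfaces
                .addService(new MyServiceImpl())         // placeholder service implementation
                .build()
                .start();
        System.out.println("gRPC server listening on 8080");
        server.awaitTermination();
    }
}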

Microservice can't reach config service in docker-compose

I currently have two microservices:
- service - port 8080, this microservice tries to fetch config from the other microservice.
- config - port 8888, this microservice is supposed to provide config.
For some reason my service is unable to fetch configuration from the config microservice.
My config microservice should be working, because when I curl localhost:8888/service/default on my machine I receive:
{"name":"service","profiles":["default"],"label":null,"version":null,"state":null,"propertySources":[{"name":"classpath:/shared/service.yml","source":{"server.port":8080,"spring.security.user.password":"admin"}},{"name":"classpath:/shared/service.yaml","source":{"server.port":8080,"spring.security.user.password":"admin"}}]}
Error received (full)
service | 2019-06-06 21:31:06.721 INFO 1 --- [main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://config:8888
service | 2019-06-06 21:31:06.894 INFO 1 --- [main] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - http://config:8888. Will be trying the next url if available
service | 2019-06-06 21:31:06.904 ERROR 1 --- [main] o.s.boot.SpringApplication : Application run failed
Docker-compose.yaml
version: '3.7'
services:
  config:
    container_name: config
    build: ./config
    ports:
      - 8888:8888
  service:
    container_name: service
    build: ./service
    ports:
      - 8080:8080
    depends_on:
      - config
Service Dockerfile:
FROM openjdk:8-jdk-alpine
ADD target/service.jar /app.jar
CMD [ "java", "-Xmx200m", "-jar", "/app.jar" ]
EXPOSE 8080
Service bootstrap.yaml
spring:
  application:
    name: service
  cloud:
    config:
      uri: http://config:8888
      fail-fast: true
service.yaml (has service configuration)
server:
  port: 8080
spring:
  security:
    user:
      password: admin # doesnt set since no connection
Config Dockerfile
FROM openjdk:8-jdk-alpine
ADD target/config.jar /app.jar
CMD [ "java", "-Xmx200m", "-jar", "/app.jar" ]
EXPOSE 8888
Config application.yaml
spring:
  application:
    name: config
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          - type: native
            search-locations: classpath:/shared
server:
  port: 8888
shared/service.yaml (has service configuration)
server:
  port: 8080
spring:
  security:
    user:
      password: admin # doesnt set since no connection
Any ideas?
I found some similar issues, although they only had problems with their URI; mine is set correctly.
Microservice can not reach to Config Server on Docker Compose
Docker - SpringConfig - Connection refused to ConfigServer
When one service depends on another you have to make sure that the latter is fully started before connecting to it.
In your case, most probably, config has started but is not ready (the container is up but the Spring context has not finished starting) at the time service is run. As #Ganesh Karewad and #asolanki pointed out, one solution is to implement reconnection logic. Another solution is to make sure config is initialized and accepting connections before you run service.
You can achieve that with a script that waits until the config app is up. Alternatively, you could configure the config container with a health check command, start it, and wait until it is marked as healthy. Then you can run the service container.
A similar issue is discussed here and here.
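If you go the reconnection route, Spring Cloud Config has built-in support: keep fail-fast: true, add spring-retry and spring-boot-starter-aop to the service, and tune the spring.cloud.config.retry.* properties. A cruder alternative, sketched below as an assumption rather than something taken from the question, is a small guard that blocks until http://config:8888 answers before the application context is started:

import java.net.HttpURLConnection;
import java.net.URL;

public class WaitForConfig {
    public static void waitUntilUp(String url, int maxAttempts) throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
                connection.setConnectTimeout(2000);
                connection.setReadTimeout(2000);
                // any HTTP status means the config server is up and accepting connections
                if (connection.getResponseCode() > 0) {
                    return;
                }
            } catch (Exception e) {
                // not reachable yet, retry after a short pause
            }
            Thread.sleep(2000);
        }
        throw new IllegalStateException("Config service not reachable at " + url);
    }
}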
Hope that helps.

Accessing host postgresql from docker container (docker on linux)

I have a spring-boot java application running in a docker container on my linux host machine.
I have a postgresql instance installed on the host that I want to connect to from the running container.
I've tried multiple different approaches (--network="host" is not what I want).
My Dockerfile looks like this:
FROM openjdk:13-ea-9-jdk-alpine3.9
EXPOSE 8080
CMD mkdir /opt/StatisticalRestService
COPY target/StatisticalRestService-0.0.1-SNAPSHOT.jar /opt/StatisticalRestService/
COPY DockerConfig/application.yml /opt/StatisticalRestService/
RUN chmod 777 /opt/StatisticalRestService/StatisticalRestService-0.0.1-SNAPSHOT.jar \
&& ls -l /opt/StatisticalRestService/StatisticalRestService-0.0.1-SNAPSHOT.jar \
&& INTERNAL_HOST_IP=$(ip route show default | awk '/default/ {print $3}') \
&& echo "$INTERNAL_HOST_IP host.docker.internal" >> /etc/hosts \
&& chmod +r /etc/hosts \
&& cat /etc/hosts
ENTRYPOINT [ "java", "-jar", "-Dspring.config.location=/opt/StatisticalRestService/application.yml", "/opt/StatisticalRestService/StatisticalRestService-0.0.1-SNAPSHOT.jar" ]
application.yml:
spring:
  application:
    name: StatisticalRestService
  jpa:
    database: POSTGRESQL
    show-sql: true
    hibernate:
      ddl-auto: create-drop
  datasource:
    platform: postgres
    #url: jdbc:postgresql://host.docker.internal:5432/StatisticalRestService
    url: jdbc:postgresql://172.17.0.1:5432/StatisticalRestService
    username: statEntityUser
    password: test123
    driverClassName: org.postgresql.Driver
I have configured postgresql's setting listen_addresses = '*' and the following entries are in pg_hba.conf:
host all all 172.17.0.0/16 md5
host all all 192.168.1.0/24 md5
ifconfig docker0:
arizon#tuxpad:~/Utveckling/StatisticalRestService$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:3bff:fe4f:ed34 prefixlen 64 scopeid 0x20<link>
ether 02:42:3b:4f:ed:34 txqueuelen 0 (Ethernet)
RX packets 28 bytes 1506 (1.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 198 bytes 25515 (25.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
This is the build output:
arizon#tuxpad:~/Utveckling/StatisticalRestService$ sudo docker build . -t arizon/statisticalrestservice:1.0.0-SNAPSHOT
Sending build context to Docker daemon 223.7MB
Step 1/7 : FROM openjdk:13-ea-9-jdk-alpine3.9
---> 6a6c49978498
Step 2/7 : EXPOSE 8080
---> Running in df7ebc70e950
Removing intermediate container df7ebc70e950
---> 417e50a9f5fd
Step 3/7 : CMD mkdir /opt/StatisticalRestService
---> Running in f33ca0acddf7
Removing intermediate container f33ca0acddf7
---> 59ae394176f3
Step 4/7 : COPY target/StatisticalRestService-0.0.1-SNAPSHOT.jar /opt/StatisticalRestService/
---> 4fbcfeb039f8
Step 5/7 : COPY DockerConfig/application.yml /opt/StatisticalRestService/
---> 244d31fc4755
Step 6/7 : RUN chmod 777 /opt/StatisticalRestService/StatisticalRestService-0.0.1-SNAPSHOT.jar && ls -l /opt/StatisticalRestService/StatisticalRestService-0.0.1-SNAPSHOT.jar && INTERNAL_HOST_IP=$(ip route show default | awk '/default/ {print $3}') && echo "$INTERNAL_HOST_IP host.docker.internal" >> /etc/hosts && chmod +r /etc/hosts && cat /etc/hosts
---> Running in 241f43aebbdc
-rwxrwxrwx 1 root root 35266534 Mar 16 19:52 /opt/StatisticalRestService/StatisticalRestService-0.0.1-SNAPSHOT.jar
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 241f43aebbdc
172.17.0.1 host.docker.internal
Removing intermediate container 241f43aebbdc
---> 5c6c53d8011d
Step 7/7 : ENTRYPOINT [ "java", "-jar", "-Dspring.config.location=/opt/StatisticalRestService/application.yml", "/opt/StatisticalRestService/StatisticalRestService-0.0.1-SNAPSHOT.jar" ]
---> Running in 213a87164e8f
Removing intermediate container 213a87164e8f
---> 802cd987771f
Successfully built 802cd987771f
Successfully tagged arizon/statisticalrestservice:1.0.0-SNAPSHOT
When I run this with the datasource url pointed to host.docker.internal, I get an UnknownHostException, despite the output from the /etc/hosts file confirming it's there. From what I understand, there might be an issue with /etc/nsswitch.conf under alpine. I've tried adding the file and pasting this line from my host:
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname
to no avail.
When I run it with the datasource url pointed to 172.17.0.1:5432, I get connection timed out.
I verified access to psql from my host by pointing pgadmin at the 192.168 IP, to check that listen_addresses = '*' works:
host all all 192.168.1.0/24 md5
which it does. It's a different entry, though.
Docker version:
Client:
Version: 18.09.2
API version: 1.39
Go version: go1.10.4
Git commit: 6247962
Built: Tue Feb 26 23:52:23 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 6247962
Built: Wed Feb 13 00:24:14 2019
OS/Arch: linux/amd64
Experimental: false
Postgresql version:
arizon#tuxpad:~/Utveckling/StatisticalRestService$ dpkg --list | grep postgresql
ii postgresql-10 10.6-0ubuntu0.18.04.1 amd64 object-relational SQL database, version 10 server
So, TL;DR: Two questions:
1. How do I get host.docker.internal to work on docker under linux?
2. How do I connect my containerized application to my host postgresql instance?
I solved this, though not in the way I intended when I asked the question.
I ended up creating a postgres container too, with a volume to keep the data persistent.
I made a Dockerfile for postgres that looks like this:
FROM postgres:10-alpine
RUN mkdir /docker-entrypoint-initdb.d/
#COPY initdb.sql /docker-entrypoint-initdb.d/
COPY my-postgres.conf /usr/local/share/postgresql/postgresql.conf
#ENV POSTGRES_USER statEntityUser
#ENV POSTGRES_PASSWORD test123
#ENV POSTGRES_DB StatisticalRestService
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 5432
CMD [ "postgres" ]
The initdb.sql script just creates the same things as the commented environment variables: the user, password and database. SQL scripts placed in that folder are run by the entrypoint script once the database is started and ready (it's an out-of-the-box feature of the docker image I derive from). The reason the variables are commented out is that they are set in the docker-compose file (see below). The postgresql.conf is basically the template included in the container, but with listen_addresses = '*' uncommented.
I also made a docker-compose.yml to run both of these containers in a good way together:
version: "3"
services:
statistical-rest-service:
build: ./StatisticalRestService
ports:
- 8081:8080
depends_on:
- postgres
networks:
- statisticsNet
postgres:
container_name: postgres
build: ./Postgres
ports:
- 5433:5432
volumes:
- postgres-volume:/var/lib/postgresql/data
command: postgres -c 'config_file=/usr/local/share/postgresql/postgresql.conf'
networks:
- statisticsNet
environment:
POSTGRES_USER: statEntityUser
POSTGRES_PASSWORD: test123
POSTGRES_DB: StatisticalRestService
networks:
statisticsNet:
volumes:
postgres-volume:
I'm not sure if you have to create the volume beforehand or if docker-compose creates it for you, but if you need to, it's just docker volume create postgres-volume.
Postgres documentation on how to use the image and/or derive from it: Postgres on docker hub
NOTE: If you start the container with a named volume, make some mistake and shut it down, then when you start it again it will not touch the existing database already on the volume. You might end up with "dangling volumes", stale leftovers of old containers that you've killed and removed, and they can produce unexpected behavior (for me, the user and database weren't created because of this).
You can clear them with this command: docker volume rm $(docker volume ls -qf dangling=true) (or run the command inside $() on its own to list any dangling volumes).
Since I am creating a dedicated docker network in the docker-compose file, the containers can find each other by name (note container_name). That makes the connection url in the application.yml for my Java application look like this: url: jdbc:postgresql://postgres:5432/StatisticalRestService
I hope someone is helped by this :)
