I created a microservice environment, more precisely 5 services, which are connected to each other and access the same PostgreSQL database. After development I started building the Docker images for the services. All the images have been created, but I cannot put PostgreSQL inside the Docker environment: it is already running on the machine at localhost and other applications depend on it, so I cannot migrate it into Docker. I would like to know if it is possible for my applications to access the database that lives outside the Docker environment.
Below is my docker-compose file:
version: '2'
services:
  server:
    image: microservices/server:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    expose:
      - "8080"
    ports:
      - "8080:8080"
    networks:
      - microservices
  security-server:
    image: microservices/security-server:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
    expose:
      - "8081"
    ports:
      - "8081:8081"
    networks:
      - microservices
    restart: "always"
  api-gateway:
    image: microservices/api-gateway:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
    expose:
      - "9999"
    ports:
      - "9999:9999"
    networks:
      - microservices
    restart: "always"
  imovel:
    image: microservices/imovel:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "8082"
    ports:
      - "8082:8082"
    networks:
      - microservices
    restart: "always"
  imovel2:
    image: microservices/imovel:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "9098"
    ports:
      - "9098:9098"
    networks:
      - microservices
    restart: "always"
  cliente:
    image: microservices/cliente:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "8083"
    ports:
      - "8083:8083"
    networks:
      - microservices
    restart: "always"
networks:
  microservices:
    driver: bridge
In the link quoted, his problem was that Postgres wasn't accepting connections from outside. My problem is earlier than that: where should I even start configuring the connection?
You can specify extra_hosts in the Compose file format and pass your host's IP address in as an environment variable.
extra_hosts:
  - "my_host:${HOST_IP}"
https://docs.docker.com/compose/compose-file/#extra_hosts
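As a rough sketch of how this could look on one of the services above (the my_host alias, the HOST_IP variable, and the SPRING_DATASOURCE_URL override with the imovel_db database name are assumptions for illustration, not part of the original setup):

imovel:
  image: microservices/imovel:latest
  environment:
    - SPRING_PROFILES_ACTIVE=docker
    # assumed Spring Boot override: point the datasource at the host alias
    - SPRING_DATASOURCE_URL=jdbc:postgresql://my_host:5432/imovel_db
  extra_hosts:
    # HOST_IP must be set before running compose, e.g. HOST_IP=192.168.1.10 docker-compose up
    - "my_host:${HOST_IP}"
  networks:
    - microservices

You would also need PostgreSQL on the host to accept connections coming from the Docker bridge network, which is the issue described in the quoted link.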
Related
I have a few microservices running on Spring Boot, and each of them has its own Postgres database. When I run docker-compose up on my Windows development machine it works correctly. But when I deploy it on a Linux VPS host, Spring cannot connect to the database. Individually (without the environment variables) the images do launch on the VPS.
Here is my docker-compose.yml
version: "3.9"
services:
gateway:
image: gateway:latest
container_name: gateway
ports:
- "8080:8080"
depends_on:
- postgres
environment:
- DATASOURCE_URL=jdbc:postgresql://postgres:5432/gateway_db
- DATASOURCE_USERNAME=postgres
- DATASOURCE_PASSWORD=postgres
- JPA_HIBERNATE_DDL_AUTO=update
restart: unless-stopped
networks:
- postgres
postgres:
image: postgres:14.4
container_name: gateway_db
environment:
POSTGRES_DB: "gateway_db"
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "postgres"
volumes:
- pgdata:/var/lib/postgresql/data
expose:
- 5432
healthcheck:
test: [ "CMD-SHELL", "pg_isready -U postgres -d postgres" ]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
restart: unless-stopped
networks:
- postgres
volumes:
pgdata:
networks:
postgres:
driver: bridge
And this is my application.properties
spring.datasource.driver-class-name=org.postgresql.Driver
spring.sql.init.mode=always
spring.sql.init.platform=postgres
spring.datasource.url=${DATASOURCE_URL}
spring.datasource.username=${DATASOURCE_USERNAME}
spring.datasource.password=${DATASOURCE_PASSWORD}
spring.jpa.database=postgresql
spring.jpa.hibernate.ddl-auto=${JPA_HIBERNATE_DDL_AUTO}
So, here are the Spring logs from the Docker container on the host machine:
I would be very grateful for any help!
I just started using docker-compose and I am enjoying it.
I recently created my first docker-compose file, which simply connects SonarQube and Postgres. Inside my docker-compose.yml file, whenever I define the database service with any name other than "db", docker-compose does not run successfully.
This is the error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jdk.internal.loader.ClassLoaders$AppClassLoader#277050dc-org.sonar.db.DefaultDatabase': Initialization of bean failed; nested exception is java.lang.IllegalStateException: Fail to connect to database
This is the code in my docker compose file:
version: "3"
services:
sonarqube:
image: sonarqube
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
Is there anything special about the name "db"? Are there any conventions/rules for defining services in docker-compose?
Thank you.
You have to change the service name within SonarQube's connection string as well.
Here, replace the string db with whatever you renamed the Postgres service to (the two have to match):
environment:
  - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
  #                                  ^ here
This is needed because docker-compose registers hostnames (defined by the service names) on the stack's network, so the services are always dynamically reachable under those names.
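For example, if you renamed the service to sonar-db (that name is only an illustration), both places would have to look like this trimmed sketch:

services:
  sonarqube:
    environment:
      # the hostname in the JDBC URL must match the service name below
      - sonar.jdbc.url=jdbc:postgresql://sonar-db:5432/sonar
  sonar-db:
    image: postgres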
Update your docker-compose file with the depends_on property to let Docker know that db should be created first:
version: "3"
services:
sonarqube:
image: sonarqube
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
depends_on:
- db
db:
image: postgres
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
I have the following docker compose file for starting elastic search with kibana:
version: "3.5"
services:
es-master:
container_name: es-master
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
hostname: es-master
restart: always
ports:
- 9201:9201
- 9301:9301
environment:
- ELASTICSEARCH_HOSTS=http://es-master:9201
- "ES_JAVA_OPTS=-Xms4g -Xmx4g"
- bootstrap.memory_lock=true
ulimits:
memlock:
soft: -1
hard: -1
networks:
- es-net
volumes:
- /data/es/config/config.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- /data/es/data:/usr/share/elasticsearch/data
- /data/es/logs:/usr/share/elasticsearch/logs
- /data/es/plugins:/usr/share/elasticsearch/plugins
- /data/es/kibana/data:/usr/share/kibana/data
kibana:
container_name: kibana-container
image: docker.elastic.co/kibana/kibana:7.8.1
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS=http://es-master:9201
- path.repo=/nfs_es/back_up
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /data/es/kibana/data:/usr/share/kibana/data
depends_on:
- es-master
networks:
- es-net
networks:
es-net:
name: es-net
Basically I need to create a directory /nfs_es/back_up/ in the container before Elasticsearch starts up and map it to the directory /nfs_es/back_up/ which I have already created on the current server. Can this be achieved with docker-compose?
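For reference, a bind mount in the volumes section can map a host directory into the container the same way the /data/es/... paths above are mapped; this is only a sketch and assumes /nfs_es/back_up already exists on the host and is readable by the container user:

es-master:
  volumes:
    # ... existing mappings from the compose file above ...
    # map the host backup directory to the same path inside the container
    - /nfs_es/back_up:/nfs_es/back_up

Since path.repo is an Elasticsearch setting, the mount would most likely be needed on the es-master service rather than (or in addition to) kibana.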
I have created a local cluster on my machine to try out the deployment of a Storm topology, but I have a strange problem. When I execute the topology in local mode everything works fine, but when I execute it in remote mode it does not seem to work, as you can see in the screenshot below:
At this point I can't figure out where the problem is. I have also checked whether the Kafka producer works, and it works fine, because this topology uses a Kafka spout. Thanks a lot for your help.
This is the storm.yml
storm.log4j2.conf.dir: "log4j2"
storm.zookeeper.servers:
  - "127.0.0.1"
nimbus.seeds: ["127.0.0.1"]
supervisor.slots.ports:
  - 6700
This is the stack.yml file
version: '3'
services:
  nimbus:
    image: storm:2.1.0
    container_name: nimbus
    command: storm nimbus -c storm.zookeeper.servers="[\"zookeeper\"]" -c nimbus.seeds="[\"nimbus\"]"
    depends_on:
      - zookeeper
    links:
      - zookeeper
    restart: always
    ports:
      - "6627:6627"
      - "8000:8000"
    volumes:
      - ./TopologyJar:/TopologyJar
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - "2181:2181"
  # storm-cli:
  #   image: storm:2.1.0
  #   container_name: storm-cli
  #   depends_on:
  #     - zookeeper
  #     - nimbus
  #   links:
  #     - zookeeper
  #     - nimbus
  #
  #   # The following two commands
  #   # are used for showing an I/O terminal aka shell
  ##  stdin_open: true
  ##  tty: true
  storm-ui:
    image: storm:2.1.0
    container_name: storm-ui
    command: storm ui -c nimbus.seeds="[\"nimbus\"]" -c storm.zookeeper.servers="[\"zookeeper\"]"
    depends_on:
      - nimbus
      - zookeeper
    links:
      - nimbus
      - zookeeper
    restart: always
    ports:
      - "8080:8080"
  supervisor:
    image: storm:2.1.0
    command: storm supervisor -c nimbus.seeds="[\"nimbus\"]" -c storm.zookeeper.servers="[\"zookeeper\"]"
    container_name: supervisor
    depends_on:
      - nimbus
      - zookeeper
      - redis
    links:
      - nimbus
      - zookeeper
      - redis
    restart: always
  redis:
    image: redis
    container_name: redis
    restart: always
    ports:
      - "6379:6379"
  # web UI for managing redis
  redis-commander:
    container_name: redis-commander
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
I have been using the Kafka image provided by Spotify to run Kafka locally. I'm currently trying to use it with the cp-kafka-rest and schema-registry images.
I need help resolving this issue:
ERROR (Log Group: kafka_rest_1_609fd108dcf4)
[main-SendThread(zookeeper:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server zookeeper:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
Docker Compose
version: '3.5'
services:
  kafka:
    image: 'spotify/kafka'
    hostname: kafka
    environment:
      - ADVERTISED_HOST=kafka
      - ADVERTISED_PORT=9092
    ports:
      - "9092:9092"
      - "2181:2181"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - one
  kafka_rest:
    image: 'confluentinc/cp-kafka-rest:5.1.0'
    hostname: kafka_rest
    environment:
      - KAFKA_REST_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_REST_LISTENERS=http://0.0.0.0:8082
      - KAFKA_REST_SCHEMA_REGISTRY_URL=http:schema-registry:8081
      - KAFKA_REST_HOST_NAME=kafka-rest
    networks:
      - one
  schema_registry:
    hostname: schema-registry
    image: 'confluentinc/cp-schema-registry:5.1.0'
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
    networks:
      - one
networks:
  one:
    name: rest_network
You have no zookeeper container - it is actually your "kafka" service image that includes both the Zookeeper and Kafka servers, so zookeeper:2181 should rather be kafka:2181.
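In other words, a minimal sketch of the change to the compose file above would be (only the affected environment entries are shown; everything else stays the same):

kafka_rest:
  environment:
    # point at the Zookeeper bundled inside the spotify/kafka container
    - KAFKA_REST_ZOOKEEPER_CONNECT=kafka:2181
schema_registry:
  environment:
    - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=kafka:2181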
However, I would recommend not using the spotify images, as they are significantly outdated
You can find a fully functional Docker Compose example of the entire Confluent 5.1.0 Platform on Github
Here is the relevant configuration you are looking for:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.1.0
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:9092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
You need to add a zookeeper service to your docker-compose file:
zookeeper:
  image: confluent/zookeeper
  ports:
    - "2181:2181"
  environment:
    zk_id: "1"
  network_mode: "host"