docker-compose service names - java

I just started using docker-compose and I am enjoying it.
I recently created my first docker-compose file, which simply connects SonarQube and Postgres. In my docker-compose.yml file, whenever I define the database service with any name other than "db", docker-compose does not run successfully.
This is the error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jdk.internal.loader.ClassLoaders$AppClassLoader#277050dc-org.sonar.db.DefaultDatabase': Initialization of bean failed; nested exception is java.lang.IllegalStateException: Fail to connect to database
This is the code in my docker compose file:
version: "3"
services:
sonarqube:
image: sonarqube
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
Is there anything special about the name "db"? Are there any conventions/rules for defining services in docker-compose?
Thank you.

You also have to change the hostname inside SonarQube's connection string. Here, replace the string db with whatever you renamed the Postgres service to (the two have to match):
environment:
  - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
  #                                   ^ here
This is necessary because Docker Compose registers each service name as a hostname on the stack's network, so every service is always reachable by name.
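For example, a minimal sketch assuming you rename the service to postgres (a name chosen here only for illustration): the service key and the host in the JDBC URL change together, and everything else stays the same.

services:
  sonarqube:
    image: sonarqube
    environment:
      # the hostname here must match the service name below
      - sonar.jdbc.url=jdbc:postgresql://postgres:5432/sonar
  postgres:   # renamed from "db"
    image: postgres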

Update your docker-compose file with the depends_on property to let Docker know that db should be created first:
version: "3"
services:
sonarqube:
image: sonarqube
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
depends_on:
- db
db:
image: postgres
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
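Note that depends_on only controls start order; it does not wait for Postgres to actually accept connections. A sketch of one way to also gate on readiness, assuming a Compose version that supports healthcheck and the long form of depends_on (pg_isready ships with the official postgres image):

services:
  sonarqube:
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      # pg_isready exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U sonar"]
      interval: 5s
      timeout: 5s
      retries: 10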

Related

ERROR : service 'environment' must be a mapping not an array

version: "3.1"
services:
elasticsearch:
image: elasticsearch:7.4.2
ports:
- "9200:9200"
- "9300:9300"
environment:
- discovery.type=single-node
I try to bring up this docker-compose.yml file, but it returns this error:
ERROR: In file 'C:\Users\ozan8\IdeaProjects\spring_examples\spring_elasticsearch\src\main\resources\docker-compose.yml', service 'environment' must be a mapping not an array.
You need to make sure your yml file is valid and in the right location, as the comment said:
version: "3.1"
services:
elasticsearch:
image:
elasticsearch:7.4.2
ports:
- "9200:9200"
- "9300:9300"
environment:
- discovery.type=single-node
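For reference, this error usually means environment ended up at the wrong indentation level, so Compose parses it as a service of its own. A sketch of the broken shape (assumed, since the original failing file isn't shown):

services:
  elasticsearch:
    image: elasticsearch:7.4.2
  # wrong: "environment" sits at the services level,
  # so Compose treats it as a service named "environment"
  environment:
    - discovery.type=single-node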

Cannot run or connect to PostgreSQL container on Docker

On my Windows 10 machine I have a Java app, and I create PostgreSQL containers on Docker using the following configuration:
docker-compose.yml:
version: '2.0'
services:
  postgresql:
    image: postgres:11
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=******
      - POSTGRES_DB=demo_test
And I use the following commands to bring up the containers:
cd postgresql
docker-compose up -d
Although the pgadmin container is working on Docker, the postgres container is usually in a restarting state and only sometimes appears to be running for a second. When I look at that container's log, I see the following errors:
2021-03-16 09:00:18.526 UTC [82] FATAL: data directory "/data/postgres" has wrong ownership
2021-03-16 09:00:18.526 UTC [82] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/data/postgres"
running bootstrap script ... The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
I have tried to apply several suggested workarounds, e.g. PostgreSQL with docker ownership issue, but none of them works. So, how can I fix this problem?
Update: here is the latest version of my docker-compose.yml file:
version: '2.0'
services:
  postgresql:
    image: postgres:11
    container_name: "my-pg"
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=******
      - POSTGRES_DB=demo_test
    volumes:
      - psql:/var/lib/postgresql/data
volumes:
  psql:
As I already stated in my comment, I'd suggest using a named volume.
Here's my docker-compose.yml for Postgres 12:
version: "3"
services:
postgres:
image: "postgres:12"
container_name: "my-pg"
ports:
- 5432:5432
environment:
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "postgres"
POSTGRES_DB: "mydb"
volumes:
- psql:/var/lib/postgresql/data
volumes:
psql:
Then I created the psql volume via docker volume create psql (so just a volume without any actual path mapping).
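For completeness, a short sketch of the commands involved (all standard Docker CLI). A named volume is managed by the Docker engine itself, so file ownership inside it is handled by Docker rather than by the Windows host filesystem, which is what usually breaks with bind mounts:

# create the named volume up front (Compose would also create it on first up)
docker volume create psql

# confirm it exists and see where Docker stores it
docker volume inspect psql

# bring the stack up with the volume attached
docker-compose up -d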

How to connect to specific local MongoDB instance in Spring Boot Dockerised application?

I have developed a simple Spring Boot application that performs CRUD operations using MongoDB as the database. I have deployed the application in Docker, but I get null values on GET requests for items stored in MongoDB. Some of the files required for Docker are provided below:
Dockerfile:
# FROM line assumed for completeness; the Dockerfile as posted omits it
FROM openjdk:8-jdk
VOLUME /tmp
ADD build/libs/Spring-Boot-MongoDB-0.0.1-SNAPSHOT.jar SpringMongoApp.jar
ENTRYPOINT ["java", "-Dspring.data.mongodb.uri=mongodb://mongo:27018/otp","-jar","/SpringMongoApp.jar"]
docker-compose.yml:
version: "3"
services:
api-database:
image: mongo:3.2.4
container_name: "springboot-mongo-app"
ports:
- "27018:27017"
environment:
MONGO_INITDB_ROOT_DATABASE: otp
networks:
- test-network
api:
image: springboot-api
ports:
- "8080:8080"
depends_on:
- api-database
networks:
- test-network
networks:
test-network:
driver: bridge
application.properties:
spring.data.mongodb.host=api-database
When I checked the MongoDB Docker container using the container ID, it automatically connects to the test database, but not to the otp database that I specified in the environment section of the docker-compose.yml file.
The problem is that your Docker container is not persistent; the database will be erased and re-created each time you run the container.
If you add a volume to persist /data/db, you will get the desired result.
I assume you have a directory data/db in the same place where you have stored docker-compose.yml. You may set up a custom directory (e.g. /tmp/data/db).
Try this one:
docker-compose.yml:
version: "3"
services:
api-database:
image: mongo:3.2.4
container_name: "springboot-mongo-app"
ports:
- "27018:27017"
volumes:
- "./data/db:/data/db"
environment:
MONGO_INITDB_ROOT_DATABASE: otp
networks:
- test-network
api:
image: springboot-api
ports:
- "8080:8080"
depends_on:
- api-database
networks:
- test-network
networks:
test-network:
driver: bridge
Note: the first time, it will be an empty database. If you create collections, insert records, etc., they will be saved in ./data/db.
I think that instead of MONGO_INITDB_ROOT_DATABASE you should use MONGO_INITDB_DATABASE.
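A sketch of the corrected environment block (MONGO_INITDB_DATABASE is the variable the official mongo image reads to create an initial database; everything else stays as in the question):

environment:
  # the official mongo image looks for MONGO_INITDB_DATABASE,
  # not MONGO_INITDB_ROOT_DATABASE
  MONGO_INITDB_DATABASE: otp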

Data volume on Elasticsearch cluster

I have a simple Elasticsearch cluster, and I've read in the documentation that the master node has to have access to the Elasticsearch data volume.
But the fact is that if two nodes use the same data volume, this error occurs: Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/escluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
I've tried many configurations but can't figure out how to share the volume between my different nodes.
Docker-compose
version: '2'
services:
  esmaster:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    volumes_from:
      - esdata
    environment:
      cluster.name: "escluster"
      node.data: "false"
      http.enabled: "false"
      node.master: "true"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esmaster
    cap_add:
      - IPC_LOCK
    networks:
      - elk
  esclient:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200"
      - "9300"
    environment:
      cluster.name: "escluster"
      node.data: "false"
      http.enabled: "true"
      node.master: "false"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esclient
    cap_add:
      - IPC_LOCK
    networks:
      - elk
    depends_on:
      - esmaster
  esdata:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      cluster.name: "escluster"
      node.data: "true"
      http.enabled: "false"
      node.master: "false"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esdata
    cap_add:
      - IPC_LOCK
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx6g -Xms6g"
    networks:
      - elk
    depends_on:
      - esmaster
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - esmaster
networks:
  elk:
    driver: bridge
You don't want to share your data directory. Each Elasticsearch instance should use its own data directory.
You can find a working docker-compose file in the Docker section of the Elasticsearch reference (https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode): two nodes, each using its own Docker named volume.
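A minimal sketch of that pattern, assuming the same cluster name as above (service and volume names are chosen here for illustration); the key point is that es01 and es02 each mount a different named volume:

version: '2'
services:
  es01:
    build: elasticsearch/
    environment:
      cluster.name: "escluster"
    volumes:
      - esdata01:/usr/share/elasticsearch/data   # volume private to es01
  es02:
    build: elasticsearch/
    environment:
      cluster.name: "escluster"
      discovery.zen.ping.unicast.hosts: es01
    volumes:
      - esdata02:/usr/share/elasticsearch/data   # volume private to es02
volumes:
  esdata01:
  esdata02: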

Docker-compose up : Connection refused

I'm using Docker Compose in my project. When I try, from inside a container, to connect to a host in my VirtualBox VM at "localhost:22000", I get a connection refused exception.
version: '2'
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    command: mongod --smallfiles
  rabbitmq:
    image: rabbitmq:3.5.3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  broker:
    image: java:openjdk-8u91-jdk
    depends_on:
      - mongodb
      - rabbitmq
    working_dir: /app
    volumes:
      - ./blockchain-rabbitmq/target/:/app
    command: java -jar /app/blockchain-rabbitmq-0.0.1.jar
    ports:
      - "8484:8484"
    links:
      - rabbitmq
      - mongodb
    environment:
      SPRING_DATA_MONGODB_URI: mongodb://mongodb/ethereum
      RABBIT_HOST: rabbitmq
      SPRING_RABBITMQ_HOST: rabbitmq
  nfb:
    image: java:openjdk-8u91-jdk
    depends_on:
      - mongodb
      - broker
    working_dir: /app
    volumes:
      - ./coordinator/target/:/app
    command: java -jar /app/coordinator-0.0.1.jar
    ports:
      - "8383:8383"
    links:
      - mongodb
    environment:
      SPRING_DATA_MONGODB_URI: mongodb://mongodb/ethereum
Is there a way to expose some hosts or ports? (I got this kind of exception before when dealing with Mongo and RabbitMQ, and I resolved it by using links and environment variables.)
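One thing to keep in mind: inside a container, localhost refers to the container itself, not to the VM or the host machine, which is typically why such connections are refused. If the target lives outside the Compose network, extra_hosts can map a hostname to its IP inside the container. A sketch (vbox-host and 192.168.99.1 are placeholders; substitute your VM host's actual address):

broker:
  image: java:openjdk-8u91-jdk
  extra_hosts:
    # hypothetical mapping: makes "vbox-host" resolve to the VM
    # host's IP inside this container
    - "vbox-host:192.168.99.1"

The application would then connect to vbox-host:22000 instead of localhost:22000.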
