ERROR: service 'environment' must be a mapping not an array - java

version: "3.1"
services:
elasticsearch:
image: elasticsearch:7.4.2
ports:
- "9200:9200"
- "9300:9300"
environment:
- discovery.type=single-node
I try to bring up this docker-compose.yml file, but it returns this error:
ERROR: In file 'C:\Users\ozan8\IdeaProjects\spring_examples\spring_elasticsearch\src\main\resources\docker-compose.yml', service 'environment' must be a mapping not an array.

You need to make sure your yml file is valid and in the right location, as the comment said. In this case, environment has to be indented under the elasticsearch service instead of sitting at the same level as it:
version: "3.1"
services:
elasticsearch:
image:
elasticsearch:7.4.2
ports:
- "9200:9200"
- "9300:9300"
environment:
- discovery.type=single-node
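One way to confirm the file is valid before running it is docker-compose config, which parses the file and either prints the fully resolved configuration or the first error it finds:
docker-compose -f docker-compose.yml config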

Related

docker-compose service names

I just started using docker-compose and I am enjoying it.
I recently created my first docker-compose file, which simply connects SonarQube and Postgres. Inside my docker-compose.yml file, whenever I define the database service with any name other than "db", my docker-compose does not run successfully.
This is the error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jdk.internal.loader.ClassLoaders$AppClassLoader#277050dc-org.sonar.db.DefaultDatabase': Initialization of bean failed; nested exception is java.lang.IllegalStateException: Fail to connect to database
This is the code in my docker compose file:
version: "3"
services:
sonarqube:
image: sonarqube
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
Is there anything special about the name "db"? Are there any conventions/rules for defining services in docker-compose?
Thank you.
You also have to change the service name inside SonarQube's connection string.
Here, replace the string db with whatever you renamed the postgres service to (they have to match):
environment:
  - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
  #                                   ^ here
This is needed because docker-compose registers hostnames for the stack based on the service names, so the services are always dynamically reachable by name.
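For example, if the postgres service were renamed to sonar-db (a hypothetical name), the connection string would have to use that name as well:
services:
  sonarqube:
    environment:
      - sonar.jdbc.url=jdbc:postgresql://sonar-db:5432/sonar  # must match the service name below
  sonar-db:
    image: postgres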
Update your docker-compose file with the depends_on property to let Docker know that db should be started first:
version: "3"
services:
sonarqube:
image: sonarqube
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
depends_on:
- db
db:
image: postgres
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:

Create a directory with docker-compose for Elasticsearch and map it to the one on the server

I have the following docker-compose file for starting Elasticsearch with Kibana:
version: "3.5"
services:
es-master:
container_name: es-master
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
hostname: es-master
restart: always
ports:
- 9201:9201
- 9301:9301
environment:
- ELASTICSEARCH_HOSTS=http://es-master:9201
- "ES_JAVA_OPTS=-Xms4g -Xmx4g"
- bootstrap.memory_lock=true
ulimits:
memlock:
soft: -1
hard: -1
networks:
- es-net
volumes:
- /data/es/config/config.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- /data/es/data:/usr/share/elasticsearch/data
- /data/es/logs:/usr/share/elasticsearch/logs
- /data/es/plugins:/usr/share/elasticsearch/plugins
- /data/es/kibana/data:/usr/share/kibana/data
kibana:
container_name: kibana-container
image: docker.elastic.co/kibana/kibana:7.8.1
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS=http://es-master:9201
- path.repo=/nfs_es/back_up
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /data/es/kibana/data:/usr/share/kibana/data
depends_on:
- es-master
networks:
- es-net
networks:
es-net:
name: es-net
Basically I need to create a directory /nfs_es/back_up/ in the container before Elasticsearch starts up and map it to the directory /nfs_es/back_up/ which I have already created on the server. Can this be achieved with docker-compose?
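As far as I know, a bind mount in the es-master service's volumes section is the usual way to map a host directory into a container, and Docker creates the target path inside the container if it does not exist. A minimal sketch, assuming /nfs_es/back_up already exists on the host:
  es-master:
    # ... existing configuration ...
    volumes:
      - /nfs_es/back_up:/nfs_es/back_up   # host path : container path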

(root) Additional property monitor is not allowed - Docker compose

I wanted to run this Java app through Docker:
https://github.com/ByteHamster/PSE
The docker-compose.yml file looks like:
simulation:
  build: .
  dockerfile: simulationDockerfile
  environment:
    - DISPLAY
  expose:
    - 12868
    - 12869
    - 12870
    - 12871
  volumes:
    - /tmp/.X11-unix:/tmp/.X11-unix
monitor:
  build: .
  dockerfile: monitorDockerfile
  environment:
    - DISPLAY
  volumes:
    - /tmp/.X11-unix:/tmp/.X11-unix
  links:
    - simulation
When I run docker-compose build, I get this error message:
(root) Additional property monitor is not allowed
What is the valid yml to make this program run?
Thanks, guys.
version: '2'
services:
  simulation:
    build:
      context: .
      dockerfile: simulationDockerfile
    environment:
      - DISPLAY
    expose:
      - 12868
      - 12869
      - 12870
      - 12871
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
  monitor:
    build:
      context: .
      dockerfile: monitorDockerfile
    environment:
      - DISPLAY
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
    links:
      - simulation
I had to make the changes above to make it work.
Thanks @David Maze

How to connect to a specific local MongoDB instance in a Spring Boot Dockerised application?

I have developed a simple Spring Boot application that performs CRUD operations using MongoDB as the database. I have deployed the application in Docker, but I get null values when doing a GET request for any items stored in MongoDB. Some of the files required for Docker are provided below:
Dockerfile:
VOLUME /tmp
ADD build/libs/Spring-Boot-MongoDB-0.0.1-SNAPSHOT.jar SpringMongoApp.jar
ENTRYPOINT ["java", "-Dspring.data.mongodb.uri=mongodb://mongo:27018/otp","-jar","/SpringMongoApp.jar"]
docker-compose.yml:
version: "3"
services:
api-database:
image: mongo:3.2.4
container_name: "springboot-mongo-app"
ports:
- "27018:27017"
environment:
MONGO_INITDB_ROOT_DATABASE: otp
networks:
- test-network
api:
image: springboot-api
ports:
- "8080:8080"
depends_on:
- api-database
networks:
- test-network
networks:
test-network:
driver: bridge
application.properties:
spring.data.mongodb.host=api-database
When I checked the MongoDB Docker container using its container ID, it automatically connects to the test database, not to the otp database which I specified in the environment section of the docker-compose.yml file.
The problem is that your Docker container is not persistent: the database will be erased and re-created each time you run the container.
If you add a volume to persist /data/db, you will get the desired result.
I assume you have a data/db directory in the same place where you have stored docker-compose.yml. You may set up a custom directory instead (e.g. /tmp/data/db).
Try this one:
docker-compose.yml:
version: "3"
services:
api-database:
image: mongo:3.2.4
container_name: "springboot-mongo-app"
ports:
- "27018:27017"
volumes:
- "./data/db:/data/db"
environment:
MONGO_INITDB_ROOT_DATABASE: otp
networks:
- test-network
api:
image: springboot-api
ports:
- "8080:8080"
depends_on:
- api-database
networks:
- test-network
networks:
test-network:
driver: bridge
Note: the first time, it will be an empty database. If you create collections, insert records, etc., they will be saved in ./data/db.
I think instead of MONGO_INITDB_ROOT_DATABASE you should use MONGO_INITDB_DATABASE.
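With that change, the environment section of the api-database service would read (everything else unchanged):
    environment:
      MONGO_INITDB_DATABASE: otp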

Data volume on Elasticsearch cluster

I have a simple Elasticsearch cluster and I've seen in the documentation that the master node has to have access to the Elasticsearch data volume.
But the fact is that if two nodes use the same data volume, this error occurs: Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/escluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
I've tried many configurations but can't find out how to share the volume between my different nodes.
Docker-compose
version: '2'
services:
  esmaster:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    volumes_from:
      - esdata
    environment:
      cluster.name: "escluster"
      node.data: "false"
      http.enabled: "false"
      node.master: "true"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esmaster
    cap_add:
      - IPC_LOCK
    networks:
      - elk
  esclient:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200"
      - "9300"
    environment:
      cluster.name: "escluster"
      node.data: "false"
      http.enabled: "true"
      node.master: "false"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esclient
    cap_add:
      - IPC_LOCK
    networks:
      - elk
    depends_on:
      - esmaster
  esdata:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      cluster.name: "escluster"
      node.data: "true"
      http.enabled: "false"
      node.master: "false"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esdata
    cap_add:
      - IPC_LOCK
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx6g -Xms6g"
    networks:
      - elk
    depends_on:
      - esmaster
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - esmaster
networks:
  elk:
    driver: bridge
You don't want to share your data directory. Each Elasticsearch instance should use its own data directory.
You can find a working docker-compose file in the Docker section of the Elasticsearch reference at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode: two nodes, each using its own Docker named volume.
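A minimal sketch of that pattern, reusing this project's build: directive with illustrative service and volume names (the cluster and discovery settings from the linked page are omitted for brevity):
version: '2'
services:
  es01:
    build: elasticsearch/
    volumes:
      - esdata01:/usr/share/elasticsearch/data   # each node gets its own named volume
  es02:
    build: elasticsearch/
    volumes:
      - esdata02:/usr/share/elasticsearch/data
volumes:
  esdata01:
  esdata02: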
