I have created a local cluster on my machine to try deploying a Storm topology, but I have run into a strange problem. When I execute the topology in local mode everything works fine, but when I execute it in remote mode it does not seem to work, as you can see in the screenshot below:
At this point I can't figure out where the problem is. I have also checked that the Kafka producer works (this topology uses a Kafka spout), and it works fine. Thanks a lot for your help.
This is the storm.yml
storm.log4j2.conf.dir: "log4j2"
storm.zookeeper.servers:
  - "127.0.0.1"
nimbus.seeds: ["127.0.0.1"]
supervisor.slots.ports:
  - 6700
This is the stack.yml file
version: '3'
services:
  nimbus:
    image: storm:2.1.0
    container_name: nimbus
    command: storm nimbus -c storm.zookeeper.servers="[\"zookeeper\"]" -c nimbus.seeds="[\"nimbus\"]"
    depends_on:
      - zookeeper
    links:
      - zookeeper
    restart: always
    ports:
      - "6627:6627"
      - "8000:8000"
    volumes:
      - ./TopologyJar:/TopologyJar
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - "2181:2181"
  # storm-cli:
  #   image: storm:2.1.0
  #   container_name: storm-cli
  #   depends_on:
  #     - zookeeper
  #     - nimbus
  #   links:
  #     - zookeeper
  #     - nimbus
  #
  #   # The following two commands
  #   # are used for attaching an I/O terminal, i.e. a shell
  #   # stdin_open: true
  #   # tty: true
  storm-ui:
    image: storm:2.1.0
    container_name: storm-ui
    command: storm ui -c nimbus.seeds="[\"nimbus\"]" -c storm.zookeeper.servers="[\"zookeeper\"]"
    depends_on:
      - nimbus
      - zookeeper
    links:
      - nimbus
      - zookeeper
    restart: always
    ports:
      - "8080:8080"
  supervisor:
    image: storm:2.1.0
    command: storm supervisor -c nimbus.seeds="[\"nimbus\"]" -c storm.zookeeper.servers="[\"zookeeper\"]"
    container_name: supervisor
    depends_on:
      - nimbus
      - zookeeper
      - redis
    links:
      - nimbus
      - zookeeper
      - redis
    restart: always
  redis:
    image: redis
    container_name: redis
    restart: always
    ports:
      - "6379:6379"
  # web UI for managing redis
  redis-commander:
    container_name: redis-commander
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
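Since the stack mounts ./TopologyJar into the nimbus container, one way to submit a topology to this remote cluster is from inside that container. A sketch only; the jar file name, main class, and topology name below are placeholders, not taken from the original setup:

```sh
# Submit the topology from inside the nimbus container,
# where the mounted /TopologyJar directory is visible.
docker exec -it nimbus \
  storm jar /TopologyJar/your-topology.jar \
  com.example.YourTopology your-topology-name
```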
I have a few microservices running on Spring Boot, each with its own Postgres database. When I run docker-compose up on my Windows development machine, everything works correctly. But when I deploy to a Linux VPS host, Spring cannot connect to the database. Launched individually (without the environment variables), the images do run on the VPS.
Here is my docker-compose.yml
version: "3.9"
services:
  gateway:
    image: gateway:latest
    container_name: gateway
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      - DATASOURCE_URL=jdbc:postgresql://postgres:5432/gateway_db
      - DATASOURCE_USERNAME=postgres
      - DATASOURCE_PASSWORD=postgres
      - JPA_HIBERNATE_DDL_AUTO=update
    restart: unless-stopped
    networks:
      - postgres
  postgres:
    image: postgres:14.4
    container_name: gateway_db
    environment:
      POSTGRES_DB: "gateway_db"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
    volumes:
      - pgdata:/var/lib/postgresql/data
    expose:
      - 5432
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U postgres -d postgres" ]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    restart: unless-stopped
    networks:
      - postgres
volumes:
  pgdata:
networks:
  postgres:
    driver: bridge
And this is my application.properties
spring.datasource.driver-class-name=org.postgresql.Driver
spring.sql.init.mode=always
spring.sql.init.platform=postgres
spring.datasource.url=${DATASOURCE_URL}
spring.datasource.username=${DATASOURCE_USERNAME}
spring.datasource.password=${DATASOURCE_PASSWORD}
spring.jpa.database=postgresql
spring.jpa.hibernate.ddl-auto=${JPA_HIBERNATE_DDL_AUTO}
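One way to make this setup more robust against unset variables is Spring's placeholder default syntax, `${VAR:default}`. A sketch only; the fallback values below are assumed from the Compose file above, not from the original properties:

```properties
# Fall back to the in-network defaults when the environment
# variables are not set (e.g. when running outside Compose).
spring.datasource.url=${DATASOURCE_URL:jdbc:postgresql://postgres:5432/gateway_db}
spring.datasource.username=${DATASOURCE_USERNAME:postgres}
spring.datasource.password=${DATASOURCE_PASSWORD:postgres}
spring.jpa.hibernate.ddl-auto=${JPA_HIBERNATE_DDL_AUTO:update}
```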
Here are the Spring logs from the Docker container on the host machine:
I would be very grateful for any help!
I have been using the Kafka image provided by Spotify to run Kafka locally. I'm currently trying to use it together with the cp-kafka-rest and schema-registry images.
I need help resolving this issue:
ERROR (Log Group: kafka_rest_1_609fd108dcf4)
[main-SendThread(zookeeper:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server zookeeper:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Net.java:101)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
        at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
        at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
        at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
Docker Compose
version: '3.5'
services:
  kafka:
    image: 'spotify/kafka'
    hostname: kafka
    environment:
      - ADVERTISED_HOST=kafka
      - ADVERTISED_PORT=9092
    ports:
      - "9092:9092"
      - "2181:2181"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - one
  kafka_rest:
    image: 'confluentinc/cp-kafka-rest:5.1.0'
    hostname: kafka_rest
    environment:
      - KAFKA_REST_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_REST_LISTENERS=http://0.0.0.0:8082
      - KAFKA_REST_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - KAFKA_REST_HOST_NAME=kafka-rest
    networks:
      - one
  schema_registry:
    hostname: schema-registry
    image: 'confluentinc/cp-schema-registry:5.1.0'
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
    networks:
      - one
networks:
  one:
    name: rest_network
You have no zookeeper container: it is actually your "kafka" service image that includes both the Zookeeper and Kafka servers, so zookeeper:2181 should rather be kafka:2181.
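Applied to the Compose file above, the minimal change is just the Zookeeper address in the kafka_rest service; everything else is left as posted (a sketch, not a tested configuration):

```yaml
  kafka_rest:
    image: 'confluentinc/cp-kafka-rest:5.1.0'
    hostname: kafka_rest
    environment:
      # "kafka" resolves to the spotify/kafka container,
      # which also runs Zookeeper on port 2181
      - KAFKA_REST_ZOOKEEPER_CONNECT=kafka:2181
      - KAFKA_REST_LISTENERS=http://0.0.0.0:8082
      - KAFKA_REST_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - KAFKA_REST_HOST_NAME=kafka-rest
    networks:
      - one
```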
However, I would recommend not using the spotify images, as they are significantly outdated
You can find a fully functional Docker Compose example of the entire Confluent 5.1.0 Platform on GitHub.
Here is the relevant configuration you are looking for:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.1.0
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:9092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
You need to add a zookeeper service to your Docker Compose file:
  zookeeper:
    image: confluent/zookeeper
    ports:
      - "2181:2181"
    environment:
      zk_id: "1"
    network_mode: "host"
I created a microservice environment, more precisely five services, which are connected to each other and access the same database (PostgreSQL). After development, I started creating the Docker images for the services. All images have been created; however, I cannot put PostgreSQL in the Docker environment, since it is already running on the machine on localhost and other applications depend on it, so I cannot migrate it to Docker. I would like to know whether it is possible for my applications to access a database that is outside the Docker environment.
Below, my docker-compose
version: '2'
services:
  server:
    image: microservices/server:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    expose:
      - "8080"
    ports:
      - "8080:8080"
    networks:
      - microservices
  security-server:
    image: microservices/security-server:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
    expose:
      - "8081"
    ports:
      - "8081:8081"
    networks:
      - microservices
    restart: "always"
  api-gateway:
    image: microservices/api-gateway:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
    expose:
      - "9999"
    ports:
      - "9999:9999"
    networks:
      - microservices
    restart: "always"
  imovel:
    image: microservices/imovel:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "8082"
    ports:
      - "8082:8082"
    networks:
      - microservices
    restart: "always"
  imovel2:
    image: microservices/imovel:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "9098"
    ports:
      - "9098:9098"
    networks:
      - microservices
    restart: "always"
  cliente:
    image: microservices/cliente:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "8083"
    ports:
      - "8083:8083"
    networks:
      - microservices
    restart: "always"
networks:
  microservices:
    driver: bridge
In the question linked, his problem was that Postgres wasn't accepting connections from outside. My problem starts earlier than that: where should I begin configuring the connection?
You can specify extra_hosts in the compose format and pass in your host's IP address as an environment variable.
extra_hosts:
  - "my_host:${HOST_IP}"
https://docs.docker.com/compose/compose-file/#extra_hosts
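Compose substitutes `${HOST_IP}` from the shell environment at startup, so one way to supply it is on the command line. A sketch; the IP address below is a placeholder for your actual host address:

```sh
# Pass the host's IP into the Compose file's ${HOST_IP} placeholder
HOST_IP=192.168.1.10 docker-compose up -d
```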
I have a simple Elasticsearch cluster, and I've seen in the documentation that the master node has to access the Elasticsearch data volume.
But the fact is that if two nodes use the same data volume, this error occurs: Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/escluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
I've tried many configurations but can't figure out how to share the volume between my different nodes.
Docker-compose
version: '2'
services:
  esmaster:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    volumes_from:
      - esdata
    environment:
      cluster.name: "escluster"
      node.data: "false"
      http.enabled: "false"
      node.master: "true"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esmaster
    cap_add:
      - IPC_LOCK
    networks:
      - elk
  esclient:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200"
      - "9300"
    environment:
      cluster.name: "escluster"
      node.data: "false"
      http.enabled: "true"
      node.master: "false"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esclient
    cap_add:
      - IPC_LOCK
    networks:
      - elk
    depends_on:
      - esmaster
  esdata:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      cluster.name: "escluster"
      node.data: "true"
      http.enabled: "false"
      node.master: "false"
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      discovery.zen.ping.unicast.hosts: esdata
    cap_add:
      - IPC_LOCK
    networks:
      - elk
  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx6g -Xms6g"
    networks:
      - elk
    depends_on:
      - esmaster
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - esmaster
networks:
  elk:
    driver: bridge
You don't want to share your data directory. Each Elasticsearch instance should use its own data directory.
You can find a working docker-compose file in the production-mode section of https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode: two nodes, each using its own Docker named volume.
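The per-node volume idea can be sketched in Compose terms like this (an illustrative fragment, assuming the same elasticsearch/ build context as above, not a complete cluster configuration):

```yaml
# Each node mounts its own named volume; no volumes_from, no shared data dir.
services:
  es01:
    build: elasticsearch/
    volumes:
      - esdata01:/usr/share/elasticsearch/data
  es02:
    build: elasticsearch/
    volumes:
      - esdata02:/usr/share/elasticsearch/data
volumes:
  esdata01:
  esdata02:
```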
I'm using Docker Compose in my project. When I try, from inside the project, to connect to a host in my VBox ("localhost:22000"), I get a connection-refused exception.
version: '2'
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    command: mongod --smallfiles
  rabbitmq:
    image: rabbitmq:3.5.3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  broker:
    image: java:openjdk-8u91-jdk
    depends_on:
      - mongodb
      - rabbitmq
    working_dir: /app
    volumes:
      - ./blockchain-rabbitmq/target/:/app
    command: java -jar /app/blockchain-rabbitmq-0.0.1.jar
    ports:
      - "8484:8484"
    links:
      - rabbitmq
      - mongodb
    environment:
      SPRING_DATA_MONGODB_URI: mongodb://mongodb/ethereum
      RABBIT_HOST: rabbitmq
      SPRING_RABBITMQ_HOST: rabbitmq
  nfb:
    image: java:openjdk-8u91-jdk
    depends_on:
      - mongodb
      - broker
    working_dir: /app
    volumes:
      - ./coordinator/target/:/app
    command: java -jar /app/coordinator-0.0.1.jar
    ports:
      - "8383:8383"
    links:
      - mongodb
    environment:
      SPRING_DATA_MONGODB_URI: mongodb://mongodb/ethereum
Is there a way to expose some hosts or ports? (I got this kind of exception before when dealing with Mongo and RabbitMQ, and I resolved it by using links and environment variables.)