Zookeeper connection failing cp-rest-proxy with spotify kafka image - java

I have been using the Kafka image provided by Spotify to run Kafka locally, and I'm currently trying to use it with the cp-kafka-rest and schema-registry images.
I need help resolving this issue:
ERROR (Log Group: kafka_rest_1_609fd108dcf4)
[main-SendThread(zookeeper:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server zookeeper:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
Docker Compose
version: '3.5'
services:
  kafka:
    image: 'spotify/kafka'
    hostname: kafka
    environment:
      - ADVERTISED_HOST=kafka
      - ADVERTISED_PORT=9092
    ports:
      - "9092:9092"
      - "2181:2181"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - one
  kafka_rest:
    image: 'confluentinc/cp-kafka-rest:5.1.0'
    hostname: kafka_rest
    environment:
      - KAFKA_REST_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_REST_LISTENERS=http://0.0.0.0:8082
      - KAFKA_REST_SCHEMA_REGISTRY_URL=http:schema-registry:8081
      - KAFKA_REST_HOST_NAME=kafka-rest
    networks:
      - one
  schema_registry:
    hostname: schema-registry
    image: 'confluentinc/cp-schema-registry:5.1.0'
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
    networks:
      - one
networks:
  one:
    name: rest_network

You have no zookeeper service - it is actually your "kafka" service image that runs both a Zookeeper and a Kafka server, so zookeeper:2181 should instead be kafka:2181.
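For example, a minimal sketch of the fix, keeping the rest of your file as-is (only the ZooKeeper address changes):

kafka_rest:
  image: 'confluentinc/cp-kafka-rest:5.1.0'
  environment:
    - KAFKA_REST_ZOOKEEPER_CONNECT=kafka:2181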
However, I would recommend not using the spotify images, as they are significantly outdated
You can find a fully functional Docker Compose example of the entire Confluent 5.1.0 Platform on GitHub.
Here is the relevant configuration you are looking for:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.1.0
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:9092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'

You need to add a zookeeper service to your docker-compose file:
zookeeper:
  image: confluent/zookeeper
  ports:
    - "2181:2181"
  environment:
    zk_id: "1"
  network_mode: "host"
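With that service in place, the question's KAFKA_REST_ZOOKEEPER_CONNECT=zookeeper:2181 resolves as written. One caveat: network_mode: "host" takes the container off the compose network, so service-name lookup only works if you drop that line and attach zookeeper to the same network as kafka_rest, e.g. (a sketch using the question's network):

zookeeper:
  image: confluent/zookeeper
  ports:
    - "2181:2181"
  environment:
    zk_id: "1"
  networks:
    - one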

Related

Docker-compose: There is not are not services registered in Eureka Server

I am trying to run all microservices from a docker-compose file, and I am facing issues when running the containers because of a communication problem between the Eureka discovery server, the api-gateway, and the other services. Is there a way to make the discovery-server (Eureka) communicate with the other services? Many thanks in advance.
version: "3"
services:
discovery-server:
image: renosbardis/discovery-service:latest
container_name: discovery-server
ports:
- "8761:8761"
environment:
- SPRING_PROFILES_ACTIVE=docker
networks:
- test-network
api-gateway:
image: renosbardis/api-gateway:latest
container_name: api-gateway
ports:
- "8888:8888"
environment:
- SPRING_PROFILES_ACTIVE=docker
- LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_SECURITY= TRACE
depends_on:
- discovery-server
networks:
- test-network
accounts-service:
image: renosbardis/accounts-service:latest
container_name: accounts-service
ports:
- "8081:8081"
environment:
- SPRING_PROFILES_ACTIVE=docker
depends_on:
- discovery-server
- api-gateway
networks:
- test-network
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: rabbitmq
ports:
- "5672:5672"
- "15672:15672"
environment:
AMQP_URL: 'amqp://rabbitmq?connection_attempts=5&retry_delay=5'
RABBITMQ_DEFAULT_USER: "guest"
RABBITMQ_DEFAULT_PASS: "guest"
depends_on:
- discovery-server
- api-gateway
networks:
- test-network
customers-service:
image: renosbardis/customer-service:latest
container_name: customers-service
ports:
- "8083:8083"
environment:
- SPRING_PROFILES_ACTIVE=docker
depends_on:
- discovery-server
- api-gateway
networks:
- test-network
transactions-service:
image: renosbardis/transaction-service:latest
container_name: transactions-service
ports:
- "8084:8084"
environment:
- SPRING_PROFILES_ACTIVE=docker
depends_on:
- discovery-server
- api-gateway
networks:
- test-network
notification-service:
image: renosbardis/notification-service:latest
container_name: notification-service
ports:
- "8085:8085"
environment:
- SPRING_PROFILES_ACTIVE=docker
depends_on:
- discovery-server
- api-gateway
networks:
- test-network
networks:
test-network:
driver: bridge
There is nothing wrong with the docker-compose configuration file. The first thing you need to check is whether the api-gateway can reach the discovery-server.
You can go into the containers and use ping or telnet to test whether the network between the containers is connected:
docker exec -it <container-name> /bin/bash
The startup log shows that your api-gateway configuration is incorrect - it should point at discovery-server:8761/eureka, not localhost:
api-gateway | 2022-12-28 02:25:27.549 INFO 1 --- [nfoReplicator-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused) stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
edit 1
I pulled your api-gateway image, started it, and executed:
docker cp api-gateway:/app/resources/application.properties ./
Here is the configuration that went wrong:
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
You just have to change it to
eureka.client.serviceUrl.defaultZone=http://discovery-server:8761/eureka
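If you would rather not rebuild the image, Spring Boot's relaxed binding also lets you override the same property from the compose file with an environment variable (a sketch against the service names in your file):

api-gateway:
  environment:
    - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://discovery-server:8761/eureka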
edit 2
Add the following configuration to the api-gateway configuration file to see if it works:
eureka.client.fetch-registry=true
eureka.client.register-with-eureka=true
edit 3
I find that most of your problems are configuration errors baked into the images.
Take the customer-service image as an example: its port is set to 0, its hostname is localhost, and its defaultZone is localhost:8761/eureka.
This is the customer-service configuration file after my modifications:
# Server port
server.port = 8083
eureka.instance.hostname = customer-service
spring.application.name = customer-service
# Memory Database for development Environment
spring.h2.console.enabled=true
spring.h2.console.path=/h2-console
spring.datasource.url=jdbc:h2:mem:customerdb;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.H2Dialect
eureka.client.service-url.defaultZone=http://discovery-server:8761/eureka
eureka.client.fetch-registry= true
eureka.client.register-with-eureka= true
When I visit 0.0.0.0:8888/api/v1/customer/2, the returned JSON shows everything working properly:
{
  "customerID": 2,
  "name": "John",
  "surname": "Doe",
  "balance": 50
}

Create directory with docker compose for elastic search and map it to the one on server

I have the following docker compose file for starting elastic search with kibana:
version: "3.5"
services:
es-master:
container_name: es-master
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
hostname: es-master
restart: always
ports:
- 9201:9201
- 9301:9301
environment:
- ELASTICSEARCH_HOSTS=http://es-master:9201
- "ES_JAVA_OPTS=-Xms4g -Xmx4g"
- bootstrap.memory_lock=true
ulimits:
memlock:
soft: -1
hard: -1
networks:
- es-net
volumes:
- /data/es/config/config.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- /data/es/data:/usr/share/elasticsearch/data
- /data/es/logs:/usr/share/elasticsearch/logs
- /data/es/plugins:/usr/share/elasticsearch/plugins
- /data/es/kibana/data:/usr/share/kibana/data
kibana:
container_name: kibana-container
image: docker.elastic.co/kibana/kibana:7.8.1
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS=http://es-master:9201
- path.repo=/nfs_es/back_up
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /data/es/kibana/data:/usr/share/kibana/data
depends_on:
- es-master
networks:
- es-net
networks:
es-net:
name: es-net
Basically, I need to create a directory /nfs_es/back_up/ in the container before Elasticsearch starts up and map it to the directory /nfs_es/back_up/ that I have already created on the server. Can this be achieved with Docker Compose?
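A bind mount should cover this: Docker creates the mount point inside the container when it starts, so a volumes entry on es-master is usually enough. A sketch, assuming /nfs_es/back_up already exists on the host and that path.repo is meant for the Elasticsearch service rather than Kibana:

es-master:
  environment:
    - path.repo=/nfs_es/back_up
  volumes:
    - /nfs_es/back_up:/nfs_es/back_up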

Apache Storm topology deployment problems

I have created a local cluster on my machine to try deploying a Storm topology, but I have a strange problem. When I execute the topology in local mode everything works fine, but when I execute it in remote mode it does not seem to work, as you can see in the screenshot below.
At this point I can't figure out where the problem is. I have also checked that the Kafka producer works (this topology uses a Kafka spout), and it is fine. Thanks a lot for your help.
This is the storm.yml
storm.log4j2.conf.dir: "log4j2"
storm.zookeeper.servers:
  - "127.0.0.1"
nimbus.seeds: ["127.0.0.1"]
supervisor.slots.ports:
  - 6700
This is the stack.yml file
version: '3'
services:
  nimbus:
    image: storm:2.1.0
    container_name: nimbus
    command: storm nimbus -c storm.zookeeper.servers="[\"zookeeper\"]" -c nimbus.seeds="[\"nimbus\"]"
    depends_on:
      - zookeeper
    links:
      - zookeeper
    restart: always
    ports:
      - "6627:6627"
      - "8000:8000"
    volumes:
      - ./TopologyJar:/TopologyJar
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - "2181:2181"
  # storm-cli:
  #   image: storm:2.1.0
  #   container_name: storm-cli
  #   depends_on:
  #     - zookeeper
  #     - nimbus
  #   links:
  #     - zookeeper
  #     - nimbus
  #
  #   # The following two commands are used
  #   # for attaching an I/O terminal (a shell)
  #   # stdin_open: true
  #   # tty: true
  storm-ui:
    image: storm:2.1.0
    container_name: storm-ui
    command: storm ui -c nimbus.seeds="[\"nimbus\"]" -c storm.zookeeper.servers="[\"zookeeper\"]"
    depends_on:
      - nimbus
      - zookeeper
    links:
      - nimbus
      - zookeeper
    restart: always
    ports:
      - "8080:8080"
  supervisor:
    image: storm:2.1.0
    command: storm supervisor -c nimbus.seeds="[\"nimbus\"]" -c storm.zookeeper.servers="[\"zookeeper\"]"
    container_name: supervisor
    depends_on:
      - nimbus
      - zookeeper
      - redis
    links:
      - nimbus
      - zookeeper
      - redis
    restart: always
  redis:
    image: redis
    container_name: redis
    restart: always
    ports:
      - "6379:6379"
  # web UI for managing redis
  redis-commander:
    container_name: redis-commander
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis

Access database that is outside the docker environment

I created a microservice environment - more precisely, five services that are connected to each other and access the same PostgreSQL database. After development, I started to create the Docker images for the services. All the images have been created; however, I cannot put PostgreSQL into the Docker environment, since it is already running on the machine on localhost and other applications depend on it, so I cannot migrate it. I would like to know if it is possible for my applications to access the database that is outside the Docker environment.
Below is my docker-compose:
version: '2'
services:
  server:
    image: microservices/server:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    expose:
      - "8080"
    ports:
      - "8080:8080"
    networks:
      - microservices
  security-server:
    image: microservices/security-server:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
    expose:
      - "8081"
    ports:
      - "8081:8081"
    networks:
      - microservices
    restart: "always"
  api-gateway:
    image: microservices/api-gateway:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
    expose:
      - "9999"
    ports:
      - "9999:9999"
    networks:
      - microservices
    restart: "always"
  imovel:
    image: microservices/imovel:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "8082"
    ports:
      - "8082:8082"
    networks:
      - microservices
    restart: "always"
  imovel2:
    image: microservices/imovel:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "9098"
    ports:
      - "9098:9098"
    networks:
      - microservices
    restart: "always"
  cliente:
    image: microservices/cliente:latest
    mem_limit: 1073741824 # RAM 1GB
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - server
      - security-server
      - api-gateway
    expose:
      - "8083"
    ports:
      - "8083:8083"
    networks:
      - microservices
    restart: "always"
networks:
  microservices:
    driver: bridge
In the linked question, the problem was that Postgres wasn't accepting connections from outside. My problem is more basic: where should I start configuring the connection?
You can specify extra_hosts in the compose format and pass in your host's IP address as an environment variable.
extra_hosts:
  - "my_host:${HOST_IP}"
https://docs.docker.com/compose/compose-file/#extra_hosts
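In context, that could look like the following for one of the services (a sketch: my_host, port 5432, and the database name mydb are placeholder assumptions, and SPRING_DATASOURCE_URL relies on Spring Boot's relaxed binding of spring.datasource.url):

server:
  image: microservices/server:latest
  extra_hosts:
    - "my_host:${HOST_IP}"
  environment:
    - SPRING_PROFILES_ACTIVE=docker
    - SPRING_DATASOURCE_URL=jdbc:postgresql://my_host:5432/mydb

Export the host's LAN address before starting, e.g. HOST_IP=192.168.1.10 docker-compose up.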

Docker-compose up : Connection refused

I'm using Docker Compose in my project. When I try, from inside a container, to connect to a host in my VirtualBox at "localhost:22000", I get a connection refused exception.
version: '2'
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    command: mongod --smallfiles
  rabbitmq:
    image: rabbitmq:3.5.3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  broker:
    image: java:openjdk-8u91-jdk
    depends_on:
      - mongodb
      - rabbitmq
    working_dir: /app
    volumes:
      - ./blockchain-rabbitmq/target/:/app
    command: java -jar /app/blockchain-rabbitmq-0.0.1.jar
    ports:
      - "8484:8484"
    links:
      - rabbitmq
      - mongodb
    environment:
      SPRING_DATA_MONGODB_URI: mongodb://mongodb/ethereum
      RABBIT_HOST: rabbitmq
      SPRING_RABBITMQ_HOST: rabbitmq
  nfb:
    image: java:openjdk-8u91-jdk
    depends_on:
      - mongodb
      - broker
    working_dir: /app
    volumes:
      - ./coordinator/target/:/app
    command: java -jar /app/coordinator-0.0.1.jar
    ports:
      - "8383:8383"
    links:
      - mongodb
    environment:
      SPRING_DATA_MONGODB_URI: mongodb://mongodb/ethereum
Is there a way to expose some hosts or ports? (I got this kind of exception before when dealing with Mongo and RabbitMQ, and I resolved it by using links and env vars.)
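One approach, mirroring the extra_hosts answer above: inside a container, localhost refers to the container itself, so the VirtualBox host has to be addressed by an explicit IP or alias. A sketch (my_vbox_host and HOST_IP are placeholder assumptions, not values from the question):

broker:
  extra_hosts:
    - "my_vbox_host:${HOST_IP}"

The application would then connect to my_vbox_host:22000 instead of localhost:22000.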
