Kafka Broker Application Disconnecting with EOFException - java

I'm running an application that connects to a Kafka broker using the confluentinc/cp-server:7.2.1 Docker image.
When I try to run a larger application, both applications get disconnected and I need to restart the broker.
The application logs this:
WARN 16756 --- [RMADE-738-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer01] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
INFO 16756 --- [RMFDE-603-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer01] Node 1 disconnected.
And the broker (Docker image) logs this:
broker | INFO Skipping goal violation detection due to previous new broker change (com.linkedin.kafka.cruisecontrol.detector.GoalViolationDetector)
[2023] DEBUG [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /172.18.0.1 (channelId=172.18.0.5:9092-172.18.0.1:51368-43) disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:797)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:700)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:783)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:635)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:519)
    at kafka.network.Processor.poll(SocketServer.scala:1463)
    at kafka.network.Processor.run(SocketServer.scala:1307)
    at java.base/java.lang.Thread.run(Thread.java:829)
    at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:64)
[2023] DEBUG [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /172.18.0.1 (channelId=172.18.0.5:9092-172.18.0.1:50998-29) disconnected (org.apache.kafka.common.network.Selector)
java.io.IOException: Broken pipe
    at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:50)
    at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:462)
    at org.apache.kafka.common.network.PlaintextTransportLayer.write(PlaintextTransportLayer.java:143)
    at org.apache.kafka.common.network.PlaintextTransportLayer.write(PlaintextTransportLayer.java:159)
    at org.apache.kafka.common.network.ByteBufferSend.writeTo(ByteBufferSend.java:62)
    at org.apache.kafka.common.network.NetworkSend.writeTo(NetworkSend.java:41)
    at org.apache.kafka.common.network.KafkaChannel.write(KafkaChannel.java:728)
    at org.apache.kafka.common.network.Selector.write(Selector.java:753)
    at org.apache.kafka.common.network.Selector.attemptWrite(Selector.java:743)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:652)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:519)
    at kafka.network.Processor.poll(SocketServer.scala:1463)
    at kafka.network.Processor.run(SocketServer.scala:1307)
    at java.base/java.lang.Thread.run(Thread.java:829)
    at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:64)
My docker-compose.yml file:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:7.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
      KAFKA_HEAP_OPTS: -Xms6G -Xmx6G
      JAVA_OPTS: -Xms6G -Xmx6G
      JVM_OPTS: Xmx6g -Xms6g -XX:MaxPermSize=1024m
      KAFKA_LOG4J_ROOT_LOGLEVEL: DEBUG
      KAFKA_TOOLS_LOG4J_LOGLEVEL: DEBUG
  schema-registry:
    image: confluentinc/cp-schema-registry:7.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

Related

Docker-compose: There are no services registered in Eureka Server

I am trying to run all microservices from a docker-compose file and I am facing issues when running the containers, due to a problem between the Eureka discovery server, the api-gateway, and the other services. Is there a way to make the discovery server (Eureka) communicate with the other services? Many thanks in advance.
version: "3"
services:
  discovery-server:
    image: renosbardis/discovery-service:latest
    container_name: discovery-server
    ports:
      - "8761:8761"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - test-network
  api-gateway:
    image: renosbardis/api-gateway:latest
    container_name: api-gateway
    ports:
      - "8888:8888"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_SECURITY=TRACE
    depends_on:
      - discovery-server
    networks:
      - test-network
  accounts-service:
    image: renosbardis/accounts-service:latest
    container_name: accounts-service
    ports:
      - "8081:8081"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - discovery-server
      - api-gateway
    networks:
      - test-network
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      AMQP_URL: 'amqp://rabbitmq?connection_attempts=5&retry_delay=5'
      RABBITMQ_DEFAULT_USER: "guest"
      RABBITMQ_DEFAULT_PASS: "guest"
    depends_on:
      - discovery-server
      - api-gateway
    networks:
      - test-network
  customers-service:
    image: renosbardis/customer-service:latest
    container_name: customers-service
    ports:
      - "8083:8083"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - discovery-server
      - api-gateway
    networks:
      - test-network
  transactions-service:
    image: renosbardis/transaction-service:latest
    container_name: transactions-service
    ports:
      - "8084:8084"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - discovery-server
      - api-gateway
    networks:
      - test-network
  notification-service:
    image: renosbardis/notification-service:latest
    container_name: notification-service
    ports:
      - "8085:8085"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - discovery-server
      - api-gateway
    networks:
      - test-network
networks:
  test-network:
    driver: bridge
There is nothing wrong with the docker-compose configuration file; the first thing you need to check is whether the api-gateway can reach the discovery-server.
You can go into the containers and use ping or telnet to test whether the network between the containers is connected:
use docker exec -it <container-name> /bin/bash
The startup log shows that your api-gateway configuration is incorrect, as it should be
discovery-server:8761/eureka, not localhost:
api-gateway | 2022-12-28 02:25:27.549 INFO 1 --- [nfoReplicator-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused) stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
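For example, the connectivity check could look like the transcript below (container names are taken from the compose file in the question; ping and telnet may need to be installed in the image first):

```shell
# open a shell inside the api-gateway container
docker exec -it api-gateway /bin/bash

# from inside the container: can we resolve and reach the discovery server
# over the compose network?
ping -c 3 discovery-server
telnet discovery-server 8761
```

If the name resolves and port 8761 accepts the connection, the compose network is fine and the problem is in the application's Eureka URL.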
edit 1
I pulled your api-gateway image, started it, and executed
docker cp api-gateway:/app/resources/application.properties ./
Here is the configuration that went wrong:
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
You just have to change it to
eureka.client.serviceUrl.defaultZone=http://discovery-server:8761/eureka
edit 2
Add the following configuration to the api-gateway configuration file to see if it works:
eureka.client.fetch-registry=true
eureka.client.register-with-eureka=true
edit 3
I find that most of your problems are configuration errors inside the images.
Take the customer-service image as an example:
its port is set to 0,
its hostname is localhost,
and its defaultZone is localhost:8761/eureka.
This is the configuration file of customer-service that I modified
# Server port
server.port = 8083
eureka.instance.hostname = customer-service
spring.application.name = customer-service
# Memory Database for development Environment
spring.h2.console.enabled=true
spring.h2.console.path=/h2-console
spring.datasource.url=jdbc:h2:mem:customerdb;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.H2Dialect
eureka.client.service-url.defaultZone=http://discovery-server:8761/eureka
eureka.client.fetch-registry=true
eureka.client.register-with-eureka=true
This is my visit to 0.0.0.0:8888/api/v1/customer/2; the returned JSON shows everything working properly:
{
  "customerID": 2,
  "name": "John",
  "surname": "Doe",
  "balance": 50
}

how to docker-compose spring-boot with kafka? [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 10 months ago.
Execution works fine, but I think I am missing something, because when I use the REST API it shows:
(/127.0.0.1:9092) could not be established. Broker may not be available
You can see this below. By the way, I am new to Docker and Kafka.
I can't send or get data via GET/POST/PUT/DELETE since I started using Docker.
My reference when I created this setup: https://github.com/codegard/kafka-docker/blob/master/docker-compose.yml
//docker-compose.yaml
version: '3'
services:
  #----------------------------------------------------------------
  productmicroservice:
    image: productmicroservice:latest
    container_name: productmicroservice
    depends_on:
      - product-mysqldb
      - kafka
    restart: always
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "9001:8091"
    environment:
      - MYSQL_HOST=product-mysqldb
      - MYSQL_USER=oot
      - MYSQL_PASSWORD=root
      - MYSQL_PORT=3306
      - "SPRING_PROFILES_ACTIVE=${ACTIVE_PROFILE}"
  #----------------------------------------------------------------
  product-mysqldb:
    image: mysql:8.0.28
    restart: unless-stopped
    container_name: product-mysqldb
    ports:
      - "3307:3306"
    cap_add:
      - SYS_NICE
    environment:
      MYSQL_DATABASE: dbpoc
      MYSQL_ROOT_PASSWORD: root
  #----------------------------------------------------------------
  zookeeper:
    image: elevy/zookeeper:latest
    container_name: zookeeper
    ports:
      - "2181:2181"
  #----------------------------------------------------------------
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    container_name: kafka
    restart: on-failure
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
//application.yaml
spring.kafka.bootstrap-servers=127.0.0.1:9092
product.kafkaServer=${spring.kafka.bootstrap-servers}
spring.kafka.properties.security.protocol=PLAINTEXT
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
topic.name=producttopic
spring.jpa.properties.hibernate.check_nullability=true

spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:3306}/dbpoc
    username: root
    password: root
  jpa:
    hibernate:
      naming:
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
      ddl-auto: update
    generate-ddl: false
    show-sql: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  mvc:
    throw-exception-if-no-handler-found: true
  web:
    resources:
      add-mappings: false
  sql:
    init:
      mode: always
      continue-on-error: true
server:
  port: 8091
// .env
ACTIVE_PROFILE=dev
//Dockerfile
FROM openjdk:8-alpine
ADD target/*.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
//topic that I created
./kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic producttopic
//trying to send data manually in kafka
//working
//producer
bash-4.4# ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic producttopic
>hellow
>
//consumer
bash-4.4# ./kafka-console-consumer.sh --topic producttopic --from-beginning --bootstrap-server 127.0.0.1:9092
hellow
//sending data from my rest api
//this is wrong but I don't know the reason
productmicroservice | 2022-05-04 13:08:28.283 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
productmicroservice | 2022-05-04 13:08:28.283 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker 127.0.0.1:9092 (id: -1 rack: null) disconnected
productmicroservice | 2022-05-04 13:08:29.343 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
//fix (see the referenced answer below)
//changes made
//application.yaml
spring.kafka.topic.name=producttopic
topic.name=${spring.kafka.topic.name}
spring.kafka.bootstrap-servers=kafka:9092
product.kafkaServer= ${spring.kafka.bootstrap-servers}
//docker-compose.yaml
//added to kafka environment
KAFKA_CREATE_TOPICS: "producttopic:1:1"
KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
KAFKA_LISTENERS: INSIDE://kafka:9092,OUTSIDE://0.0.0.0:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093
//manual get data in topic by consumer
./kafka-console-consumer.sh --topic producttopic --from-beginning --bootstrap-server localhost:9093
//code changes in spring
//for the topic
@Value("${topic.name}")
//for the bootstrap server
@Value("${product.kafkaServer}")
//must note
wurstmeister/kafka:2.11-2.0.0
I use "2.11-2.0.0" since it's compatible with JDK 8; the latest gives me an error and my project requires JDK 8.
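For reference, the @Value wiring above could look like this in a Spring component (a sketch; the class and field names are hypothetical, only the two property keys come from the post):

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical settings holder illustrating the @Value changes;
// only the property keys ("topic.name", "product.kafkaServer")
// come from the application.yaml shown above.
@Component
public class KafkaSettings {

    // resolves via topic.name=${spring.kafka.topic.name} -> "producttopic"
    @Value("${topic.name}")
    private String topicName;

    // resolves via product.kafkaServer=${spring.kafka.bootstrap-servers} -> "kafka:9092"
    @Value("${product.kafkaServer}")
    private String bootstrapServers;

    public String getTopicName() { return topicName; }

    public String getBootstrapServers() { return bootstrapServers; }
}
```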
You are using 127.0.0.1:9092 as the Kafka container endpoint from the Java container. From a container, localhost targets the container itself, so this won't work.
Docker Compose sets up a default network in which your services are reachable by their service name.
Therefore I think you should change your application.yaml to:
spring.kafka.bootstrap-servers=kafka:9092
# ...

spring-boot Kafka integration in docker using docker-compose

I am trying to connect to Kafka from Spring Boot inside Docker, but it shows the error "could not be established. Broker may not be available."
The Kafka and Zookeeper servers are running successfully, but I am unable to connect to the broker from Spring Boot.
Here is my docker-compose.yml:
version: "3"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - 2181:2181
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
  spring_boot:
    build: .
    depends_on:
      - kafka
application.yml from the Kafka producer:
kafka:
  boot:
    server: kafka:9092
  topic:
    name: myTopic-kafkasender
server:
  port: 8081
Please suggest what I am doing wrong here.

Cannot produce kafka message on kubernetes

I'm getting this error on Kafka: [2020-05-04 12:46:59,477] ERROR [KafkaApi-1001] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
And this error when I try to produce a message: 2020-05-04 12:47:45.221 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {activate-user=LEADER_NOT_AVAILABLE}
Using docker-compose everything works fine, but I'm trying to move it to k8s as well. I started that process with the kompose convert tool and modified the output.
Here is a fragment of the docker-compose:
zookeeper:
  container_name: zookeeper
  image: wurstmeister/zookeeper:latest
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
  ports:
    - "2181:2181"
mail-sender-kafka:
  container_name: mail-sender-kafka
  image: wurstmeister/kafka:2.12-2.2.1
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_HOST_NAME: mail-sender-kafka
    KAFKA_CREATE_TOPICS: "activate-user:1:1"
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
account-service:
  image: szastarek/food-delivery-account-service:${TAG}
  container_name: account-service
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
    - account-service-db
mail-sender:
  image: szastarek/food-delivery-mail-sender:${TAG}
  container_name: mail-sender
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
After converting it to k8s I've got zookeeper-deployment, zookeeper-service, mail-sender-deployment, mail-sender-kafka-deployment, and mail-sender-kafka-service.
I've also tried to add some env variables, and for now it looks like this:
spec:
  containers:
    - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: mail-sender-kafka
        - name: KAFKA_ADVERTISED_PORT
          value: '9092'
        - name: ADVERTISED_LISTENERS
          value: PLAINTEXT://mail-sender-kafka:9092
        - name: KAFKA_CREATE_TOPICS
          value: activate-user:1:1
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
I've found one thing that is probably connected with the problem.
When I run ping mail-sender-kafka in Docker, the container can reach itself. But if I connect to the Kubernetes mail-sender-kafka pod, it cannot ping itself.
After updating the hosts file it works. There was an entry like:
172.18.0.24 mail-sender-kafka-xxxxxxx
and I changed it to:
172.18.0.24 mail-sender-kafka
Any tips on how I should fix it?
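One way to make the advertised hostname resolvable without hand-editing hosts files is a Kubernetes Service whose name matches KAFKA_ADVERTISED_HOST_NAME, so cluster DNS resolves it from every pod, including the Kafka pod itself. This is a sketch; the selector label is an assumption based on what kompose typically generates for a service named mail-sender-kafka:

```yaml
# Hypothetical Service so that "mail-sender-kafka" resolves inside the
# cluster; the selector label is assumed to match the labels on the
# mail-sender-kafka deployment's pod template.
apiVersion: v1
kind: Service
metadata:
  name: mail-sender-kafka
spec:
  selector:
    io.kompose.service: mail-sender-kafka
  ports:
    - port: 9092
      targetPort: 9092
```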

Zookeeper connection failing cp-rest-proxy with spotify kafka image

I have been using the Kafka image provided by spotify to run Kafka locally. I'm currently trying to use it with the cp-kafka-rest and schema-registry images.
I need help resolving this issue:
ERROR (Log Group: kafka_rest_1_609fd108dcf4)
[main-SendThread(zookeeper:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server zookeeper:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
Docker Compose
version: '3.5'
services:
  kafka:
    image: 'spotify/kafka'
    hostname: kafka
    environment:
      - ADVERTISED_HOST=kafka
      - ADVERTISED_PORT=9092
    ports:
      - "9092:9092"
      - "2181:2181"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - one
  kafka_rest:
    image: 'confluentinc/cp-kafka-rest:5.1.0'
    hostname: kafka_rest
    environment:
      - KAFKA_REST_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_REST_LISTENERS=http://0.0.0.0:8082
      - KAFKA_REST_SCHEMA_REGISTRY_URL=http:schema-registry:8081
      - KAFKA_REST_HOST_NAME=kafka-rest
    networks:
      - one
  schema_registry:
    hostname: schema-registry
    image: 'confluentinc/cp-schema-registry:5.1.0'
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
    networks:
      - one
networks:
  one:
    name: rest_network
You have no zookeeper container: it is actually your "kafka" service image that includes both the Zookeeper and Kafka servers, so zookeeper:2181 should rather be kafka:2181.
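Concretely, the minimal change if you keep the spotify image would be along these lines (only the two environment entries change; everything else stays as in your file):

```yaml
# Point the REST proxy and Schema Registry at the Zookeeper that runs
# inside the "kafka" container (there is no separate zookeeper service).
kafka_rest:
  environment:
    - KAFKA_REST_ZOOKEEPER_CONNECT=kafka:2181
schema_registry:
  environment:
    - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=kafka:2181
```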
However, I would recommend not using the spotify images, as they are significantly outdated.
You can find a fully functional Docker Compose example of the entire Confluent 5.1.0 Platform on GitHub.
Here is the relevant configuration you are looking for:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.1.0
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:9092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
You need to add a zookeeper service to your docker-compose file:
zookeeper:
  image: confluent/zookeeper
  ports:
    - "2181:2181"
  environment:
    zk_id: "1"
  network_mode: "host"
