How to docker-compose Spring Boot with Kafka? [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 10 months ago.
Execution is working fine, but I think I am missing something, because when I use the REST API it shows:
(/127.0.0.1:9092) could not be established. Broker may not be available
You can see this below. By the way, I am new to Docker and Kafka.
I can't send or get data via GET/POST/PUT/DELETE since I moved to Docker.
My reference when I created this setup: https://github.com/codegard/kafka-docker/blob/master/docker-compose.yml
//docker-compose.yaml
version: '3'
services:
  #----------------------------------------------------------------
  productmicroservice:
    image: productmicroservice:latest
    container_name: productmicroservice
    depends_on:
      - product-mysqldb
      - kafka
    restart: always
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "9001:8091"
    environment:
      - MYSQL_HOST=product-mysqldb
      - MYSQL_USER=oot
      - MYSQL_PASSWORD=root
      - MYSQL_PORT=3306
      - "SPRING_PROFILES_ACTIVE=${ACTIVE_PROFILE}"
  #----------------------------------------------------------------
  product-mysqldb:
    image: mysql:8.0.28
    restart: unless-stopped
    container_name: product-mysqldb
    ports:
      - "3307:3306"
    cap_add:
      - SYS_NICE
    environment:
      MYSQL_DATABASE: dbpoc
      MYSQL_ROOT_PASSWORD: root
  #----------------------------------------------------------------
  zookeeper:
    image: elevy/zookeeper:latest
    container_name: zookeeper
    ports:
      - "2181:2181"
  #----------------------------------------------------------------
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    container_name: kafka
    restart: on-failure
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
//application.yaml
spring.kafka.bootstrap-servers=127.0.0.1:9092
product.kafkaServer= ${spring.kafka.bootstrap-servers}
spring.kafka.properties.security.protocol=PLAINTEXT
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
topic.name=producttopic
spring.jpa.properties.hibernate.check_nullability=true
spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:3306}/dbpoc
    username: root
    password: root
  jpa:
    hibernate:
      naming:
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
    hibernate.ddl-auto: update
    generate-ddl: false
    show-sql: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  mvc:
    throw-exception-if-no-handler-found: true
  web:
    resources:
      add-mappings: false
  sql:
    init:
      mode: always
      continue-on-error: true
server:
  port: 8091
// .env
ACTIVE_PROFILE=dev
//Dockerfile
FROM openjdk:8-alpine
ADD target/*.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
//topic that I created
./kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic producttopic
//trying to send data manually to Kafka
//working
//producer
bash-4.4# ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic producttopic
>hellow
>
//consumer
bash-4.4# ./kafka-console-consumer.sh --topic producttopic --from-beginning --bootstrap-server 127.0.0.1:9092
hellow
//sending data from my REST API
//this fails, but I don't know the reason
productmicroservice | 2022-05-04 13:08:28.283 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
productmicroservice | 2022-05-04 13:08:28.283 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker 127.0.0.1:9092 (id: -1 rack: null) disconnected
productmicroservice | 2022-05-04 13:08:29.343 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
//fix based on the answer below
//changes made
//application.yaml
spring.kafka.topic.name=producttopic
topic.name=${spring.kafka.topic.name}
spring.kafka.bootstrap-servers=kafka:9092
product.kafkaServer= ${spring.kafka.bootstrap-servers}
//docker-compose.yaml
//added to kafka environment
KAFKA_CREATE_TOPICS: "producttopic:1:1"
KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
KAFKA_LISTENERS: INSIDE://kafka:9092,OUTSIDE://0.0.0.0:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093
//manually get data from the topic with a consumer
./kafka-console-consumer.sh --topic producttopic --from-beginning --bootstrap-server localhost:9093
//code changes in spring
//for topic
@Value("${topic.name}")
//for bootstrap server
@Value("${product.kafkaServer}")
//must note
wurstmeister/kafka:2.11-2.0.0
I use "2.11-2.0.0" since it's compatible with JDK 8; the latest tag gave me an error, and my project requires JDK 8.

You are using 127.0.0.1:9092 as the Kafka container endpoint from the Java container. From a container, localhost targets the container itself, so this won't work.
Docker Compose will set a default network in which your services are reachable by their name.
Therefore I think you should change your application.yaml to:
spring.kafka.bootstrap-servers=kafka:9092
# ...
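If you also need to reach the broker from the host machine (e.g. for the console producer/consumer), the usual pattern is two listeners: an internal one advertised as kafka:9092 for other containers, and an external one advertised as localhost for host clients. A minimal sketch for the wurstmeister image used above (the INSIDE/OUTSIDE listener names are just labels, and the 9093 host port is an arbitrary choice):

```yaml
kafka:
  image: wurstmeister/kafka:2.11-2.0.0
  ports:
    - "9093:9093"   # publish only the host-facing listener
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9093
    KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

Containers on the Compose network then bootstrap with kafka:9092, while tools on the host use localhost:9093.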

Related

Kafka Broker Application Disconnecting with EOFException

I'm running an application that connects to the Kafka broker using the confluentinc/cp-server:7.2.1 Docker image.
When I try to run a larger application, both applications get disconnected and I need to restart the broker.
The message of application is this:
WARN 16756 --- [RMADE-738-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer01] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
INFO 16756 --- [RMFDE-603-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer01] Node 1 disconnected.
And the message of broker (docker image) is this:
broker | INFO Skipping goal violation detection due to previous new broker change (com.linkedin.kafka.cruisecontrol.detector.GoalViolationDetector)
[2023] DEBUG [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /172.18.0.1 (channelId=172.18.0.5:9092-172.18.0.1:51368-43) disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:797)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:700)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:783)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:635)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:519)
    at kafka.network.Processor.poll(SocketServer.scala:1463)
    at kafka.network.Processor.run(SocketServer.scala:1307)
    at java.base/java.lang.Thread.run(Thread.java:829)
    at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:64)
[2023] DEBUG [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /172.18.0.1 (channelId=172.18.0.5:9092-172.18.0.1:50998-29) disconnected (org.apache.kafka.common.network.Selector)
java.io.IOException: Broken pipe
    at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:50)
    at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:462)
    at org.apache.kafka.common.network.PlaintextTransportLayer.write(PlaintextTransportLayer.java:143)
    at org.apache.kafka.common.network.PlaintextTransportLayer.write(PlaintextTransportLayer.java:159)
    at org.apache.kafka.common.network.ByteBufferSend.writeTo(ByteBufferSend.java:62)
    at org.apache.kafka.common.network.NetworkSend.writeTo(NetworkSend.java:41)
    at org.apache.kafka.common.network.KafkaChannel.write(KafkaChannel.java:728)
    at org.apache.kafka.common.network.Selector.write(Selector.java:753)
    at org.apache.kafka.common.network.Selector.attemptWrite(Selector.java:743)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:652)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:519)
    at kafka.network.Processor.poll(SocketServer.scala:1463)
    at kafka.network.Processor.run(SocketServer.scala:1307)
    at java.base/java.lang.Thread.run(Thread.java:829)
    at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:64)
My docker-compose.yml file:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:7.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
      KAFKA_HEAP_OPTS: -Xms6G -Xmx6G
      JAVA_OPTS: -Xms6G -Xmx6G
      JVM_OPTS: Xmx6g -Xms6g -XX:MaxPermSize=1024m
      KAFKA_LOG4J_ROOT_LOGLEVEL: DEBUG
      KAFKA_TOOLS_LOG4J_LOGLEVEL: DEBUG
  schema-registry:
    image: confluentinc/cp-schema-registry:7.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

Error connecting to Kafka running in docker container

I have configured the following Kafka properties for my Spring Boot based library, bundled inside the lib directory of an EAR deployed to WildFly. I am able to start the Spring components successfully by loading the property file from the classpath (WEB-INF/classes):
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.gson.GsonAutoConfiguration,org.springframework.boot.autoconfigure.jms.artemis.ArtemisAutoConfiguration,\
org.springframework.boot.autoconfigure.data.web.SpringDataWebAutoConfiguration
spring.kafka.admin.client-id=iris-admin-local
spring.kafka.producer.client-id=iris-producer-local
spring.kafka.producer.retries=3
spring.kafka.producer.properties.max.block.ms=2000
spring.kafka.producer.bootstrap-servers=127.0.0.1:19092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
foo.app.kafka.executor.core-pool-size=10
foo.app.kafka.executor.max-pool-size=500
foo.app.kafka.executor.queue-capacity=1000
I run Kafka and ZooKeeper via Docker Compose, with the containers mapped to host ports 19092 and 12181 respectively. The publish fails with the error:
19:37:42,914 ERROR [org.springframework.kafka.support.LoggingProducerListener] (swiftalker-3) Exception thrown when sending a message with key='543507' and payload='com.foo.app.kanban.defect.entity.KanbanDefect#84b13' to topic alm_swift-alm:: org.apache.kafka.common.errors.TimeoutException: Topic alm_swift-alm not present in metadata after 2000 ms.
19:37:43,124 WARN [org.apache.kafka.clients.NetworkClient] (kafka-producer-network-thread | iris-producer-local-1) [Producer clientId=iris-producer-local-1] Error connecting to node 6be446692a1f:9092 (id: 1001 rack: null): java.net.UnknownHostException: 6be446692a1f
at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
at org.apache.kafka.clients.NetworkClient.access$600(NetworkClient.java:73)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1128)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1016)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:547)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:324)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at java.lang.Thread.run(Thread.java:748)
Now, this is after having provided the spring.kafka.producer.bootstrap-servers=127.0.0.1:19092 property. What's interesting, though, is:
CONTAINER ID   NAMES       PORTS                                                                          CREATED          STATUS
2133c81ed51d   mongo       0.0.0.0:23556->27017/tcp, 0.0.0.0:23557->27018/tcp, 0.0.0.0:23558->27019/tcp   29 minutes ago   Up 29 minutes
f18b86d8739e   kafka-ui    0.0.0.0:18080->8080/tcp                                                        29 minutes ago   Up 29 minutes
6be446692a1f   kafka       0.0.0.0:19092->9092/tcp                                                        29 minutes ago   Up 29 minutes
873304e1e6a0   zookeeper   2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp                                         29 minutes ago   Up 29 minutes
the WildFly server error logs show the app is actually connecting to the Docker container via its container ID, i.e.
6be446692a1f kafka 0.0.0.0:19092->9092/tcp
from the docker ps -a output, and
Error connecting to node 6be446692a1f:9092 (id: 1001 rack: null): java.net.UnknownHostException: 6be446692a1f
I'm confused as to how the Spring Boot code, despite the config property pointing at the server via localhost and the mapped port 19092, manages to find the Docker container by its ID and default port and then tries to connect to it. How do I fix this?
Update: the docker-compose file:
version: '3'
networks:
  app-tier:
    driver: bridge
services:
  zookeeper:
    image: 'docker.io/bitnami/zookeeper:3-debian-10'
    container_name: 'zookeeper'
    networks:
      - app-tier
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'docker.io/bitnami/kafka:2-debian-10'
    container_name: 'kafka'
    ports:
      - 19092:9092
    networks:
      - app-tier
    volumes:
      - 'kafka_data:/bitnami'
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  database:
    image: 'mongo'
    container_name: 'mongo'
    environment:
      - MONGO_INITDB_DATABASE='swiftalk_db'
    networks:
      - app-tier
    ports:
      - 23556-23558:27017-27019
    depends_on:
      - kafka
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 18080:8080
    networks:
      - app-tier
    volumes:
      - 'mongo_data:/data/db'
    depends_on:
      - kafka
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
  mongo_data:
    driver: local
You've not shared your Docker Compose so I can't give you the specific fix to make, but in essence you need to configure your advertised listeners correctly. This is the value the broker hands back to the client, telling it where to connect for subsequent requests.
Details: https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
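Applied to the bitnami compose above, a sketch of what the listener setup could look like (the KAFKA_CFG_* names follow the bitnami image's convention; the INTERNAL/EXTERNAL listener names and the choice to publish 19092 directly are assumptions, not tested against that image version):

```yaml
kafka:
  image: 'docker.io/bitnami/kafka:2-debian-10'
  ports:
    - 19092:19092   # publish the external listener on the same port it advertises
  environment:
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_LISTENERS=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
    - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:19092
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
```

The app on the host then keeps bootstrap-servers=127.0.0.1:19092, and the broker advertises localhost:19092 instead of its container ID.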

spring-boot Kafka integration in docker using docker-compose

I am trying to connect to Kafka from Spring Boot inside Docker, but it shows the error "could not be established. Broker may not be available."
The Kafka and ZooKeeper servers run successfully, but I am unable to connect to the broker from Spring Boot.
Here is my docker-compose.yml:
version: "3"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - 2181:2181
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
  spring_boot:
    build: .
    depends_on:
      - kafka
application.yml from the Kafka producer:
kafka:
  boot:
    server: kafka:9092
  topic:
    name: myTopic-kafkasender
server:
  port: 8081
Please suggest what I am doing wrong here.

Not able to connect to MySQL docker container from Spring Boot docker container

I am getting the following error:
2020-12-26 23:17:30.499 INFO 1 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL57Dialect
licensingservice_1 | Hibernate: drop table if exists licenses
licensingservice_1 | 2020-12-26 23:17:31.006 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
licensingservice_1 | 2020-12-26 23:17:32.010 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
licensingservice_1 |
licensingservice_1 | com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
licensingservice_1 |
licensingservice_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
licensingservice_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.22.jar:8.0.22]
licensingservice_1 | at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.22.jar:8.0.22]
My docker-compose.yml:
version: '3'
services:
  licensingservice:
    image: licensing/licensing-service-ms:0.0.1-SNAPSHOT
    ports:
      - "8080:8080"
    networks:
      - my-network
    volumes:
      - .:/vol/development
    depends_on:
      - mysqldbserver
  mysqldbserver:
    image: mysql:5.7
    ports:
      - "3307:3306"
    networks:
      - my-network
    environment:
      MYSQL_DATABASE: license
      MYSQL_ROOT_PASSWORD: Spartans#123
    container_name: mysqldb
networks:
  my-network:
    driver: bridge
And my application.properties:
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://mysqldb:3307/license
spring.datasource.username=root
spring.datasource.password=Spartans#123
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL57Dialect
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.show-sql=true
Try connecting to port 3306 instead. You're exposing port 3306 on the database container to the host machine on port 3307, but that doesn't change anything for communication between services inside the same network.
This is explained in the Docker-Compose docs.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Additionally, you can choose to expose these ports to the outside world by defining a mapping between the host port and container port. However, this has no effect on communication between services inside the same network:
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. [...] Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.
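Applied to the question above, that means the datasource URL should use the service's container port, roughly:

```properties
# Service-to-service traffic stays on the Compose network, so use the
# container port (3306), not the published host port (3307).
spring.datasource.url=jdbc:mysql://mysqldb:3306/license
```

The 3307 mapping only matters for clients running on the host machine.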

Cannot produce kafka message on kubernetes

I'm getting this error from Kafka: [2020-05-04 12:46:59,477] ERROR [KafkaApi-1001] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
And this error when I try to produce a message: 2020-05-04 12:47:45.221 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {activate-user=LEADER_NOT_AVAILABLE}
With docker-compose everything works fine, but I'm trying to move it to k8s as well. I started that process with the kompose convert tool and modified the output.
Here is a fragment of the docker-compose:
zookeeper:
  container_name: zookeeper
  image: wurstmeister/zookeeper:latest
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
  ports:
    - "2181:2181"
mail-sender-kafka:
  container_name: mail-sender-kafka
  image: wurstmeister/kafka:2.12-2.2.1
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_HOST_NAME: mail-sender-kafka
    KAFKA_CREATE_TOPICS: "activate-user:1:1"
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
account-service:
  image: szastarek/food-delivery-account-service:${TAG}
  container_name: account-service
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
    - account-service-db
mail-sender:
  image: szastarek/food-delivery-mail-sender:${TAG}
  container_name: mail-sender
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
After converting it to k8s I've got zookeeper-deployment, zookeeper-service, mail-sender-deployment, mail-sender-kafka-deployment and mail-sender-kafka-service.
I've also tried to add some env variables; for now it looks like this:
spec:
  containers:
    - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: mail-sender-kafka
        - name: KAFKA_ADVERTISED_PORT
          value: '9092'
        - name: ADVERTISED_LISTENERS
          value: PLAINTEXT://mail-sender-kafka:9092
        - name: KAFKA_CREATE_TOPICS
          value: activate-user:1:1
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
I've found one thing that is probably connected with the problem.
When I run ping mail-sender-kafka inside the Docker container, I can reach myself. But if I connect to the Kubernetes mail-sender-kafka pod, I cannot ping myself.
After updating the hosts file it works. There was an entry like:
172.18.0.24 mail-sender-kafka-xxxxxxx
and I changed it to:
172.18.0.24 mail-sender-kafka
Any tips on how I should fix this?
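Rather than patching /etc/hosts, the usual approach is to make the advertised hostname resolvable through cluster DNS, i.e. ensure a Service exists whose name matches KAFKA_ADVERTISED_HOST_NAME and targets the Kafka pod. A hypothetical sketch (the selector label is an assumption based on the labels kompose typically generates; it must match whatever labels are actually on the Kafka pod):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mail-sender-kafka   # must equal the advertised host name
spec:
  selector:
    io.kompose.service: mail-sender-kafka   # assumed kompose-generated label
  ports:
    - port: 9092
      targetPort: 9092
```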
