Connecting Spring to Kafka started using Docker Compose for localhost development - Java

After hours of trying to find a solution to this problem, I decided to post it here.
I have the following docker-compose file, which starts ZooKeeper, two Kafka brokers, and Kafdrop:
version: '2'
networks:
  kafka-net:
    driver: bridge
services:
  zookeeper-server:
    image: 'bitnami/zookeeper:latest'
    networks:
      - kafka-net
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka-broker-1:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9092:9092'
      - '29092:29092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=INSIDE://kafka-broker-1:9092,OUTSIDE://localhost:29092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      - KAFKA_CFG_LISTENERS=INSIDE://:9092,OUTSIDE://:29092
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INSIDE
      - KAFKA_CFG_BROKER_ID=1
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-broker-2:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9093:9092'
      - '29093:29092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=INSIDE://kafka-broker-2:9093,OUTSIDE://localhost:29093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      - KAFKA_CFG_LISTENERS=INSIDE://:9093,OUTSIDE://:29093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INSIDE
      - KAFKA_CFG_BROKER_ID=2
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafdrop-web:
    image: obsidiandynamics/kafdrop
    networks:
      - kafka-net
    ports:
      - '9000:9000'
    environment:
      - KAFKA_BROKERCONNECT=kafka-broker-1:9092,kafka-broker-2:9093
    depends_on:
      - kafka-broker-1
      - kafka-broker-2
Then I am trying to connect my Spring command-line app to Kafka. I took the quickstart example from the Spring Kafka docs.
Properties file:
spring.kafka.bootstrap-servers=localhost:29092,localhost:29093
spring.kafka.consumer.group-id=cg1
spring.kafka.consumer.auto-offset-reset=earliest
logging.level.org.springframework.kafka=debug
logging.level.org.apache.kafka=debug
And the app code:
@SpringBootApplication
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

@Component
public class Runner implements CommandLineRunner {

    public static Logger logger = LoggerFactory.getLogger(Runner.class);

    @Autowired
    private KafkaTemplate<String, String> template;

    // Counted down once per record received by the listener below.
    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("myTopic", "foo1");
        this.template.send("myTopic", "foo2");
        this.template.send("myTopic", "foo3");
        latch.await(60, TimeUnit.SECONDS);
        logger.info("All received");
    }

    @KafkaListener(topics = "myTopic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        logger.info(cr.toString());
        latch.countDown();
    }
}
The logs I get when I run the app:
2020-04-07 01:06:45.074 DEBUG 7560 --- [ main] KafkaListenerAnnotationBeanPostProcessor : 1 @KafkaListener methods processed on bean 'runner': {public void com.vdt.learningkafka.Runner.listen(org.apache.kafka.clients.consumer.ConsumerRecord) throws java.lang.Exception=[@org.springframework.kafka.annotation.KafkaListener(autoStartup=, beanRef=__listener, clientIdPrefix=, concurrency=, containerFactory=, containerGroup=, errorHandler=, groupId=, id=, idIsGroup=true, properties=[], splitIterables=true, topicPartitions=[], topicPattern=, topics=[myTopic])]}
2020-04-07 01:06:45.284 INFO 7560 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:29092, localhost:29093]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2020-04-07 01:06:45.305 DEBUG 7560 --- [ main] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:29093 (id: -2 rack: null), localhost:29092 (id: -1 rack: null)], partitions = [], controller = null).
2020-04-07 01:06:45.411 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-closed:
2020-04-07 01:06:45.413 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-created:
2020-04-07 01:06:45.414 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication:
2020-04-07 01:06:45.414 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-reauthentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication-no-reauth:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-authentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-reauthentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name reauthentication-latency:
2020-04-07 01:06:45.416 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent-received:
2020-04-07 01:06:45.416 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent:
2020-04-07 01:06:45.417 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-received:
2020-04-07 01:06:45.418 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name select-time:
2020-04-07 01:06:45.419 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name io-time:
2020-04-07 01:06:45.428 INFO 7560 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.3.1
2020-04-07 01:06:45.428 INFO 7560 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 18a913733fb71c01
2020-04-07 01:06:45.428 INFO 7560 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1586210805426
2020-04-07 01:06:45.430 DEBUG 7560 --- [ main] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Kafka admin client initialized
2020-04-07 01:06:45.433 DEBUG 7560 --- [ main] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586210925432) with a timeout 120000 ms from now.
2020-04-07 01:06:45.434 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: -2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.442 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--2.bytes-sent
2020-04-07 01:06:45.443 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--2.bytes-received
2020-04-07 01:06:45.443 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--2.latency
2020-04-07 01:06:45.445 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -2
2020-04-07 01:06:45.569 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node -2. Fetching API versions.
2020-04-07 01:06:45.569 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -2.
2020-04-07 01:06:45.576 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
2020-04-07 01:06:45.577 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -2 disconnected.
2020-04-07 01:06:45.578 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29092 (id: -1 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.578 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-sent
2020-04-07 01:06:45.579 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-received
2020-04-07 01:06:45.579 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.latency
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
2020-04-07 01:06:45.586 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Recorded API versions for node -1: (Produce(0): 0 to 8 [usable: 7], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 8], LeaderAndIsr(4): 0 to 4 [usable: 2], StopReplica(5): 0 to 2 [usable: 1], UpdateMetadata(6): 0 to 6 [usable: 5], ControlledShutdown(7): 0 to 3 [usable: 2], OffsetCommit(8): 0 to 8 [usable: 7], OffsetFetch(9): 0 to 6 [usable: 5], FindCoordinator(10): 0 to 3 [usable: 2], JoinGroup(11): 0 to 6 [usable: 5], Heartbeat(12): 0 to 4 [usable: 3], LeaveGroup(13): 0 to 4 [usable: 2], SyncGroup(14): 0 to 4 [usable: 3], DescribeGroups(15): 0 to 5 [usable: 3], ListGroups(16): 0 to 3 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 5 [usable: 3], DeleteTopics(20): 0 to 4 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 1], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], ElectPreferredLeaders(43): 0 to 2 [usable: 0], IncrementalAlterConfigs(44): 0 to 1 [usable: 0], UNKNOWN(45): 0, UNKNOWN(46): 0, UNKNOWN(47): 0)
2020-04-07 01:06:45.594 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Updating cluster metadata to Cluster(id = gSypiCeoSlyuSR4ks5qwwA, nodes = [localhost:29092 (id: 1 rack: null), localhost:29093 (id: 2 rack: null)], partitions = [], controller = localhost:29093 (id: 2 rack: null))
2020-04-07 01:06:45.595 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: 2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.596 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-2.bytes-sent
2020-04-07 01:06:45.597 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-2.bytes-received
2020-04-07 01:06:45.597 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-2.latency
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node 2. Fetching API versions.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 2.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node 2 disconnected.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Requesting metadata update.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29092 (id: 1 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.599 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-1.bytes-sent
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-1.bytes-received
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-1.latency
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node 1. Fetching API versions.
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 1.
2020-04-07 01:06:45.602 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Recorded API versions for node 1: (Produce(0): 0 to 8 [usable: 7], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 8], LeaderAndIsr(4): 0 to 4 [usable: 2], StopReplica(5): 0 to 2 [usable: 1], UpdateMetadata(6): 0 to 6 [usable: 5], ControlledShutdown(7): 0 to 3 [usable: 2], OffsetCommit(8): 0 to 8 [usable: 7], OffsetFetch(9): 0 to 6 [usable: 5], FindCoordinator(10): 0 to 3 [usable: 2], JoinGroup(11): 0 to 6 [usable: 5], Heartbeat(12): 0 to 4 [usable: 3], LeaveGroup(13): 0 to 4 [usable: 2], SyncGroup(14): 0 to 4 [usable: 3], DescribeGroups(15): 0 to 5 [usable: 3], ListGroups(16): 0 to 3 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 5 [usable: 3], DeleteTopics(20): 0 to 4 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 1], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], ElectPreferredLeaders(43): 0 to 2 [usable: 0], IncrementalAlterConfigs(44): 0 to 1 [usable: 0], UNKNOWN(45): 0, UNKNOWN(46): 0, UNKNOWN(47): 0)
2020-04-07 01:06:45.605 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Updating cluster metadata to Cluster(id = gSypiCeoSlyuSR4ks5qwwA, nodes = [localhost:29092 (id: 1 rack: null), localhost:29093 (id: 2 rack: null)], partitions = [], controller = localhost:29093 (id: 2 rack: null))
2020-04-07 01:06:45.639 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: 2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node 2. Fetching API versions.
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 2.
2020-04-07 01:06:45.641 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
The Docker configuration seems to be fine, since Kafdrop successfully connects to the brokers; and, from what I can tell from the logs, the Spring application also manages to connect to the brokers, but immediately after that the connection is closed.

To connect a Docker-based application to Kafka that also runs inside a Docker container, you have to refer to the name (the alias for its address) of the container running Kafka:
spring.kafka.bootstrap-servers=kafka-broker-1:29092,kafka-broker-2:29093
If you try to connect the application inside the Docker container to localhost:29092, it tries to connect to the localhost of that very same container, not to the outer network.
The configuration from docker seems to be fine since Kafdrop successfully connects brokers
Yes, check the kafdrop-web service inside the docker-compose.yml and look at how it connects to the Kafka brokers. It uses the names of the containers:
KAFKA_BROKERCONNECT=kafka-broker-1:9092,kafka-broker-2:9093
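If the Spring command-line app itself is meant to run as a container, here is a minimal sketch of a service for it (the spring-app service name and image are hypothetical): it joins the same network and, like kafdrop-web, reaches the brokers by container name on the INSIDE listeners:
  spring-app:
    image: my-spring-app:latest # hypothetical image for the command-line app
    networks:
      - kafka-net # same network, so kafka-broker-1/-2 resolve by name
    environment:
      # Spring Boot relaxed binding: this overrides spring.kafka.bootstrap-servers
      - SPRING_KAFKA_BOOTSTRAP_SERVERS=kafka-broker-1:9092,kafka-broker-2:9093
    depends_on:
      - kafka-broker-1
      - kafka-broker-2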

Related

Spring application cannot connect to a Kafka broker

I am trying to build my first application using Kafka as a messaging system, but I have some problems running it. I am using the wurstmeister Docker images of ZooKeeper and Kafka with this docker-compose.yml:
version: '3.8'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    restart: unless-stopped
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "toAll:1:1"
    restart: unless-stopped
docker-compose up gives me the following output:
kafka_1 | [2022-06-13 22:08:10,071] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,073] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,074] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,076] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2022-06-13 22:08:10,089] INFO Log directory /kafka/kafka-logs-44d30bad178c not found, creating it. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,121] INFO Loading logs from log dirs ArraySeq(/kafka/kafka-logs-44d30bad178c) (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,123] INFO Attempting recovery for all logs in /kafka/kafka-logs-44d30bad178c since no clean shutdown file was found (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,128] INFO Loaded 0 logs in 7ms. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,128] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,130] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1 | [2022-06-13 22:08:10,429] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka_1 | [2022-06-13 22:08:10,433] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1 | [2022-06-13 22:08:10,465] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,489] INFO [broker-1049-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka_1 | [2022-06-13 22:08:10,504] INFO [ExpirationReaper-1049-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,505] INFO [ExpirationReaper-1049-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,506] INFO [ExpirationReaper-1049-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,506] INFO [ExpirationReaper-1049-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,518] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_1 | [2022-06-13 22:08:10,552] INFO Creating /brokers/ids/1049 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_1 | [2022-06-13 22:08:10,567] INFO Stat of the created znode at /brokers/ids/1049 is: 1191,1191,1655158090560,1655158090560,1,0,0,72057715199770624,202,0,1191
kafka_1 | (kafka.zk.KafkaZkClient)
kafka_1 | [2022-06-13 22:08:10,567] INFO Registered broker 1049 at path /brokers/ids/1049 with addresses: PLAINTEXT://127.0.0.1:9092, czxid (broker epoch): 1191 (kafka.zk.KafkaZkClient)
kafka_1 | [2022-06-13 22:08:10,623] INFO [ExpirationReaper-1049-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,627] INFO [ExpirationReaper-1049-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,628] INFO [ExpirationReaper-1049-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,640] INFO [GroupCoordinator 1049]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2022-06-13 22:08:10,644] INFO [GroupCoordinator 1049]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2022-06-13 22:08:10,664] INFO [ProducerId Manager 1049]: Acquired new producerId block (brokerId:1049,blockStartProducerId:25000,blockEndProducerId:25999) by writing to Zk with path version 26 (kafka.coordinator.transaction.ProducerIdManager)
kafka_1 | [2022-06-13 22:08:10,665] INFO [TransactionCoordinator id=1049] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2022-06-13 22:08:10,668] INFO [TransactionCoordinator id=1049] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2022-06-13 22:08:10,668] INFO [Transaction Marker Channel Manager 1049]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1 | [2022-06-13 22:08:10,688] INFO [ExpirationReaper-1049-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2022-06-13 22:08:10,704] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_1 | [2022-06-13 22:08:10,722] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Starting socket server acceptors and processors (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,735] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,735] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1049] Started socket server acceptors and processors (kafka.network.SocketServer)
kafka_1 | [2022-06-13 22:08:10,739] INFO Kafka version: 2.8.1 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2022-06-13 22:08:10,739] INFO Kafka commitId: 839b886f9b732b15 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2022-06-13 22:08:10,740] INFO Kafka startTimeMs: 1655158090735 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2022-06-13 22:08:10,744] INFO [KafkaServer id=1049] started (kafka.server.KafkaServer)
zookeeper_1 | 2022-06-13 22:08:10,770 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x100001c35cf0000 type:multi cxid:0x3b zxid:0x4ab txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1 | [2022-06-13 22:08:10,798] INFO [broker-1049-to-controller-send-thread]: Recorded new controller, from now on will use broker 127.0.0.1:9092 (id: 1049 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka_1 | creating topics: toAll:1:1
zookeeper_1 | 2022-06-13 22:08:19,653 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.20.0.3:60284
zookeeper_1 | 2022-06-13 22:08:19,655 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.20.0.3:60284
zookeeper_1 | 2022-06-13 22:08:19,659 [myid:] - INFO [SyncThread:0:ZooKeeperServer@694] - Established session 0x100001c35cf0001 with negotiated timeout 30000 for client /172.20.0.3:60284
zookeeper_1 | 2022-06-13 22:08:19,803 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x100001c35cf0001
zookeeper_1 | 2022-06-13 22:08:19,808 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /172.20.0.3:60284 which had sessionid 0x100001c35cf0001
So, since the logs say from now on will use broker 127.0.0.1:9092 and creating topics: toAll:1:1, I am guessing it should be ready to run my application, which is:
@SpringBootApplication
@Slf4j
public class KafkaTestApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaTestApplication.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("toAll").build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            String mess = "Test message";
            template.send("toAll", mess);
            log.info(String.format("Message sent: %s", mess));
        };
    }
}
But when I run it, I get this output:
2022-06-14 00:25:25.148 INFO 1108 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:9092]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2022-06-14 00:25:25.254 INFO 1108 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.1.1
2022-06-14 00:25:25.255 INFO 1108 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 97671528ba54a138
2022-06-14 00:25:25.255 INFO 1108 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1655159125253
2022-06-14 00:25:27.299 INFO 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -1 disconnected.
2022-06-14 00:25:27.301 WARN 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2022-06-14 00:25:29.454 INFO 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -1 disconnected.
2022-06-14 00:25:29.454 WARN 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2022-06-14 00:25:31.724 INFO 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -1 disconnected.
2022-06-14 00:25:31.724 WARN 1108 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
and the last two lines repeat for a while until finally an exception is thrown.
I have been fighting with this issue for two days, trying other setups for Kafka, like KAFKA_LISTENERS, KAFKA_ADVERTISED_LISTENERS and so on... Some of them make Kafka not run properly, some change IPs, but none of them make my app give different output.
Do you have any ideas what could cause this problem and how it might be fixed?
Thanks a lot in advance.
Look at the output of docker ps.
If you don't see 0.0.0.0:9092->9092/tcp, then you haven't forwarded a host port to the necessary KAFKA_ADVERTISED_PORT.
Declaring the port this way in Compose makes Docker pick a random host port to map to 9092 inside the container, so localhost:9092 isn't a valid open port on your host:
ports:
  - "9092"
Related: Connect to Kafka running in Docker

Topic is not assigned a partition

So I am running my Kafka consumer. I'm just curious why my topic is no longer placed inside the assigned partition. I believe that when it shows as assigned, it means I am able to listen to that topic. But now the logs are showing that there is no assigned partition for my topic. Am I still able to successfully listen to the topic or not?
Here are the logs:
Successfully logged in.
2021-10-01 02:43:14.668 INFO 35404 --- [_user@TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user@TEST.COM]: TGT refresh thread started.
2021-10-01 02:43:14.668 INFO 35404 --- [_user@TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user@TEST.COM]: TGT valid starting at: Fri Oct 01 02:43:14 CST 2021
2021-10-01 02:43:14.668 INFO 35404 --- [_user@TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user@TEST.COM]: TGT expires: Fri Oct 01 12:43:14 CST 2021
2021-10-01 02:43:14.668 INFO 35404 --- [_user@TEST.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=kafka_user@TEST.COM]: TGT refresh sleeping until: Fri Oct 01 11:06:34 CST 2021
2021-10-01 02:43:14.689 INFO 35404 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.3.1
2021-10-01 02:43:14.689 INFO 35404 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 18a913733fb71c01
2021-10-01 02:43:14.690 INFO 35404 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1633027394689
2021-10-01 02:43:14.692 INFO 35404 --- [ main] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Subscribed to topic(s): MY_TOPIC
2021-10-01 02:43:14.695 INFO 35404 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2021-10-01 02:43:14.706 INFO 35404 --- [ main] s.i.k.i.KafkaMessageDrivenChannelAdapter : started org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter@6ad2c882
2021-10-01 02:43:14.712 INFO 35404 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
2021-10-01 02:43:14.751 INFO 35404 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
2021-10-01 02:43:14.843 INFO 35404 --- [ main] s.d.s.w.s.ApiListingReferenceScanner : Scanning for api listing references
2021-10-01 02:43:15.646 INFO 35404 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8090 (http) with context path ''
2021-10-01 02:43:15.648 INFO 35404 --- [ main] .s.c.n.e.s.EurekaAutoServiceRegistration : Updating port to 8090
2021-10-01 02:43:17.051 WARN 35404 --- [container-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Connection to node -3 (broker.port3.com/10.235.27.73:6667) could not be established. Broker may not be available.
2021-10-01 02:43:17.388 INFO 35404 --- [ main] com.ap.Application : Started Application in 62.57 seconds (JVM running for 143.114)
2021-10-01 02:43:17.816 INFO 35404 --- [container-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Cluster ID: qa_IFa70SgeMIT5JIcDhHA
2021-10-01 02:43:18.213 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Discovered group coordinator broker.port1.com:6667 (id: 2147482644 rack: null)
2021-10-01 02:43:18.219 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Revoking previously assigned partitions []
2021-10-01 02:43:18.220 INFO 35404 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : my-group-service-dev: partitions revoked: []
2021-10-01 02:43:18.220 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] (Re-)joining group
2021-10-01 02:43:18.621 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] (Re-)joining group
2021-10-01 02:43:19.816 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Successfully joined group with generation 68
2021-10-01 02:43:19.819 INFO 35404 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=my-group-service-dev] Setting newly assigned partitions:
2021-10-01 02:43:19.821 INFO 35404 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : my-group-service-dev: partitions assigned: []
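For reference, the assignment visible in those last log lines comes from the consumer group rebalance: with an empty assignment, the consumer is in the group but receives no records until partitions are handed to it. Common reasons for an empty assignment are more group members than the topic has partitions, or a subscription that matches no partitions. A minimal plain kafka-clients sketch of observing this directly (the bootstrap address, topic, and group id are taken from the logs above; everything else, including the class name, is hypothetical, and the Kerberos/SASL settings this cluster evidently uses are omitted):
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignmentCheck {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.port1.com:6667"); // from the logs above
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group-service-dev");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // NOTE: security.protocol / sasl.* settings omitted; the logged cluster uses Kerberos.

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("MY_TOPIC"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    System.out.println("revoked: " + partitions);
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // An empty collection here is what shows up as "partitions assigned: []".
                    System.out.println("assigned: " + partitions);
                }
            });
            consumer.poll(Duration.ofSeconds(5)); // joins the group and triggers the rebalance
            System.out.println("current assignment: " + consumer.assignment());
        }
    }
}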

Missing events/items when combining findAll, flatMap and Schedulers.boundedElastic()

I have written the following test while exploring Spring WebFlux.
It's quite simple, I thought: writing 10000 items and reading them again.
But somehow, sometimes a few are missing.
There is no exception thrown; just sometimes a few items are missing.
Is this a coding error, or maybe a bug in Mongo / Reactor?
This is a basic Spring Boot setup without any further customization.
Thx.
package de.eggheads.tools.fluxtest;

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.RepeatedTest;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import de.eggheads.tools.fluxtest.persistence.DataA;
import de.eggheads.tools.fluxtest.persistence.DataB;
import de.eggheads.tools.fluxtest.repository.DataARepository;
import de.eggheads.tools.fluxtest.repository.DataBRepository;
import de.eggheads.tools.fluxtest.service.DataService;
import lombok.extern.slf4j.Slf4j;

@Slf4j
@SpringBootTest
public class DataServiceTests {

    @Autowired
    DataARepository dataARepository;

    @Autowired
    DataBRepository dataBRepository;

    @Autowired
    DataService dataService;

    @BeforeEach
    void before() {
        dataARepository.deleteAll().block();
        for (int i = 0; i < 10000; i++) {
            String value = String.valueOf(i);
            dataARepository.save(new DataA(value, value)).block();
        }
        dataBRepository.deleteAll().block();
        for (int i = 0; i < 10; i++) {
            String value = String.valueOf(i);
            dataBRepository.save(new DataB(value, value)).block();
        }
    }

    @RepeatedTest(1)
    void findAll() {
        for (int i = 0; i < 10; i++) {
            int size = dataService.findAll().size();
            log.debug("size = {}", size);
            assertEquals(10000, size);
        }
    }
}
package de.eggheads.tools.fluxtest.service;

import java.util.Collection;

import org.springframework.stereotype.Service;

import de.eggheads.tools.fluxtest.persistence.DataA;
import de.eggheads.tools.fluxtest.repository.DataARepository;
import de.eggheads.tools.fluxtest.repository.DataBRepository;
import lombok.RequiredArgsConstructor;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

@RequiredArgsConstructor
@Service
public class DataService {

    private final DataARepository dataARepository;
    private final DataBRepository dataBRepository;

    private Mono<DataA> map(DataA dataA) {
        return Mono.just(dataA).subscribeOn(Schedulers.boundedElastic());
    }

    public Collection<DataA> findAll() {
        return dataARepository.findAll().onErrorContinue((t, o) -> t.printStackTrace()).flatMap(this::map).collectList()
                .block();
    }
}
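One thing worth isolating when debugging this: the Reactor javadoc warns that onErrorContinue has operator-dependent semantics and is easy to misuse in flatMap chains. A variant sketch of findAll (same repositories and types as above) that drops onErrorContinue and handles errors per inner Mono with onErrorResume instead:
public Collection<DataA> findAll() {
    return dataARepository.findAll()
            // handle failures per element instead of the global onErrorContinue hook
            .flatMap(dataA -> map(dataA).onErrorResume(t -> {
                t.printStackTrace();
                return Mono.empty(); // drop only the failing element, visibly
            }))
            .collectList()
            .block();
}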
package de.eggheads.tools.fluxtest.repository;

import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import org.springframework.stereotype.Repository;

import de.eggheads.tools.fluxtest.persistence.DataA;

@Repository
public interface DataARepository extends ReactiveMongoRepository<DataA, String> {
}
<properties>
    <java.version>11</java.version>
</properties>
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.4.4</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-webflux</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
    </dependency>
    <dependency>
        <groupId>de.flapdoodle.embed</groupId>
        <artifactId>de.flapdoodle.embed.mongo</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>io.projectreactor</groupId>
        <artifactId>reactor-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
22:04:02.279 [main] DEBUG org.springframework.test.context.BootstrapUtils - Instantiating CacheAwareContextLoaderDelegate from class [org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate]
22:04:02.293 [main] DEBUG org.springframework.test.context.BootstrapUtils - Instantiating BootstrapContext using constructor [public org.springframework.test.context.support.DefaultBootstrapContext(java.lang.Class,org.springframework.test.context.CacheAwareContextLoaderDelegate)]
22:04:02.327 [main] DEBUG org.springframework.test.context.BootstrapUtils - Instantiating TestContextBootstrapper for test class [de.eggheads.tools.fluxtest.DataServiceTests] from class [org.springframework.boot.test.context.SpringBootTestContextBootstrapper]
22:04:02.343 [main] INFO org.springframework.boot.test.context.SpringBootTestContextBootstrapper - Neither @ContextConfiguration nor @ContextHierarchy found for test class [de.eggheads.tools.fluxtest.DataServiceTests], using SpringBootContextLoader
22:04:02.348 [main] DEBUG org.springframework.test.context.support.AbstractContextLoader - Did not detect default resource location for test class [de.eggheads.tools.fluxtest.DataServiceTests]: class path resource [de/eggheads/tools/fluxtest/DataServiceTests-context.xml] does not exist
22:04:02.348 [main] DEBUG org.springframework.test.context.support.AbstractContextLoader - Did not detect default resource location for test class [de.eggheads.tools.fluxtest.DataServiceTests]: class path resource [de/eggheads/tools/fluxtest/DataServiceTestsContext.groovy] does not exist
22:04:02.348 [main] INFO org.springframework.test.context.support.AbstractContextLoader - Could not detect default resource locations for test class [de.eggheads.tools.fluxtest.DataServiceTests]: no resource found for suffixes {-context.xml, Context.groovy}.
22:04:02.349 [main] INFO org.springframework.test.context.support.AnnotationConfigContextLoaderUtils - Could not detect default configuration classes for test class [de.eggheads.tools.fluxtest.DataServiceTests]: DataServiceTests does not declare any static, non-private, non-final, nested classes annotated with @Configuration.
22:04:02.393 [main] DEBUG org.springframework.test.context.support.ActiveProfilesUtils - Could not find an 'annotation declaring class' for annotation type [org.springframework.test.context.ActiveProfiles] and class [de.eggheads.tools.fluxtest.DataServiceTests]
22:04:02.458 [main] DEBUG org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider - Identified candidate component class: file [C:\dev\eclipse-2020-12\ws\fluxtest\target\classes\de\eggheads\tools\fluxtest\FluxtestApplication.class]
22:04:02.470 [main] INFO org.springframework.boot.test.context.SpringBootTestContextBootstrapper - Found #SpringBootConfiguration de.eggheads.tools.fluxtest.FluxtestApplication for test class de.eggheads.tools.fluxtest.DataServiceTests
22:04:02.571 [main] DEBUG org.springframework.boot.test.context.SpringBootTestContextBootstrapper - @TestExecutionListeners is not present for class [de.eggheads.tools.fluxtest.DataServiceTests]: using defaults.
22:04:02.572 [main] INFO org.springframework.boot.test.context.SpringBootTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener, org.springframework.boot.test.autoconfigure.webservices.client.MockWebServiceServerTestExecutionListener, org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener, org.springframework.test.context.event.ApplicationEventsTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener, org.springframework.test.context.event.EventPublishingTestExecutionListener]
22:04:02.587 [main] DEBUG org.springframework.boot.test.context.SpringBootTestContextBootstrapper - Skipping candidate TestExecutionListener [org.springframework.test.context.web.ServletTestExecutionListener] due to a missing dependency. Specify custom listener classes or make the default listener classes and their required dependencies available. Offending class: [javax/servlet/ServletContext]
22:04:02.598 [main] INFO org.springframework.boot.test.context.SpringBootTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener@2dd80673, org.springframework.test.context.event.ApplicationEventsTestExecutionListener@4af0df05, org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener@57ea113a, org.springframework.boot.test.autoconfigure.SpringBootDependencyInjectionTestExecutionListener@acdb094, org.springframework.test.context.support.DirtiesContextTestExecutionListener@674bd420, org.springframework.test.context.transaction.TransactionalTestExecutionListener@2b0f373b, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener@2ceb80a1, org.springframework.test.context.event.EventPublishingTestExecutionListener@4b45dcb8, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener@7216fb24, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener@2072acb2, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener@50ecde95, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener@35a9782c, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener@70a36a66, org.springframework.boot.test.autoconfigure.webservices.client.MockWebServiceServerTestExecutionListener@45815ffc]
22:04:02.603 [main] DEBUG org.springframework.test.context.support.AbstractDirtiesContextTestExecutionListener - Before test class: context [DefaultTestContext@5b068087 testClass = DataServiceTests, testInstance = [null], testMethod = [null], testException = [null], mergedContextConfiguration = [ReactiveWebMergedContextConfiguration@6f152006 testClass = DataServiceTests, locations = '{}', classes = '{class de.eggheads.tools.fluxtest.FluxtestApplication}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{org.springframework.boot.test.context.SpringBootTestContextBootstrapper=true}', contextCustomizers = set[org.springframework.boot.test.context.filter.ExcludeFilterContextCustomizer@1807f5a7, org.springframework.boot.test.json.DuplicateJsonObjectContextCustomizerFactory$DuplicateJsonObjectContextCustomizer@4dc27487, org.springframework.boot.test.mock.mockito.MockitoContextCustomizer@0, org.springframework.boot.test.web.client.TestRestTemplateContextCustomizer@1b66c0fb, org.springframework.boot.test.web.reactive.server.WebTestClientContextCustomizer@660acfb, org.springframework.boot.test.autoconfigure.actuate.metrics.MetricsExportContextCustomizerFactory$DisableMetricExportContextCustomizer@2d29b4ee, org.springframework.boot.test.autoconfigure.properties.PropertyMappingContextCustomizer@0, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverContextCustomizerFactory$Customizer@3b6d844d, org.springframework.boot.test.context.SpringBootTestArgs@1, org.springframework.boot.test.context.SpringBootTestWebEnvironment@78047b92], contextLoader = 'org.springframework.boot.test.context.SpringBootContextLoader', parent = [null]], attributes = map[[empty]]], class annotated with @DirtiesContext [false] with mode [null].
22:04:02.628 [main] DEBUG org.springframework.test.context.support.DependencyInjectionTestExecutionListener - Performing dependency injection for test context [[DefaultTestContext@5b068087 testClass = DataServiceTests, testInstance = de.eggheads.tools.fluxtest.DataServiceTests@5ee2b6f9, testMethod = [null], testException = [null], mergedContextConfiguration = [ReactiveWebMergedContextConfiguration@6f152006 testClass = DataServiceTests, locations = '{}', classes = '{class de.eggheads.tools.fluxtest.FluxtestApplication}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{org.springframework.boot.test.context.SpringBootTestContextBootstrapper=true}', contextCustomizers = set[org.springframework.boot.test.context.filter.ExcludeFilterContextCustomizer@1807f5a7, org.springframework.boot.test.json.DuplicateJsonObjectContextCustomizerFactory$DuplicateJsonObjectContextCustomizer@4dc27487, org.springframework.boot.test.mock.mockito.MockitoContextCustomizer@0, org.springframework.boot.test.web.client.TestRestTemplateContextCustomizer@1b66c0fb, org.springframework.boot.test.web.reactive.server.WebTestClientContextCustomizer@660acfb, org.springframework.boot.test.autoconfigure.actuate.metrics.MetricsExportContextCustomizerFactory$DisableMetricExportContextCustomizer@2d29b4ee, org.springframework.boot.test.autoconfigure.properties.PropertyMappingContextCustomizer@0, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverContextCustomizerFactory$Customizer@3b6d844d, org.springframework.boot.test.context.SpringBootTestArgs@1, org.springframework.boot.test.context.SpringBootTestWebEnvironment@78047b92], contextLoader = 'org.springframework.boot.test.context.SpringBootContextLoader', parent = [null]], attributes = map['org.springframework.test.context.event.ApplicationEventsTestExecutionListener.recordApplicationEvents' -> false]]].
22:04:02.662 [main] DEBUG org.springframework.test.context.support.TestPropertySourceUtils - Adding inlined properties to environment: {spring.jmx.enabled=false, org.springframework.boot.test.context.SpringBootTestContextBootstrapper=true}
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.4.4)
2021-04-12 22:04:02.963 INFO 18824 --- [ main] d.e.tools.fluxtest.DataServiceTests : Starting DataServiceTests using Java 11.0.9.1 on egg-note58 with PID 18824 (started by christianl in C:\dev\eclipse-2020-12\ws\fluxtest)
2021-04-12 22:04:02.965 INFO 18824 --- [ main] d.e.tools.fluxtest.DataServiceTests : No active profile set, falling back to default profiles: default
2021-04-12 22:04:03.469 INFO 18824 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data Reactive MongoDB repositories in DEFAULT mode.
2021-04-12 22:04:03.655 INFO 18824 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 182 ms. Found 2 Reactive MongoDB repository interfaces.
2021-04-12 22:04:05.646 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : note: noprealloc may hurt performance in many applications
2021-04-12 22:04:05.708 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.708+0200 I CONTROL [initandlisten] MongoDB starting : pid=18632 port=62655 dbpath=C:\Users\CHRIST~1\AppData\Local\Temp\embedmongo-db-2b40d458-5827-4772-ae88-9721f76da138 64-bit host=egg-note58
2021-04-12 22:04:05.708 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] targetMinOS: Windows Vista/Windows Server 2008
2021-04-12 22:04:05.708 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] db version v3.5.5
2021-04-12 22:04:05.708 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] git version: 98515c812b6fa893613f063dae568ff8319cbfbd
2021-04-12 22:04:05.709 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] allocator: tcmalloc
2021-04-12 22:04:05.709 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] modules: none
2021-04-12 22:04:05.709 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] build environment:
2021-04-12 22:04:05.709 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] distarch: x86_64
2021-04-12 22:04:05.709 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] target_arch: x86_64
2021-04-12 22:04:05.709 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.709+0200 I CONTROL [initandlisten] options: { net: { bindIp: "127.0.0.1", http: { enabled: false }, port: 62655 }, security: { authorization: "disabled" }, storage: { dbPath: "C:\Users\CHRIST~1\AppData\Local\Temp\embedmongo-db-2b40d458-5827-4772-ae88-9721f76da138", journal: { enabled: false }, mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
2021-04-12 22:04:05.709 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.710+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=15766M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),,log=(enabled=false),
2021-04-12 22:04:05.788 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.789+0200 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2021-04-12 22:04:05.788 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.789+0200 I CONTROL [initandlisten]
2021-04-12 22:04:05.788 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.789+0200 I CONTROL [initandlisten] ** NOTE: This is a development version (3.5.5) of MongoDB.
2021-04-12 22:04:05.789 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.789+0200 I CONTROL [initandlisten] ** Not recommended for production.
2021-04-12 22:04:05.789 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:05.789+0200 I CONTROL [initandlisten]
2021-04-12 22:04:06.151 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:06.152+0200 W FTDC [initandlisten] Failed to initialize Performance Counters for FTDC: WindowsPdhError: PdhExpandCounterPathW failed with 'Das angegebene Objekt wurde nicht auf dem Computer gefunden.' for counter '\Memory\Available Bytes'
2021-04-12 22:04:06.151 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:06.152+0200 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/Users/CHRIST~1/AppData/Local/Temp/embedmongo-db-2b40d458-5827-4772-ae88-9721f76da138/diagnostic.data'
2021-04-12 22:04:06.187 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:06.188+0200 I INDEX [initandlisten] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2021-04-12 22:04:06.187 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:06.188+0200 I INDEX [initandlisten] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2021-04-12 22:04:06.189 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:06.190+0200 I INDEX [initandlisten] build index done. scanned 0 total records. 0 secs
2021-04-12 22:04:06.190 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:06.191+0200 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.4
2021-04-12 22:04:06.191 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:06.192+0200 I NETWORK [thread1] waiting for connections on port 62655
2021-04-12 22:04:06.191 INFO 18824 --- [ main] d.f.embed.mongo.MongodExecutable : start de.flapdoodle.embed.mongo.config.MongodConfigBuilder$ImmutableMongodConfig#16a9eb2e
2021-04-12 22:04:06.439 INFO 18824 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[localhost:62655], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
2021-04-12 22:04:07.344 INFO 18824 --- [ main] d.e.tools.fluxtest.DataServiceTests : Started DataServiceTests in 4.669 seconds (JVM running for 5.804)
2021-04-12 22:04:07.378 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:07.378+0200 I NETWORK [thread1] connection accepted from 127.0.0.1:62710 #1 (1 connection now open)
2021-04-12 22:04:07.378 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:07.378+0200 I NETWORK [thread1] connection accepted from 127.0.0.1:62711 #2 (2 connections now open)
2021-04-12 22:04:07.423 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:07.423+0200 I NETWORK [conn2] received client metadata from 127.0.0.1:62711 conn2: { driver: { name: "mongo-java-driver|reactive-streams|spring-boot", version: "4.1.2" }, os: { type: "Windows", name: "Windows 10", architecture: "amd64", version: "10.0" }, platform: "Java/AdoptOpenJDK/11.0.9.1+1" }
2021-04-12 22:04:07.423 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:07.423+0200 I NETWORK [conn1] received client metadata from 127.0.0.1:62710 conn1: { driver: { name: "mongo-java-driver|reactive-streams|spring-boot", version: "4.1.2" }, os: { type: "Windows", name: "Windows 10", architecture: "amd64", version: "10.0" }, platform: "Java/AdoptOpenJDK/11.0.9.1+1" }
2021-04-12 22:04:07.450 INFO 18824 --- [localhost:62655] org.mongodb.driver.connection : Opened connection [connectionId{localValue:2, serverValue:2}] to localhost:62655
2021-04-12 22:04:07.450 INFO 18824 --- [localhost:62655] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:1}] to localhost:62655
2021-04-12 22:04:07.451 INFO 18824 --- [localhost:62655] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=localhost:62655, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=null, roundTripTimeNanos=65840300}
2021-04-12 22:04:07.903 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:07.903+0200 I NETWORK [thread1] connection accepted from 127.0.0.1:62713 #3 (3 connections now open)
2021-04-12 22:04:07.909 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:07.909+0200 I NETWORK [conn3] received client metadata from 127.0.0.1:62713 conn3: { driver: { name: "mongo-java-driver|reactive-streams|spring-boot", version: "4.1.2" }, os: { type: "Windows", name: "Windows 10", architecture: "amd64", version: "10.0" }, platform: "Java/AdoptOpenJDK/11.0.9.1+1" }
2021-04-12 22:04:07.912 INFO 18824 --- [ntLoopGroup-2-3] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3, serverValue:3}] to localhost:62655
2021-04-12 22:04:14.110 INFO 18824 --- [extShutdownHook] org.mongodb.driver.connection : Closed connection [connectionId{localValue:3, serverValue:3}] to localhost:62655 because the pool has been closed.
2021-04-12 22:04:14.112 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:14.113+0200 I - [conn3] end connection 127.0.0.1:62713 (3 connections now open)
2021-04-12 22:04:14.113 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:14.113+0200 I - [conn1] end connection 127.0.0.1:62710 (2 connections now open)
2021-04-12 22:04:14.113 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:14.113+0200 I - [conn2] end connection 127.0.0.1:62711 (1 connection now open)
2021-04-12 22:04:16.206 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.206+0200 I NETWORK [thread1] connection accepted from 127.0.0.1:62715 #4 (1 connection now open)
2021-04-12 22:04:16.206 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.206+0200 I COMMAND [conn4] terminating, shutdown command received
2021-04-12 22:04:16.206 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.206+0200 I NETWORK [conn4] shutdown: going to close listening sockets...
2021-04-12 22:04:16.206 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.206+0200 I NETWORK [conn4] closing listening socket: 540
2021-04-12 22:04:16.206 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.206+0200 I NETWORK [conn4] shutdown: going to flush diaglog...
2021-04-12 22:04:16.206 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.206+0200 I FTDC [conn4] Shutting down full-time diagnostic data capture
2021-04-12 22:04:16.212 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.212+0200 I STORAGE [conn4] WiredTigerKVEngine shutting down
2021-04-12 22:04:16.307 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.307+0200 I STORAGE [conn4] shutdown: removing fs lock...
2021-04-12 22:04:16.308 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.307+0200 I CONTROL [conn4] now exiting
2021-04-12 22:04:16.308 INFO 18824 --- [ Thread-1] o.s.b.a.mongo.embedded.EmbeddedMongo : 2021-04-12T22:04:16.307+0200 I CONTROL [conn4] shutting down with code:0

Kafka is giving: "The group member needs to have a valid member id before actually entering a consumer group"

I am using Kafka to consume messages in Java. I want to test by starting the same app multiple times on my local box. The first instance I start is able to consume messages from the topic. When I start a second one I get:
Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
and don't get any messages from the topic. If I start more instances I get the same issue.
The configuration I am using for Kafka is
spring:
  kafka:
    bootstrap-servers: kafka:9092
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      properties:
        spring.deserializer.key.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.use.type.headers: false
    listener:
      missing-topics-fatal: false
I have two topics
@Configuration
public class KafkaTopics {

    @Bean("alertsTopic")
    public NewTopic alertsTopic() {
        return TopicBuilder.name("XXX.alerts")
                .compact()
                .build();
    }

    @Bean("serversTopic")
    public NewTopic serversTopic() {
        return TopicBuilder.name("XXX.servers")
                .compact()
                .build();
    }
}
And two listeners in different class files.
@KafkaListener(topics = SERVERS_KAFKA_TOPIC, id = "#{T(java.util.UUID).randomUUID().toString()}",
    properties = {
        "spring.json.key.default.type=java.lang.String",
        "spring.json.value.default.type=com.devhaus.learningjungle.db.kafka.ServerInfo"
    })
public void registerServer(
    @Payload(required = false) ServerInfo serverInfo
)

@KafkaListener(topics = ALERTS_KAFKA_TOPIC,
    id = "#{T(java.util.UUID).randomUUID().toString()}",
    properties = {
        "spring.json.key.default.type=com.devhaus.learningjungle.db.kafka.AlertOnKafkaKey",
        "spring.json.value.default.type=com.devhaus.learningjungle.db.kafka.AlertOnKafka"
    })
public void processAlert(
    @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) AlertOnKafkaKey key,
    @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partitionId,
    @Header(KafkaHeaders.OFFSET) long offset,
    @Payload(required = false) AlertOnKafka alert)
From my analysis, this is normal behaviour; you can change the log levels to exclude it.
The reason is that if the broker detects the client can support member.id, it sends this error back to the client. This is noted in KIP-394.
The client then retries the join with the member ID generated for it by the broker.
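A minimal sketch of the log-level change mentioned above, assuming Spring Boot's YAML logging properties (the logs further down show the message is emitted at INFO by AbstractCoordinator, so raising that logger to WARN hides it):
logging:
  level:
    org.apache.kafka.clients.consumer.internals.AbstractCoordinator: WARN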
If the answer from @Archimedes Trajano doesn't work for you (as in my case), this happens when Kafka can't pick up the consumer group id.
If you have a single consumer group, you can specify it in the properties file like this:
spring:
  kafka:
    bootstrap-servers: kafka:9092
    consumer:
      group-id: insert-your-consumer-group-id
      ... rest of your properties ...
or, if you have multiple consumers, you can specify the groupId for each one:
@KafkaListener(topics = "topic-1", groupId = "group-1")
public void registerServer(@Payload(required = false) ServerInfo serverInfo)

@KafkaListener(topics = "topic-2", groupId = "group-2")
public void processAlert(@Payload(required = false) AlertOnKafka alert)
docs: https://docs.spring.io/spring-kafka/reference/html/#annotation-properties
I encountered the same issue, which also prevented my consumer from subscribing to certain topics.
I figured that the member.id might be similar to the client-id (I am using Camel here), which might in some way be related to the consumer group.
What fixed it for me was switching to a fixed, non-changing client-id for my consuming service, leaving the consumer group as is.
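In Spring Kafka the equivalent would be pinning the consumer client id in the application properties; a minimal sketch, assuming the YAML layout used in the question (the value itself is hypothetical):
spring:
  kafka:
    consumer:
      client-id: my-service-consumer   # hypothetical fixed value; keep it stable across restarts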
Hmm... in my case, consumers rejoin the group with a generated ID after this error. Here is my test and the result, FYI:
@Test
void testSyncSend() throws ExecutionException, InterruptedException {
    int id = (int) (System.currentTimeMillis() / 1000);
    SendResult result = producer.syncSend(id);
    logger.info("[testSyncSend] id:{}, result:{}", id, result);
    new CountDownLatch(1).await(); // block so the listener threads keep consuming
}
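For context, a minimal sketch of the listener side this test exercises (a hypothetical reconstruction: the logs below show four consumers in group "listener1" on topic "test1", consistent with a container concurrency of 4):
// Hypothetical listener; the actual payload type in the test is a Message entity.
@KafkaListener(id = "listener1", topics = "test1", concurrency = "4")
public void consume(String message) {
    logger.info("[KafakaConsumer][consume] thread:{} received message:{}",
            Thread.currentThread().getId(), message);
}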
2021-04-20 14:32:20.980 INFO 6672 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.1
2021-04-20 14:32:20.981 INFO 6672 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 0efa8fb0f4c73d92
2021-04-20 14:32:20.981 INFO 6672 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1618900340980
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-1, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-3-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-4, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-2-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-3, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.125 INFO 6672 --- [listener1-1-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-listener1-2, groupId=listener1] Cluster ID: RctzTn4XR4WNNVuqh25izw
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.127 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Discovered group coordinator localhost:29092 (id: 2147483646 rack: null)
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.130 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.147 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.148 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] (Re-)joining group
2021-04-20 14:32:21.317 INFO 6672 --- [ main] c.z.s.cbaConnector.KafkaProducerTest : [testSyncSend] id:1618900340, result:SendResult [producerRecord=ProducerRecord(topic=test1, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = __TypeId__, value = [99, 111, 109, 46, 122, 104, 105, 108, 105, 46, 115, 109, 115, 109, 111, 100, 117, 108, 101, 46, 101, 110, 116, 105, 116, 121, 46, 77, 101, 115, 115, 97, 103, 101])], isReadOnly = true), key=null, value=Message(id=1618900340), timestamp=null), recordMetadata=test1-1#0]
2021-04-20 14:32:23.770 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Finished assignment for group at generation 2: {consumer-listener1-4-9a383250-e84d-413a-b012-85405abdcf7f=Assignment(partitions=[test1-8, test1-9]), consumer-listener1-2-3d26d9ef-b973-4d19-a930-5ba77d938680=Assignment(partitions=[test1-3, test1-4, test1-5]), consumer-listener1-1-10b6895e-264e-45bd-ba90-c71ea12b21e5=Assignment(partitions=[test1-0, test1-1, test1-2]), consumer-listener1-3-54ce965a-87cd-4e28-b0e9-b0f2c9f69423=Assignment(partitions=[test1-6, test1-7])}
2021-04-20 14:32:23.775 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.775 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.775 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.776 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Successfully joined group with generation 2
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Adding newly assigned partitions: test1-5, test1-4, test1-3
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Adding newly assigned partitions: test1-6, test1-7
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Adding newly assigned partitions: test1-0, test1-2, test1-1
2021-04-20 14:32:23.779 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Adding newly assigned partitions: test1-9, test1-8
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Found no committed offset for partition test1-8
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Found no committed offset for partition test1-6
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Found no committed offset for partition test1-2
2021-04-20 14:32:23.789 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Found no committed offset for partition test1-1
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-1, groupId=listener1] Setting offset for partition test1-0 to the committed offset FetchPosition{offset=14, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Setting offset for partition test1-5 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-4, groupId=listener1] Setting offset for partition test1-9 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-3, groupId=listener1] Setting offset for partition test1-7 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Setting offset for partition test1-4 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.791 INFO 6672 --- [listener1-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-listener1-2, groupId=listener1] Setting offset for partition test1-3 to the committed offset FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:29092 (id: 1 rack: null)], epoch=0}}
2021-04-20 14:32:23.792 INFO 6672 --- [listener1-1-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-5, test1-4, test1-3]
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-3-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-4, groupId=listener1] Resetting offset for partition test1-8 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-2-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-3, groupId=listener1] Resetting offset for partition test1-6 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-1, groupId=listener1] Resetting offset for partition test1-2 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-listener1-1, groupId=listener1] Resetting offset for partition test1-1 to offset 0.
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-3-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-9, test1-8]
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-2-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-6, test1-7]
2021-04-20 14:32:23.804 INFO 6672 --- [listener1-0-C-1] o.s.k.l.KafkaMessageListenerContainer : listener1: partitions assigned: [test1-0, test1-2, test1-1]
2021-04-20 14:32:23.817 INFO 6672 --- [listener1-0-C-1] c.z.s.cbaConnector.KafkaConsumer : [KafakaConsumer][consume] thread:21 received message:{"id":1618900340}

Spring Kafka: Metadata is not ready: we have not fetched metadata from the bootstrap nodes yet

I am integrating Spring and Kafka but am not able to establish a connection to the Kafka broker.
I have two machines: Kafka is set up on machine A and I want to connect from machine B, but I have a connection-related problem. In the trace I get these lines repeated:
Trying to choose nodes for [Call(callName=describeTopics, deadlineMs=1588452756281)] at 1588452636314
Metadata is not ready: we have not fetched metadata from the bootstrap nodes yet.
Unable to assign Call(callName=describeTopics, deadlineMs=1588452756281) to a node.
Client is not ready to send to 192.168.1.2:9092 (id: -1 rack: null). Must delay 9223372036854775807 ms
Entering KafkaClient#poll(timeout=100)
KafkaClient#poll retrieved 0 response(s)
Log file
2020-05-03 02:20:36.133 INFO 22603 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'taskScheduler'
2020-05-03 02:20:36.222 INFO 22603 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [192.168.1.2:9092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2020-05-03 02:20:36.230 DEBUG 22603 --- [ main] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [192.168.1.2:9092 (id: -1 rack: null)], partitions = [], controller = null).
2020-05-03 02:20:36.235 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=count, group=kafka-metrics-count, description=total number of registered metrics, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.246 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-closed:
2020-05-03 02:20:36.249 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=connection-close-total, group=admin-client-metrics, description=The total number of connections closed, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.250 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=connection-close-rate, group=admin-client-metrics, description=The number of connections closed per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.250 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-created:
2020-05-03 02:20:36.251 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=connection-creation-total, group=admin-client-metrics, description=The total number of new connections established, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.252 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=connection-creation-rate, group=admin-client-metrics, description=The number of new connections established per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.252 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication:
2020-05-03 02:20:36.252 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=successful-authentication-total, group=admin-client-metrics, description=The total number of connections with successful authentication, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.252 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=successful-authentication-rate, group=admin-client-metrics, description=The number of connections with successful authentication per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.252 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-reauthentication:
2020-05-03 02:20:36.253 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=successful-reauthentication-total, group=admin-client-metrics, description=The total number of successful re-authentication of connections, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.253 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=successful-reauthentication-rate, group=admin-client-metrics, description=The number of successful re-authentication of connections per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.253 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication-no-reauth:
2020-05-03 02:20:36.253 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=successful-authentication-no-reauth-total, group=admin-client-metrics, description=The total number of connections with successful authentication where the client does not support re-authentication, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.254 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-authentication:
2020-05-03 02:20:36.255 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=failed-authentication-total, group=admin-client-metrics, description=The total number of connections with failed authentication, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.256 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=failed-authentication-rate, group=admin-client-metrics, description=The number of connections with failed authentication per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.256 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-reauthentication:
2020-05-03 02:20:36.256 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=failed-reauthentication-total, group=admin-client-metrics, description=The total number of failed re-authentication of connections, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.257 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=failed-reauthentication-rate, group=admin-client-metrics, description=The number of failed re-authentication of connections per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.257 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name reauthentication-latency:
2020-05-03 02:20:36.257 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=reauthentication-latency-max, group=admin-client-metrics, description=The max latency observed due to re-authentication, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.258 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=reauthentication-latency-avg, group=admin-client-metrics, description=The average latency observed due to re-authentication, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.258 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent-received:
2020-05-03 02:20:36.259 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=network-io-total, group=admin-client-metrics, description=The total number of network operations (reads or writes) on all connections, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.260 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=network-io-rate, group=admin-client-metrics, description=The number of network operations (reads or writes) on all connections per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.260 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent:
2020-05-03 02:20:36.260 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=outgoing-byte-total, group=admin-client-metrics, description=The total number of outgoing bytes sent to all servers, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.261 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=outgoing-byte-rate, group=admin-client-metrics, description=The number of outgoing bytes sent to all servers per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.261 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=request-total, group=admin-client-metrics, description=The total number of requests sent, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.261 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=request-rate, group=admin-client-metrics, description=The number of requests sent per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.262 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=request-size-avg, group=admin-client-metrics, description=The average size of requests sent., tags={client-id=adminclient-1}]
2020-05-03 02:20:36.262 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=request-size-max, group=admin-client-metrics, description=The maximum size of any request sent., tags={client-id=adminclient-1}]
2020-05-03 02:20:36.262 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-received:
2020-05-03 02:20:36.263 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=incoming-byte-total, group=admin-client-metrics, description=The total number of bytes read off all sockets, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.263 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=incoming-byte-rate, group=admin-client-metrics, description=The number of bytes read off all sockets per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.264 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=response-total, group=admin-client-metrics, description=The total number of responses received, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.264 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=response-rate, group=admin-client-metrics, description=The number of responses received per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.264 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name select-time:
2020-05-03 02:20:36.264 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=select-total, group=admin-client-metrics, description=The total number of times the I/O layer checked for new I/O to perform, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.265 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=select-rate, group=admin-client-metrics, description=The number of times the I/O layer checked for new I/O to perform per second, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.265 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=io-wait-time-ns-avg, group=admin-client-metrics, description=The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds., tags={client-id=adminclient-1}]
2020-05-03 02:20:36.266 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=io-waittime-total, group=admin-client-metrics, description=The total time the I/O thread spent waiting, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.266 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=io-wait-ratio, group=admin-client-metrics, description=The fraction of time the I/O thread spent waiting, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.267 DEBUG 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name io-time:
2020-05-03 02:20:36.267 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=io-time-ns-avg, group=admin-client-metrics, description=The average length of time for I/O per select call in nanoseconds., tags={client-id=adminclient-1}]
2020-05-03 02:20:36.267 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=iotime-total, group=admin-client-metrics, description=The total time the I/O thread spent doing I/O, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.267 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=io-ratio, group=admin-client-metrics, description=The fraction of time the I/O thread spent doing I/O, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.268 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=connection-count, group=admin-client-metrics, description=The current number of active connections., tags={client-id=adminclient-1}]
2020-05-03 02:20:36.276 INFO 22603 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.3.1
2020-05-03 02:20:36.276 INFO 22603 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 18a913733fb71c01
2020-05-03 02:20:36.276 INFO 22603 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1588452636274
2020-05-03 02:20:36.279 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=version, group=app-info, description=Metric indicating version, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.279 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=commit-id, group=app-info, description=Metric indicating commit-id, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.279 TRACE 22603 --- [ main] org.apache.kafka.common.metrics.Metrics : Registered metric named MetricName [name=start-time-ms, group=app-info, description=Metric indicating start-time-ms, tags={client-id=adminclient-1}]
2020-05-03 02:20:36.280 DEBUG 22603 --- [ main] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Kafka admin client initialized
2020-05-03 02:20:36.280 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Thread starting
2020-05-03 02:20:36.281 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Trying to choose nodes for [] at 1588452636280
2020-05-03 02:20:36.283 TRACE 22603 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Found least loaded node 192.168.1.2:9092 (id: -1 rack: null) with no active connection
2020-05-03 02:20:36.283 DEBUG 22603 --- [ main] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1588452756281) with a timeout 120000 ms from now.
2020-05-03 02:20:36.283 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Assigned Call(callName=fetchMetadata, deadlineMs=1588452756280) to node 192.168.1.2:9092 (id: -1 rack: null)
2020-05-03 02:20:36.285 DEBUG 22603 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node 192.168.1.2:9092 (id: -1 rack: null) using address /192.168.1.2
2020-05-03 02:20:36.311 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Client is not ready to send to 192.168.1.2:9092 (id: -1 rack: null). Must delay 9223372036854775807 ms
2020-05-03 02:20:36.311 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Entering KafkaClient#poll(timeout=1200000)
2020-05-03 02:20:36.314 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] KafkaClient#poll retrieved 0 response(s)
2020-05-03 02:20:36.314 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Trying to choose nodes for [Call(callName=describeTopics, deadlineMs=1588452756281)] at 1588452636314
2020-05-03 02:20:36.315 TRACE 22603 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Metadata is not ready: we have not fetched metadata from the bootstrap nodes yet.
2020-05-03 02:20:36.315 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Unable to assign Call(callName=describeTopics, deadlineMs=1588452756281) to a node.
2020-05-03 02:20:36.315 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Client is not ready to send to 192.168.1.2:9092 (id: -1 rack: null). Must delay 9223372036854775807 ms
2020-05-03 02:20:36.315 TRACE 22603 --- [| adminclient-1] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Entering KafkaClient#poll(timeout=100)
Spring configuration
spring:
  kafka:
    consumer:
      bootstrap-servers: 192.168.1.2:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: 192.168.1.2:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
Kafka server.properties file
broker.id=0
advertised.listeners=PLAINTEXT://192.168.1.2.com:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
