KafkaTemplate connection with broker - Java

I've got a @KafkaListener method in my service which processes messages and sends them to another topic using KafkaTemplate, and from time to time it completely stops working for some reason:
2022-10-04 16:53:18.218 ERROR 1 --- [pool-1-thread-2] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{"type":"INFORMATION","messageId":"f39fabfd-e560-499b-9850-440ad811657b","phoneNumber":"+100000000...' to topic ss.fb.processing-notifications.send:
org.apache.kafka.common.errors.TimeoutException: Topic ss.fb.processing-notifications.send not present in metadata after 60000 ms.
2022-10-04 16:53:33.013 INFO 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Disconnecting from node -2 due to socket connection setup timeout. The timeout value is 29794 ms.
2022-10-04 16:53:33.014 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker prd-mqueue-srv2.obi.ru:9092 (id: -2 rack: null) disconnected
2022-10-04 16:53:41.005 INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-nepcNotificationsGroup-1, groupId=nepcNotificationsGroup] Disconnecting from node -3 due to socket connection setup timeout. The timeout value is 27831 ms.
2022-10-04 16:53:41.005 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-nepcNotificationsGroup-1, groupId=nepcNotificationsGroup] Bootstrap broker prd-mqueue-srv3.obi.ru:9092 (id: -3 rack: null) disconnected
There seem to be some network issues, yet after restarting the service everything works fine again. Still, I wonder why the broker eventually ends up disconnected. Isn't the producer supposed to keep retrying the send to the broker until it succeeds?

You can wrap the KafkaTemplate call in a RetryTemplate or a @Retryable method - see https://github.com/spring-projects/spring-retry - the RetryTemplate is already on the classpath as a transitive dependency of spring-kafka.
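For illustration, a minimal programmatic sketch using RetryTemplate (the class name, topic, and retry settings below are placeholders, not from the question):

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class RetryingSender {

    private final KafkaTemplate<String, String> template;
    private final RetryTemplate retryTemplate = new RetryTemplate();

    public RetryingSender(KafkaTemplate<String, String> template) {
        this.template = template;
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1_000L);  // first retry after 1s
        backOff.setMaxInterval(30_000L);     // cap the delay at 30s
        retryTemplate.setBackOffPolicy(backOff);
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5)); // give up after 5 attempts
    }

    public void send(String topic, String payload) throws Exception {
        // get() blocks until the broker acks or the send fails, so the
        // TimeoutException surfaces here and triggers the next attempt.
        retryTemplate.execute(ctx -> template.send(topic, payload).get());
    }
}

The @Retryable variant is the same idea declaratively: annotate the sending method and enable it with @EnableRetry on a configuration class.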

Related

Spring Boot Kafka automatically starting idempotent producer in the consumer application

We are experiencing this issue in the dev environment, which previously worked fine. In the local environment the application runs without starting an idempotent producer, and that is the expected behavior here.
Issue: sometimes an idempotent producer starts automatically when the Spring Boot application is built, and the consumer then fails to consume the message produced by the actual producer.
Adding a snippet of relevant log info:
2022-07-05 15:17:54.449 WARN 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Destination resolver returned non-existent partition consumer-topic.DLT-1, KafkaProducer will determine partition to use for this topic
2022-07-05 15:17:54.853 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [kafka server urls]
buffer.memory = 33554432
...
2022-07-05 15:17:55.047 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Instantiated an idempotent producer.
2022-07-05 15:17:55.347 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.0.1
2022-07-05 15:17:55.348 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId:
2022-07-05 15:17:57.162 INFO 7 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: XFGlM9HVScGD-PafRlFH7g
2022-07-05 15:17:57.169 INFO 7 --- [ad | producer-1] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-1] ProducerId set to 6013 with epoch 0
2022-07-05 15:18:56.681 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='byte[63]' to topic consumer-topic.DLT:
org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
2022-07-05 15:18:56.748 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Dead-letter publication to consumer-topic.DLTfailed for: consumer-topic-1#28
org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:660) ~[spring-kafka-2.8.5.jar!/:2.8.5]
...
2022-07-05 15:18:56.751 ERROR 7 --- [ntainer#0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (consumer-topic-1#28) failed
2022-07-05 15:18:56.758 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-consumer-topic-group-test-1, groupId=consumer-topic-group-test] Seeking to offset 28 for partition c-1
2022-07-05 15:18:56.761 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : Error handler threw an exception
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is java.lang.IllegalStateException: No type information in headers and no default type provided
As the logs show, the application started an idempotent producer automatically, and after starting it began throwing errors.
Context: we have two microservices. One microservice publishes the messages and contains the producer config; the second microservice only consumes the messages and does not contain any producer config.
The YAML configuration for the producer application:
kafka:
  bootstrap-servers: "kafka urls"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  producer:
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    properties:
      spring.json.trusted.packages: "*"
    acks: 1
The YAML configuration for the consumer application:
kafka:
  bootstrap-servers: "kafka URL, kafka url2"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  consumer:
    enable-auto-commit: true
    auto-offset-reset: latest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    properties:
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
      spring.json.trusted.packages: "*"

consumer-topic:
  topic: consumer-topic
  group-abc:
    group-id: consumer-topic-group-abc
The Kafka bean for the default error handler:
@Bean
public CommonErrorHandler errorHandler(KafkaOperations<Object, Object> kafkaOperations) {
    // publishes failed records to <original topic>.DLT via the given template
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(kafkaOperations));
}
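Note the first WARN line above: by default, DeadLetterPublishingRecoverer sends the failed record to <original topic>.DLT using the same partition as the original record, so consumer-topic.DLT must already exist and be visible to the consumer application's credentials; the TimeoutException is what surfaces when it is not. As a sketch (not necessarily the fix for the root cause here), a custom destination resolver can at least leave partition selection to the producer:

@Bean
public CommonErrorHandler errorHandler(KafkaOperations<Object, Object> kafkaOperations) {
    // A negative partition leaves the partition unset on the outgoing record,
    // so the KafkaProducer chooses one itself.
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(
            kafkaOperations,
            (record, ex) -> new TopicPartition(record.topic() + ".DLT", -1)));
}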
We know a temporary fix: if we delete the group id and recreate it, the application works again. But after some deployments the issue comes back, and we don't know the root cause.
Please guide.

Spring Boot Kafka connection problem for Bitnami Kafka Docker service [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 9 months ago.
Everything is OK on the terminals: I can send a message via the console producer and receive it on the console consumer. But I am not able to get the same result via the Java KafkaProducer.
kafka:
  image: 'bitnami/kafka:latest'
  ports:
    - "9092:9092"
    - '9093:9093'
  environment:
    - KAFKA_BROKER_ID=1
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_CFG_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093
    - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
  depends_on:
    - zookeeper
zookeeper:
  image: 'bitnami/zookeeper:latest'
  ports:
    - "2181:2181"
  environment:
    - ALLOW_ANONYMOUS_LOGIN=yes
networks:
  microservicesnet:
    driver: bridge
Java code for the KafkaProducer:
public Producer<String, String> setUpKafkaProperties() {
    Properties properties = new Properties();
    // Update the IP address of the Kafka server here
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    properties.put("acks", "all");
    properties.put("retries", 0);
    properties.put("linger.ms", 0);
    properties.put("partitioner.class", "org.apache.kafka.clients.producer.internals.DefaultPartitioner");
    properties.put("request.timeout.ms", 30000);
    properties.put("timeout.ms", 30000);
    properties.put("max.in.flight.requests.per.connection", 5);
    properties.put("retry.backoff.ms", 5);
    // Instantiate the Producer
    Producer<String, String> producer = new KafkaProducer<String, String>(properties);
    return producer;
}
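For reference, a hypothetical usage of this method (the topic name and callback are illustrative):

try (Producer<String, String> producer = setUpKafkaProperties()) {
    producer.send(new ProducerRecord<>("test-topic", "key", "value"),
            (metadata, exception) -> {
                if (exception != null) {
                    // with an unreachable broker, this is where the
                    // TimeoutException eventually shows up
                    exception.printStackTrace();
                } else {
                    System.out.printf("sent to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
}

Note that send() is asynchronous: the warnings below are logged by the producer's network thread while it keeps trying to reach the bootstrap server.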
ERROR LOG:
2022-05-21 14:55:21.240 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:9093) could not be established. Broker may not be available.
2022-05-21 14:55:21.240 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected
2022-05-21 14:55:22.254 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:9093) could not be established. Broker may not be available.
Can you try removing the quotes and double quotes in the port-forwarding part? In my docker-compose file I don't use any quotes for port forwarding.

Is there a "Circuit Breaker" for Spring Boot Kafka client?

When the Kafka server is (temporarily) down, the ReactiveKafkaConsumerTemplate in my Spring Boot application keeps trying to connect, unsuccessfully, causing unnecessary traffic and cluttering the log files:
2021-11-10 14:45:30.265 WARN 24984 --- [onsumer-group-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-group-1, groupId=consumer-group] Connection to node -1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available.
2021-11-10 14:45:32.792 WARN 24984 --- [onsumer-group-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-group-1, groupId=consumer-group] Bootstrap broker localhost:29092 (id: -1 rack: null) disconnected
2021-11-10 14:45:34.845 WARN 24984 --- [onsumer-group-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-group-1, groupId=consumer-group] Connection to node -1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available.
2021-11-10 14:45:34.845 WARN 24984 --- [onsumer-group-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-group-1, groupId=consumer-group] Bootstrap broker localhost:29092 (id: -1 rack: null) disconnected
Is it possible to use something like a circuit breaker (an inspiration here or here), so that after a failure (or even better, after a few consecutive failures) the Spring Boot Kafka client slows down the pace of its connection attempts, and returns to the normal pace only after the server is up again?
Is there already a ready-made config parameter, or any other solution?
I am aware of the parameter reconnect.backoff.ms; this is how I create the ReactiveKafkaConsumerTemplate bean:
@Bean
public ReactiveKafkaConsumerTemplate<String, MyEvent> kafkaConsumer(KafkaProperties properties) {
    final Map<String, Object> map = new HashMap<>(properties.buildConsumerProperties());
    map.put(ConsumerConfig.GROUP_ID_CONFIG, "MyGroup");
    map.put(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG, 10_000L);
    final JsonDeserializer<MyEvent> jsonDeserializer = new JsonDeserializer<>();
    jsonDeserializer.addTrustedPackages("com.example.myapplication");
    return new ReactiveKafkaConsumerTemplate<>(
            ReceiverOptions
                    .<String, MyEvent>create(map)
                    .withKeyDeserializer(new ErrorHandlingDeserializer<>(new StringDeserializer()))
                    .withValueDeserializer(new ErrorHandlingDeserializer<>(jsonDeserializer))
                    .subscription(List.of("MyTopic")));
}
And still the consumer is trying to connect every 3 seconds.
See https://kafka.apache.org/documentation/#consumerconfigs_retry.backoff.ms
The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
and https://kafka.apache.org/documentation/#consumerconfigs_reconnect.backoff.max.ms
The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
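So the consumer already supports exponential backoff; it just needs reconnect.backoff.max.ms to be set. Amending the map in the bean above (the values are illustrative):

map.put(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG, 10_000L);          // base delay between retries
map.put(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG, 10_000L);      // base delay per reconnect attempt
map.put(ConsumerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, 120_000L); // per-host backoff grows exponentially up to this cap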

Problem with Kafka client when broker is down

I am seeing this exception in my kafka client when the broker is down:
java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2452)
at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2436)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1217)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at com.actimize.infrastructure.config.KafkaAlertsDistributor$1.run(KafkaAlertsDistributor.java:71)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The problem is, I am not running a multi-threaded application. I am running a hello-world example with a single thread and wanted to see how it behaves when the broker is down (because I want to start the broker later in unit tests).
Here's my code, give or take:
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.execute(createRunnable());
...

// in the runnable's run method
Properties props = // create props
consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("test-topic"));
while (true) {
    ConsumerRecords<String, String> records = null;
    try {
        System.out.println("going to poll");
        records = consumer.poll(Duration.ofSeconds(1));
        System.out.println("finished polling, got " + records.count() + " records");
    } catch (WakeupException e) {
        e.printStackTrace();
        continue;
    } catch (Throwable e) {
        e.printStackTrace();
    }
    for (ConsumerRecord<String, String> record : records) {
        Map<String, Object> data = new HashMap<>();
        data.put("partition", record.partition());
        data.put("offset", record.offset());
        data.put("value", record.value());
        System.out.println("consumer got: " + data);
    }
}
When the broker is down, the poll() method works fine the first 4 or 5 times: it returns zero records and prints a warning to the log. By the 5th or 6th call it starts outputting this error.
Here is a full log. It shows that there are two threads (pool-3 and pool-4) doing some work behind the scenes; I am not sure why this is happening, it's not coming from my code.
2021-02-21 12:16:00,057 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
2021-02-21 12:16:00,404 WARN [pool-3-thread-1] clients.NetworkClient (NetworkClient.java:757) - [Consumer clientId=consumer-consumer-tutorial-1, groupId=consumer-tutorial] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-02-21 12:16:00,404 WARN [pool-3-thread-1] clients.NetworkClient$DefaultMetadataUpdater (NetworkClient.java:1033) - [Consumer clientId=consumer-consumer-tutorial-1, groupId=consumer-tutorial] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
2021-02-21 12:16:01,057 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:70) - finished polling, got 0 records
2021-02-21 12:16:01,057 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
2021-02-21 12:16:02,057 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:70) - finished polling, got 0 records
2021-02-21 12:16:02,057 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
2021-02-21 12:16:02,427 INFO [pool-4-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
2021-02-21 12:16:02,923 WARN [pool-3-thread-1] clients.NetworkClient (NetworkClient.java:757) - [Consumer clientId=consumer-consumer-tutorial-1, groupId=consumer-tutorial] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-02-21 12:16:02,924 WARN [pool-3-thread-1] clients.NetworkClient$DefaultMetadataUpdater (NetworkClient.java:1033) - [Consumer clientId=consumer-consumer-tutorial-1, groupId=consumer-tutorial] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
2021-02-21 12:16:03,058 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:70) - finished polling, got 0 records
2021-02-21 12:16:03,058 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
2021-02-21 12:16:03,061 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:75) - error
java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2452)
at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2436)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1217)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at com.actimize.infrastructure.config.KafkaConsumerSample$1.run(KafkaConsumerSample.java:69)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "pool-3-thread-1" java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2452)
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:2335)
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:2290)
at com.actimize.infrastructure.config.KafkaConsumerSample$1.run(KafkaConsumerSample.java:88)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-02-21 12:16:03,429 INFO [pool-4-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:70) - finished polling, got 0 records
2021-02-21 12:16:03,429 INFO [pool-4-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
Looking at the logs you've shared, two threads start polling at almost the same time:
2021-02-21 12:16:02,057 INFO [pool-3-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
2021-02-21 12:16:02,427 INFO [pool-4-thread-1] config.KafkaConsumerSample$1 (KafkaConsumerSample.java:68) - going to poll
Extra measures need to be taken to implement a multithreaded consumer correctly (see the sketch below).
The most important points you may want to tackle are:
Ensure that records from the same partitions are processed only by one thread at a time
Commit offsets only after records are processed
Handle group rebalancing properly
Further reading: Kafka Consumer Multi Threaded Messaging
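The simplest safe pattern is to confine each KafkaConsumer to the thread that created it; only wakeup() may be called from another thread. A sketch under that assumption (class, topic, and settings are placeholders):

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class ConsumerLoop implements Runnable {

    private final Properties props;
    private volatile KafkaConsumer<String, String> consumer;

    public ConsumerLoop(Properties props) {
        this.props = props;
    }

    @Override
    public void run() {
        consumer = new KafkaConsumer<>(props);               // created on the polling thread
        try {
            consumer.subscribe(Arrays.asList("test-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%d@%d: %s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        } catch (WakeupException e) {
            // expected: thrown inside poll() after shutdown() calls wakeup()
        } finally {
            consumer.close();                                // closed on the same thread
        }
    }

    // wakeup() is the only KafkaConsumer method that is safe to call from
    // another thread; use it to break out of poll() for shutdown.
    public void shutdown() {
        consumer.wakeup();
    }
}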

KafkaStorageException: Disk error when trying to access log file on the disk

Can anyone let me know what causes the following error?
WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=KafkaExampleProducer1] Received invalid metadata error in produce request on partition my-example-topic1-1 due to org.apache.kafka.common.errors.KafkaStorageException: Disk error when trying to access log file on the disk.. Going to request metadata update now
I am running a 3-broker Kafka setup on Windows. I have created 2 topics with 5 partitions each, and I am using 2 producer instances, one for each topic, with a single consumer instance consuming from both topics. The setup worked fine for some time; then one of the producers and the consumer stopped functioning, and the following messages were printed on the producer and consumer consoles.
my-example-topic1 and my-example-topic2 are the 2 topics.
Producer Console:
74394 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74494 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74595 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74697 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74798 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74900 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
75001 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
Consumer Console:
17533 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17636 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17738 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17840 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17943 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18044 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18147 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18248 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18350 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
