Kafka Spring Boot automatically starting idempotent producer in the consumer application - java

We are experiencing an issue in the dev environment that was previously working well. In the local environment, the application runs without starting an idempotent producer, which is the expected behavior here.
Issue: Sometimes an idempotent producer starts automatically when the Spring Boot application starts, and the consumer then fails to consume the messages produced by the actual producer.
A snippet of the relevant log output:
2022-07-05 15:17:54.449 WARN 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Destination resolver returned non-existent partition consumer-topic.DLT-1, KafkaProducer will determine partition to use for this topic
2022-07-05 15:17:54.853 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [kafka server urls]
buffer.memory = 33554432
.
.
.
2022-07-05 15:17:55.047 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Instantiated an idempotent producer.
2022-07-05 15:17:55.347 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.0.1
2022-07-05 15:17:55.348 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId:
2022-07-05 15:17:57.162 INFO 7 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: XFGlM9HVScGD-PafRlFH7g
2022-07-05 15:17:57.169 INFO 7 --- [ad | producer-1] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-1] ProducerId set to 6013 with epoch 0
2022-07-05 15:18:56.681 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='byte[63]' to topic consumer-topic.DLT:
org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
2022-07-05 15:18:56.748 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Dead-letter publication to consumer-topic.DLTfailed for: consumer-topic-1#28
org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:660) ~[spring-kafka-2.8.5.jar!/:2.8.5]
.
.
.
2022-07-05 15:18:56.751 ERROR 7 --- [ntainer#0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (consumer-topic-1#28) failed
2022-07-05 15:18:56.758 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-consumer-topic-group-test-1, groupId=consumer-topic-group-test] Seeking to offset 28 for partition c-1
2022-07-05 15:18:56.761 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : Error handler threw an exception
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is java.lang.IllegalStateException: No type information in headers and no default type provided
As the logs show, the application started an idempotent producer automatically and then began throwing errors.
Context: We have two microservices. One publishes the messages and contains the producer config; the second only consumes the messages and does not contain any producer config.
The YML configuration for the producer application:
kafka:
  bootstrap-servers: "kafka urls"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  producer:
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    properties:
      spring.json.trusted.packages: "*"
    acks: 1
YML configuration for the consumer application:
kafka:
  bootstrap-servers: "kafka URL, kafka url2"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  consumer:
    enable-auto-commit: true
    auto-offset-reset: latest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    properties:
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
      spring.json.trusted.packages: "*"

consumer-topic:
  topic: consumer-topic
  group-abc:
    group-id: consumer-topic-group-abc
The Kafka bean for the default error handler:
@Bean
public CommonErrorHandler errorHandler(KafkaOperations<Object, Object> kafkaOperations) {
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(kafkaOperations));
}
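(For context: by default the DeadLetterPublishingRecoverer publishes failed records to <original topic>.DLT and reuses the failed record's partition, which is what produces the "non-existent partition consumer-topic.DLT-1" warning above. A custom destination resolver can be passed as a second constructor argument; the snippet below is only a sketch of that spring-kafka API to illustrate the behaviour, not our actual configuration:)

@Bean
public CommonErrorHandler errorHandlerWithFixedDltPartition(KafkaOperations<Object, Object> kafkaOperations) {
    // Illustrative only: always publish dead letters to <topic>.DLT partition 0
    // instead of mirroring the partition of the original record.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaOperations,
            (record, ex) -> new TopicPartition(record.topic() + ".DLT", 0));
    return new DefaultErrorHandler(recoverer);
}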
We know a temporary fix: if we delete the consumer group and recreate it, the application works again. But after some deployments the issue comes back, and we don't know the root cause.
Please guide.

Related

Kafka template connection with broker

I've got a @KafkaListener method in my service which processes a message and sends it to another topic using KafkaTemplate, and from time to time it completely stops working for some reason.
2022-10-04 16:53:18.218 ERROR 1 --- [pool-1-thread-2] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{"type":"INFORMATION","messageId":"f39fabfd-e560-499b-9850-440ad811657b","phoneNumber":"+100000000...' to topic ss.fb.processing-notifications.send:
org.apache.kafka.common.errors.TimeoutException: Topic ss.fb.processing-notifications.send not present in metadata after 60000 ms.
2022-10-04 16:53:33.013 INFO 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Disconnecting from node -2 due to socket connection setup timeout. The timeout value is 29794 ms.
2022-10-04 16:53:33.014 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker prd-mqueue-srv2.obi.ru:9092 (id: -2 rack: null) disconnected
2022-10-04 16:53:41.005 INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-nepcNotificationsGroup-1, groupId=nepcNotificationsGroup] Disconnecting from node -3 due to socket connection setup timeout. The timeout value is 27831 ms.
2022-10-04 16:53:41.005 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-nepcNotificationsGroup-1, groupId=nepcNotificationsGroup] Bootstrap broker prd-mqueue-srv3.obi.ru:9092 (id: -3 rack: null) disconnected
There seem to be some network issues, but after restarting the service everything works fine again. Still, I wonder why the broker eventually ends up disconnected. Isn't the producer supposed to keep trying to send the message to the broker until it succeeds?
You can wrap the KafkaTemplate call in a RetryTemplate or @Retryable method - see https://github.com/spring-projects/spring-retry - the RetryTemplate is already on the classpath as a transitive dependency of spring-kafka.
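For example, a minimal sketch assuming @EnableRetry is declared on a configuration class and a KafkaTemplate<String, String> bean is injected as kafkaTemplate (names here are illustrative):

@Retryable(maxAttempts = 5, backoff = @Backoff(delay = 1000, multiplier = 2.0))
public void sendWithRetry(String topic, String payload) throws Exception {
    // get() blocks for the send result, so a TimeoutException surfaces here
    // and triggers another attempt instead of only being logged.
    kafkaTemplate.send(topic, payload).get();
}

Note that the method has to be invoked through the Spring proxy (i.e. from another bean) for @Retryable to take effect.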

Kafka Consumer in Spring: can I re-assign partitions programmatically?

I'm new to Kafka, and I'm using @KafkaListener (Spring) to define a Kafka consumer.
I would like to check whether it's possible to manually assign partitions to the consumer at runtime.
For example, when the application starts I don't want to "consume" any data. I'm currently using @KafkaListener(autoStartup = "false" ... ) for that purpose.
At some point, I'm supposed to get a notification (from another part of the application) that contains a partitionId to work on, so I would like to "skip" to the latest available offset of that partition, because I don't need to consume the data that already exists there, and "associate" the KafkaConsumer with the partitionId from that notification.
Later on I might get a notification to "stop listening to this partition", despite the fact that the producer that exists somewhere else keeps writing to that topic and to that partition, so I should "unlink" the consumer from the partition and stop getting messages.
I saw there is an org.springframework.kafka.annotation.TopicPartition, but it provides a way to specify a "static" association, so I'm looking for a "dynamic" way to do so.
I guess I could resort to the low-level Kafka Client API, but I would really prefer to use Spring here.
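For reference, the "static" association I mean looks roughly like this (just a sketch to show the annotation, not code I actually use):

@KafkaListener(id = "staticAssignment", autoStartup = "false", topicPartitions =
        @TopicPartition(topic = "cnp_multi_partition_test_topic", partitions = { "0", "2" }))
public void listenStatically(String data) {
    // Partitions 0 and 2 are fixed at startup - exactly the static binding I want to avoid.
    log.info("Received: {}", data);
}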
UPDATE
I use topic cnp_multi_partition_test_topic with 3 partitions.
My current code that tries to manage partitions dynamically from the consumer side looks like this:
@Slf4j
public class SampleKafkaConsumer {

    @KafkaListener(id = Constants.CONSUMER_ID, topics = Constants.TEST_TOPIC, autoStartup = "false")
    public void consumePartition(@Payload String data, @Headers MessageHeaders messageHeaders) {
        Object partitionId = messageHeaders.get(KafkaHeaders.RECEIVED_PARTITION_ID);
        Object sessionId = messageHeaders.get(KafkaHeaders.RECEIVED_MESSAGE_KEY);
        log.info("Consuming from partition: [ {} ] message: Key = [ {} ], content = [ {} ]", partitionId, sessionId, data);
    }
}
@RequiredArgsConstructor
public class MultiPartitionKafkaConsumerManager {

    private final KafkaListenerEndpointRegistry registry;
    private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
    private final UUIDProvider uuidProvider;

    private ConcurrentMessageListenerContainer<String, String> container;

    public void assignPartitions(List<Integer> partitions) {
        if (container != null) {
            container.stop();
            container = null;
        }
        if (partitions.isEmpty()) {
            return;
        }
        var newTopicPartitionOffsets = prepareTopicPartitionOffsets(partitions);
        container = factory.createContainer(newTopicPartitionOffsets);
        container.getContainerProperties().setMessageListener(
                registry.getListenerContainer(Constants.CONSUMER_ID).getContainerProperties().getMessageListener());
        // random group
        container.getContainerProperties().setGroupId("sampleGroup-" + uuidProvider.getUUID().toString());
        container.setConcurrency(1);
        container.start();
    }

    private TopicPartitionOffset[] prepareTopicPartitionOffsets(List<Integer> partitions) {
        return partitions.stream()
                .map(p -> new TopicPartitionOffset(TEST_TOPIC, p, 0L, TopicPartitionOffset.SeekPosition.END))
                .collect(Collectors.toList())
                .toArray(new TopicPartitionOffset[] {});
    }
}
Both are Spring beans (singletons) managed through Java configuration.
The producer generates 3 messages every second and sends them into the 3 partitions of the test topic. I've used a Kafka UI tool to make sure that all the messages arrive as expected. I use an @EventListener and @Async to make this happen concurrently.
Here is how I try to simulate the work:
@SpringBootTest // kafka is available, omitted for brevity
public class MyTest {

    @Autowired
    MultiPartitionKafkaConsumerManager manager;

    @Test
    public void test_create_kafka_consumer_with_manual_partition_management() throws InterruptedException {
        log.info("Starting the test");
        sleep(5_000);
        log.info("Start listening on partition 0");
        manager.assignPartitions(List.of(0));
        sleep(10_000);
        log.info("Start listening on partition 0,2");
        manager.assignPartitions(List.of(0, 2));
        sleep(10_000);
        log.info("Do not listen on partition 0 anymore");
        manager.assignPartitions(List.of(2));
        sleep(10_000);
        log.info("Do not listen on partition 2 anymore - 0 partitions to listen");
        manager.assignPartitions(Collections.emptyList());
        sleep(10_000);
    }
}
Logs show the following:
06:34:20.164 [main] INFO c.h.c.p.g.m.SamplePartitioningTest - Starting the test
06:34:25.169 [main] INFO c.h.c.p.g.m.SamplePartitioningTest - Start listening on partition 0
06:34:25.360 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version: 2.5.1
06:34:25.360 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId: 0efa8fb0f4c73d92
06:34:25.361 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1633664065360
06:34:25.405 [main] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-sampleGroup-96640bc4-e34f-4ade-9ff9-7a2d0bdf38c9-1, groupId=sampleGroup-96640bc4-e34f-4ade-9ff9-7a2d0bdf38c9] Subscribed to partition(s): cnp_multi_partition_test_topic-0
06:34:25.422 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService
06:34:25.429 [consumer-0-C-1] INFO o.a.k.c.c.i.SubscriptionState - [Consumer clientId=consumer-sampleGroup-96640bc4-e34f-4ade-9ff9-7a2d0bdf38c9-1, groupId=sampleGroup-96640bc4-e34f-4ade-9ff9-7a2d0bdf38c9] Seeking to LATEST offset of partition cnp_multi_partition_test_topic-0
06:34:35.438 [main] INFO c.h.c.p.g.m.SamplePartitioningTest - Start listening on partition 0,2
06:34:35.445 [consumer-0-C-1] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-sampleGroup-96640bc4-e34f-4ade-9ff9-7a2d0bdf38c9-1, groupId=sampleGroup-96640bc4-e34f-4ade-9ff9-7a2d0bdf38c9] Unsubscribed all topics or patterns and assigned partitions
06:34:35.445 [consumer-0-C-1] INFO o.s.s.c.ThreadPoolTaskScheduler - Shutting down ExecutorService
06:34:35.453 [consumer-0-C-1] INFO o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - sampleGroup-96640bc4-e34f-4ade-9ff9-7a2d0bdf38c9: Consumer stopped
06:34:35.467 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version: 2.5.1
06:34:35.467 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId: 0efa8fb0f4c73d92
06:34:35.467 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1633664075467
06:34:35.486 [main] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb-2, groupId=sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb] Subscribed to partition(s): cnp_multi_partition_test_topic-0, cnp_multi_partition_test_topic-2
06:34:35.487 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService
06:34:35.489 [consumer-0-C-1] INFO o.a.k.c.c.i.SubscriptionState - [Consumer clientId=consumer-sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb-2, groupId=sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb] Seeking to LATEST offset of partition cnp_multi_partition_test_topic-0
06:34:35.489 [consumer-0-C-1] INFO o.a.k.c.c.i.SubscriptionState - [Consumer clientId=consumer-sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb-2, groupId=sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb] Seeking to LATEST offset of partition cnp_multi_partition_test_topic-2
06:34:45.502 [main] INFO c.h.c.p.g.m.SamplePartitioningTest - Do not listen on partition 0 anymore
06:34:45.503 [consumer-0-C-1] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb-2, groupId=sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb] Unsubscribed all topics or patterns and assigned partitions
06:34:45.503 [consumer-0-C-1] INFO o.s.s.c.ThreadPoolTaskScheduler - Shutting down ExecutorService
06:34:45.510 [consumer-0-C-1] INFO o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - sampleGroup-05fb12f3-aba1-4918-bcf6-a1f840de13eb: Consumer stopped
06:34:45.527 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version: 2.5.1
06:34:45.527 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId: 0efa8fb0f4c73d92
06:34:45.527 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1633664085527
06:34:45.551 [main] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-sampleGroup-5e12d8c7-5900-434a-959f-98b14adda698-3, groupId=sampleGroup-5e12d8c7-5900-434a-959f-98b14adda698] Subscribed to partition(s): cnp_multi_partition_test_topic-2
06:34:45.551 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService
06:34:45.554 [consumer-0-C-1] INFO o.a.k.c.c.i.SubscriptionState - [Consumer clientId=consumer-sampleGroup-5e12d8c7-5900-434a-959f-98b14adda698-3, groupId=sampleGroup-5e12d8c7-5900-434a-959f-98b14adda698] Seeking to LATEST offset of partition cnp_multi_partition_test_topic-2
06:34:55.560 [main] INFO c.h.c.p.g.m.SamplePartitioningTest - Do not listen on partition 2 anymore - 0 partitions to listen
06:34:55.561 [consumer-0-C-1] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-sampleGroup-5e12d8c7-5900-434a-959f-98b14adda698-3, groupId=sampleGroup-5e12d8c7-5900-434a-959f-98b14adda698] Unsubscribed all topics or patterns and assigned partitions
06:34:55.562 [consumer-0-C-1] INFO o.s.s.c.ThreadPoolTaskScheduler - Shutting down ExecutorService
06:34:55.576 [consumer-0-C-1] INFO o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - sampleGroup-5e12d8c7-5900-434a-959f-98b14adda698: Consumer stopped
So I do see that the consumer is started, and it even tries to poll the records internally, but I think I see a WakeupException thrown and "swallowed" by a proxy. I'm not sure I understand why this happens.
You can't change manual assignments at runtime. There are several ways to achieve your desired result.
You can declare the listener in a prototype bean; see Can i add topics to my @KafkaListener at runtime
You can use the listener container factory to create a new container with the appropriate topic configuration and copy the listener from the statically declared container.
I can provide an example of the latter if needed.
...
EDIT
Here's an example for the second technique...
@SpringBootApplication
public class So69465733Application {

    public static void main(String[] args) {
        SpringApplication.run(So69465733Application.class, args);
    }

    @KafkaListener(id = "dummy", topics = "dummy", autoStartup = "false")
    void listen(String in) {
        System.out.println(in);
    }

    @Bean
    ApplicationRunner runner(KafkaListenerEndpointRegistry registry,
            ConcurrentKafkaListenerContainerFactory<String, String> factory) {

        return args -> {
            System.out.println("Hit Enter to create a container for topic1, partition0");
            System.in.read();
            ConcurrentMessageListenerContainer<String, String> container1 =
                    factory.createContainer(new TopicPartitionOffset("topic1", 0, SeekPosition.END));
            container1.getContainerProperties().setMessageListener(
                    registry.getListenerContainer("dummy").getContainerProperties().getMessageListener());
            container1.getContainerProperties().setGroupId("topic1-0-group2");
            container1.start();

            System.out.println("Hit Enter to create a container for topic2, partition0");
            System.in.read();
            ConcurrentMessageListenerContainer<String, String> container2 =
                    factory.createContainer(new TopicPartitionOffset("topic2", 0, SeekPosition.END));
            container2.getContainerProperties().setMessageListener(
                    registry.getListenerContainer("dummy").getContainerProperties().getMessageListener());
            container2.getContainerProperties().setGroupId("topic2-0-group2");
            container2.start();

            System.in.read();
            container1.stop();
            container2.stop();
        };
    }
}
EDIT
Log after sending records to topic1, topic2 from the command-line producer.
Hit Enter to create a container for topic1, partition0
ConsumerConfig values:
...
Kafka version: 2.7.1
Kafka commitId: 61dbce85d0d41457
Kafka startTimeMs: 1633622966736
[Consumer clientId=consumer-topic1-0-group2-1, groupId=topic1-0-group2] Subscribed to partition(s): topic1-0
Hit Enter to create a container for topic2, partition0
[Consumer clientId=consumer-topic1-0-group2-1, groupId=topic1-0-group2] Seeking to LATEST offset of partition topic1-0
[Consumer clientId=consumer-topic1-0-group2-1, groupId=topic1-0-group2] Cluster ID: ppGfIGsZTUWRTNmRXByfZg
[Consumer clientId=consumer-topic1-0-group2-1, groupId=topic1-0-group2] Resetting offset for partition topic1-0 to position FetchPosition{offset=2, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=0}}.
ConsumerConfig values:
...
Kafka version: 2.7.1
Kafka commitId: 61dbce85d0d41457
Kafka startTimeMs: 1633622969071
[Consumer clientId=consumer-topic2-0-group2-2, groupId=topic2-0-group2] Subscribed to partition(s): topic2-0
Hit Enter to stop containers
[Consumer clientId=consumer-topic2-0-group2-2, groupId=topic2-0-group2] Seeking to LATEST offset of partition topic2-0
[Consumer clientId=consumer-topic2-0-group2-2, groupId=topic2-0-group2] Cluster ID: ppGfIGsZTUWRTNmRXByfZg
[Consumer clientId=consumer-topic2-0-group2-2, groupId=topic2-0-group2] Resetting offset for partition topic2-0 to position FetchPosition{offset=2, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=0}}.
record from topic1
[Consumer clientId=consumer-topic1-0-group2-1, groupId=topic1-0-group2] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
record from topic2
[Consumer clientId=consumer-topic2-0-group2-2, groupId=topic2-0-group2] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
Application shutdown requested.

KafkaStorageException: Disk error when trying to access log file on the disk

Can anyone let me know what causes this error?
WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=KafkaExampleProducer1] Received invalid metadata error in produce request on partition my-example-topic1-1 due to org.apache.kafka.common.errors.KafkaStorageException: Disk error when trying to access log file on the disk.. Going to request metadata update now"
I am running a 3-broker Kafka setup on Windows. I have created 2 topics with 5 partitions each, and I am using 2 producer instances, one for each topic, plus a single consumer instance that consumes from both of these topics. The setup worked fine for some time. Then one of the producers and the consumer stopped functioning, and the following messages were printed on the producer and consumer consoles.
my-example-topic1 and my-example-topic2 are the 2 topics.
Producer Console:
74394 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74494 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74595 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74697 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74798 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
74900 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
75001 [kafka-producer-network-thread | KafkaExampleProducer1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=KafkaExampleProducer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
Consumer Console:
17533 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17636 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17738 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17840 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
17943 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18044 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18147 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18248 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]
18350 [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-KafkaExampleConsumer1-1, groupId=KafkaExampleConsumer1] 1 partitions have leader brokers without a matching listener, including [my-example-topic1-1]

Getting store for GlobalKTable crashes after upgrading to kafka-streams:5.5.0-css (Apache Kafka 2.5.0) [RESOLVED]

I have a Spring Boot app using GlobalKTable. It worked fine until the update to kafka-streams 5.5.0-css (the Confluent Platform version compatible with Apache Kafka 2.5.0) from 5.3.2-css (Apache Kafka 2.3.1).
So this is my configuration:
@Configuration
@EnableKafkaStreams
public class GlobalTableConfiguration {

    public GlobalTableConfiguration() {
    }

    @Bean
    public GlobalKTable<String, String> table(StreamsBuilder kStreamsBuilder) {
        return kStreamsBuilder.globalTable("topic1", Consumed.with(null, null),
                Materialized.as("topic1-store"));
    }
}
I'm getting the store like this:
streamsBuilderFactoryBean.getKafkaStreams()
        .store("topic1-store", QueryableStoreTypes.keyValueStore());
This fails with:
Request processing failed; nested exception is java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
Caused by: java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
at org.apache.kafka.streams.KafkaStreams.validateIsRunningOrRebalancing(KafkaStreams.java:316)
at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1182)
at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1169)
I can see in the logs that the stream thread is shutting down before this:
2020-06-16 13:22:46.943 INFO 72423 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.0
2020-06-16 13:22:46.944 INFO 72423 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 66563e712b0b9f84
2020-06-16 13:22:46.944 INFO 72423 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1592299366943
2020-06-16 13:22:46.946 INFO 72423 --- [ad | producer-2] org.apache.kafka.clients.Metadata : [Producer clientId=producer-2] Cluster ID: aKrIp_7wQcqF9OlSUoBgSQ
2020-06-16 13:22:47.496 INFO 72423 --- [ Test worker] org.apache.kafka.streams.KafkaStreams : stream-client [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4] State transition from ERROR to PENDING_SHUTDOWN
2020-06-16 13:22:47.497 INFO 72423 --- [ms-close-thread] o.a.k.s.p.internals.StreamThread : stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-StreamThread-1] Informed to shut down
2020-06-16 13:22:47.497 INFO 72423 --- [ms-close-thread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] State transition from RUNNING to PENDING_SHUTDOWN
2020-06-16 13:22:47.557 INFO 72423 --- [balStreamThread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] Shutting down
2020-06-16 13:22:47.571 INFO 72423 --- [balStreamThread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] State transition from PENDING_SHUTDOWN to DEAD
2020-06-16 13:22:47.571 INFO 72423 --- [balStreamThread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] Shutdown complete
After some experiments I made it work by adding this to my configuration:
@Bean
public KStream kStream(StreamsBuilder kStreamsBuilder) {
    return kStreamsBuilder.stream("some-topic", Consumed.with(null, null));
}
So basically, when I have any KStream defined (consuming from any topic), the stream thread stays alive and everything works as it did before the upgrade.
My question is: what would be the correct way to do this without this useless bean (and topic)?
EDIT
There was a similar issue discussed here: Kafka Streams 2.5.0 requires input topic
Looks like this will be fixed in kafka-streams 2.5.1, and until then setting num.stream.threads: 0 is a nicer workaround than declaring a dummy stream.
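A sketch of that workaround, assuming the standard @EnableKafkaStreams configuration bean (the application id and bootstrap servers below are placeholders):

@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration kStreamsConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app");                // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
    // With 0 stream threads, only the GlobalStreamThread backing the GlobalKTable is started.
    props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 0);
    return new KafkaStreamsConfiguration(props);
}

With Spring Boot the same property can also be set as spring.kafka.streams.properties.num.stream.threads: 0 in application.yml.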
This appears to have nothing to do with Spring and is caused by some internal changes in the kafka-streams classes.
This works fine with Boot 2.2.x (Kafka-streams 2.3.x).
@SpringBootApplication
@EnableKafkaStreams
public class So62406117Application {

    public static void main(String[] args) {
        SpringApplication.run(So62406117Application.class, args);
    }

    @Bean
    public GlobalKTable<String, String> table(StreamsBuilder kStreamsBuilder) {
        return kStreamsBuilder.globalTable("topic1", Consumed.with(null, null),
                Materialized.as("topic1-store"));
    }

    @Bean
    public ApplicationRunner runner(StreamsBuilderFactoryBean fb) {
        return args -> {
            ReadOnlyKeyValueStore<Object, Object> store =
                    fb.getKafkaStreams().store("topic1-store", QueryableStoreTypes.keyValueStore());
            System.out.println(store);
        };
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("topic1").partitions(1).replicas(1).build();
    }
}
But it fails with Boot 2.3 (Kafka Streams 2.5.0).
We are definitely starting the KafkaStreams (in the factory bean's start() method), but during that start() we get:
java.lang.IllegalStateException: Consumer is not subscribed to any topics or assigned any partitions
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1228) ~[kafka-clients-2.5.0.jar:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216) ~[kafka-clients-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:853) ~[kafka-streams-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:753) ~[kafka-streams-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697) ~[kafka-streams-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670) ~[kafka-streams-2.5.0.jar:na]
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] State transition from STARTING to PENDING_SHUTDOWN
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] Shutting down
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2020-06-16 17:44:02.704 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
2020-06-16 17:44:02.704 INFO 10635 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [foo-235af8e6-6618-4e73-86ad-75307130004b] State transition from REBALANCING to ERROR
2020-06-16 17:44:02.704 ERROR 10635 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [foo-235af8e6-6618-4e73-86ad-75307130004b] All stream threads have died. The instance will be in error state and should be closed.
2020-06-16 17:44:02.704 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] Shutdown complete

Solace JMS Channel closing on its own when running on a Kubernetes cluster (Spring Boot)

So I have a very simple JMS listener running on Spring Boot, on a Kubernetes cluster on Google Cloud.
The only thing I've defined is the following in my configuration class:
@Bean
public DefaultJmsListenerContainerFactory cFactory(ConnectionFactory connectionFactory, JmsErrorHandler errorHandler) {
    logger.info("cFactory called...");
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setErrorHandler(errorHandler);
    factory.setTransactionManager(null);
    factory.setSessionTransacted(false);
    return factory;
}
My application.properties:
solace.jms.host=tcps://[host]
solace.jms.clientUsername=[username]
solace.jms.clientPassword=[password]
solace.jms.msgVpn=[msgvpn]
solace.jms.queueName=[queuename]
server.port=8080
logging.level.com.solacesystems=INFO
JMS Listener:
@JmsListener(destination = "${solace.jms.queueName}", containerFactory = "cFactory")
public void onMessage(Message message) {
    [Do stuff with message]
}
I see the following issue in the logs:
2020-02-28 21:08:40.247 INFO 1 --- [nio-8080-exec-6] c.s.j.protocol.impl.TcpClientChannel : Connecting to host 'orig=tcps://[host goes here], scheme=tcps://, host=[host], port=55443' (host 1 of 1, smfclient 294, attempt 1 of 1, this_host_attempt: 1 of 1)
2020-02-28 21:07:21.968 INFO 1 --- [enerContainer-1] c.m.a.notam.listener.JmsMessageListener : Message Received and processed
2020-02-28 21:07:20.572 INFO 1 --- [nio-8080-exec-8] c.s.jcsmp.protocol.smf.SSLSmfClient : closeOutbound() : isSslDowngradeEnabled: false, mSslEngineClosed: false
2020-02-28 21:07:20.572 INFO 1 --- [nio-8080-exec-8] c.s.j.protocol.impl.TcpClientChannel : Channel Closed (smfclient 262)
2020-02-28 21:07:20.535 INFO 1 --- [nio-8080-exec-8] c.s.j.protocol.impl.TcpClientChannel : Connected to host 'orig=tcps://[host]:55443, scheme=tcps://, host=[host], port=55443' (smfclient 262)
It basically loops like this all day long and I can't figure out why. When I run this locally on my development machine, the connection remains open and messages stream in just fine.
There is no other log to give me a clue why the channel is closing on its own like this.
Anyone have any ideas what the problem might be?
