Kafka offset commit failing with org.apache.kafka.clients.consumer.CommitFailedException - java

I have written a Kafka consumer using the spring-kafka library (spring-boot-starter-parent 2.3.4.RELEASE).
I have the following consumer configuration in my code:
/**
 * Configuration for Kafka consumers at container level.
 *
 * @return
 */
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>>
        kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(1);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
/**
 * Default Kafka consumer factory.
 *
 * @return
 */
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}
/**
 * Configuration for the Kafka consumer at thread level.
 *
 * @return
 */
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, GenericDeserializer.class);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    return props;
}
Following is my listener method:
@KafkaListener(id = "client", topics = "MyTopic", clientIdPrefix = "client")
public void listen(@Payload UserNotification data, Acknowledgment ack) {
    // Business logic
    ack.acknowledge();
}
Here I am reading one message at a time, applying the business logic, and calling ack.acknowledge() to commit the offset. What I have observed is that the offset commit sometimes succeeds, but many times I get org.apache.kafka.clients.consumer.CommitFailedException on the ack.acknowledge() line. I can confirm that the business logic completes in at most 2 seconds. Following is the detailed exception:
2022-06-03|04:27:04.326|INSTANCEID_IS_UNDEFINED|xyz-856495f857-8nqx7|client-0-C-1|ERROR||||o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer|149|Error handler threw an exception
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.xyz.listen(java.lang.String,org.springframework.kafka.support.Acknowledgment)' threw exception; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:157)
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeErrorHandler(KafkaMessageListenerContainer.java:2012)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1911)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1838)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1735)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1465)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1128)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1031)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Can someone please explain why this is happening? The default poll interval is 5 minutes, and the group coordinator should not kick the consumer out of the group if processing takes just 2 seconds.

You must be able to process max.poll.records (default 500) within max.poll.interval.ms (default 300000, i.e. 5 minutes).
If it takes 2 seconds per record, a full batch can take up to 16.7 minutes to process, and you will be kicked out of the group.
Reduce max.poll.records and/or increase max.poll.interval.ms.
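For example, both properties can be set in the same consumerConfigs() map shown in the question; the concrete values here (50 records, 10 minutes) are only illustrative assumptions, not recommendations:
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);         // default is 500
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000); // default is 300000 (5 minutes)
With these example values, a worst-case batch of 50 records at 2 seconds each takes 100 seconds, comfortably inside the 10-minute poll interval.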

Related

Non-blocking retries with Spring Kafka batch consumer

I'm using spring-kafka 2.8.0 and I'm trying to implement non-blocking retries for a batch Kafka consumer. Here are my config and consumer:
@Configuration
public class KafkaConfig {

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, GenericRecord>>
            batchListenerFactory(ConsumerFactory<Object, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, GenericRecord> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true);
        return factory;
    }
}
@Component
public class MyConsumer {

    @KafkaListener(
        topics = "my-topic",
        containerFactory = "batchListenerFactory"
    )
    @RetryableTopic(
        backoff = @Backoff(delay = 1000, multiplier = 2.0),
        attempts = "4",
        topicSuffixingStrategy = SUFFIX_WITH_INDEX_VALUE,
        autoCreateTopics = "false"
    )
    public void consume(List<ConsumerRecord<String, GenericRecord>> messages) {
        // do some stuff
    }
}
But on startup I'm getting the following exception:
java.lang.IllegalArgumentException: The provided class BatchMessagingMessageListenerAdapter is not assignable from AcknowledgingConsumerAwareMessageListener
My questions are:
Is there any way to combine a batch consumer with @RetryableTopic?
Is there another way to implement non-blocking retries for a batch consumer? Is it possible to use RetryTemplate for this purpose?
@RetryableTopic is not supported with batch listeners.
The RecoveringBatchErrorHandler (DefaultErrorHandler for 2.8 and later) supports sending a failed record within a batch to a dead letter topic, with the help of the listener throwing a BatchListenerFailedException indicating which record failed.
You would then have to implement your own listener on that topic.
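A minimal sketch of that approach (assuming spring-kafka 2.8, a KafkaTemplate bean for dead letter publishing, and a hypothetical process() method) could look like this; the handler would then be registered on the container factory via factory.setCommonErrorHandler(...):
@Bean
public DefaultErrorHandler batchErrorHandler(KafkaTemplate<Object, Object> kafkaTemplate) {
    // after retries are exhausted, publish the failed record to <topic>.DLT
    return new DefaultErrorHandler(
            new DeadLetterPublishingRecoverer(kafkaTemplate),
            new FixedBackOff(1000L, 2));
}
@KafkaListener(topics = "my-topic", containerFactory = "batchListenerFactory")
public void consume(List<ConsumerRecord<String, GenericRecord>> messages) {
    for (ConsumerRecord<String, GenericRecord> record : messages) {
        try {
            process(record); // hypothetical business logic
        } catch (Exception e) {
            // tells the error handler exactly which record failed;
            // records before it in the batch are committed, the rest are retried
            throw new BatchListenerFailedException("processing failed", e, record);
        }
    }
}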

Queues in RabbitMQ staying in idle state for a long time

In the RabbitMQ service I have configured 8 queues, and I am using the Spring client to send messages to RabbitMQ. I am able to send messages to the respective queues, but most of the time only a single queue is running and the rest of the queues are in an idle state. To give a turn to all queues I reduced the configured prefetch count to 20, so that not all messages go to the workers (consumers) and leave the other queues idle; in spite of this I don't see multiple queues running in parallel.
Below is the Spring configuration I have used to set the prefetch count:
@Bean
public CachingConnectionFactory rabbitConnectionFactory() throws Exception {
    com.rabbitmq.client.ConnectionFactory factory = new com.rabbitmq.client.ConnectionFactory();
    factory.setHost(host);
    factory.setUsername(username);
    factory.setPassword(password);
    factory.setPort(5671);
    factory.useSslProtocol();
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory(factory);
    return connectionFactory;
}
I am using a different container factory for each queue; some of them are shown below. (I am not sure why we use factories; my assumption is that they supply per-queue configuration such as the prefetch count.)
#Bean(name = "ordersimplecontainer")
public SimpleRabbitListenerContainerFactory simpleOrderListenerContainerFactory() throws Exception
{
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(rabbitConnectionFactory());
factory.setMessageConverter(new Jackson2JsonMessageConverter());
factory.setConcurrentConsumers(6);
factory.setMaxConcurrentConsumers(8);
factory.setPrefetchCount(20);
return factory;
}
#Bean(name = "productsimplecontainer")
public SimpleRabbitListenerContainerFactory simpleProductListenerContainerFactory() throws Exception
{
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(rabbitConnectionFactory());
factory.setMessageConverter(new Jackson2JsonMessageConverter());
factory.setConcurrentConsumers(3);
factory.setMaxConcurrentConsumers(4);
factory.setPrefetchCount(20);
return factory;
}
In the listener code I am passing the respective container factory as shown below:
@RabbitListener(queues = "<queueName>", containerFactory = "<factoryName>", autoStartup = "${autocreateworker}")
public void myListener(SftpStockDailySyncAsyncRequest sftpStockDailySyncRequest) {
}
The issue I have is that at any given time only one queue is running and the other queues are in an idle state; because of this, important queues are waiting until they get a chance to go into the running state (a screenshot of the queue states is omitted here).
Please correct me on how to resolve this issue.
As of now I have two worker machines listening to the RabbitMQ server.

How to destroy objects of Message listener in Spring Kafka?

I was following this:
Creating KafkaListener without Annotation & without Spring Boot
I have an API, so whenever the API is called a new container is created with a new message listener. The listener fetches messages from certain offsets and is then paused. After that, I don't need it. But when the API is hit many times, lots of objects will be created. I want to destroy the useless objects.
I was thinking that when a listener is idle for some time, I could destroy it.
How can I do this?
Thanks in advance
CODE
public class Listener extends AbstractConsumerSeekAware implements ConsumerAwareMessageListener<String, String> {

    @Override
    public void onMessage(ConsumerRecord<String, String> consumerRecord, Consumer<?, ?> consumer) {
        // pause if a certain offset is received
    }
}
This is the container:
public KafkaMessageListenerContainer<String, String> getContainer(String topic, int partition, long offset) {
    ContainerProperties containerProperties = new ContainerProperties(new TopicPartitionOffset(topic, partition, offset));
    ConsumerFactory<String, String> consumerFactory = new DefaultKafkaConsumerFactory<>(consumerProperties());
    KafkaMessageListenerContainer<String, String> kafkaMessageListenerContainer = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
    kafkaMessageListenerContainer.getContainerProperties().setGroupId(UUID.randomUUID().toString());
    kafkaMessageListenerContainer.setAutoStartup(false);
    kafkaMessageListenerContainer.getContainerProperties().setMessageListener(new Listener());
    return kafkaMessageListenerContainer;
}
private Map<String, Object> consumerProperties() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return props;
}
Creating containers like that will disable many features (e.g. event publishing).
It would be better to use Boot's auto-configured ConcurrentKafkaListenerContainerFactory to create the containers:
/**
 * Create and configure a container without a listener; used to create containers that
 * are not used for KafkaListener annotations. Containers created using this method
 * are not added to the listener endpoint registry.
 * @param topicPartitions the topicPartitions to assign.
 * @return the container.
 * @since 2.3
 */
C createContainer(TopicPartitionOffset... topicPartitions);
When you no longer need the container, simply call container.stop().
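A minimal sketch under those assumptions (the topic, partition, and offset values are placeholders, and factory is the Boot auto-configured ConcurrentKafkaListenerContainerFactory):
ConcurrentMessageListenerContainer<String, String> container =
        factory.createContainer(new TopicPartitionOffset("my-topic", 0, 42L));
container.getContainerProperties().setGroupId(UUID.randomUUID().toString());
container.getContainerProperties().setMessageListener(new Listener());
container.start();
// ... once the listener has paused and is no longer needed:
container.stop();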

Commit a message asynchronously just after reading it from the topic

I'm trying to commit a message just after reading it from the topic. I've followed this link (https://www.confluent.io/blog/apache-kafka-spring-boot-application) to create a Kafka consumer with Spring. Normally it works perfectly: the consumer gets a message and waits until another one enters the queue. But the problem is that when processing a message takes a long time (around 10 minutes), the Kafka queue thinks the message was not consumed (committed) and the consumer reads it again and again. I have to say that when my processing time is less than 5 minutes it works well, but when it lasts longer the message is not committed.
I've looked for some answers around, but they don't help me because I'm not using the same source code (and of course the structure is different). I've tried calling the processing asynchronously and also committing the message asynchronously, but I've failed.
Some of the sources are:
Spring Boot Kafka: Commit cannot be completed since the group has already rebalanced
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
https://dzone.com/articles/kafka-clients-at-most-once-at-least-once-exactly-o
Kafka 0.10 Java consumer not reading message from topic
https://github.com/confluentinc/confluent-kafka-dotnet/issues/470
The main class is here:
@SpringBootApplication
@EnableAsync
public class SpringBootKafkaApp {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootKafkaApp.class, args);
    }
}
The consumer class (where I need to commit my message):
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        Properties props = prop.startProp(); // just getting my properties from my config file
        ControllerPRO pro = new ControllerPRO();
        List<Future<String>> async = new ArrayList<Future<String>>(); // calling this method asynchronously doesn't help me
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
            /* This works fine when the processLaunch method takes less than 5 minutes;
               if it takes longer the consumer will get the same message from the topic and start this operation again */
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("End of consumer method");
    }
}
How can I commit the message just after I read it from the queue?
I want to be sure that when I receive the message I commit it immediately. Right now the message is committed only when I finish executing the method, just after the System.out.println. So can anybody tell me how to do this?
----- update -------
Sorry for the late reply, but as @GirishB suggested I've been looking at GirishB's configuration, and I don't see where I can define the topic I want to read/listen to from my configuration file (application.yml). All the examples that I see use a structure similar to this (http://tutorials.jenkov.com/java-util-concurrent/blockingqueue.html). Is there any option to read a topic that is declared on another server? Using something similar to this: @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
=========== SOLUTION 1 ========================================
I followed @victor gallet's advice and included the declaration of the consumer properties in order to accommodate the Acknowledgment object in the consume method. I've also followed this link (https://www.programcreek.com/java-api-examples/?code=SpringOnePlatform2016/grussell-spring-kafka/grussell-spring-kafka-master/s1p-kafka/src/main/java/org/s1p/CommonConfiguration.java) to get all the methods that I've used to declare and set all the properties (consumerProperties, consumerFactory, kafkaListenerContainerFactory). The only problem I found is the new SeekToCurrentErrorHandler() declaration, because I'm getting an error and for the moment I'm not able to resolve it (it would be great if someone explained it to me).
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        //factory.setErrorHandler(new SeekToCurrentErrorHandler()); // getting an error here despite having loaded the library
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProperties());
    }

    @Bean
    public Map<String, Object> consumerProperties() {
        Map<String, Object> props = new HashMap<>();
        Properties propsManu = prop.startProperties(); // here I get my properties file, where I retrieve the configuration from a remote server (you have to trust that this method works)
        //props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configProperties.getBrokerAddress());
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, propsManu.getProperty("bootstrap-servers"));
        //props.put(ConsumerConfig.GROUP_ID_CONFIG, "s1pGroup");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, propsManu.getProperty("group-id"));
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
        //props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("key-deserializer"));
        //props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("value-deserializer"));
        return props;
    }

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message, Acknowledgment acknowledgment) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        acknowledgment.acknowledge(); // commit immediately
        Properties props = prop.startProp(); // just getting my properties from my config file
        ControllerPRO pro = new ControllerPRO();
        List<Future<String>> async = new ArrayList<Future<String>>();
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
            /* This works fine when the processLaunch method takes less than 5 minutes;
               if it takes longer the consumer will get the same message from the topic and start this operation again */
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("End of consumer method");
    }
}
You have to modify your consumer configuration and set the property enable.auto.commit to false:
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
Then you have to modify the Spring Kafka listener factory and set the ack-mode to MANUAL_IMMEDIATE. Here's an example of a ConcurrentKafkaListenerContainerFactory:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}
As explained in the documentation, MANUAL_IMMEDIATE means: commit the offset immediately when the Acknowledgment.acknowledge() method is called by the listener.
You can find all committing methods here.
Then, in your listener code, you can commit the offset manually by adding an Acknowledgment object, for example:
@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
public void consume(String message, Acknowledgment acknowledgment) {
    // commit immediately
    acknowledgment.acknowledge();
}
You may use a java.util.concurrent.BlockingQueue to push the message as you consume it and commit the Kafka offset, and then have another thread take the message from the blocking queue and process it. That way you don't have to wait until processing completes.
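As a rough sketch of that idea (the queue capacity, the single-thread executor, and the processing loop are illustrative assumptions, not a production design):
private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);

@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
public void consume(String message, Acknowledgment acknowledgment) throws InterruptedException {
    queue.put(message);           // hand the message off for later processing
    acknowledgment.acknowledge(); // commit the offset immediately
}

@PostConstruct
public void startWorker() {
    // a separate thread drains the queue so the consumer thread never blocks on processing
    Executors.newSingleThreadExecutor().submit(() -> {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String message = queue.take(); // blocks until a message is available
                // long-running processing happens here, off the consumer thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });
}
Note the trade-off: because the offset is committed before processing, a crash after the commit but before processing loses the queued message (at-most-once delivery).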
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
After setting the above property, if you want to process in batches then you can use the following configuration:
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
// you can set either MANUAL or MANUAL_IMMEDIATE, because KafkaMessageListenerContainer invokes
// ConsumerBatchAcknowledgment for any kind of manual ack mode
factory.getContainerProperties().setAckOnError(true);
// specifying a batch error handler because I have enabled listening to records in batches
factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
factory.setBatchListener(true);
factory.getContainerProperties().setSyncCommits(false);
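With those settings, a batch listener can then acknowledge the whole batch manually; a small sketch (the topic name and payload type are assumptions):
@KafkaListener(topics = "my-topic", containerFactory = "kafkaListenerContainerFactory")
public void consumeBatch(List<String> messages, Acknowledgment acknowledgment) {
    // process the whole batch, then commit all of its offsets at once
    messages.forEach(System.out::println);
    acknowledgment.acknowledge();
}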

Kafka manual offset update using Spring Boot

I am new to Kafka, and I created a consumer using Spring Boot's @KafkaListener.
My use case is that once a message is read from a Kafka partition, I need to process it, and when any exception arises I need to re-process the message after some time. In the exception scenario I should not update the offset, and after a server restart I need to process the message again.
Following is the configuration:
@Configuration
@EnableKafka
public class ReceiverConfiguration {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(10);
        factory.getContainerProperties().setPollTimeout(3000);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        factory.getContainerProperties().setSyncCommits(true);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<String, String>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<some broker configuration>");
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "6000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, CustomDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer-Group");
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return propsMap;
    }

    @Bean
    public Listener listener() {
        System.out.println("%%%%%%%%% Initializing Listener %%%%%%%");
        return new Listener();
    }
}
Following is the listener class:
public class Listener {

    private static final Logger logger = LoggerFactory.getLogger(Listener.class);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    @KafkaListener(topics = "topic")
    public void listen(ConsumerRecord<String, CustomObject> record, Acknowledgment ack) throws Exception {
        logger.info("******** 1 message: " + record);
        //ack.acknowledge();
    }
}
Scenario 1: While the consumer service is running and the producer sends a message, the Listener class reads the message and does not update the offset; up to this point everything looks good. But if I stop the consumer, the offset is updated in the consumer group.
Problem: The offset should not be updated in the server-stop scenario. Once my back-end processing issue is resolved and I restart my consumer service, I need to consume the message again, but only if the offset was not committed. Here, however, the offset is committed and there is no way I can consume the message from the partition again.
Scenario 2: Assuming my consumer service is down and the producer sends a message to the topic partition, I can see the offset is not incremented and the lag is 1. The service never calls ack.acknowledge() (the code is commented out), yet the offset is still committed in the consumer group.
Problem: Until I acknowledge the offset, it should not be committed. The problem is noticed on server start.
Please help me resolve the issue; I was not able to find proper direction.
Appreciate your help
