Commit a message asynchronously just after reading it from the topic - java

I'm trying to commit a message just after reading it from the topic. I've followed this link (https://www.confluent.io/blog/apache-kafka-spring-boot-application) to create a Kafka consumer with Spring. Normally it works perfectly: the consumer gets a message and waits until another one enters the queue. The problem is that processing these messages takes a long time (around 10 minutes), so Kafka thinks the message was not consumed (committed) and the consumer reads it again and again. When my processing time is less than 5 minutes it works well, but when it lasts longer the message is not committed.
I've looked around for answers, but they don't help me because I'm not using the same source code (and my structure is different). I've tried calling the method asynchronously and also committing the message asynchronously, but I've failed.
Some of the sources are:
Spring Boot Kafka: Commit cannot be completed since the group has already rebalanced
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
https://dzone.com/articles/kafka-clients-at-most-once-at-least-once-exactly-o
Kafka 0.10 Java consumer not reading message from topic
https://github.com/confluentinc/confluent-kafka-dotnet/issues/470
The main class is here:
@SpringBootApplication
@EnableAsync
public class SpringBootKafkaApp {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootKafkaApp.class, args);
    }
}
The consumer class (where I need to commit my message):
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        Properties props = prop.startProp(); // just getting my properties from my config file
        ControllerPRO pro = new ControllerPRO();
        List<Future<String>> async = new ArrayList<Future<String>>(); // calling this method asynchronously doesn't help me
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
            /* This works fine when the processLaunch method takes less than 5 minutes;
               if it takes longer the consumer will get the same message from the topic and start this operation again. */
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("End of consumer method");
    }
}
How can I commit the message just after I read it from the queue?
I want to be sure that the message is committed immediately when I receive it. Right now the message is committed only when I finish executing the method, just after the System.out.println. So can anybody tell me how to do this?
----- update -------
Sorry for the late reply, but as @GirishB suggested I've been looking at the configuration GirishB proposed, but I don't see where I can define the topic I want to read/listen from in my configuration file (application.yml). All the examples that I see use a structure similar to this (http://tutorials.jenkov.com/java-util-concurrent/blockingqueue.html). Is there any option to read a topic that is declared on another server? Using something similar to this: @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
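For reference, a minimal sketch of what such an application.yml could look like; the property name app.topic.pro is whatever you choose, and the broker address is a placeholder for the remote server, not taken from the original post. Spring resolves the ${app.topic.pro} placeholder in the @KafkaListener annotation from this file:
app:
  topic:
    pro: my-topic-name            # resolved by ${app.topic.pro} in @KafkaListener
spring:
  kafka:
    bootstrap-servers: remote-broker-host:9092   # broker running on another server
    consumer:
      group-id: group_id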
=========== SOLUTION 1 ========================================
I followed @victor gallet's advice and included the declaration of the consumer properties in order to accommodate the "Acknowledgment" object in the consume method. I've also followed this link (https://www.programcreek.com/java-api-examples/?code=SpringOnePlatform2016/grussell-spring-kafka/grussell-spring-kafka-master/s1p-kafka/src/main/java/org/s1p/CommonConfiguration.java) to get all the methods that I've used to declare and set all the properties (consumerProperties, consumerFactory, kafkaListenerContainerFactory). The only problem I found is the "new SeekToCurrentErrorHandler()" declaration, because I'm getting an error and for the moment I'm not able to resolve it (it would be great if someone could explain it to me).
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        //factory.setErrorHandler(new SeekToCurrentErrorHandler()); // getting an error here despite having loaded the library
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProperties());
    }

    @Bean
    public Map<String, Object> consumerProperties() {
        Map<String, Object> props = new HashMap<>();
        Properties propsManu = prop.startProperties(); // here I get my properties file, where I retrieve the configuration from a remote server (you have to trust that this method works)
        //props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configProperties.getBrokerAddress());
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, propsManu.getProperty("bootstrap-servers"));
        //props.put(ConsumerConfig.GROUP_ID_CONFIG, "s1pGroup");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, propsManu.getProperty("group-id"));
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
        //props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("key-deserializer"));
        //props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("value-deserializer"));
        return props;
    }

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message, Acknowledgment acknowledgment) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        acknowledgment.acknowledge(); // commit immediately
        Properties props = prop.startProp(); // just getting my properties from my config file
        ControllerPRO pro = new ControllerPRO();
        List<Future<String>> async = new ArrayList<Future<String>>(); // calling this method asynchronously doesn't help me
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
            /* This works fine when the processLaunch method takes less than 5 minutes;
               if it takes longer the consumer will get the same message from the topic and start this operation again. */
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("End of consumer method");
    }
}

You have to modify your consumer configuration with the property enable.auto.commit set to false:
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
Then, you have to modify the Spring Kafka listener factory and set the ack-mode to MANUAL_IMMEDIATE. Here's an example of a ConcurrentKafkaListenerContainerFactory:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}
As explained in the documentation, MANUAL_IMMEDIATE means: Commit the offset immediately when the Acknowledgment.acknowledge() method is called by the listener.
You can find all committing methods here.
Then, in your listener code, you can commit the offset manually by adding an Acknowledgment object, for example:
@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
public void consume(String message, Acknowledgment acknowledgment) {
    // commit immediately
    acknowledgment.acknowledge();
}

You may use a java.util.concurrent.BlockingQueue to push the message as you consume it and commit the Kafka offset, then have another thread take messages from the blocking queue and process them. This way you don't have to wait until processing completes.
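A minimal sketch of that hand-off, assuming manual-immediate acknowledgment is configured as in the answer above (class, queue, and method names here are illustrative, not from the original post):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import javax.annotation.PostConstruct;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Service;

@Service
public class HandOffConsumer {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    @PostConstruct
    public void startWorker() {
        worker.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String message = queue.take(); // blocks until a message arrives
                    process(message);              // the slow (~10 min) work happens here
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message, Acknowledgment acknowledgment) {
        acknowledgment.acknowledge(); // commit immediately, before processing
        queue.offer(message);         // hand the message off to the worker thread
    }

    private void process(String message) {
        // long-running processing goes here
    }
}
Note that committing before processing means the message is lost if the application crashes mid-processing; that is the trade-off this approach makes.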

properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
After setting the above property, if you want to process in batch then you can use the following configuration (a matching listener sketch follows below):
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
// you can set either MANUAL or MANUAL_IMMEDIATE, because KafkaMessageListenerContainer
// invokes ConsumerBatchAcknowledgment for any kind of manual ack mode
factory.getContainerProperties().setAckOnError(true);
// specifying a batch error handler because I have enabled listening to records in batch
factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
factory.setBatchListener(true);
factory.getContainerProperties().setSyncCommits(false);
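For illustration, a batch listener that would match this factory configuration might look like the following (the topic, group, and method names are placeholders, not from the original post); with a manual ack mode, acknowledge() commits the whole batch at once:
@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id", containerFactory = "kafkaListenerContainerFactory")
public void consumeBatch(List<String> messages, Acknowledgment acknowledgment) {
    acknowledgment.acknowledge(); // one commit for the whole batch
    for (String message : messages) {
        // process each record
    }
}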

Related

Embedded Kafka Spring test executes before embedded Kafka is ready

I have a Spring Boot project with a Kafka listener that I want to test using Embedded Kafka. I have the Kafka listener log the message "record received", which is only logged if I add a Thread.sleep(1000) to the start of the method.
Test class:
@SpringBootTest
@DirtiesContext
@EnableKafka
@EmbeddedKafka(partitions = 1, topics = { "my-topic" }, ports = 7654)
class KafkaTest {

    private static final String TOPIC = "my-topic";

    @Autowired
    EmbeddedKafkaBroker kafkaBroker;

    @Test
    void testSendEvent() throws ExecutionException, InterruptedException {
        // Thread.sleep(1000); // I won't see the listener log message unless I add this sleep
        Producer<Integer, String> producer = configureProducer();
        ProducerRecord<Integer, String> producerRecord = new ProducerRecord<>(TOPIC, "myMessage");
        producer.send(producerRecord).get();
        producer.close();
    }

    private Producer<Integer, String> configureProducer() {
        Map<String, Object> producerProps = new HashMap<>(KafkaTestUtils.producerProps(kafkaBroker));
        return new DefaultKafkaProducerFactory<Integer, String>(producerProps).createProducer();
    }
}
I don't want to use the fickle Thread.sleep(). The test is obviously executing before some setup processes have completed. I clearly need to wait on something, but I am not sure what, nor how to do it.
Using:
Java 11
Spring Boot 2.5.6
JUnit 5
spring-kafka-test 2.7.8
Add an @EventListener bean to the test context and (for example) count down a CountDownLatch when a ConsumerStartedEvent is received; then in the test:
assertThat(eventListener.getLatch().await(10, TimeUnit.SECONDS)).isTrue();
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#events
and
https://docs.spring.io/spring-kafka/docs/current/reference/html/#event-consumption
Or add a ConsumerRebalanceListener and wait for partition assignment.
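A minimal sketch of the event-listener approach, assuming the bean is registered in the test context (the class name is arbitrary):
import java.util.concurrent.CountDownLatch;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.event.ConsumerStartedEvent;
import org.springframework.stereotype.Component;

@Component
public class ConsumerStartedEventListener {

    private final CountDownLatch latch = new CountDownLatch(1);

    @EventListener
    public void onConsumerStarted(ConsumerStartedEvent event) {
        latch.countDown(); // the listener container's consumer is now polling
    }

    public CountDownLatch getLatch() {
        return latch;
    }
}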
I clearly need to wait on something, but I am not sure what nor how to do it.
You need to use a different method to give Kafka time to process and route the message ...
Look at this line ...
ConsumerRecord<String, String> message = records.poll(500, TimeUnit.MILLISECONDS);
When testing Kafka listeners we always specify a poll delay. This is because your message is given to Kafka, which will then process it in another thread, and you need to wait for it.
Here's how it looks in the context of the code it's used in:
class UserKafkaProducerTest {

    @Test
    void testWriteToKafka() throws InterruptedException, JsonProcessingException {
        // Create a user and write to Kafka
        User user = new User("11111", "John", "Wick");
        producer.writeToKafka(user);
        // Read the message (the John Wick user) with a test consumer from Kafka and assert its properties
        ConsumerRecord<String, String> message = records.poll(500, TimeUnit.MILLISECONDS);
        assertNotNull(message);
        assertEquals("11111", message.key());
        User result = objectMapper.readValue(message.value(), User.class);
        assertNotNull(result);
        assertEquals("John", result.getFirstName());
        assertEquals("Wick", result.getLastName());
    }
}
This is a code piece from this article, which makes stuff clear.
You can use this small library for testing. All output records will be collected into a blocking queue and you can poll them with a timeout:
@OutputQueue(topic = TOPIC_OUT, partitions = 1)
private BlockingQueue<ConsumerRecord<String, String>> consumerRecords;

@Test
void shouldFilterRecordWithoutHeader() throws ExecutionException, InterruptedException, TimeoutException {
    final String messageIn = "hello world";
    try (var producer = producer()) {
        producer.send(new ProducerRecord<>(TOPIC_IN, messageIn)).get(5, TimeUnit.SECONDS);
    }
    ConsumerRecord<String, String> record = consumerRecords.poll(5, TimeUnit.SECONDS);
    Assertions.assertThat(record).isNotNull();
}

Spring Boot AMQP based JmsListener fails on TextMessage

I have a Spring Boot application which has problems retrieving JMS messages of type TextMessage from an ActiveMQ broker.
When the consumer tries to retrieve messages from the broker, it cannot automatically convert a message to TextMessage and instead treats it as a BytesMessage. There is a JmsListener which should read the messages from the queue as TextMessage:
...
@JmsListener(destination = "foo")
public void jmsConsumer(TextMessage message) {
...
The JmsListener produces warnings like the following, and drops the messages:
org.springframework.jms.listener.adapter.ListenerExecutionFailedException: Listener method could not be invoked with incoming message
Endpoint handler details:
Method [public void net.aschemann.demo.springboot.jmsconsumer.JmsConsumer.jmsConsumer(javax.jms.TextMessage)]
Bean [net.aschemann.demo.springboot.jmsconsumer.JmsConsumer@4715f07]; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [javax.jms.TextMessage] for org.springframework.jms.listener.adapter.AbstractAdaptableMessageListener$MessagingMessageConverterAdapter$LazyResolutionMessage@7c49d298, failedMessage=org.springframework.jms.listener.adapter.AbstractAdaptableMessageListener$MessagingMessageConverterAdapter$LazyResolutionMessage@7c49d298
at org.springframework.jms.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:118) ~[spring-jms-5.1.4.RELEASE.jar:5.1.4.RELEASE]
I have extracted a small sample application to debug the problem: https://github.com/ascheman/springboot-camel-jms
The producer in real life is a commercial application which makes use of Apache Camel. Hence, I can hardly change/customize the producer. I have tried to build a sample producer which shows the same behavior.
Could I somehow tweak the consumer to treat the message as TextMessage?
Besides: is there any way to retrieve the additional AMQP properties from the message programmatically, directly in Spring? Of course, I could still read the message as a BytesMessage and try to parse the properties out, but I am looking for a cleaner way that is backed by a Spring API. The Spring @Headers annotation didn't help so far.
I faced the same issue as the question owner. After I followed the comment from @AndyWilkinson by adding the transport.transformer option on the transportConnector in activemq.xml as follows, the issue was solved:
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;transport.transformer=jms"/>
I had the same error, and it was caused because LazyResolutionMessage is called from MessagingMessageConverter, which is the default implementation of MessageConverter, and which converts your message (actually it doesn't, since it's the default):
return ((org.springframework.messaging.Message) payload).getPayload();
I accomplished what you want; in the end my consumer was working like this:
@JmsListener(destination = "${someName}")
public void consumeSomeMessages(MyCustomEvent e) {
    ....
}
What I had to do was:
@Bean(name = "jmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory whateverNameYouWant(final ConnectionFactory genericCF) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setErrorHandler(t -> log.error("bad consumer, bad", t));
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setConnectionFactory(genericCF);
    factory.setMessageConverter(
        new MessageConverter() {
            @Override
            public Message toMessage(Object object, Session session) {
                throw new UnsupportedOperationException("since it's only for consuming!");
            }

            @Override
            public MyCustomEvent fromMessage(Message m) {
                try {
                    // whatever transformation you want here...
                    // here you could print the message, try casting,
                    // build new objects from the message's attributes, and so on...
                    // example:
                    return (new ObjectMapper()).readValue(((TextMessage) m).getText(), MyCustomEvent.class);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        }
    );
    return factory;
}
A few key points:
If your DefaultJmsListenerContainerFactory method is also called jmsListenerContainerFactory, you don't need the name attribute on the @Bean annotation.
Notice you can also implement an ErrorHandler to deal with exceptions when trying to convert/cast your message's type!
ConnectionFactory was a Spring-managed bean with Amazon's SQSConnectionFactory, since I wanted to consume from an SQS queue. Please provide your equivalent accordingly. Mine was:
#Bean("connectionFactory")
public SQSConnectionFactory someOtherNome() {
return new SQSConnectionFactory(
new ProviderConfiguration(),
AmazonSQSClientBuilder.standard()
.withRegion(Regions.US_EAST_1)
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials(
"keyAccess",
"keySecret"
)
)
)
.build()
);
}
If you have a problem with conversion from byte[] to String use:
.convertBodyTo(String.class)
Route example:
from(QUEUE_URL)
    .routeId("consumer")
    .convertBodyTo(String.class)
    .log("${body}")
    .to("mock:mockRoute");

Issue testing spring cloud SQS Listener

Environment
Spring Boot: 1.5.13.RELEASE
Cloud: Edgware.SR3
Cloud AWS: 1.2.2.RELEASE
Java 8
OSX 10.13.4
Problem
I am trying to write an integration test for SQS.
I have localstack running in a local Docker container with SQS on TCP/4576.
In my test code I define an SQS client with the endpoint set to local 4576 and can successfully connect and create a queue, send a message and delete a queue. I can also use the SQS client to receive messages and pick up the message that I sent.
My problem is that if I remove the code that manually receives the message, in order to allow another component to get it, nothing seems to happen. I have a Spring component annotated as follows:
Listener
@Component
public class MyListener {

    @SqsListener(value = "my_queue", deletionPolicy = ON_SUCCESS)
    public void receive(final MyMsg msg) {
        System.out.println("GOT THE MESSAGE: " + msg.toString());
    }
}
Test
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.profiles.active=test")
public class MyTest {

    @Autowired
    private AmazonSQSAsync amazonSQS;

    @Autowired
    private SimpleMessageListenerContainer container;

    private String queueUrl;

    @Before
    public void setUp() {
        queueUrl = amazonSQS.createQueue("my_queue").getQueueUrl();
    }

    @After
    public void tearDown() {
        amazonSQS.deleteQueue(queueUrl);
    }

    @Test
    public void name() throws InterruptedException {
        amazonSQS.sendMessage(new SendMessageRequest(queueUrl, "hello"));
        System.out.println("isRunning:" + container.isRunning());
        System.out.println("isActive:" + container.isActive());
        System.out.println("isRunningOnQueue:" + container.isRunning("my_queue"));
        Thread.sleep(30_000);
        System.out.println("GOT MESSAGE: " + amazonSQS.receiveMessage(queueUrl).getMessages().size());
    }

    @TestConfiguration
    @EnableSqs
    public static class SQSConfiguration {

        @Primary
        @Bean(destroyMethod = "shutdown")
        public AmazonSQSAsync amazonSQS() {
            final AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:4576", "eu-west-1");
            return new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
                .standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("key", "secret")))
                .withEndpointConfiguration(endpoint)
                .build());
        }
    }
}
In the test logs I see:
o.s.c.a.m.listener.QueueMessageHandler : 1 message handler methods found on class MyListener: {public void MyListener.receive(MyMsg)=org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation@1cd4082a}
2018-05-31 22:50:39.582 INFO 16329 --- o.s.c.a.m.listener.QueueMessageHandler : Mapped "org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation@1cd4082a" onto public void MyListener.receive(MyMsg)
Followed by:
isRunning:true
isActive:true
isRunningOnQueue:false
GOT MESSAGE: 1
This demonstrates that during the 30-second pause after sending the message, the container didn't pick it up; when I manually poll for the message, it is there on the queue and I can consume it.
My question is: why isn't the listener being invoked, and why does the isRunningOnQueue:false line suggest that it hasn't auto-started for that queue?
Note that I also tried setting my own SimpleMessageListenerContainer bean with autostart set explicitly to true (the default anyway) and observed no change in behaviour. I thought that the org.springframework.cloud.aws.messaging.config.annotation.SqsConfiguration#simpleMessageListenerContainer that is set up by @EnableSqs ought to configure an auto-started SimpleMessageListenerContainer that should be polling for my message.
I have also set
logging.level.org.apache.http=DEBUG
logging.level.org.springframework.cloud=DEBUG
in my test properties and can see the HTTP calls that create the queue, send a message, delete, etc., but no HTTP calls to receive (apart from my manual one at the end of the test).
I figured this out after some tinkering.
Even if the simple message container factory is set not to auto-start, it seems to do its initialisation anyway, which involves determining whether the queue exists.
In this case, the queue is created in my test in the setup method, but sadly this is after the Spring context is set up, which means an exception occurs.
I fixed this by simply moving the queue creation into the creation of the SQS client in the context (which happens before the message container is created), i.e.:
@Bean(destroyMethod = "shutdown")
public AmazonSQSAsync amazonSQS() {
    final AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://localhost:4576", "eu-west-1");
    final AmazonSQSBufferedAsyncClient client = new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
        .standard()
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("dummyKey", "dummySecret")))
        .withEndpointConfiguration(endpoint)
        .build());
    client.createQueue("test-queue");
    return client;
}

How does spring.kafka.consumer.auto-offset-reset work in spring-kafka

KafkaProperties java doc:
/**
* What to do when there is no initial offset in Kafka or if the current offset
* does not exist any more on the server.
*/
private String autoOffsetReset;
I have a hello world application which contains this application.properties:
spring.kafka.consumer.group-id=foo
spring.kafka.consumer.auto-offset-reset=latest
In this case the @KafkaListener method is invoked for all entries, but the expected result was that the @KafkaListener method would be invoked only for the latest 3 messages I sent. I tried the other option:
spring.kafka.consumer.auto-offset-reset=earliest
But the behaviour is the same.
Can you explain this stuff?
P.S.
code sample:
@SpringBootApplication
public class Application implements CommandLineRunner {

    public static Logger logger = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args).close();
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("spring_kafka_topic", "foo1");
        this.template.send("spring_kafka_topic", "foo2");
        this.template.send("spring_kafka_topic", "foo3");
        latch.await(60, TimeUnit.SECONDS);
        logger.info("All received");
    }

    @KafkaListener(topics = "spring_kafka_topic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        logger.info(cr.toString());
        latch.countDown();
    }
}
Update:
The behaviour doesn't depend on spring.kafka.consumer.auto-offset-reset; it only depends on spring.kafka.consumer.enable-auto-commit.
If I set spring.kafka.consumer.enable-auto-commit=false, I see all records.
If I set spring.kafka.consumer.enable-auto-commit=true, I see only the 3 last records.
Please clarify the meaning of the spring.kafka.consumer.auto-offset-reset property.
The KafkaProperties class in Spring Boot does this:
public Map<String, Object> buildProperties() {
    Map<String, Object> properties = new HashMap<String, Object>();
    if (this.autoCommitInterval != null) {
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,
                this.autoCommitInterval);
    }
    if (this.autoOffsetReset != null) {
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
                this.autoOffsetReset);
    }
This buildProperties() is used by buildConsumerProperties(), which is in turn used in:
@Bean
@ConditionalOnMissingBean(ConsumerFactory.class)
public ConsumerFactory<?, ?> kafkaConsumerFactory() {
    return new DefaultKafkaConsumerFactory<Object, Object>(
            this.properties.buildConsumerProperties());
}
So, if you use your own ConsumerFactory bean definition, be sure to reuse those KafkaProperties: https://docs.spring.io/spring-boot/docs/1.5.7.RELEASE/reference/htmlsingle/#boot-features-kafka-extra-props
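For example, a minimal sketch of a custom factory that reuses the Boot-built properties and overrides only what it needs (this assumes the auto-configured KafkaProperties bean is injectable; the override shown is illustrative):
@Bean
public ConsumerFactory<String, String> consumerFactory(KafkaProperties kafkaProperties) {
    Map<String, Object> props = kafkaProperties.buildConsumerProperties();
    // overrides applied on top of application.properties, e.g.:
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    return new DefaultKafkaConsumerFactory<>(props);
}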
UPDATE
OK. I see what's going on.
Try to add this property:
spring.kafka.consumer.enable-auto-commit=false
This way we won't have async auto-commits based on some commit interval.
The logic in the application is based on exiting after the latch.await(60, TimeUnit.SECONDS): when we get the 3 expected records, we exit. At that point the async auto-commit from the consumer might not have happened yet, so the next time you run the application the consumer polls data from the uncommitted offset.
When we turn off auto-commit, we get AckMode.BATCH, which is performed synchronously, and we are able to see the really latest records in the topic for this foo consumer group.

Kafka manual offset update using spring boot

I am new to Kafka, and created a consumer using Spring Boot's @KafkaListener.
My use case is: once a message is read from a Kafka partition, I need to process it, and when any exception arises, I need to re-process the message after some time. On an exception, I should not update the offset, and after a server restart I need to process the message again.
The configuration is as follows:
@Configuration
@EnableKafka
public class ReceiverConfiguration {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(10);
        factory.getContainerProperties().setPollTimeout(3000);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        factory.getContainerProperties().setSyncCommits(true);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<String, String>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<some broker configuration>");
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "6000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, CustomDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer-Group");
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return propsMap;
    }

    @Bean
    public Listener listener() {
        System.out.println("%%%%%%%%% Initializing Listener %%%%%%%");
        return new Listener();
    }
}
The listener class is as follows:
public class Listener {

    private static final Logger logger = LoggerFactory.getLogger(Listener.class);

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    @KafkaListener(topics = "topic")
    public void listen(ConsumerRecord<String, CustomObject> record, Acknowledgment ack) throws Exception {
        logger.info("******** 1 message: " + record);
        //ack.acknowledge();
    }
}
Scenario 1: While the consumer service is running and the producer sends a message, the Listener class reads the message and does not update the offset; up to this point everything looks good. But if I stop the consumer, the offset is updated in the consumer group.
Problem: the offset should not be updated when the server stops. Once my back-end processing issue is resolved and I restart my consumer service, I need to consume the message again, but only if the offset was not committed. Here, though, the offset is committed and there is no chance I can consume the message from the partition again.
Scenario 2: Assuming my consumer service is down and the producer sends a message to the topic partition, I can see that the offset is not incremented and the lag is 1. The service never calls ack.acknowledge() (that line is commented out), yet the offset is still committed in the consumer group.
Problem: until I acknowledge the offset, it should not be committed. The problem shows up on server start.
Please help me resolve the issue; I was not able to find proper guidance.
Appreciate your help
