I have a Spring Boot project with a Kafka listener that I want to test using Embedded Kafka. The Kafka listener logs the message "record received", but it is only logged if I add a Thread.sleep(1000) at the start of the test method.
Test class:
@SpringBootTest
@DirtiesContext
@EnableKafka
@EmbeddedKafka(partitions = 1, topics = { "my-topic" }, ports = 7654)
class KafkaTest {

    private static final String TOPIC = "my-topic";

    @Autowired
    EmbeddedKafkaBroker kafkaBroker;

    @Test
    void testSendEvent() throws ExecutionException, InterruptedException {
        // Thread.sleep(1000); // I won't see the listener's log message unless I add this sleep
        Producer<Integer, String> producer = configureProducer();
        ProducerRecord<Integer, String> producerRecord = new ProducerRecord<>(TOPIC, "myMessage");
        producer.send(producerRecord).get();
        producer.close();
    }

    private Producer<Integer, String> configureProducer() {
        Map<String, Object> producerProps = new HashMap<>(KafkaTestUtils.producerProps(kafkaBroker));
        return new DefaultKafkaProducerFactory<Integer, String>(producerProps).createProducer();
    }
}
I don't want to use the fickle Thread.sleep(). The test is obviously executing before some setup process has completed. I clearly need to wait on something, but I am not sure what, nor how to do it.
Using:
Java 11
Spring Boot 2.5.6
JUnit 5
spring-kafka-test 2.7.8
Add an @EventListener bean to the test context and (for example) count down a CountDownLatch when a ConsumerStartedEvent is received; then, in the test:
assertThat(eventListener.getLatch().await(10, TimeUnit.SECONDS)).isTrue();
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#events
and
https://docs.spring.io/spring-kafka/docs/current/reference/html/#event-consumption
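For illustration, a minimal sketch of such an event listener bean (the class and accessor names here are made up; ConsumerStartedEvent comes from org.springframework.kafka.event):

@Component
public class ConsumerStartedEventListener {

    // Counted down once the listener container's consumer has started.
    private final CountDownLatch latch = new CountDownLatch(1);

    @EventListener
    public void onConsumerStarted(ConsumerStartedEvent event) {
        latch.countDown();
    }

    public CountDownLatch getLatch() {
        return latch;
    }
}

The test then awaits the latch before producing, instead of sleeping.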
Or add a ConsumerRebalanceListener and wait for partition assignment.
I clearly need to wait on something, but I am not sure what, nor how to do it.
You need to use a different method to give Kafka time to process and route the message ...
Look at this line ...
ConsumerRecord<String, String> message = records.poll(500, TimeUnit.MILLISECONDS);
When testing Kafka listeners we always specify a poll delay, because the message is handed to Kafka, which processes it on another thread, and you need to wait for it to arrive.
Here's how it looks in the context of the code it's used in.
class UserKafkaProducerTest {

    @Test
    void testWriteToKafka() throws InterruptedException, JsonProcessingException {
        // Create a user and write to Kafka
        User user = new User("11111", "John", "Wick");
        producer.writeToKafka(user);

        // Read the message (the John Wick user) with a test consumer from Kafka and assert its properties
        ConsumerRecord<String, String> message = records.poll(500, TimeUnit.MILLISECONDS);
        assertNotNull(message);
        assertEquals("11111", message.key());
        User result = objectMapper.readValue(message.value(), User.class);
        assertNotNull(result);
        assertEquals("John", result.getFirstName());
        assertEquals("Wick", result.getLastName());
    }
}
This is a code snippet from this article, which makes things clearer.
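Note that producer, records, and objectMapper are set up elsewhere in that article. As a rough sketch of the part that matters here, records is a BlockingQueue fed by a test-side consumer (the topic and group names below are placeholders):

// Hypothetical wiring for the records queue polled in the test above.
private final BlockingQueue<ConsumerRecord<String, String>> records = new LinkedBlockingQueue<>();

@KafkaListener(topics = "user-topic", groupId = "test-consumer-group")
void collect(ConsumerRecord<String, String> record) {
    records.add(record); // each received record becomes available to poll()
}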
You can use this small library for testing.
All output records will be collected into a blocking queue, and you can poll them with a timeout:
@OutputQueue(topic = TOPIC_OUT, partitions = 1)
private BlockingQueue<ConsumerRecord<String, String>> consumerRecords;

@Test
void shouldFilterRecordWithoutHeader() throws ExecutionException, InterruptedException, TimeoutException {
    final String messageIn = "hello world";
    try (var producer = producer()) {
        producer.send(new ProducerRecord<>(TOPIC_IN, messageIn)).get(5, TimeUnit.SECONDS);
    }

    ConsumerRecord<String, String> record = consumerRecords.poll(5, TimeUnit.SECONDS);
    Assertions.assertThat(record).isNotNull();
}
Related
Consider this code:
@Configuration
public class MyConf {

    @MessagingGateway(defaultRequestChannel = "channel")
    public interface Sender {
        void send(String out);
    }
}

@Component
public class Consumer {

    @ServiceActivator(inputChannel = "channel", poller = @Poller(fixedRate = "100"))
    public void handle(String input) throws InterruptedException {
        //
    }
}

@Component
public class HistoricalTagRunner implements CommandLineRunner {

    @Autowired
    private Sender sender;

    @Override
    public void run(String... args) throws Exception {
        List<String> input = ...
        input.forEach(r -> sender.send(r));
        // ok, now all input is sent and the application exits
        // without waiting for message processing
    }
}
So all messages are sent to the consumer, but the application exits without waiting for all of them to be processed.
Is there a way to tell Spring to wait until all messages in "channel" are processed?
A Spring application is really just a Java application, and it is not Spring's responsibility to control how your application lives. You can use any Java feature to block the main thread until some event happens.
For example, in our samples we use a System.in.read() to block the main thread:
System.out.println("Hit 'Enter' to terminate");
System.in.read();
ctx.close();
In this case the end-user must enter something on the CLI to unblock that thread and exit the program.
Another way is to wait on a CountDownLatch, if you know the number of messages in advance. Your flow must then count down that latch each time a message is processed.
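A minimal sketch of the latch approach, assuming the message count is known up front (how the latch is shared is up to you; here it is simply a constructor argument):

@Component
public class Consumer {

    private final CountDownLatch latch;

    public Consumer(CountDownLatch latch) {
        this.latch = latch;
    }

    @ServiceActivator(inputChannel = "channel", poller = @Poller(fixedRate = "100"))
    public void handle(String input) {
        // ... process the message ...
        latch.countDown(); // one message fully processed
    }
}

// In the runner, after sending everything:
input.forEach(r -> sender.send(r));
latch.await(); // block until every message has been handled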
I have created a Kafka consumer application (using Spring Kafka) and it seems to be working fine. Now I am trying to add unit test cases for it.
My consumer is a batch-acknowledgement consumer and it is started inside a container. The details of the consumer can be found in the Stack Overflow post below (which was asked by me):
Spring-Kafka Concurrency Property
I was able to write a test case for my consumer by doing some research, but I had to replicate the creation of the container and a few other things to make it work. I was wondering
if there is any other way for the test cases to use the same container that is started during application startup (maybe by pointing at the test context), and for the consumer started at startup to use the embedded Kafka instance directly.
The solution I figured out is below, but I am not sure whether this is the right approach or not.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = Launcher.class)
public class BatchConsumerTest {

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, "topic1");

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private KafkaTemplate<String, String> template;

    @MockBean
    private RestService restService;

    @Before
    public void setup() {
        Mockito.when(restService.invokeService("")).thenReturn("");
    }

    @Test
    public void test() throws Exception {
        // Stop the containers created by the application context
        ConcurrentMessageListenerContainer<?, ?> container = null;
        for (MessageListenerContainer con : registry.getAllListenerContainers()) {
            container = (ConcurrentMessageListenerContainer<?, ?>) con;
            container.stop();
        }

        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group1", "false",
                embeddedKafka.getEmbeddedKafka());
        DefaultKafkaConsumerFactory<String, String> factory =
                new DefaultKafkaConsumerFactory<>(consumerProps);

        ContainerProperties containerProps = new ContainerProperties("ibo-grocerybag-subscription");
        containerProps.setAckMode(AckMode.MANUAL);
        ConcurrentMessageListenerContainer<String, String> messageListContainer =
                new ConcurrentMessageListenerContainer<>(factory, containerProps);

        BatchAcknowledgingConsumerAwareMessageListener<String, String> listener = new BatchConsumer();
        CountDownLatch latch = new CountDownLatch(1);
        // messageListContainer.setupMessageListener(listener);
        messageListContainer.setupMessageListener(
                new BatchAcknowledgingConsumerAwareMessageListener<String, String>() {

            @Override
            public void onMessage(List<ConsumerRecord<String, String>> data, Acknowledgment acknowledgment,
                    Consumer<?, ?> consumer) {
                System.out.println("*******Data : " + data.get(0).value());
                listener.onMessage(data, acknowledgment, consumer);
                latch.countDown();
            }
        });
        messageListContainer.start();
        ContainerTestUtils.waitForAssignment(messageListContainer,
                embeddedKafka.getEmbeddedKafka().getPartitionsPerTopic());

        template.send("topic1", "Hello");
        latch.await(10000, TimeUnit.MILLISECONDS);
        assertThat(latch.getCount()).isEqualTo(0);
    }

    @After
    public void destroy() {
        embeddedKafka.getEmbeddedKafka().destroy();
    }
}
I'm trying to commit a message just after reading it from the topic. I've followed this link (https://www.confluent.io/blog/apache-kafka-spring-boot-application) to create a Kafka consumer with Spring. Normally it works perfectly: the consumer gets a message and waits until another one enters the queue. The problem is that when processing a message takes a long time (around 10 minutes), Kafka decides the message has not been consumed (committed), and the consumer reads it again and again. I have to say that when my processing time is less than 5 minutes it works well, but when it lasts longer the message is not committed.
I've looked for some answers, but they don't help me because I'm not using the same source code (and of course a different structure). I've tried calling methods asynchronously and also committing the message asynchronously, but I've failed.
Some of the sources are:
Spring Boot Kafka: Commit cannot be completed since the group has already rebalanced
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
https://dzone.com/articles/kafka-clients-at-most-once-at-least-once-exactly-o
Kafka 0.10 Java consumer not reading message from topic
https://github.com/confluentinc/confluent-kafka-dotnet/issues/470
The main class is here:
@SpringBootApplication
@EnableAsync
public class SpringBootKafkaApp {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootKafkaApp.class, args);
    }
}
The consumer class (where I need to commit my message):
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        Properties props = prop.startProp(); // just getting my properties from my config file
        ControllerPRO pro = new ControllerPRO();
        List<Future<String>> async = new ArrayList<>(); // calling this method asynchronously doesn't help me
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
            /* This works fine when the processLaunch method takes less than 5 minutes;
               if it takes longer, the consumer gets the same message from the topic
               and starts this operation again. */
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("End of consumer method");
    }
}
How can I commit the message just after I read it from the queue?
I want to be sure that when I receive the message I commit it immediately. Right now the message is committed only when the method finishes executing, just after the System.out.println. So can anybody tell me how to do this?
----- update -------
Sorry for the late reply, but as @GirishB suggested I've been looking at that configuration; however, I don't see where I can define the topic I want to read/listen to from my configuration file (application.yml). All the examples that I see use a structure similar to this (http://tutorials.jenkov.com/java-util-concurrent/blockingqueue.html). Is there any option to read a topic that is declared on another server? Using something similar to @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")?
=========== SOLUTION 1 ========================================
I followed @victor gallet's advice and included the declaration of the consumer properties in order to accommodate the Acknowledgment object in the consume method. I've also followed this link (https://www.programcreek.com/java-api-examples/?code=SpringOnePlatform2016/grussell-spring-kafka/grussell-spring-kafka-master/s1p-kafka/src/main/java/org/s1p/CommonConfiguration.java) to get all the methods I've used to declare and set all the properties (consumerProperties, consumerFactory, kafkaListenerContainerFactory). The only problem I found is the
new SeekToCurrentErrorHandler() declaration, because I'm getting an error and for the moment I'm not able to resolve it (it would be great if someone explained it to me).
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        // factory.setErrorHandler(new SeekToCurrentErrorHandler()); // getting an error here despite having loaded the library
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProperties());
    }

    @Bean
    public Map<String, Object> consumerProperties() {
        Map<String, Object> props = new HashMap<>();
        // Here I get my properties file, retrieved from a remote server (you have to trust that this method works).
        Properties propsManu = prop.startProperties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, propsManu.getProperty("bootstrap-servers"));
        props.put(ConsumerConfig.GROUP_ID_CONFIG, propsManu.getProperty("group-id"));
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("key-deserializer"));
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("value-deserializer"));
        return props;
    }

    // The Acknowledgment parameter is required here; without it, acknowledgment.acknowledge() cannot compile.
    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message, Acknowledgment acknowledgment) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        acknowledgment.acknowledge(); // commit immediately
        Properties props = prop.startProp(); // just getting my properties from my config file
        ControllerPRO pro = new ControllerPRO();
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("End of consumer method");
    }
}
You have to modify your consumer configuration and set the property enable.auto.commit to false:
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
Then, you have to modify the Spring Kafka listener factory and set the ack-mode to MANUAL_IMMEDIATE. Here's an example of a ConcurrentKafkaListenerContainerFactory:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}
As explained in the documentation, MANUAL_IMMEDIATE means: commit the offset immediately when Acknowledgment.acknowledge() is called by the listener.
You can find all committing methods here.
Then, in your listener code, you can commit the offset manually by adding an Acknowledgment object, for example:
@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
public void consume(String message, Acknowledgment acknowledgment) {
    // commit immediately
    acknowledgment.acknowledge();
}
You may use a java.util.concurrent.BlockingQueue to push the message as you consume it and commit the Kafka offset, then have another thread take messages from the queue and process them. This way you don't have to wait until processing completes.
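A minimal sketch of that hand-off, assuming manual ack mode is configured as above (the queue capacity and the single worker thread are arbitrary choices):

// Acknowledge right away, then process on a separate thread.
private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);
private final ExecutorService worker = Executors.newSingleThreadExecutor();

@PostConstruct
void startWorker() {
    worker.submit(() -> {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String message = queue.take(); // blocks until a message is available
                // ... long-running processing goes here ...
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });
}

@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
public void consume(String message, Acknowledgment acknowledgment) throws InterruptedException {
    queue.put(message);           // hand the message off for processing
    acknowledgment.acknowledge(); // commit the offset immediately
}

Note the trade-off: once acknowledged, a message that later fails during processing will not be redelivered by Kafka.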
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
After setting the above property, if you want to process records in batches you can use the following configuration.
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
// You can set either MANUAL or MANUAL_IMMEDIATE, because KafkaMessageListenerContainer
// invokes ConsumerBatchAcknowledgment for any kind of manual ack mode.
factory.getContainerProperties().setAckOnError(true);
// Specifying a batch error handler because records are listened to in batches:
factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
factory.setBatchListener(true);
factory.getContainerProperties().setSyncCommits(false);
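With setBatchListener(true), the listener method receives a list of records. A hedged sketch (the topic name and payload type are assumptions):

// Hypothetical batch listener matching the configuration above.
@KafkaListener(topics = "my-batch-topic", groupId = "group_id")
public void consumeBatch(List<String> messages, Acknowledgment acknowledgment) {
    for (String message : messages) {
        // ... process each record ...
    }
    acknowledgment.acknowledge(); // acknowledges the whole batch
}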
KafkaProperties Javadoc:
/**
* What to do when there is no initial offset in Kafka or if the current offset
* does not exist any more on the server.
*/
private String autoOffsetReset;
I have a hello-world application which contains this application.properties:
spring.kafka.consumer.group-id=foo
spring.kafka.consumer.auto-offset-reset=latest
In this case the @KafkaListener method is invoked for all entries. But the expected result was that the @KafkaListener method is invoked only for the latest 3 messages I send. I tried the other option:
spring.kafka.consumer.auto-offset-reset=earliest
But the behaviour is the same.
Can you explain this stuff?
P.S.
code sample:
@SpringBootApplication
public class Application implements CommandLineRunner {

    public static Logger logger = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args).close();
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("spring_kafka_topic", "foo1");
        this.template.send("spring_kafka_topic", "foo2");
        this.template.send("spring_kafka_topic", "foo3");
        latch.await(60, TimeUnit.SECONDS);
        logger.info("All received");
    }

    @KafkaListener(topics = "spring_kafka_topic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        logger.info(cr.toString());
        latch.countDown();
    }
}
Update:
The behaviour doesn't depend on spring.kafka.consumer.auto-offset-reset (tested with spring.kafka.consumer.auto-offset-reset=earliest); it only depends on spring.kafka.consumer.enable-auto-commit:
if I set spring.kafka.consumer.enable-auto-commit=false, I see all records;
if I set spring.kafka.consumer.enable-auto-commit=true, I see only the 3 last records.
Please clarify the meaning of the spring.kafka.consumer.auto-offset-reset property.
The KafkaProperties class in Spring Boot does this:
public Map<String, Object> buildProperties() {
    Map<String, Object> properties = new HashMap<String, Object>();
    if (this.autoCommitInterval != null) {
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,
                this.autoCommitInterval);
    }
    if (this.autoOffsetReset != null) {
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
                this.autoOffsetReset);
    }
    // ...
}
This buildProperties() is used from buildConsumerProperties(), which is used, in turn, in the:
@Bean
@ConditionalOnMissingBean(ConsumerFactory.class)
public ConsumerFactory<?, ?> kafkaConsumerFactory() {
    return new DefaultKafkaConsumerFactory<Object, Object>(
            this.properties.buildConsumerProperties());
}
So, if you use your own ConsumerFactory bean definition, be sure to reuse those KafkaProperties: https://docs.spring.io/spring-boot/docs/1.5.7.RELEASE/reference/htmlsingle/#boot-features-kafka-extra-props
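For example, a custom factory that still honours the Boot configuration might look like this (a sketch; KafkaProperties is Boot's standard auto-configured bean):

// Reuse Boot's KafkaProperties so application.properties settings still apply.
@Bean
public ConsumerFactory<String, String> consumerFactory(KafkaProperties kafkaProperties) {
    Map<String, Object> props = new HashMap<>(kafkaProperties.buildConsumerProperties());
    // ... add or override individual entries here if needed ...
    return new DefaultKafkaConsumerFactory<>(props);
}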
UPDATE
OK. I see what's going on.
Try to add this property:
spring.kafka.consumer.enable-auto-commit=false
This way we won't have async auto-commits based on some commit interval.
The logic in the application is based on exiting after the latch.await(60, TimeUnit.SECONDS); when we get the 3 expected records, we exit. At that point the async auto-commit from the consumer might not have happened yet, so the next time you run the application the consumer polls data from the uncommitted offset.
When we turn off auto-commit, we get AckMode.BATCH, which is performed synchronously, so we are able to see the really latest records in the topic for this foo consumer group.
I am new to Kafka, and I created a consumer using Spring Boot's @KafkaListener.
My use case is: once a message is read from a Kafka partition, I need to process it, and when any exception arises, the message must be re-processed after some time. On the exception scenario, I should not update the offset, and after a server restart I need to process the message again.
Following is the configuration:
@Configuration
@EnableKafka
public class ReceiverConfiguration {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(10);
        factory.getContainerProperties().setPollTimeout(3000);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        factory.getContainerProperties().setSyncCommits(true);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<String, String>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<some broker configuration>");
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "6000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, CustomDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer-Group");
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return propsMap;
    }

    @Bean
    public Listener listener() {
        System.out.println("%%%%%%%%% Initializing Listener %%%%%%%");
        return new Listener();
    }
}
Following is the listener class:
public class Listener {

    private static final Logger logger = LoggerFactory.getLogger(Listener.class);

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    @KafkaListener(topics = "topic")
    public void listen(ConsumerRecord<String, CustomObject> record, Acknowledgment ack) throws Exception {
        logger.info("******** 1 message: " + record);
        // ack.acknowledge();
    }
}
Scenario 1: While the consumer service is running and the producer sends a message, the Listener class reads the message and does not update the offset; up to this point everything looks good. But if I stop the consumer, the offset is updated in the consumer group.
Problem: the offset should not be updated in the server-stop scenario. Once my back-end processing issue is resolved and I restart my consumer service, I need to consume the message again, but only when the offset was not committed. Here, however, the offset is committed and there is no chance I can consume the message from the partition again.
Scenario 2: Assuming my consumer service is down and the producer sends a message to the topic partition, I can see the offset is not incremented and the lag is 1. Still, the service never calls ack.acknowledge() (the code is commented out), yet the offset is committed in the consumer group.
Problem: until I acknowledge the offset, it should not be committed. The problem is noticed at server start.
Please help me resolve the issue; I was not able to find proper direction.
I appreciate your help.